From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 16 Feb 2024 08:44:58 -0800
Subject: Re: [PATCH v3 14/35] lib: introduce support for page allocation tagging
To: Vlastimil Babka
Cc: akpm@linux-foundation.org, kent.overstreet@linux.dev, mhocko@suse.com,
        hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
        dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
        corbet@lwn.net, void@manifault.com, peterz@infradead.org,
        juri.lelli@redhat.com, catalin.marinas@arm.com, will@kernel.org,
        arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com,
        dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
        david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org,
        masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org,
        tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org,
        paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com,
        yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
        andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com,
        vvvvvv@google.com, gregkh@linuxfoundation.org, ebiggers@google.com,
        ytcoode@gmail.com, vincent.guittot@linaro.org,
        dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
        bristot@redhat.com, vschneid@redhat.com, cl@linux.com,
        penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com,
        glider@google.com, elver@google.com, dvyukov@google.com,
        shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com,
        rientjes@google.com, minchan@google.com, kaleshsingh@google.com,
        kernel-team@android.com, linux-doc@vger.kernel.org,
        linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
        linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
        linux-mm@kvack.org, linux-modules@vger.kernel.org,
        kasan-dev@googlegroups.com, cgroups@vger.kernel.org
In-Reply-To: <039a817d-20c4-487d-a443-f87e19727305@suse.cz>
References: <20240212213922.783301-1-surenb@google.com>
        <20240212213922.783301-15-surenb@google.com>
        <039a817d-20c4-487d-a443-f87e19727305@suse.cz>
On Fri, Feb 16, 2024 at 1:45 AM Vlastimil Babka wrote:
>
> On 2/12/24 22:39, Suren Baghdasaryan wrote:
> > Introduce helper functions to easily instrument page allocators by
> > storing a pointer to the allocation tag associated with the code that
> > allocated the page in a page_ext field.
> >
> > Signed-off-by: Suren Baghdasaryan
> > Co-developed-by: Kent Overstreet
> > Signed-off-by: Kent Overstreet
> > +
> > +#ifdef CONFIG_MEM_ALLOC_PROFILING
> > +
> > +#include
> > +
> > +extern struct page_ext_operations page_alloc_tagging_ops;
> > +extern struct page_ext *page_ext_get(struct page *page);
> > +extern void page_ext_put(struct page_ext *page_ext);
> > +
> > +static inline union codetag_ref *codetag_ref_from_page_ext(struct page_ext *page_ext)
> > +{
> > +        return (void *)page_ext + page_alloc_tagging_ops.offset;
> > +}
> > +
> > +static inline struct page_ext *page_ext_from_codetag_ref(union codetag_ref *ref)
> > +{
> > +        return (void *)ref - page_alloc_tagging_ops.offset;
> > +}
> > +
> > +static inline union codetag_ref *get_page_tag_ref(struct page *page)
> > +{
> > +        if (page && mem_alloc_profiling_enabled()) {
> > +                struct page_ext *page_ext = page_ext_get(page);
> > +
> > +                if (page_ext)
> > +                        return codetag_ref_from_page_ext(page_ext);
>
> I think when structured like this, you're not getting the full benefits of
> static keys, and the compiler probably can't improve that on its own.
>
> - page is tested before the static branch is evaluated
> - when disabled, the result is NULL, and that's again tested in the callers

Yes, that sounds right. I'll move the static branch check earlier like
you suggested. Thanks!
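Roughly along these lines (an illustrative sketch only, reusing the helper
names quoted from the patch above):

/*
 * Sketch: evaluate the static key first, so the disabled case costs a
 * single patched-out branch and page_ext is never touched.
 */
static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
                                   unsigned int order)
{
        if (mem_alloc_profiling_enabled()) {
                union codetag_ref *ref = get_page_tag_ref(page);

                if (ref) {
                        alloc_tag_add(ref, task->alloc_tag, PAGE_SIZE << order);
                        put_page_tag_ref(ref);
                }
        }
}

get_page_tag_ref() could then drop its own mem_alloc_profiling_enabled()
test and keep only the page and page_ext checks, and pgalloc_tag_sub()
would get the same treatment.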
>
> > +        }
> > +        return NULL;
> > +}
> > +
> > +static inline void put_page_tag_ref(union codetag_ref *ref)
> > +{
> > +        page_ext_put(page_ext_from_codetag_ref(ref));
> > +}
> > +
> > +static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
> > +                                   unsigned int order)
> > +{
> > +        union codetag_ref *ref = get_page_tag_ref(page);
>
> So the more optimal way would be to test mem_alloc_profiling_enabled() here
> as the very first thing before trying to get the ref.
>
> > +        if (ref) {
> > +                alloc_tag_add(ref, task->alloc_tag, PAGE_SIZE << order);
> > +                put_page_tag_ref(ref);
> > +        }
> > +}
> > +
> > +static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
> > +{
> > +        union codetag_ref *ref = get_page_tag_ref(page);
>
> And same here.
>
> > +        if (ref) {
> > +                alloc_tag_sub(ref, PAGE_SIZE << order);
> > +                put_page_tag_ref(ref);
> > +        }
> > +}
> > +
> > +#else /* CONFIG_MEM_ALLOC_PROFILING */
> > +
> > +static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
> > +                                   unsigned int order) {}
> > +static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
> > +
> > +#endif /* CONFIG_MEM_ALLOC_PROFILING */
> > +
> > +#endif /* _LINUX_PGALLOC_TAG_H */
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index 78d258ca508f..7bbdb0ddb011 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -978,6 +978,7 @@ config MEM_ALLOC_PROFILING
> >          depends on PROC_FS
> >          depends on !DEBUG_FORCE_WEAK_PER_CPU
> >          select CODE_TAGGING
> > +        select PAGE_EXTENSION
> >          help
> >            Track allocation source code and record total allocation size
> >            initiated at that code location. The mechanism can be used to track
> > diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
> > index 4fc031f9cefd..2d5226d9262d 100644
> > --- a/lib/alloc_tag.c
> > +++ b/lib/alloc_tag.c
> > @@ -3,6 +3,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -124,6 +125,22 @@ static bool alloc_tag_module_unload(struct codetag_type *cttype,
> >          return module_unused;
> >  }
> >
> > +static __init bool need_page_alloc_tagging(void)
> > +{
> > +        return true;
>
> So this means the page_ext memory overhead is paid unconditionally once
> MEM_ALLOC_PROFILING is compile time enabled, even if never enabled during
> runtime? That makes it rather costly to be suitable for generic distro
> kernels where the code could be compile time enabled, and runtime enabling
> suggested in a debugging/support scenario. It's what we do with page_owner,
> debug_pagealloc, slub_debug etc.
>
> Ideally we'd have some vmalloc based page_ext flavor for later-than-boot
> runtime enablement, as we now have for stackdepot. But that could be
> explored later. For now it would be sufficient to add an early_param boot
> parameter to control the enablement including page_ext, like page_owner and
> other features do.

Sounds reasonable. In v1 of this patchset we used an early boot parameter,
but after the LSF/MM discussion that was changed to runtime controls.
Sounds like we would need both here; that should be easy to add (rough
sketch at the end of this mail). Allocating/reclaiming the space for
page_ext, slab_ext, etc. dynamically is not trivial and, if done, would
be done separately. I looked into it before and listed the encountered
issues in the cover letter of v2 [1], see the "things we could not
address" section.

[1] https://lore.kernel.org/all/20231024134637.3120277-1-surenb@google.com/

>
> > +}
> > +
> > +static __init void init_page_alloc_tagging(void)
> > +{
> > +}
> > +
> > +struct page_ext_operations page_alloc_tagging_ops = {
> > +        .size = sizeof(union codetag_ref),
> > +        .need = need_page_alloc_tagging,
> > +        .init = init_page_alloc_tagging,
> > +};
> > +EXPORT_SYMBOL(page_alloc_tagging_ops);
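For the boot-time control, roughly what I have in mind (an illustrative
sketch only, not part of this patch; the "mem_profiling" parameter and the
mem_profiling_support flag are placeholder names):

/*
 * Sketch: let an early_param decide whether page_ext space is reserved,
 * similar to what page_owner does.  Names are placeholders.
 */
static bool mem_profiling_support;

static int __init setup_mem_profiling(char *str)
{
        return kstrtobool(str, &mem_profiling_support);
}
early_param("mem_profiling", setup_mem_profiling);

static __init bool need_page_alloc_tagging(void)
{
        /* Reserve page_ext space only when profiling was requested at boot. */
        return mem_profiling_support;
}

The runtime toggle from this series would stay, but it would presumably
have to refuse enabling profiling when the page_ext space was never
reserved at boot.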