Date: Mon, 12 Feb 2024 14:59:33 -0800
From: Kees Cook
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, kent.overstreet@linux.dev, mhocko@suse.com,
	vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev,
	mgorman@suse.de, dave@stgolabs.net, willy@infradead.org,
	liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com,
	peterz@infradead.org, juri.lelli@redhat.com, catalin.marinas@arm.com,
	will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com,
	dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
	david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org,
	masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org,
	tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org,
	paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com,
	yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
	andreyknvl@gmail.com, ndesaulniers@google.com, vvvvvv@google.com,
	gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com,
	vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
	iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com,
	elver@google.com, dvyukov@google.com, shakeelb@google.com,
	songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com,
	minchan@google.com, kaleshsingh@google.com, kernel-team@android.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	iommu@lists.linux.dev, linux-arch@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-modules@vger.kernel.org, kasan-dev@googlegroups.com,
	cgroups@vger.kernel.org
Subject: Re: [PATCH v3 17/35] mm: enable page allocation tagging
Message-ID: <202402121458.A4A62E62B@keescook>
References: <20240212213922.783301-1-surenb@google.com>
 <20240212213922.783301-18-surenb@google.com>
In-Reply-To: <20240212213922.783301-18-surenb@google.com>

On Mon, Feb 12, 2024 at 01:39:03PM -0800, Suren Baghdasaryan wrote:
> Redefine page allocators to record allocation tags upon their invocation.
> Instrument post_alloc_hook and free_pages_prepare to modify current
> allocation tag.
>
> Signed-off-by: Suren Baghdasaryan
> Co-developed-by: Kent Overstreet
> Signed-off-by: Kent Overstreet
> ---
>  include/linux/alloc_tag.h | 10 +++
>  include/linux/gfp.h | 126 ++++++++++++++++++++++++--------------
>  include/linux/pagemap.h | 9 ++-
>  mm/compaction.c | 7 ++-
>  mm/filemap.c | 6 +-
>  mm/mempolicy.c | 52 ++++++++--------
>  mm/page_alloc.c | 60 +++++++++---------
>  7 files changed, 160 insertions(+), 110 deletions(-)
>
> diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
> index cf55a149fa84..6fa8a94d8bc1 100644
> --- a/include/linux/alloc_tag.h
> +++ b/include/linux/alloc_tag.h
> @@ -130,4 +130,14 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
>
>  #endif
>
> +#define alloc_hooks(_do_alloc) \
> +({ \
> +	typeof(_do_alloc) _res; \
> +	DEFINE_ALLOC_TAG(_alloc_tag, _old); \
> +	\
> +	_res = _do_alloc; \
> +	alloc_tag_restore(&_alloc_tag, _old); \
> +	_res; \
> +})

I am delighted to see that __alloc_size survives this indirection. AFAICT,
all the fortify goo continues to work with this in use.

Reviewed-by: Kees Cook

-Kees

> +
>  #endif /* _LINUX_ALLOC_TAG_H */
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index de292a007138..bc0fd5259b0b 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -6,6 +6,8 @@
>
>  #include
>  #include
> +#include
> +#include
>
>  struct vm_area_struct;
>  struct mempolicy;
> @@ -175,42 +177,46 @@ static inline void arch_free_page(struct page *page, int order) { }
>  static inline void arch_alloc_page(struct page *page, int order) { }
>  #endif
>
> -struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
> +struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
>  	nodemask_t *nodemask);
> -struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
> +#define __alloc_pages(...) alloc_hooks(__alloc_pages_noprof(__VA_ARGS__))
> +
> +struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid,
>  	nodemask_t *nodemask);
> +#define __folio_alloc(...) alloc_hooks(__folio_alloc_noprof(__VA_ARGS__))
>
> -unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> +unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	nodemask_t *nodemask, int nr_pages,
>  	struct list_head *page_list,
>  	struct page **page_array);
> +#define __alloc_pages_bulk(...) alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
>
> -unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
> +unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
>  	unsigned long nr_pages,
>  	struct page **page_array);
> +#define alloc_pages_bulk_array_mempolicy(...)
\ > + alloc_hooks(alloc_pages_bulk_array_mempolicy_noprof(__VA_ARGS__)) > > /* Bulk allocate order-0 pages */ > -static inline unsigned long > -alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list) > -{ > - return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL); > -} > +#define alloc_pages_bulk_list(_gfp, _nr_pages, _list) \ > + __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL) > > -static inline unsigned long > -alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array) > -{ > - return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array); > -} > +#define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array) \ > + __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array) > > static inline unsigned long > -alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array) > +alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages, > + struct page **page_array) > { > if (nid == NUMA_NO_NODE) > nid = numa_mem_id(); > > - return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array); > + return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array); > } > > +#define alloc_pages_bulk_array_node(...) \ > + alloc_hooks(alloc_pages_bulk_array_node_noprof(__VA_ARGS__)) > + > static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask) > { > gfp_t warn_gfp = gfp_mask & (__GFP_THISNODE|__GFP_NOWARN); > @@ -230,82 +236,104 @@ static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask) > * online. For more general interface, see alloc_pages_node(). > */ > static inline struct page * > -__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order) > +__alloc_pages_node_noprof(int nid, gfp_t gfp_mask, unsigned int order) > { > VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES); > warn_if_node_offline(nid, gfp_mask); > > - return __alloc_pages(gfp_mask, order, nid, NULL); > + return __alloc_pages_noprof(gfp_mask, order, nid, NULL); > } > > +#define __alloc_pages_node(...) alloc_hooks(__alloc_pages_node_noprof(__VA_ARGS__)) > + > static inline > -struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid) > +struct folio *__folio_alloc_node_noprof(gfp_t gfp, unsigned int order, int nid) > { > VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES); > warn_if_node_offline(nid, gfp); > > - return __folio_alloc(gfp, order, nid, NULL); > + return __folio_alloc_noprof(gfp, order, nid, NULL); > } > > +#define __folio_alloc_node(...) alloc_hooks(__folio_alloc_node_noprof(__VA_ARGS__)) > + > /* > * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE, > * prefer the current CPU's closest node. Otherwise node must be valid and > * online. > */ > -static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask, > - unsigned int order) > +static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask, > + unsigned int order) > { > if (nid == NUMA_NO_NODE) > nid = numa_mem_id(); > > - return __alloc_pages_node(nid, gfp_mask, order); > + return __alloc_pages_node_noprof(nid, gfp_mask, order); > } > > +#define alloc_pages_node(...) 
alloc_hooks(alloc_pages_node_noprof(__VA_ARGS__)) > + > #ifdef CONFIG_NUMA > -struct page *alloc_pages(gfp_t gfp, unsigned int order); > -struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > +struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order); > +struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order, > struct mempolicy *mpol, pgoff_t ilx, int nid); > -struct folio *folio_alloc(gfp_t gfp, unsigned int order); > -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, > +struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order); > +struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma, > unsigned long addr, bool hugepage); > #else > -static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order) > +static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order) > { > - return alloc_pages_node(numa_node_id(), gfp_mask, order); > + return alloc_pages_node_noprof(numa_node_id(), gfp_mask, order); > } > -static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > +static inline struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order, > struct mempolicy *mpol, pgoff_t ilx, int nid) > { > - return alloc_pages(gfp, order); > + return alloc_pages_noprof(gfp, order); > } > -static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order) > +static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order) > { > return __folio_alloc_node(gfp, order, numa_node_id()); > } > -#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \ > - folio_alloc(gfp, order) > +#define vma_alloc_folio_noprof(gfp, order, vma, addr, hugepage) \ > + folio_alloc_noprof(gfp, order) > #endif > + > +#define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__)) > +#define alloc_pages_mpol(...) alloc_hooks(alloc_pages_mpol_noprof(__VA_ARGS__)) > +#define folio_alloc(...) alloc_hooks(folio_alloc_noprof(__VA_ARGS__)) > +#define vma_alloc_folio(...) alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__)) > + > #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0) > -static inline struct page *alloc_page_vma(gfp_t gfp, > + > +static inline struct page *alloc_page_vma_noprof(gfp_t gfp, > struct vm_area_struct *vma, unsigned long addr) > { > - struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false); > + struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr, false); > > return &folio->page; > } > +#define alloc_page_vma(...) alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__)) > + > +extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order); > +#define __get_free_pages(...) alloc_hooks(get_free_pages_noprof(__VA_ARGS__)) > > -extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order); > -extern unsigned long get_zeroed_page(gfp_t gfp_mask); > +extern unsigned long get_zeroed_page_noprof(gfp_t gfp_mask); > +#define get_zeroed_page(...) alloc_hooks(get_zeroed_page_noprof(__VA_ARGS__)) > + > +void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) __alloc_size(1); > +#define alloc_pages_exact(...) 
alloc_hooks(alloc_pages_exact_noprof(__VA_ARGS__)) > > -void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1); > void free_pages_exact(void *virt, size_t size); > -__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); > > -#define __get_free_page(gfp_mask) \ > - __get_free_pages((gfp_mask), 0) > +__meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); > +#define alloc_pages_exact_nid(...) \ > + alloc_hooks(alloc_pages_exact_nid_noprof(__VA_ARGS__)) > + > +#define __get_free_page(gfp_mask) \ > + __get_free_pages((gfp_mask), 0) > > -#define __get_dma_pages(gfp_mask, order) \ > - __get_free_pages((gfp_mask) | GFP_DMA, (order)) > +#define __get_dma_pages(gfp_mask, order) \ > + __get_free_pages((gfp_mask) | GFP_DMA, (order)) > > extern void __free_pages(struct page *page, unsigned int order); > extern void free_pages(unsigned long addr, unsigned int order); > @@ -357,10 +385,14 @@ extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma); > > #ifdef CONFIG_CONTIG_ALLOC > /* The below functions must be run on a range from a single zone. */ > -extern int alloc_contig_range(unsigned long start, unsigned long end, > +extern int alloc_contig_range_noprof(unsigned long start, unsigned long end, > unsigned migratetype, gfp_t gfp_mask); > -extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, > - int nid, nodemask_t *nodemask); > +#define alloc_contig_range(...) alloc_hooks(alloc_contig_range_noprof(__VA_ARGS__)) > + > +extern struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask, > + int nid, nodemask_t *nodemask); > +#define alloc_contig_pages(...) alloc_hooks(alloc_contig_pages_noprof(__VA_ARGS__)) > + > #endif > void free_contig_range(unsigned long pfn, unsigned long nr_pages); > > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h > index 2df35e65557d..35636e67e2e1 100644 > --- a/include/linux/pagemap.h > +++ b/include/linux/pagemap.h > @@ -542,14 +542,17 @@ static inline void *detach_page_private(struct page *page) > #endif > > #ifdef CONFIG_NUMA > -struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order); > +struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order); > #else > -static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) > +static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order) > { > - return folio_alloc(gfp, order); > + return folio_alloc_noprof(gfp, order); > } > #endif > > +#define filemap_alloc_folio(...) \ > + alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__)) > + > static inline struct page *__page_cache_alloc(gfp_t gfp) > { > return &filemap_alloc_folio(gfp, 0)->page; > diff --git a/mm/compaction.c b/mm/compaction.c > index 4add68d40e8d..f4c0e682c979 100644 > --- a/mm/compaction.c > +++ b/mm/compaction.c > @@ -1781,7 +1781,7 @@ static void isolate_freepages(struct compact_control *cc) > * This is a migrate-callback that "allocates" freepages by taking pages > * from the isolated freelists in the block we are migrating to. 
> */ > -static struct folio *compaction_alloc(struct folio *src, unsigned long data) > +static struct folio *compaction_alloc_noprof(struct folio *src, unsigned long data) > { > struct compact_control *cc = (struct compact_control *)data; > struct folio *dst; > @@ -1800,6 +1800,11 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data) > return dst; > } > > +static struct folio *compaction_alloc(struct folio *src, unsigned long data) > +{ > + return alloc_hooks(compaction_alloc_noprof(src, data)); > +} > + > /* > * This is a migrate-callback that "frees" freepages back to the isolated > * freelist. All pages on the freelist are from the same zone, so there is no > diff --git a/mm/filemap.c b/mm/filemap.c > index 750e779c23db..e51e474545ad 100644 > --- a/mm/filemap.c > +++ b/mm/filemap.c > @@ -957,7 +957,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio, > EXPORT_SYMBOL_GPL(filemap_add_folio); > > #ifdef CONFIG_NUMA > -struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) > +struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order) > { > int n; > struct folio *folio; > @@ -972,9 +972,9 @@ struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) > > return folio; > } > - return folio_alloc(gfp, order); > + return folio_alloc_noprof(gfp, order); > } > -EXPORT_SYMBOL(filemap_alloc_folio); > +EXPORT_SYMBOL(filemap_alloc_folio_noprof); > #endif > > /* > diff --git a/mm/mempolicy.c b/mm/mempolicy.c > index 10a590ee1c89..c329d00b975f 100644 > --- a/mm/mempolicy.c > +++ b/mm/mempolicy.c > @@ -2070,15 +2070,15 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, > */ > preferred_gfp = gfp | __GFP_NOWARN; > preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL); > - page = __alloc_pages(preferred_gfp, order, nid, nodemask); > + page = __alloc_pages_noprof(preferred_gfp, order, nid, nodemask); > if (!page) > - page = __alloc_pages(gfp, order, nid, NULL); > + page = __alloc_pages_noprof(gfp, order, nid, NULL); > > return page; > } > > /** > - * alloc_pages_mpol - Allocate pages according to NUMA mempolicy. > + * alloc_pages_mpol_noprof - Allocate pages according to NUMA mempolicy. > * @gfp: GFP flags. > * @order: Order of the page allocation. > * @pol: Pointer to the NUMA mempolicy. > @@ -2087,7 +2087,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, > * > * Return: The page on success or NULL if allocation fails. > */ > -struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > +struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order, > struct mempolicy *pol, pgoff_t ilx, int nid) > { > nodemask_t *nodemask; > @@ -2117,7 +2117,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > * First, try to allocate THP only on local node, but > * don't reclaim unnecessarily, just compact. 
> */ > - page = __alloc_pages_node(nid, > + page = __alloc_pages_node_noprof(nid, > gfp | __GFP_THISNODE | __GFP_NORETRY, order); > if (page || !(gfp & __GFP_DIRECT_RECLAIM)) > return page; > @@ -2130,7 +2130,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > } > } > > - page = __alloc_pages(gfp, order, nid, nodemask); > + page = __alloc_pages_noprof(gfp, order, nid, nodemask); > > if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) { > /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */ > @@ -2146,7 +2146,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > } > > /** > - * vma_alloc_folio - Allocate a folio for a VMA. > + * vma_alloc_folio_noprof - Allocate a folio for a VMA. > * @gfp: GFP flags. > * @order: Order of the folio. > * @vma: Pointer to VMA. > @@ -2161,7 +2161,7 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, > * > * Return: The folio on success or NULL if allocation fails. > */ > -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, > +struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma, > unsigned long addr, bool hugepage) > { > struct mempolicy *pol; > @@ -2169,15 +2169,15 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, > struct page *page; > > pol = get_vma_policy(vma, addr, order, &ilx); > - page = alloc_pages_mpol(gfp | __GFP_COMP, order, > - pol, ilx, numa_node_id()); > + page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order, > + pol, ilx, numa_node_id()); > mpol_cond_put(pol); > return page_rmappable_folio(page); > } > -EXPORT_SYMBOL(vma_alloc_folio); > +EXPORT_SYMBOL(vma_alloc_folio_noprof); > > /** > - * alloc_pages - Allocate pages. > + * alloc_pages_noprof - Allocate pages. > * @gfp: GFP flags. > * @order: Power of two of number of pages to allocate. > * > @@ -2190,7 +2190,7 @@ EXPORT_SYMBOL(vma_alloc_folio); > * flags are used. > * Return: The page on success or NULL if allocation fails. 
> */ > -struct page *alloc_pages(gfp_t gfp, unsigned int order) > +struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order) > { > struct mempolicy *pol = &default_policy; > > @@ -2201,16 +2201,16 @@ struct page *alloc_pages(gfp_t gfp, unsigned int order) > if (!in_interrupt() && !(gfp & __GFP_THISNODE)) > pol = get_task_policy(current); > > - return alloc_pages_mpol(gfp, order, > - pol, NO_INTERLEAVE_INDEX, numa_node_id()); > + return alloc_pages_mpol_noprof(gfp, order, pol, NO_INTERLEAVE_INDEX, > + numa_node_id()); > } > -EXPORT_SYMBOL(alloc_pages); > +EXPORT_SYMBOL(alloc_pages_noprof); > > -struct folio *folio_alloc(gfp_t gfp, unsigned int order) > +struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order) > { > - return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order)); > + return page_rmappable_folio(alloc_pages_noprof(gfp | __GFP_COMP, order)); > } > -EXPORT_SYMBOL(folio_alloc); > +EXPORT_SYMBOL(folio_alloc_noprof); > > static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp, > struct mempolicy *pol, unsigned long nr_pages, > @@ -2229,13 +2229,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp, > > for (i = 0; i < nodes; i++) { > if (delta) { > - nr_allocated = __alloc_pages_bulk(gfp, > + nr_allocated = alloc_pages_bulk_noprof(gfp, > interleave_nodes(pol), NULL, > nr_pages_per_node + 1, NULL, > page_array); > delta--; > } else { > - nr_allocated = __alloc_pages_bulk(gfp, > + nr_allocated = alloc_pages_bulk_noprof(gfp, > interleave_nodes(pol), NULL, > nr_pages_per_node, NULL, page_array); > } > @@ -2257,11 +2257,11 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid, > preferred_gfp = gfp | __GFP_NOWARN; > preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL); > > - nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes, > + nr_allocated = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes, > nr_pages, NULL, page_array); > > if (nr_allocated < nr_pages) > - nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL, > + nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL, > nr_pages - nr_allocated, NULL, > page_array + nr_allocated); > return nr_allocated; > @@ -2273,7 +2273,7 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid, > * It can accelerate memory allocation especially interleaving > * allocate memory. > */ > -unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, > +unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp, > unsigned long nr_pages, struct page **page_array) > { > struct mempolicy *pol = &default_policy; > @@ -2293,8 +2293,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, > > nid = numa_node_id(); > nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid); > - return __alloc_pages_bulk(gfp, nid, nodemask, > - nr_pages, NULL, page_array); > + return alloc_pages_bulk_noprof(gfp, nid, nodemask, > + nr_pages, NULL, page_array); > } > > int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst) > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index edb79a55a252..58c0e8b948a4 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -4380,7 +4380,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order, > * > * Returns the number of pages on the list or array. 
> */ > -unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, > +unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid, > nodemask_t *nodemask, int nr_pages, > struct list_head *page_list, > struct page **page_array) > @@ -4516,7 +4516,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, > pcp_trylock_finish(UP_flags); > > failed: > - page = __alloc_pages(gfp, 0, preferred_nid, nodemask); > + page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask); > if (page) { > if (page_list) > list_add(&page->lru, page_list); > @@ -4527,13 +4527,13 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, > > goto out; > } > -EXPORT_SYMBOL_GPL(__alloc_pages_bulk); > +EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof); > > /* > * This is the 'heart' of the zoned buddy allocator. > */ > -struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, > - nodemask_t *nodemask) > +struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order, > + int preferred_nid, nodemask_t *nodemask) > { > struct page *page; > unsigned int alloc_flags = ALLOC_WMARK_LOW; > @@ -4595,38 +4595,38 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, > > return page; > } > -EXPORT_SYMBOL(__alloc_pages); > +EXPORT_SYMBOL(__alloc_pages_noprof); > > -struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid, > +struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_nid, > nodemask_t *nodemask) > { > - struct page *page = __alloc_pages(gfp | __GFP_COMP, order, > + struct page *page = __alloc_pages_noprof(gfp | __GFP_COMP, order, > preferred_nid, nodemask); > return page_rmappable_folio(page); > } > -EXPORT_SYMBOL(__folio_alloc); > +EXPORT_SYMBOL(__folio_alloc_noprof); > > /* > * Common helper functions. Never use with __GFP_HIGHMEM because the returned > * address cannot represent highmem pages. Use alloc_pages and then kmap if > * you need to access high mem. > */ > -unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order) > +unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order) > { > struct page *page; > > - page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order); > + page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order); > if (!page) > return 0; > return (unsigned long) page_address(page); > } > -EXPORT_SYMBOL(__get_free_pages); > +EXPORT_SYMBOL(get_free_pages_noprof); > > -unsigned long get_zeroed_page(gfp_t gfp_mask) > +unsigned long get_zeroed_page_noprof(gfp_t gfp_mask) > { > - return __get_free_page(gfp_mask | __GFP_ZERO); > + return get_free_pages_noprof(gfp_mask | __GFP_ZERO, 0); > } > -EXPORT_SYMBOL(get_zeroed_page); > +EXPORT_SYMBOL(get_zeroed_page_noprof); > > /** > * __free_pages - Free pages allocated with alloc_pages(). > @@ -4818,7 +4818,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, > } > > /** > - * alloc_pages_exact - allocate an exact number physically-contiguous pages. > + * alloc_pages_exact_noprof - allocate an exact number physically-contiguous pages. > * @size: the number of bytes to allocate > * @gfp_mask: GFP flags for the allocation, must not contain __GFP_COMP > * > @@ -4832,7 +4832,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, > * > * Return: pointer to the allocated area or %NULL in case of error. 
> */ > -void *alloc_pages_exact(size_t size, gfp_t gfp_mask) > +void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask) > { > unsigned int order = get_order(size); > unsigned long addr; > @@ -4840,13 +4840,13 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask) > if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) > gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); > > - addr = __get_free_pages(gfp_mask, order); > + addr = get_free_pages_noprof(gfp_mask, order); > return make_alloc_exact(addr, order, size); > } > -EXPORT_SYMBOL(alloc_pages_exact); > +EXPORT_SYMBOL(alloc_pages_exact_noprof); > > /** > - * alloc_pages_exact_nid - allocate an exact number of physically-contiguous > + * alloc_pages_exact_nid_noprof - allocate an exact number of physically-contiguous > * pages on a node. > * @nid: the preferred node ID where memory should be allocated > * @size: the number of bytes to allocate > @@ -4857,7 +4857,7 @@ EXPORT_SYMBOL(alloc_pages_exact); > * > * Return: pointer to the allocated area or %NULL in case of error. > */ > -void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) > +void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mask) > { > unsigned int order = get_order(size); > struct page *p; > @@ -4865,7 +4865,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) > if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) > gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); > > - p = alloc_pages_node(nid, gfp_mask, order); > + p = alloc_pages_node_noprof(nid, gfp_mask, order); > if (!p) > return NULL; > return make_alloc_exact((unsigned long)page_address(p), order, size); > @@ -6283,7 +6283,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc, > } > > /** > - * alloc_contig_range() -- tries to allocate given range of pages > + * alloc_contig_range_noprof() -- tries to allocate given range of pages > * @start: start PFN to allocate > * @end: one-past-the-last PFN to allocate > * @migratetype: migratetype of the underlying pageblocks (either > @@ -6303,7 +6303,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc, > * pages which PFN is in [start, end) are allocated for the caller and > * need to be freed with free_contig_range(). 
> */ > -int alloc_contig_range(unsigned long start, unsigned long end, > +int alloc_contig_range_noprof(unsigned long start, unsigned long end, > unsigned migratetype, gfp_t gfp_mask) > { > unsigned long outer_start, outer_end; > @@ -6427,15 +6427,15 @@ int alloc_contig_range(unsigned long start, unsigned long end, > undo_isolate_page_range(start, end, migratetype); > return ret; > } > -EXPORT_SYMBOL(alloc_contig_range); > +EXPORT_SYMBOL(alloc_contig_range_noprof); > > static int __alloc_contig_pages(unsigned long start_pfn, > unsigned long nr_pages, gfp_t gfp_mask) > { > unsigned long end_pfn = start_pfn + nr_pages; > > - return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE, > - gfp_mask); > + return alloc_contig_range_noprof(start_pfn, end_pfn, MIGRATE_MOVABLE, > + gfp_mask); > } > > static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn, > @@ -6470,7 +6470,7 @@ static bool zone_spans_last_pfn(const struct zone *zone, > } > > /** > - * alloc_contig_pages() -- tries to find and allocate contiguous range of pages > + * alloc_contig_pages_noprof() -- tries to find and allocate contiguous range of pages > * @nr_pages: Number of contiguous pages to allocate > * @gfp_mask: GFP mask to limit search and used during compaction > * @nid: Target node > @@ -6490,8 +6490,8 @@ static bool zone_spans_last_pfn(const struct zone *zone, > * > * Return: pointer to contiguous pages on success, or NULL if not successful. > */ > -struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, > - int nid, nodemask_t *nodemask) > +struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask, > + int nid, nodemask_t *nodemask) > { > unsigned long ret, pfn, flags; > struct zonelist *zonelist; > -- > 2.43.0.687.g38aa6559b0-goog > -- Kees Cook
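
As a self-contained illustration of the point above about __alloc_size
surviving the alloc_hooks() indirection, here is a minimal userspace sketch.
It is an illustrative analogue, not code from the patch or the kernel tree:
my_alloc_noprof(), my_alloc(), and the plain-int "tag" are invented stand-ins
for the kernel's _noprof allocators, the wrapper macros, and the
DEFINE_ALLOC_TAG()/alloc_tag_restore() machinery.

#include <stdio.h>
#include <stdlib.h>

#define __alloc_size(x)	__attribute__((alloc_size(x)))

/* Stand-in for the per-task allocation-tag context the kernel tracks. */
static int current_tag;

/* The "real" allocator keeps its size attribute, like the _noprof variants. */
static void *my_alloc_noprof(size_t size) __alloc_size(1);
static void *my_alloc_noprof(size_t size)
{
	return malloc(size);
}

/* Statement-expression wrapper in the style of alloc_hooks(). */
#define alloc_hooks(_do_alloc)				\
({							\
	typeof(_do_alloc) _res;				\
	int _old_tag = current_tag;			\
							\
	current_tag = __LINE__;	/* "tag" the call site */ \
	_res = _do_alloc;				\
	current_tag = _old_tag;				\
	_res;						\
})

/* The public name becomes a macro, as the patch does for alloc_pages() etc. */
#define my_alloc(...)	alloc_hooks(my_alloc_noprof(__VA_ARGS__))

int main(void)
{
	char *p = my_alloc(16);

	/*
	 * The alloc_size attribute lives on my_alloc_noprof(), but the macro
	 * expands to a direct call to it, so the compiler's object-size
	 * tracking still sees the 16-byte bound here; built with -O2 on gcc
	 * or clang this typically prints 16.
	 */
	printf("object size seen by the compiler: %zu\n",
	       __builtin_object_size(p, 0));

	free(p);
	return 0;
}

Because the old allocator name expands to a direct call to the
attribute-carrying _noprof function, size-tracking builtins (which the
FORTIFY checks build on) keep seeing the allocation size at every call site.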