From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: bpf@vger.kernel.org, linux-mm@kvack.org
Cc: vbabka@suse.cz, harry.yoo@oracle.com, shakeel.butt@linux.dev, mhocko@suse.com,
    bigeasy@linutronix.de, andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
    peterz@infradead.org, rostedt@goodmis.org, hannes@cmpxchg.org
Subject: [PATCH] mm: Rename try_alloc_pages() to alloc_pages_nolock()
Date: Fri, 16 May 2025 17:34:46 -0700
Message-Id: <20250517003446.60260-1-alexei.starovoitov@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Alexei Starovoitov

The "try_" prefix is confusing, since it made people believe that
try_alloc_pages() is analogous to spin_trylock() and that a NULL return
means EAGAIN. This is not the case. If it returns NULL, there is no
reason to call it again: it will most likely return NULL again. Hence
rename it to alloc_pages_nolock() to make it symmetrical to
free_pages_nolock() and document that NULL means ENOMEM.

Acked-by: Vlastimil Babka
Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h  |  8 ++++----
 kernel/bpf/syscall.c |  2 +-
 mm/page_alloc.c      | 15 ++++++++-------
 mm/page_owner.c      |  2 +-
 4 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c9fa6309c903..be160e8d8bcb 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -45,13 +45,13 @@ static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
	 * !__GFP_DIRECT_RECLAIM -> direct claim is not allowed.
	 * !__GFP_KSWAPD_RECLAIM -> it's not safe to wake up kswapd.
	 * All GFP_* flags including GFP_NOWAIT use one or both flags.
-	 * try_alloc_pages() is the only API that doesn't specify either flag.
+	 * alloc_pages_nolock() is the only API that doesn't specify either flag.
	 *
	 * This is stronger than GFP_NOWAIT or GFP_ATOMIC because
	 * those are guaranteed to never block on a sleeping lock.
	 * Here we are enforcing that the allocation doesn't ever spin
	 * on any locks (i.e. only trylocks). There is no high level
-	 * GFP_$FOO flag for this use in try_alloc_pages() as the
+	 * GFP_$FOO flag for this use in alloc_pages_nolock() as the
	 * regular page allocator doesn't fully support this
	 * allocation mode.
	 */
@@ -354,8 +354,8 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))

-struct page *try_alloc_pages_noprof(int nid, unsigned int order);
-#define try_alloc_pages(...)		alloc_hooks(try_alloc_pages_noprof(__VA_ARGS__))
+struct page *alloc_pages_nolock_noprof(int nid, unsigned int order);
+#define alloc_pages_nolock(...)		alloc_hooks(alloc_pages_nolock_noprof(__VA_ARGS__))

 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 64c3393e8270..9cdb4f22640f 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -578,7 +578,7 @@ static bool can_alloc_pages(void)
 static struct page *__bpf_alloc_page(int nid)
 {
	if (!can_alloc_pages())
-		return try_alloc_pages(nid, 0);
+		return alloc_pages_nolock(nid, 0);

	return alloc_pages_node(nid,
				GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c77592b22256..b89c64a4245b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5078,7 +5078,7 @@ EXPORT_SYMBOL(__free_pages);

 /*
  * Can be called while holding raw_spin_lock or from IRQ and NMI for any
- * page type (not only those that came from try_alloc_pages)
+ * page type (not only those that came from alloc_pages_nolock)
  */
 void free_pages_nolock(struct page *page, unsigned int order)
 {
@@ -7335,20 +7335,21 @@ static bool __free_unaccepted(struct page *page)
 #endif /* CONFIG_UNACCEPTED_MEMORY */

 /**
- * try_alloc_pages - opportunistic reentrant allocation from any context
+ * alloc_pages_nolock - opportunistic reentrant allocation from any context
  * @nid: node to allocate from
  * @order: allocation order size
  *
  * Allocates pages of a given order from the given node. This is safe to
  * call from any context (from atomic, NMI, and also reentrant
- * allocator -> tracepoint -> try_alloc_pages_noprof).
+ * allocator -> tracepoint -> alloc_pages_nolock_noprof).
  * Allocation is best effort and to be expected to fail easily so nobody should
  * rely on the success. Failures are not reported via warn_alloc().
  * See always fail conditions below.
  *
- * Return: allocated page or NULL on failure.
+ * Return: allocated page or NULL on failure. NULL does not mean EBUSY or EAGAIN.
+ * It means ENOMEM. There is no reason to call it again and expect !NULL.
  */
-struct page *try_alloc_pages_noprof(int nid, unsigned int order)
+struct page *alloc_pages_nolock_noprof(int nid, unsigned int order)
 {
	/*
	 * Do not specify __GFP_DIRECT_RECLAIM, since direct claim is not allowed.
@@ -7357,7 +7358,7 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
	 *
	 * These two are the conditions for gfpflags_allow_spinning() being true.
	 *
-	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
+	 * Specify __GFP_NOWARN since failing alloc_pages_nolock() is not a reason
	 * to warn. Also warn would trigger printk() which is unsafe from
	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
	 * since the running context is unknown.
@@ -7367,7 +7368,7 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
	 * BPF use cases.
	 *
	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
-	 * specify it here to highlight that try_alloc_pages()
+	 * specify it here to highlight that alloc_pages_nolock()
	 * doesn't want to deplete reserves.
	 */
	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC
diff --git a/mm/page_owner.c b/mm/page_owner.c
index cc4a6916eec6..9928c9ac8c31 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -302,7 +302,7 @@ void __reset_page_owner(struct page *page, unsigned short order)
	/*
	 * Do not specify GFP_NOWAIT to make gfpflags_allow_spinning() == false
	 * to prevent issues in stack_depot_save().
-	 * This is similar to try_alloc_pages() gfp flags, but only used
+	 * This is similar to alloc_pages_nolock() gfp flags, but only used
	 * to signal stack_depot to avoid spin_locks.
	 */
	handle = save_stack(__GFP_NOWARN);
-- 
2.47.1
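
[Editor's note, not part of the patch: a minimal caller sketch illustrating
the semantics documented above. The helper name demo_alloc and the
can_sleep_or_spin flag are hypothetical; only alloc_pages_nolock() and
alloc_pages_node() are real APIs touched or used by this patch. The point
is that a NULL return from alloc_pages_nolock() is treated as ENOMEM and
is not retried, mirroring the __bpf_alloc_page() pattern in the diff.]

#include <linux/gfp.h>

static struct page *demo_alloc(int nid, bool can_sleep_or_spin)
{
	struct page *page;

	if (!can_sleep_or_spin) {
		/* Safe from any context, including NMI; never warns. */
		page = alloc_pages_nolock(nid, 0);
		/*
		 * No retry loop: NULL means ENOMEM, and calling again
		 * would most likely return NULL again.
		 */
		return page;
	}

	/* Normal context: use the regular allocator with GFP flags. */
	return alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO, 0);
}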