From: Brendan Jackman <jackmanb@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
Mel Gorman <mgorman@techsingularity.net>,
Carlos Song <carlos.song@nxp.com>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>,
kernel test robot <oliver.sang@intel.com>,
<stable@vger.kernel.org>
Subject: Re: [PATCH 1/2] mm: page_alloc: speed up fallbacks in rmqueue_bulk()
Date: Wed, 09 Apr 2025 17:30:48 +0000
Message-ID: <D92AC0P9594X.3BML64MUKTF8Z@google.com>
In-Reply-To: <20250408185009.GF816@cmpxchg.org>
On Tue Apr 8, 2025 at 6:50 PM UTC, Johannes Weiner wrote:
>> > /*
>> > * Find the largest available free page in the other list. This roughly
>> > * approximates finding the pageblock with the most free pages, which
>> > * would be too costly to do exactly.
>> > */
>> > for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
>> > --current_order) {
>>
>> IIUC we could go one step further here and also avoid repeating this
>> iteration? Maybe something for a separate patch though?
>
> That might be worth a test, but agree this should be a separate patch.
>
> AFAICS, in the most common configurations MAX_PAGE_ORDER is only one
> step above pageblock_order or even the same. It might not be worth the
> complication.
Oh yeah, makes sense.
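To put numbers on it: on a typical x86_64 config (assuming MAX_PAGE_ORDER
== 10 and pageblock_order == 9, both config-dependent, and assuming
min_order works out to pageblock_order here since we only want whole
blocks), the scan is at most two iterations, so there's barely any
repetition to skip. Illustrative sketch only, not the actual patch:

	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
	     --current_order) {
		/* with the values assumed above, current_order is
		 * only ever 10, then 9 */
	}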
>> /*
>> - * Try the different freelists, native then foreign.
>> + * First try the freelists of the requested migratetype, then try
>> + * fallbacks. Roughly, each fallback stage poses more of a fragmentation
>> + * risk.
>
> How about "then try fallback modes with increasing levels of
> fragmentation risk."
Yep, nice thanks.
>> * The fallback logic is expensive and rmqueue_bulk() calls in
>> * a loop with the zone->lock held, meaning the freelists are
>> @@ -2332,7 +2329,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>> case RMQUEUE_CLAIM:
>> page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
>> if (page) {
>> - /* Replenished native freelist, back to normal mode */
>> + /* Replenished requested migratetype's freelist, back to normal mode */
>> *mode = RMQUEUE_NORMAL;
>
> This line is kind of long now. How about:
>
> /* Replenished preferred freelist, back to normal mode */
Yep, sounds good - though it's still 81 characters, and the rest of this
file sticks to 80 for comments. I guess I'll leave it to Andrew to decide
if that's an issue?
> But yeah, I like your proposed changes. Would you care to send a
> proper patch?
Sure, pasting below. Andrew, could you fold this in? Also, I haven't
done this style of patch sending before, so please let me know if I'm doing
something to make your life difficult.
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Aside from the comment stuff fixed by the patch below:
Reviewed-by: Brendan Jackman <jackmanb@google.com>
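For reference, here's the mental model of the fallback progression that
the comment updates below try to capture (paraphrased, not the literal
kernel code - the real enum also has a CMA mode, which I'm eliding):

	enum rmqueue_mode {
		RMQUEUE_NORMAL,	/* freelists of the requested migratetype */
		RMQUEUE_CLAIM,	/* claim a whole fallback block, converting
				 * it to the requested migratetype */
		RMQUEUE_STEAL,	/* take single pages out of fallback blocks,
				 * leaving their migratetype alone - highest
				 * fragmentation risk, last resort */
	};

rmqueue_bulk() carries the mode across its loop, so once the preferred
freelists come up empty we go straight to the fallback stage on subsequent
calls, and a successful claim flips us back to RMQUEUE_NORMAL.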
---
From 8ff20dbb52770d082e182482d2b47e521de028d1 Mon Sep 17 00:00:00 2001
From: Brendan Jackman <jackmanb@google.com>
Date: Wed, 9 Apr 2025 17:22:14 +0000
Subject: [PATCH] page_alloc: speed up fallbacks in rmqueue_bulk() - comment updates
Tidy up some terminology and redistribute commentary.
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
mm/page_alloc.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dfb2b3f508af4..220bd0bcc38c3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2183,21 +2183,13 @@ try_to_claim_block(struct zone *zone, struct page *page,
}
/*
- * Try finding a free buddy page on the fallback list.
- *
- * This will attempt to claim a whole pageblock for the requested type
- * to ensure grouping of such requests in the future.
- *
- * If a whole block cannot be claimed, steal an individual page, regressing to
- * __rmqueue_smallest() logic to at least break up as little contiguity as
- * possible.
+ * Try to allocate from some fallback migratetype by claiming the entire block,
+ * i.e. converting it to the allocation's start migratetype.
*
* The use of signed ints for order and current_order is a deliberate
* deviation from the rest of this file, to make the for loop
* condition simpler.
*/
-
-/* Try to claim a whole foreign block, take a page, expand the remainder */
static __always_inline struct page *
__rmqueue_claim(struct zone *zone, int order, int start_migratetype,
unsigned int alloc_flags)
@@ -2247,7 +2239,10 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
return NULL;
}
-/* Try to steal a single page from a foreign block */
+/*
+ * Try to steal a single page from some fallback migratetype. Leave the rest of
+ * the block as its current migratetype, potentially causing fragmentation.
+ */
static __always_inline struct page *
__rmqueue_steal(struct zone *zone, int order, int start_migratetype)
{
@@ -2307,7 +2302,8 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
}
/*
- * Try the different freelists, native then foreign.
+ * First try the freelists of the requested migratetype, then try
+ * fallback modes with increasing levels of fragmentation risk.
*
* The fallback logic is expensive and rmqueue_bulk() calls in
* a loop with the zone->lock held, meaning the freelists are
@@ -2332,7 +2328,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
case RMQUEUE_CLAIM:
page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
if (page) {
- /* Replenished native freelist, back to normal mode */
+ /* Replenished preferred freelist, back to normal mode. */
*mode = RMQUEUE_NORMAL;
return page;
}
base-commit: aa42382db4e2a4ed1f4ba97ffc50e2ce45accb0c
--
2.49.0.504.g3bcea36a83-goog