From: Johannes Weiner <hannes@cmpxchg.org>
To: Brendan Jackman <jackmanb@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
Mel Gorman <mgorman@techsingularity.net>,
Carlos Song <carlos.song@nxp.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kernel test robot <oliver.sang@intel.com>,
stable@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: page_alloc: speed up fallbacks in rmqueue_bulk()
Date: Tue, 8 Apr 2025 14:50:09 -0400
Message-ID: <20250408185009.GF816@cmpxchg.org>
In-Reply-To: <D91FIQHR9GEK.3VMV7CAKW1BFO@google.com>
On Tue, Apr 08, 2025 at 05:22:00PM +0000, Brendan Jackman wrote:
> On Mon Apr 7, 2025 at 6:01 PM UTC, Johannes Weiner wrote:
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2194,11 +2194,11 @@ try_to_claim_block(struct zone *zone, struct page *page,
> > * The use of signed ints for order and current_order is a deliberate
> > * deviation from the rest of this file, to make the for loop
> > * condition simpler.
> > - *
> > - * Return the stolen page, or NULL if none can be found.
> > */
>
> This commentary is pretty confusing now: there's a block of text that
> kinda vaguely applies to the aggregate of __rmqueue_steal(),
> __rmqueue_fallback() and half of __rmqueue(). I think this new code does
> a better job of speaking for itself so I think we should just delete
> this block comment and replace it with some more verbosity elsewhere.
I'm glad you think so, let's remove it then!
> > +/* Try to claim a whole foreign block, take a page, expand the remainder */
>
> Also on the commentary front, I am not a fan of "foreign" and "native":
>
> - "Foreign" is already used in this file to mean NUMA-nonlocal.
>
> - We already have "start" and "fallback" being used in identifiers
> as adjectives to describe the migratetype concept.
>
> I wouldn't say those are _better_; "native" and "foreign" might be
> clearer, but it's not worth introducing inconsistency IMO.
That's a fair point, no objection to renaming them.
> > static __always_inline struct page *
> > -__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> > +__rmqueue_claim(struct zone *zone, int order, int start_migratetype,
> > unsigned int alloc_flags)
> > {
> > struct free_area *area;
>
> [pasting in more context that wasn't in the original diff..]
> > /*
> > * Find the largest available free page in the other list. This roughly
> > * approximates finding the pageblock with the most free pages, which
> > * would be too costly to do exactly.
> > */
> > for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
> > --current_order) {
>
> IIUC we could go one step further here and also avoid repeating this
> iteration? Maybe something for a separate patch though?
That might be worth a test, but I agree this should be a separate patch.
AFAICS, in the most common configurations MAX_PAGE_ORDER is only one
step above pageblock_order or even the same. It might not be worth the
complication.
> Anyway, the approach seems like a clear improvement, thanks. I will need
> to take a closer look at it tomorrow; I've run out of brain juice today.
Much appreciated, thanks for taking a look.
> Here's what I got from redistributing the block comment and flipping
> the terminology:
>
> diff --git i/mm/page_alloc.c w/mm/page_alloc.c
> index dfb2b3f508af..b8142d605691 100644
> --- i/mm/page_alloc.c
> +++ w/mm/page_alloc.c
> @@ -2183,21 +2183,13 @@ try_to_claim_block(struct zone *zone, struct page *page,
> }
>
> /*
> - * Try finding a free buddy page on the fallback list.
> - *
> - * This will attempt to claim a whole pageblock for the requested type
> - * to ensure grouping of such requests in the future.
> - *
> - * If a whole block cannot be claimed, steal an individual page, regressing to
> - * __rmqueue_smallest() logic to at least break up as little contiguity as
> - * possible.
> + * Try to allocate from some fallback migratetype by claiming the entire block,
> + * i.e. converting it to the allocation's start migratetype.
> *
> * The use of signed ints for order and current_order is a deliberate
> * deviation from the rest of this file, to make the for loop
> * condition simpler.
> */
> -
> -/* Try to claim a whole foreign block, take a page, expand the remainder */
> static __always_inline struct page *
> __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
> unsigned int alloc_flags)
> @@ -2247,7 +2239,10 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
> return NULL;
> }
>
> -/* Try to steal a single page from a foreign block */
> +/*
> + * Try to steal a single page from some fallback migratetype. Leave the rest of
> + * the block as its current migratetype, potentially causing fragmentation.
> + */
> static __always_inline struct page *
> __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
> {
> @@ -2307,7 +2302,9 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
> }
>
> /*
> - * Try the different freelists, native then foreign.
> + * First try the freelists of the requested migratetype, then try
> + * fallbacks. Roughly, each fallback stage poses more of a fragmentation
> + * risk.
How about "then try fallback modes with increasing levels of
fragmentation risk."
> * The fallback logic is expensive and rmqueue_bulk() calls in
> * a loop with the zone->lock held, meaning the freelists are
> @@ -2332,7 +2329,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
> case RMQUEUE_CLAIM:
> page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
> if (page) {
> - /* Replenished native freelist, back to normal mode */
> + /* Replenished requested migratetype's freelist, back to normal mode */
> *mode = RMQUEUE_NORMAL;
This line is kind of long now. How about:
/* Replenished preferred freelist, back to normal mode */
But yeah, I like your proposed changes. Would you care to send a
proper patch?
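For the archives, here's roughly the shape __rmqueue() takes with your
comments folded in. This is a simplified sketch of the mode progression
only (CMA handling elided), not the literal kernel code:

	/* Simplified sketch; CMA handling elided. */
	enum rmqueue_mode {
		RMQUEUE_NORMAL,	/* serve the requested migratetype's freelists */
		RMQUEUE_CLAIM,	/* claim whole blocks from fallback migratetypes */
		RMQUEUE_STEAL,	/* steal single pages, risking fragmentation */
	};

	static __always_inline struct page *
	__rmqueue(struct zone *zone, unsigned int order, int migratetype,
		  unsigned int alloc_flags, enum rmqueue_mode *mode)
	{
		struct page *page;

		switch (*mode) {
		case RMQUEUE_NORMAL:
			page = __rmqueue_smallest(zone, order, migratetype);
			if (page)
				return page;
			fallthrough;
		case RMQUEUE_CLAIM:
			page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
			if (page) {
				/* Replenished preferred freelist, back to normal mode */
				*mode = RMQUEUE_NORMAL;
				return page;
			}
			fallthrough;
		case RMQUEUE_STEAL:
			if (!(alloc_flags & ALLOC_NOFRAGMENT)) {
				page = __rmqueue_steal(zone, order, migratetype);
				if (page) {
					/* Stay in steal mode for the rest of the fill */
					*mode = RMQUEUE_STEAL;
					return page;
				}
			}
		}
		return NULL;
	}

Once a bulk fill drops to a riskier mode it stays there for the rest of
the loop, which is the point of the patch: the expensive fallback scans
don't get repeated for every page while zone->lock is held.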
Acked-by: Johannes Weiner <hannes@cmpxchg.org>