From: Johannes Weiner <hannes@cmpxchg.org>
To: Qiliang Yuan <realwujing@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Yuanchu Xie <yuanchu@google.com>, Wei Xu <weixugc@google.com>,
Brendan Jackman <jackmanb@google.com>, Zi Yan <ziy@nvidia.com>,
Lance Yang <lance.yang@linux.dev>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v9] mm/page_alloc: boost watermarks on atomic allocation failure
Date: Fri, 13 Feb 2026 14:36:19 -0500
Message-ID: <aY99MytyXnECJuel@cmpxchg.org>
In-Reply-To: <20260213-wujing-mm-page_alloc-v8-v9-1-cd99f3a6cb70@gmail.com>
On Fri, Feb 13, 2026 at 11:17:59AM +0800, Qiliang Yuan wrote:
> Atomic allocations (GFP_ATOMIC) are prone to failure under heavy memory
> pressure as they cannot enter direct reclaim. This patch introduces a
> watermark boost mechanism to mitigate this issue.
>
> When a GFP_ATOMIC request enters the slowpath, the preferred zone's
> watermark_boost is increased under zone->lock protection. This triggers
> kswapd to proactively reclaim memory, creating a safety buffer for
> future atomic allocations. A 1-second debounce timer prevents excessive
> boosts during traffic bursts.
>
> This approach reuses existing watermark_boost infrastructure with
> minimal overhead and proper locking to ensure thread safety.
>
> Allocation failure logs:
> [38535644.718700] node 0: slabs: 1031, objs: 43328, free: 0
> [38535644.725059] node 1: slabs: 339, objs: 17616, free: 317
> [38535645.428345] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
> [38535645.436888] cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
> [38535645.447664] node 0: slabs: 940, objs: 40864, free: 144
> [38535645.454026] node 1: slabs: 322, objs: 19168, free: 383
> [38535645.556122] SLUB: Unable to allocate memory on node -1, gfp=0x480020(GFP_ATOMIC)
> [38535645.564576] cache: skbuff_head_cache, object size: 232, buffer size: 256, default order: 2, min order: 0
> [38535649.655523] warn_alloc: 59 callbacks suppressed
> [38535649.655527] swapper/100: page allocation failure: order:0, mode:0x480020(GFP_ATOMIC), nodemask=(null)
> [38535649.671692] swapper/100 cpuset=/ mems_allowed=0-1
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Qiliang Yuan <realwujing@gmail.com>
> ---
> v9:
> - Use mult_frac() for boost calculation. (SJ)
> - Add !can_direct_reclaim check. (Vlastimil)
> - Code cleanup: naming, scope, and line limits. (SJ)
> - Update tags: Add Vlastimil's Acked-by.
>
> v8:
> - Use spin_lock_irqsave() to prevent inconsistent lock state.
>
> v7:
> - Use local variable for boost_amount.
> - Add zone->lock protection.
> - Add lockdep assertion.
>
> v6:
> - Use ATOMIC_BOOST_SCALE_SHIFT define.
> - Add documentation for 0.1% rationale.
>
> v5:
> - Use native boost_watermark().
>
> v4:
> - Add watermark_scale_boost and gradual decay.
>
> v3:
> - Per-zone debounce timer.
>
> v2:
> - Debounce logic and zone-proportional boosting.
>
> v1:
> - Initial version.
> ---
> Link to v8: https://lore.kernel.org/r/20260212-wujing-mm-page_alloc-v8-v8-1-daba38990cd3@gmail.com
> ---
> include/linux/mmzone.h | 1 +
> mm/page_alloc.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 48 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 75ef7c9f9307..8e37e4e6765b 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -882,6 +882,7 @@ struct zone {
> /* zone watermarks, access with *_wmark_pages(zone) macros */
> unsigned long _watermark[NR_WMARK];
> unsigned long watermark_boost;
> + unsigned long last_boost_jiffies;
>
> unsigned long nr_reserved_highatomic;
> unsigned long nr_free_highatomic;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c380f063e8b7..8af88584a8bd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -218,6 +218,13 @@ unsigned int pageblock_order __read_mostly;
> static void __free_pages_ok(struct page *page, unsigned int order,
> fpi_t fpi_flags);
>
> +/*
> + * Boost watermarks by ~0.1% of zone size on atomic allocation pressure.
> + * This provides zone-proportional safety buffers: ~1MB per 1GB of zone size.
> + * Larger zones under GFP_ATOMIC pressure need proportionally larger reserves.
> + */
> +#define ATOMIC_BOOST_FACTOR 1
> +
> /*
> * results with 256, 32 in the lowmem_reserve sysctl:
> * 1G machine -> (16M dma, 800M-16M normal, 1G-800M high)
> @@ -2161,6 +2168,9 @@ bool pageblock_unisolate_and_move_free_pages(struct zone *zone, struct page *pag
> static inline bool boost_watermark(struct zone *zone)
> {
> unsigned long max_boost;
> + unsigned long boost_amount;
> +
> + lockdep_assert_held(&zone->lock);
>
> if (!watermark_boost_factor)
> return false;
watermark_boost_factor is for fragmentation management. It's valid to
have this set to 0 and still want boosting for atomic.
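E.g. keep the knob scoped to the fragmentation path only, along these
lines (sketch; the 'atomic' parameter is made up for illustration):

	/* 'atomic' is a hypothetical caller flag; watermark_boost_factor
	 * only tunes fragmentation-driven boosting */
	if (!atomic && !watermark_boost_factor)
		return false;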
> @@ -2189,12 +2199,43 @@ static inline bool boost_watermark(struct zone *zone)
>
> max_boost = max(pageblock_nr_pages, max_boost);
>
> - zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
> - max_boost);
> + boost_amount = max(pageblock_nr_pages,
> + mult_frac(zone_managed_pages(zone), ATOMIC_BOOST_FACTOR, 1000));
> + zone->watermark_boost = min(zone->watermark_boost + boost_amount,
> + max_boost);
Likewise, you don't want to add the atomic boost every time there is a
fragmentation event. You need to separate these paths.
The mult_frac() with constants seems a bit funny to me. Just do
zone_managed_pages(zone) / 1000, drop the define, and move the comment
here.
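I.e. something like (untested sketch):

	/*
	 * Boost by ~0.1% of the zone (~1MB per 1GB of zone size) so that
	 * larger zones get proportionally larger atomic safety buffers.
	 */
	boost_amount = max(pageblock_nr_pages,
			   zone_managed_pages(zone) / 1000);
	zone->watermark_boost = min(zone->watermark_boost + boost_amount,
				    max_boost);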
> +static void boost_zone_for_atomic(struct alloc_context *ac, gfp_t gfp_mask)
> +{
> + struct zoneref *z;
> + struct zone *zone;
> + unsigned long now = jiffies;
> +
> + for_each_zone_zonelist(zone, z, ac->zonelist, ac->highest_zoneidx) {
for_each_zone_zonelist_nodemask() with ac->nodemask?
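I.e. (sketch, loop body unchanged):

	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->highest_zoneidx, ac->nodemask) {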
> + /* Rate-limit boosts to once per second per zone */
> + if (time_after(now, zone->last_boost_jiffies + HZ)) {
> + unsigned long flags;
> + bool should_wake;
> +
> + zone->last_boost_jiffies = now;
> +
> + /* Modify watermark under lock, wake kswapd outside */
> + spin_lock_irqsave(&zone->lock, flags);
> + should_wake = boost_watermark(zone);
> + spin_unlock_irqrestore(&zone->lock, flags);
> +
> + if (should_wake)
> + wakeup_kswapd(zone, gfp_mask, 0,
> + ac->highest_zoneidx);
> +
> + /* Boost only the preferred zone */
> + break;
> + }
> + }
This is a bit strange to me. By the time you boost, all eligible zones
have been tried, and ALL their reserves were found to be inadequate
for the incoming atomic requests. They all *should* be boosted.
By doing them one by one, you risk additional failures even though you
already KNOW at this point that these other zones are problematic too.
So IMO, by the time you reach here, they should all be boosted.
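A sketch of that (untested; keeps the patch's naming and locking, drops
the break, and folds in the nodemask iterator from above):

	static void boost_zone_for_atomic(struct alloc_context *ac, gfp_t gfp_mask)
	{
		struct zoneref *z;
		struct zone *zone;
		unsigned long now = jiffies;

		for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
						ac->highest_zoneidx, ac->nodemask) {
			unsigned long flags;
			bool should_wake;

			/* Rate-limit boosts to once per second per zone */
			if (!time_after(now, zone->last_boost_jiffies + HZ))
				continue;

			zone->last_boost_jiffies = now;

			/* Modify watermark under lock, wake kswapd outside */
			spin_lock_irqsave(&zone->lock, flags);
			should_wake = boost_watermark(zone);
			spin_unlock_irqrestore(&zone->lock, flags);

			if (should_wake)
				wakeup_kswapd(zone, gfp_mask, 0,
					      ac->highest_zoneidx);
		}
	}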
> @@ -4742,6 +4783,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> if (page)
> goto got_pg;
>
> + /* Boost watermarks for atomic requests entering slowpath */
> + if ((gfp_mask & GFP_ATOMIC) && order == 0 && !can_direct_reclaim)
This is a bit weird. GFP_ATOMIC is a mask, so this check will trigger
on anything that has __GFP_KSWAPD_RECLAIM set (which is most
allocations), and you then have to filter out direct reclaim again,
even though the real GFP_ATOMIC already excludes it.
if (gfp_has_flags(gfp_mask, GFP_ATOMIC))
> + boost_zone_for_atomic(ac, gfp_mask);
> +
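The slowpath check would then read something like (sketch; assumes the
gfp_has_flags() predicate named above and keeps the order-0 restriction
from the patch):

	/* Boost watermarks for atomic requests entering the slowpath;
	 * gfp_has_flags() as suggested above */
	if (gfp_has_flags(gfp_mask, GFP_ATOMIC) && order == 0)
		boost_zone_for_atomic(ac, gfp_mask);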