* [PATCH mm-unstable v3 0/3] tweaks for __alloc_pages_slowpath()
@ 2026-01-06 11:52 Vlastimil Babka
2026-01-06 11:52 ` [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result Vlastimil Babka
0 siblings, 3 replies; 7+ messages in thread
From: Vlastimil Babka @ 2026-01-06 11:52 UTC (permalink / raw)
To: Andrew Morton, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato
Cc: linux-mm, linux-kernel, Vlastimil Babka
First non-RFC of the page allocator cleanups on top of the THP
allocation fix:
https://lore.kernel.org/all/20251219-costly-noretry-thisnode-fix-v1-1-e1085a4a0c34@suse.cz/
Rebased to mm-unstable, where the fix is now included.
In patch 1 (which was patch 2 in v1) I had proposed not to attempt
reclaim for costly __GFP_NORETRY allocations at all. Johannes suggested
that we should not change the semantics that much, so now we instead
allow a single reclaim attempt for all of them (except with
__GFP_THISNODE). The main idea that remains is that we no longer decide
based on the exact compaction result.
Patch 2 is a cleanup, also based on a suggestion from Johannes, and
Patch 3 became the obvious next step in that direction. These should
make no functional changes while simplifying __alloc_pages_slowpath().
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes in v3:
- Remove RFC tag, rebase to mm-unstable.
- Remove gfp_thisnode_noretry() in Patch 2 (per Joshua).
- Instead add gfp_has_flags() to shorten checking for multiple flags.
- Link to v2: https://patch.msgid.link/20251219-thp-thisnode-tweak-v2-0-0c01f231fd1c@suse.cz
Changes in v2:
- actual THP reclaim fix sent separately
- allow one reclaim attempt for __GFP_NORETRY allocations
- further __alloc_pages_slowpath() cleanups
- Link to v1: https://patch.msgid.link/20251216-thp-thisnode-tweak-v1-0-0e499d13d2eb@suse.cz
---
Vlastimil Babka (3):
mm/page_alloc: ignore the exact initial compaction result
mm/page_alloc: refactor the initial compaction handling
mm/page_alloc: simplify __alloc_pages_slowpath() flow
include/linux/gfp.h | 8 ++-
mm/page_alloc.c | 163 +++++++++++++++++++++++-----------------------------
2 files changed, 78 insertions(+), 93 deletions(-)
---
base-commit: 186f32b9f92ad7ef6bb90d1d0e9692665cfbb69b
change-id: 20251216-thp-thisnode-tweak-c9c2acb3a627
Best regards,
--
Vlastimil Babka <vbabka@suse.cz>
* [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result
2026-01-06 11:52 [PATCH mm-unstable v3 0/3] tweaks for __alloc_pages_slowpath() Vlastimil Babka
@ 2026-01-06 11:52 ` Vlastimil Babka
2026-01-06 13:51 ` Michal Hocko
2026-01-06 11:52 ` [PATCH mm-unstable v3 2/3] mm/page_alloc: refactor the initial compaction handling Vlastimil Babka
2026-01-06 11:52 ` [PATCH mm-unstable v3 3/3] mm/page_alloc: simplify __alloc_pages_slowpath() flow Vlastimil Babka
2 siblings, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2026-01-06 11:52 UTC (permalink / raw)
To: Andrew Morton, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato
Cc: linux-mm, linux-kernel, Vlastimil Babka
For allocations that are of costly order and __GFP_NORETRY (and can
perform compaction) we attempt direct compaction first. If that fails,
we continue with a single round of direct reclaim+compaction (as for
other __GFP_NORETRY allocations, except the compaction is of lower
priority), with two exceptions that fail immediately:
- __GFP_THISNODE is specified, to prevent zone_reclaim_mode-like
behavior for e.g. THP page faults
- compaction failed because it was deferred (i.e. has been failing
recently so further attempts are not done for a while) or skipped,
which means there are insufficient free base pages to defragment to
begin with
Upon closer inspection, the reasoning behind the second condition is
somewhat flawed. If there are not enough base pages and reclaim could
create them, we instead fail. When there are enough base pages and
compaction has already run and failed, we proceed and hope that reclaim
and the subsequent compaction attempt will succeed. But it's unclear
why they should, and whether it will be as inexpensive as intended.
It might therefore make more sense to just fail unconditionally after
the initial compaction attempt. However, that would change the
semantics of __GFP_NORETRY, which is to attempt reclaim at least once.
Alternatively, we can remove the compaction result checks and proceed
with the single reclaim and (lower-priority) compaction attempt,
leaving only the __GFP_THISNODE exception for failing immediately.
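To make the change concrete, the costly __GFP_NORETRY handling after the
initial compaction attempt reduces to roughly the following. This is an
illustrative condensation, not the literal code; all identifiers are
taken from the hunk below:

  if (costly_order && (gfp_mask & __GFP_NORETRY)) {
          /*
           * Previously we would also fail right here when the
           * initial compaction was skipped (too few free base
           * pages) or deferred (recently failing at this order):
           *
           *   if (compact_result == COMPACT_SKIPPED ||
           *       compact_result == COMPACT_DEFERRED)
           *           goto nopage;
           */

          /* Kept: don't put reclaim pressure on a single node. */
          if (gfp_mask & __GFP_THISNODE)
                  goto nopage;

          /* One round of reclaim + lower-priority async compaction. */
          compact_priority = INIT_COMPACT_PRIORITY;
  }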
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/page_alloc.c | 34 ++++++----------------------------
1 file changed, 6 insertions(+), 28 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac8a12076b00..b06b1cb01e0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4805,44 +4805,22 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
* includes some THP page fault allocations
*/
if (costly_order && (gfp_mask & __GFP_NORETRY)) {
- /*
- * If allocating entire pageblock(s) and compaction
- * failed because all zones are below low watermarks
- * or is prohibited because it recently failed at this
- * order, fail immediately unless the allocator has
- * requested compaction and reclaim retry.
- *
- * Reclaim is
- * - potentially very expensive because zones are far
- * below their low watermarks or this is part of very
- * bursty high order allocations,
- * - not guaranteed to help because isolate_freepages()
- * may not iterate over freed pages as part of its
- * linear scan, and
- * - unlikely to make entire pageblocks free on its
- * own.
- */
- if (compact_result == COMPACT_SKIPPED ||
- compact_result == COMPACT_DEFERRED)
- goto nopage;
-
/*
* THP page faults may attempt local node only first,
* but are then allowed to only compact, not reclaim,
* see alloc_pages_mpol().
*
- * Compaction can fail for other reasons than those
- * checked above and we don't want such THP allocations
- * to put reclaim pressure on a single node in a
- * situation where other nodes might have plenty of
- * available memory.
+ * Compaction has failed above and we don't want such
+ * THP allocations to put reclaim pressure on a single
+ * node in a situation where other nodes might have
+ * plenty of available memory.
*/
if (gfp_mask & __GFP_THISNODE)
goto nopage;
/*
- * Looks like reclaim/compaction is worth trying, but
- * sync compaction could be very expensive, so keep
+ * Proceed with single round of reclaim/compaction, but
+ * since sync compaction could be very expensive, keep
* using async compaction.
*/
compact_priority = INIT_COMPACT_PRIORITY;
--
2.52.0
* [PATCH mm-unstable v3 2/3] mm/page_alloc: refactor the initial compaction handling
2026-01-06 11:52 [PATCH mm-unstable v3 0/3] tweaks for __alloc_pages_slowpath() Vlastimil Babka
2026-01-06 11:52 ` [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result Vlastimil Babka
@ 2026-01-06 11:52 ` Vlastimil Babka
2026-01-06 13:56 ` Michal Hocko
2026-01-06 11:52 ` [PATCH mm-unstable v3 3/3] mm/page_alloc: simplify __alloc_pages_slowpath() flow Vlastimil Babka
2 siblings, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2026-01-06 11:52 UTC (permalink / raw)
To: Andrew Morton, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato
Cc: linux-mm, linux-kernel, Vlastimil Babka
The initial direct compaction done in some cases in
__alloc_pages_slowpath() stands out from the main retry loop of
reclaim + compaction.
We can simplify this by instead skipping the initial reclaim attempt via
a new local variable compact_first, and by handling compact_priority as
necessary to match the original behavior. No functional change intended.
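As a rough sketch (condensed from the diff below, with the unchanged
allocation helpers reduced to comments; all identifiers are taken from
the diff), the resulting retry flow is:

  if (can_compact && (costly_order ||
      (order > 0 && ac->migratetype != MIGRATE_MOVABLE))) {
          compact_first = true;         /* skip the reclaim step once */
          compact_priority = INIT_COMPACT_PRIORITY;     /* async */
  }
  retry:
          if (!compact_first)
                  /* ... __alloc_pages_direct_reclaim() ... */;
          /* ... __alloc_pages_direct_compact() ... */;
          if (compact_first) {
                  if (gfp_has_flags(gfp_mask,
                                    __GFP_NORETRY | __GFP_THISNODE))
                          goto nopage;  /* e.g. THP page faults */
                  if (!(gfp_mask & __GFP_NORETRY))
                          compact_priority = DEF_COMPACT_PRIORITY;
                  compact_first = false;
                  goto retry;
          }

The new gfp_has_flags() helper checks that all of the given flags are
set at once; for example gfp_has_flags(GFP_KERNEL, __GFP_IO | __GFP_FS)
is true, which is exactly what gfp_has_io_fs() now uses.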
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
include/linux/gfp.h | 8 ++++-
mm/page_alloc.c | 100 +++++++++++++++++++++++++---------------------------
2 files changed, 55 insertions(+), 53 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index aa45989f410d..6ecf6dda93e0 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -407,9 +407,15 @@ extern gfp_t gfp_allowed_mask;
/* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
+/* A helper for checking if gfp includes all the specified flags */
+static inline bool gfp_has_flags(gfp_t gfp, gfp_t flags)
+{
+ return (gfp & flags) == flags;
+}
+
static inline bool gfp_has_io_fs(gfp_t gfp)
{
- return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
+ return gfp_has_flags(gfp, __GFP_IO | __GFP_FS);
}
/*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b06b1cb01e0e..3b2579c5716f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4702,7 +4702,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
struct alloc_context *ac)
{
bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
- bool can_compact = gfp_compaction_allowed(gfp_mask);
+ bool can_compact = can_direct_reclaim && gfp_compaction_allowed(gfp_mask);
bool nofail = gfp_mask & __GFP_NOFAIL;
const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
struct page *page = NULL;
@@ -4715,6 +4715,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
unsigned int cpuset_mems_cookie;
unsigned int zonelist_iter_cookie;
int reserve_flags;
+ bool compact_first = false;
if (unlikely(nofail)) {
/*
@@ -4738,6 +4739,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
cpuset_mems_cookie = read_mems_allowed_begin();
zonelist_iter_cookie = zonelist_iter_begin();
+ /*
+ * For costly allocations, try direct compaction first, as it's likely
+ * that we have enough base pages and don't need to reclaim. For non-
+ * movable high-order allocations, do that as well, as compaction will
+ * try prevent permanent fragmentation by migrating from blocks of the
+ * same migratetype.
+ */
+ if (can_compact && (costly_order || (order > 0 &&
+ ac->migratetype != MIGRATE_MOVABLE))) {
+ compact_first = true;
+ compact_priority = INIT_COMPACT_PRIORITY;
+ }
+
/*
* The fast path uses conservative alloc_flags to succeed only until
* kswapd needs to be woken up, and to avoid the cost of setting up
@@ -4780,53 +4794,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (page)
goto got_pg;
- /*
- * For costly allocations, try direct compaction first, as it's likely
- * that we have enough base pages and don't need to reclaim. For non-
- * movable high-order allocations, do that as well, as compaction will
- * try prevent permanent fragmentation by migrating from blocks of the
- * same migratetype.
- * Don't try this for allocations that are allowed to ignore
- * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
- */
- if (can_direct_reclaim && can_compact &&
- (costly_order ||
- (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
- && !gfp_pfmemalloc_allowed(gfp_mask)) {
- page = __alloc_pages_direct_compact(gfp_mask, order,
- alloc_flags, ac,
- INIT_COMPACT_PRIORITY,
- &compact_result);
- if (page)
- goto got_pg;
-
- /*
- * Checks for costly allocations with __GFP_NORETRY, which
- * includes some THP page fault allocations
- */
- if (costly_order && (gfp_mask & __GFP_NORETRY)) {
- /*
- * THP page faults may attempt local node only first,
- * but are then allowed to only compact, not reclaim,
- * see alloc_pages_mpol().
- *
- * Compaction has failed above and we don't want such
- * THP allocations to put reclaim pressure on a single
- * node in a situation where other nodes might have
- * plenty of available memory.
- */
- if (gfp_mask & __GFP_THISNODE)
- goto nopage;
-
- /*
- * Proceed with single round of reclaim/compaction, but
- * since sync compaction could be very expensive, keep
- * using async compaction.
- */
- compact_priority = INIT_COMPACT_PRIORITY;
- }
- }
-
retry:
/*
* Deal with possible cpuset update races or zonelist updates to avoid
@@ -4870,10 +4837,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
goto nopage;
/* Try direct reclaim and then allocating */
- page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
- &did_some_progress);
- if (page)
- goto got_pg;
+ if (!compact_first) {
+ page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags,
+ ac, &did_some_progress);
+ if (page)
+ goto got_pg;
+ }
/* Try direct compaction and then allocating */
page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
@@ -4881,6 +4850,33 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (page)
goto got_pg;
+ if (compact_first) {
+ /*
+ * THP page faults may attempt local node only first, but are
+ * then allowed to only compact, not reclaim, see
+ * alloc_pages_mpol().
+ *
+ * Compaction has failed above and we don't want such THP
+ * allocations to put reclaim pressure on a single node in a
+ * situation where other nodes might have plenty of available
+ * memory.
+ */
+ if (gfp_has_flags(gfp_mask, __GFP_NORETRY | __GFP_THISNODE))
+ goto nopage;
+
+ /*
+ * For the initial compaction attempt we have lowered its
+ * priority. Restore it for further retries, if those are
+ * allowed. With __GFP_NORETRY there will be a single round of
+ * reclaim and compaction with the lowered priority.
+ */
+ if (!(gfp_mask & __GFP_NORETRY))
+ compact_priority = DEF_COMPACT_PRIORITY;
+
+ compact_first = false;
+ goto retry;
+ }
+
/* Do not loop if specifically requested */
if (gfp_mask & __GFP_NORETRY)
goto nopage;
--
2.52.0
* [PATCH mm-unstable v3 3/3] mm/page_alloc: simplify __alloc_pages_slowpath() flow
2026-01-06 11:52 [PATCH mm-unstable v3 0/3] tweaks for __alloc_pages_slowpath() Vlastimil Babka
2026-01-06 11:52 ` [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result Vlastimil Babka
2026-01-06 11:52 ` [PATCH mm-unstable v3 2/3] mm/page_alloc: refactor the initial compaction handling Vlastimil Babka
@ 2026-01-06 11:52 ` Vlastimil Babka
2026-01-06 14:00 ` Michal Hocko
2 siblings, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2026-01-06 11:52 UTC (permalink / raw)
To: Andrew Morton, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato
Cc: linux-mm, linux-kernel, Vlastimil Babka
The actions done before entering the main retry loop include waking up
kswapds and an allocation attempt with the precise alloc_flags.
Then in the loop we keep waking up kswapds, and we retry the allocation
with flags potentially further adjusted by being allowed to use reserves
(due to e.g. becoming an OOM killer victim).
We can adjust the retry loop to keep only one instance of waking up
kswapds and of the allocation attempt. Introduce the can_retry_reserves
variable for retrying once when we become eligible for reserves. It is
still useful not to evaluate reserve_flags immediately for the first
allocation attempt, because it's better to first try to succeed in a
non-preferred zone above the min watermark before allocating immediately
from the preferred zone below the min watermark.
Additionally move the cpuset update checks introduced by e05741fb10c3
("mm/page_alloc.c: avoid infinite retries caused by cpuset race")
further down the retry loop. It's enough to do the checks only before
reaching any potentially infinite 'goto retry;' loop.
There should be no meaningful functional changes. The change in the
exact moments at which the reserves retry and the cpuset updates are
checked should not result in different outcomes, modulo races with
concurrent allocator activity.
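Schematically, the resulting loop looks as follows (a condensed sketch
of the code after this patch, not the literal diff; all identifiers are
taken from the diff):

  retry:
          /* kswapd wakeup and the one allocation attempt, now looped */
          if (alloc_flags & ALLOC_KSWAPD)
                  wake_all_kswapds(order, gfp_mask, ac);
          page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
          if (page)
                  goto got_pg;

          reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
          if (reserve_flags) {
                  /* ... adjust alloc_flags, nodemask, zoneref ... */
                  if (can_retry_reserves) {
                          /* retry once, immediately, with reserves */
                          can_retry_reserves = false;
                          goto retry;
                  }
          }

          /* ... direct reclaim, compaction, OOM killer ... */

          /* race checks now sit below any unbounded goto retry */
          if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
              check_retry_zonelist(zonelist_iter_cookie))
                  goto restart;
          if (should_reclaim_retry(/* ... */))
                  goto retry;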
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/page_alloc.c | 41 +++++++++++++++++++++++------------------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3b2579c5716f..c02564042618 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4716,6 +4716,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
unsigned int zonelist_iter_cookie;
int reserve_flags;
bool compact_first = false;
+ bool can_retry_reserves = true;
if (unlikely(nofail)) {
/*
@@ -4783,6 +4784,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
goto nopage;
}
+retry:
+ /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
if (alloc_flags & ALLOC_KSWAPD)
wake_all_kswapds(order, gfp_mask, ac);
@@ -4794,19 +4797,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (page)
goto got_pg;
-retry:
- /*
- * Deal with possible cpuset update races or zonelist updates to avoid
- * infinite retries.
- */
- if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
- check_retry_zonelist(zonelist_iter_cookie))
- goto restart;
-
- /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
- if (alloc_flags & ALLOC_KSWAPD)
- wake_all_kswapds(order, gfp_mask, ac);
-
reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
if (reserve_flags)
alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, reserve_flags) |
@@ -4821,12 +4811,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
ac->nodemask = NULL;
ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
ac->highest_zoneidx, ac->nodemask);
- }
- /* Attempt with potentially adjusted zonelist and alloc_flags */
- page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
- if (page)
- goto got_pg;
+ /*
+ * The first time we adjust anything due to being allowed to
+ * ignore memory policies or watermarks, retry immediately. This
+ * allows us to keep the first allocation attempt optimistic so
+ * it can succeed in a zone that is still above watermarks.
+ */
+ if (can_retry_reserves) {
+ can_retry_reserves = false;
+ goto retry;
+ }
+ }
/* Caller is not willing to reclaim, we can't balance anything */
if (!can_direct_reclaim)
@@ -4889,6 +4885,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
!(gfp_mask & __GFP_RETRY_MAYFAIL)))
goto nopage;
+ /*
+ * Deal with possible cpuset update races or zonelist updates to avoid
+ * infinite retries. No "goto retry;" can be placed above this check
+ * unless it can execute just once.
+ */
+ if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
+ check_retry_zonelist(zonelist_iter_cookie))
+ goto restart;
+
if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
did_some_progress > 0, &no_progress_loops))
goto retry;
--
2.52.0
* Re: [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result
2026-01-06 11:52 ` [PATCH mm-unstable v3 1/3] mm/page_alloc: ignore the exact initial compaction result Vlastimil Babka
@ 2026-01-06 13:51 ` Michal Hocko
0 siblings, 0 replies; 7+ messages in thread
From: Michal Hocko @ 2026-01-06 13:51 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Suren Baghdasaryan, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato, linux-mm, linux-kernel
On Tue 06-01-26 12:52:36, Vlastimil Babka wrote:
> For allocations that are of costly order and __GFP_NORETRY (and can
> perform compaction) we attempt direct compaction first. If that fails,
> we continue with a single round of direct reclaim+compaction (as for
> other __GFP_NORETRY allocations, except the compaction is of lower
> priority), with two exceptions that fail immediately:
>
> - __GFP_THISNODE is specified, to prevent zone_reclaim_mode-like
> behavior for e.g. THP page faults
>
> - compaction failed because it was deferred (i.e. has been failing
> recently so further attempts are not done for a while) or skipped,
> which means there are insufficient free base pages to defragment to
> begin with
>
> Upon closer inspection, the reasoning behind the second condition is
> somewhat flawed. If there are not enough base pages and reclaim could
> create them, we instead fail. When there are enough base pages and
> compaction has already run and failed, we proceed and hope that reclaim
> and the subsequent compaction attempt will succeed. But it's unclear
> why they should, and whether it will be as inexpensive as intended.
>
> It might therefore make more sense to just fail unconditionally after
> the initial compaction attempt. However, that would change the
> semantics of __GFP_NORETRY, which is to attempt reclaim at least once.
>
> Alternatively, we can remove the compaction result checks and proceed
> with the single reclaim and (lower-priority) compaction attempt,
> leaving only the __GFP_THISNODE exception for failing immediately.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
> ---
> mm/page_alloc.c | 34 ++++++----------------------------
> 1 file changed, 6 insertions(+), 28 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ac8a12076b00..b06b1cb01e0e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4805,44 +4805,22 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> * includes some THP page fault allocations
> */
> if (costly_order && (gfp_mask & __GFP_NORETRY)) {
> - /*
> - * If allocating entire pageblock(s) and compaction
> - * failed because all zones are below low watermarks
> - * or is prohibited because it recently failed at this
> - * order, fail immediately unless the allocator has
> - * requested compaction and reclaim retry.
> - *
> - * Reclaim is
> - * - potentially very expensive because zones are far
> - * below their low watermarks or this is part of very
> - * bursty high order allocations,
> - * - not guaranteed to help because isolate_freepages()
> - * may not iterate over freed pages as part of its
> - * linear scan, and
> - * - unlikely to make entire pageblocks free on its
> - * own.
> - */
> - if (compact_result == COMPACT_SKIPPED ||
> - compact_result == COMPACT_DEFERRED)
> - goto nopage;
> -
> /*
> * THP page faults may attempt local node only first,
> * but are then allowed to only compact, not reclaim,
> * see alloc_pages_mpol().
> *
> - * Compaction can fail for other reasons than those
> - * checked above and we don't want such THP allocations
> - * to put reclaim pressure on a single node in a
> - * situation where other nodes might have plenty of
> - * available memory.
> + * Compaction has failed above and we don't want such
> + * THP allocations to put reclaim pressure on a single
> + * node in a situation where other nodes might have
> + * plenty of available memory.
> */
> if (gfp_mask & __GFP_THISNODE)
> goto nopage;
>
> /*
> - * Looks like reclaim/compaction is worth trying, but
> - * sync compaction could be very expensive, so keep
> + * Proceed with single round of reclaim/compaction, but
> + * since sync compaction could be very expensive, keep
> * using async compaction.
> */
> compact_priority = INIT_COMPACT_PRIORITY;
>
> --
> 2.52.0
--
Michal Hocko
SUSE Labs
* Re: [PATCH mm-unstable v3 2/3] mm/page_alloc: refactor the initial compaction handling
2026-01-06 11:52 ` [PATCH mm-unstable v3 2/3] mm/page_alloc: refactor the initial compaction handling Vlastimil Babka
@ 2026-01-06 13:56 ` Michal Hocko
0 siblings, 0 replies; 7+ messages in thread
From: Michal Hocko @ 2026-01-06 13:56 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Suren Baghdasaryan, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato, linux-mm, linux-kernel
On Tue 06-01-26 12:52:37, Vlastimil Babka wrote:
> The initial direct compaction done in some cases in
> __alloc_pages_slowpath() stands out from the main retry loop of
> reclaim + compaction.
>
> We can simplify this by instead skipping the initial reclaim attempt via
> a new local variable compact_first, and by handling compact_priority as
> necessary to match the original behavior. No functional change intended.
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Joshua Hahn <joshua.hahnjy@gmail.com>
LGTM and it makes the code flow easier to follow
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> include/linux/gfp.h | 8 ++++-
> mm/page_alloc.c | 100 +++++++++++++++++++++++++---------------------------
> 2 files changed, 55 insertions(+), 53 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index aa45989f410d..6ecf6dda93e0 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -407,9 +407,15 @@ extern gfp_t gfp_allowed_mask;
> /* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
> bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
>
> +/* A helper for checking if gfp includes all the specified flags */
> +static inline bool gfp_has_flags(gfp_t gfp, gfp_t flags)
> +{
> + return (gfp & flags) == flags;
> +}
> +
> static inline bool gfp_has_io_fs(gfp_t gfp)
> {
> - return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
> + return gfp_has_flags(gfp, __GFP_IO | __GFP_FS);
> }
>
> /*
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b06b1cb01e0e..3b2579c5716f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4702,7 +4702,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> struct alloc_context *ac)
> {
> bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> - bool can_compact = gfp_compaction_allowed(gfp_mask);
> + bool can_compact = can_direct_reclaim && gfp_compaction_allowed(gfp_mask);
> bool nofail = gfp_mask & __GFP_NOFAIL;
> const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
> struct page *page = NULL;
> @@ -4715,6 +4715,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> unsigned int cpuset_mems_cookie;
> unsigned int zonelist_iter_cookie;
> int reserve_flags;
> + bool compact_first = false;
>
> if (unlikely(nofail)) {
> /*
> @@ -4738,6 +4739,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> cpuset_mems_cookie = read_mems_allowed_begin();
> zonelist_iter_cookie = zonelist_iter_begin();
>
> + /*
> + * For costly allocations, try direct compaction first, as it's likely
> + * that we have enough base pages and don't need to reclaim. For non-
> + * movable high-order allocations, do that as well, as compaction will
> + * try prevent permanent fragmentation by migrating from blocks of the
> + * same migratetype.
> + */
> + if (can_compact && (costly_order || (order > 0 &&
> + ac->migratetype != MIGRATE_MOVABLE))) {
> + compact_first = true;
> + compact_priority = INIT_COMPACT_PRIORITY;
> + }
> +
> /*
> * The fast path uses conservative alloc_flags to succeed only until
> * kswapd needs to be woken up, and to avoid the cost of setting up
> @@ -4780,53 +4794,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> if (page)
> goto got_pg;
>
> - /*
> - * For costly allocations, try direct compaction first, as it's likely
> - * that we have enough base pages and don't need to reclaim. For non-
> - * movable high-order allocations, do that as well, as compaction will
> - * try prevent permanent fragmentation by migrating from blocks of the
> - * same migratetype.
> - * Don't try this for allocations that are allowed to ignore
> - * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
> - */
> - if (can_direct_reclaim && can_compact &&
> - (costly_order ||
> - (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
> - && !gfp_pfmemalloc_allowed(gfp_mask)) {
> - page = __alloc_pages_direct_compact(gfp_mask, order,
> - alloc_flags, ac,
> - INIT_COMPACT_PRIORITY,
> - &compact_result);
> - if (page)
> - goto got_pg;
> -
> - /*
> - * Checks for costly allocations with __GFP_NORETRY, which
> - * includes some THP page fault allocations
> - */
> - if (costly_order && (gfp_mask & __GFP_NORETRY)) {
> - /*
> - * THP page faults may attempt local node only first,
> - * but are then allowed to only compact, not reclaim,
> - * see alloc_pages_mpol().
> - *
> - * Compaction has failed above and we don't want such
> - * THP allocations to put reclaim pressure on a single
> - * node in a situation where other nodes might have
> - * plenty of available memory.
> - */
> - if (gfp_mask & __GFP_THISNODE)
> - goto nopage;
> -
> - /*
> - * Proceed with single round of reclaim/compaction, but
> - * since sync compaction could be very expensive, keep
> - * using async compaction.
> - */
> - compact_priority = INIT_COMPACT_PRIORITY;
> - }
> - }
> -
> retry:
> /*
> * Deal with possible cpuset update races or zonelist updates to avoid
> @@ -4870,10 +4837,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> goto nopage;
>
> /* Try direct reclaim and then allocating */
> - page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
> - &did_some_progress);
> - if (page)
> - goto got_pg;
> + if (!compact_first) {
> + page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags,
> + ac, &did_some_progress);
> + if (page)
> + goto got_pg;
> + }
>
> /* Try direct compaction and then allocating */
> page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
> @@ -4881,6 +4850,33 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> if (page)
> goto got_pg;
>
> + if (compact_first) {
> + /*
> + * THP page faults may attempt local node only first, but are
> + * then allowed to only compact, not reclaim, see
> + * alloc_pages_mpol().
> + *
> + * Compaction has failed above and we don't want such THP
> + * allocations to put reclaim pressure on a single node in a
> + * situation where other nodes might have plenty of available
> + * memory.
> + */
> + if (gfp_has_flags(gfp_mask, __GFP_NORETRY | __GFP_THISNODE))
> + goto nopage;
> +
> + /*
> + * For the initial compaction attempt we have lowered its
> + * priority. Restore it for further retries, if those are
> + * allowed. With __GFP_NORETRY there will be a single round of
> + * reclaim and compaction with the lowered priority.
> + */
> + if (!(gfp_mask & __GFP_NORETRY))
> + compact_priority = DEF_COMPACT_PRIORITY;
> +
> + compact_first = false;
> + goto retry;
> + }
> +
> /* Do not loop if specifically requested */
> if (gfp_mask & __GFP_NORETRY)
> goto nopage;
>
> --
> 2.52.0
--
Michal Hocko
SUSE Labs
* Re: [PATCH mm-unstable v3 3/3] mm/page_alloc: simplify __alloc_pages_slowpath() flow
2026-01-06 11:52 ` [PATCH mm-unstable v3 3/3] mm/page_alloc: simplify __alloc_pages_slowpath() flow Vlastimil Babka
@ 2026-01-06 14:00 ` Michal Hocko
0 siblings, 0 replies; 7+ messages in thread
From: Michal Hocko @ 2026-01-06 14:00 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Suren Baghdasaryan, Brendan Jackman,
Johannes Weiner, Zi Yan, David Rientjes, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport, Joshua Hahn,
Pedro Falcato, linux-mm, linux-kernel
On Tue 06-01-26 12:52:38, Vlastimil Babka wrote:
> The actions done before entering the main retry loop include waking up
> kswapds and an allocation attempt with the precise alloc_flags.
> Then in the loop we keep waking up kswapds, and we retry the allocation
> with flags potentially further adjusted by being allowed to use reserves
> (due to e.g. becoming an OOM killer victim).
>
> We can adjust the retry loop to keep only one instance of waking up
> kswapds and of the allocation attempt. Introduce the can_retry_reserves
> variable for retrying once when we become eligible for reserves. It is
> still useful not to evaluate reserve_flags immediately for the first
> allocation attempt, because it's better to first try to succeed in a
> non-preferred zone above the min watermark before allocating immediately
> from the preferred zone below the min watermark.
>
> Additionally move the cpuset update checks introduced by e05741fb10c3
> ("mm/page_alloc.c: avoid infinite retries caused by cpuset race")
> further down the retry loop. It's enough to do the checks only before
> reaching any potentially infinite 'goto retry;' loop.
>
> There should be no meaningful functional changes. The change in the
> exact moments at which the reserves retry and the cpuset updates are
> checked should not result in different outcomes, modulo races with
> concurrent allocator activity.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
LGTM
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/page_alloc.c | 41 +++++++++++++++++++++++------------------
> 1 file changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3b2579c5716f..c02564042618 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4716,6 +4716,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> unsigned int zonelist_iter_cookie;
> int reserve_flags;
> bool compact_first = false;
> + bool can_retry_reserves = true;
>
> if (unlikely(nofail)) {
> /*
> @@ -4783,6 +4784,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> goto nopage;
> }
>
> +retry:
> + /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
> if (alloc_flags & ALLOC_KSWAPD)
> wake_all_kswapds(order, gfp_mask, ac);
>
> @@ -4794,19 +4797,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> if (page)
> goto got_pg;
>
> -retry:
> - /*
> - * Deal with possible cpuset update races or zonelist updates to avoid
> - * infinite retries.
> - */
> - if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
> - check_retry_zonelist(zonelist_iter_cookie))
> - goto restart;
> -
> - /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
> - if (alloc_flags & ALLOC_KSWAPD)
> - wake_all_kswapds(order, gfp_mask, ac);
> -
> reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
> if (reserve_flags)
> alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, reserve_flags) |
> @@ -4821,12 +4811,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> ac->nodemask = NULL;
> ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
> ac->highest_zoneidx, ac->nodemask);
> - }
>
> - /* Attempt with potentially adjusted zonelist and alloc_flags */
> - page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> - if (page)
> - goto got_pg;
> + /*
> + * The first time we adjust anything due to being allowed to
> + * ignore memory policies or watermarks, retry immediately. This
> + * allows us to keep the first allocation attempt optimistic so
> + * it can succeed in a zone that is still above watermarks.
> + */
> + if (can_retry_reserves) {
> + can_retry_reserves = false;
> + goto retry;
> + }
> + }
>
> /* Caller is not willing to reclaim, we can't balance anything */
> if (!can_direct_reclaim)
> @@ -4889,6 +4885,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> !(gfp_mask & __GFP_RETRY_MAYFAIL)))
> goto nopage;
>
> + /*
> + * Deal with possible cpuset update races or zonelist updates to avoid
> + * infinite retries. No "goto retry;" can be placed above this check
> + * unless it can execute just once.
> + */
> + if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
> + check_retry_zonelist(zonelist_iter_cookie))
> + goto restart;
> +
> if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
> did_some_progress > 0, &no_progress_loops))
> goto retry;
>
> --
> 2.52.0
--
Michal Hocko
SUSE Labs