* [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
@ 2026-04-22 2:18 Barry Song (Xiaomi)
From: Barry Song (Xiaomi) @ 2026-04-22 2:18 UTC (permalink / raw)
To: akpm, linux-mm
Cc: linux-kernel, Barry Song (Xiaomi),
Baolin Wang, Johannes Weiner, David Hildenbrand, Michal Hocko,
Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan
We may encounter cases where the system still has plenty of free
memory, but cannot satisfy higher-order allocations. On phones, we
have observed that bursty network transfers can cause devices to
heat up. Baolin and Kairui have seen similar behavior on servers.
Currently, kswapd behaves as follows: when a higher-order allocation
is issued with __GFP_KSWAPD_RECLAIM, pgdat_balanced() returns false
because __zone_watermark_ok() fails if no suitable higher-order
pages exist, even when free memory is well above the high watermark.
As a result, kswapd_shrink_node() sets an excessively large
sc->nr_to_reclaim and attempts aggressive reclamation:
	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
	}
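
For context, here is a minimal sketch of why __zone_watermark_ok()
rejects a higher-order request even when order-0 free memory is
plentiful. This is a simplification, not the mm/page_alloc.c code;
reserved-page, highatomic and per-migratetype handling is omitted:

/*
 * Simplified sketch of __zone_watermark_ok(): once the order-0 check
 * passes, a higher-order request still fails unless at least one free
 * block of the requested order (or larger) exists in the zone.
 */
static bool zone_watermark_ok_sketch(struct zone *z, unsigned int order,
				     unsigned long mark, int highest_zoneidx,
				     long free_pages)
{
	unsigned int o;

	/* order-0 check: plenty of free memory easily passes this */
	if (free_pages <= mark + z->lowmem_reserve[highest_zoneidx])
		return false;

	if (!order)
		return true;

	/* high-order check: fails on fragmentation regardless of free_pages */
	for (o = order; o < NR_PAGE_ORDERS; o++) {
		if (z->free_area[o].nr_free)
			return true;	/* real code also filters by migratetype */
	}

	return false;
}

On a fragmented zone, the loop never finds a block of the requested
order, so pgdat_balanced() reports imbalance no matter how large
free_pages is.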
Kswapd does already get a chance to re-evaluate the balance: sc->order
is reset to 0 after shrink_node(), by the following code in
kswapd_shrink_node():
	/*
	 * Fragmentation may mean that the system cannot be rebalanced for
	 * high-order allocations. If twice the allocation size has been
	 * reclaimed then recheck watermarks only at order-0 to prevent
	 * excessive reclaim.
	 */
	if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
		sc->order = 0;
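
For reference, compact_gap() is only about twice the allocation size,
so the recheck above only takes effect after kswapd has already worked
against the large sc->nr_to_reclaim computed earlier:

/*
 * From mm/vmscan.c (comment trimmed): e.g. 16 pages (64KB with 4KB
 * pages) for an order-3 network buffer allocation.
 */
static inline unsigned long compact_gap(unsigned int order)
{
	return 2UL << order;
}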
But by that point we have already scanned and over-reclaimed far more
than compact_gap(sc->order). If higher-order allocations continue, we
may see persistently high kswapd CPU utilization coexisting with plenty
of free memory in the system.
We may want to evaluate the situation earlier, in pgdat_balanced()
itself: if there is already plenty of free memory, we can avoid
triggering reclaim with an excessively large sc->nr_to_reclaim and
instead prefer compaction.
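
The check added below relies on compaction_suitable(), which
essentially asks whether the zone has enough order-0 free memory for
compaction to make progress. A rough sketch of that idea, assuming the
watermark-taking form used by this patch (the in-tree
__compaction_suitable() additionally special-cases costly orders):

/*
 * Rough sketch: compaction is considered feasible when order-0 free
 * pages exceed the watermark plus compact_gap(order), i.e. exactly the
 * "plenty of free memory, just fragmented" case described above, where
 * compaction is preferable to further reclaim.
 */
static bool compaction_suitable_sketch(struct zone *zone, int order,
				       unsigned long mark, int highest_zoneidx)
{
	return __zone_watermark_ok(zone, 0, mark + compact_gap(order),
				   highest_zoneidx, ALLOC_CMA,
				   zone_page_state(zone, NR_FREE_PAGES));
}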
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Lorenzo Stoakes <ljs@kernel.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Co-developed-by: Wang Lian <wanglian@kylinos.cn>
Co-developed-by: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Barry Song (Xiaomi) <baohua@kernel.org>
---
-RFC v1 was "mm: net: disable kswapd for high-order network
buffer allocation":
https://lore.kernel.org/linux-mm/20251013101636.69220-1-21cnbao@gmail.com/
mm/vmscan.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd1b1aa12581..4f9668aa8eef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6964,6 +6964,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 		if (__zone_watermark_ok(zone, order, mark, highest_zoneidx,
 					0, free_pages))
 			return true;
+		/*
+		 * Free pages may be well above the watermark, but if
+		 * higher-order pages are unavailable, kswapd may still
+		 * trigger excessive reclamation.
+		 */
+		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
+			return true;
 	}
 
 	/*
--
2.39.3 (Apple Git-146)