* [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Barry Song (Xiaomi) @ 2026-04-22 2:18 UTC
To: akpm, linux-mm
Cc: linux-kernel, Barry Song (Xiaomi),
Baolin Wang, Johannes Weiner, David Hildenbrand, Michal Hocko,
Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan
We may encounter cases where the system still has plenty of free
memory, but cannot satisfy higher-order allocations. On phones, we
have observed that bursty network transfers can cause devices to
heat up. Baolin and Kairui have seen similar behavior on servers.
Currently, kswapd behaves as follows: when a higher-order allocation
is issued with __GFP_KSWAPD_RECLAIM, pgdat_balanced() returns false
because __zone_watermark_ok() fails if no suitable higher-order
pages exist, even when free memory is well above the high watermark.
As a result, kswapd_shrink_node() sets an excessively large
sc->nr_to_reclaim and attempts aggressive reclamation:
for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
}
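
To put rough numbers on the mismatch (a hypothetical single-zone
configuration, not figures from the report): a single order-3 wakeup can
ask kswapd to reclaim an entire high watermark's worth of pages,
hundreds of times more than compact_gap() would later require:

	/* Hypothetical illustration; the watermark value is assumed. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long nr_to_reclaim = 13824;	/* high_wmark_pages(): ~54 MiB of 4 KiB pages */
		unsigned int order = 3;			/* e.g. a 32 KiB network buffer */
		unsigned long gap = 2UL << order;	/* compact_gap(order) = 16 pages */

		printf("nr_to_reclaim = %lu pages vs compact_gap = %lu pages (%lux)\n",
		       nr_to_reclaim, gap, nr_to_reclaim / gap);
		return 0;
	}
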
kswapd_shrink_node() does get a chance to re-evaluate the balance
later, by resetting sc->order to 0 after shrink_node():
/*
* Fragmentation may mean that the system cannot be rebalanced for
* high-order allocations. If twice the allocation size has been
* reclaimed then recheck watermarks only at order-0 to prevent
* excessive reclaim.
*/
if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
sc->order = 0;
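
For comparison, the gap this check waits for is tiny: compact_gap() is
defined (in mm/internal.h, current mainline) as twice the allocation
size, i.e. only 16 pages for an order-3 request:

	/*
	 * Reclaim enough pages for the allocation itself plus an equal
	 * amount of headroom for compaction to work with.
	 */
	static inline unsigned long compact_gap(unsigned int order)
	{
		return 2UL << order;
	}
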
But by the time that check fires, we have already scanned and
over-reclaimed far more than compact_gap(sc->order). If higher-order
allocations continue, we may see persistently high kswapd CPU
utilization coexisting with plenty of free memory in the system.

We may instead want to evaluate the situation earlier, when balancing
starts: if there is plenty of free memory, we can avoid triggering
reclamation with an excessively large sc->nr_to_reclaim and prefer
compaction.
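
The patch below uses compaction_suitable() for that earlier evaluation.
As a simplified sketch of the underlying check (paraphrasing
mm/compaction.c; the fragmentation-index heuristic and exact signatures
are elided and may differ):

	static bool __compaction_suitable(struct zone *zone, int order,
					  unsigned long watermark,
					  int highest_zoneidx,
					  unsigned long free_pages)
	{
		/*
		 * Compaction only needs order-0 migration targets, not an
		 * existing page of the requested order: ask for the given
		 * watermark plus compact_gap(order) worth of free pages.
		 */
		watermark += compact_gap(order);
		return __zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
					   ALLOC_CMA, free_pages);
	}
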
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Lorenzo Stoakes <ljs@kernel.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Wei Xu <weixugc@google.com>
Co-developed-by: Wang Lian <wanglian@kylinos.cn>
Co-developed-by: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Barry Song (Xiaomi) <baohua@kernel.org>
---
-RFC v1 was "mm: net: disable kswapd for high-order network
buffer allocation":
https://lore.kernel.org/linux-mm/20251013101636.69220-1-21cnbao@gmail.com/
mm/vmscan.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd1b1aa12581..4f9668aa8eef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6964,6 +6964,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
if (__zone_watermark_ok(zone, order, mark, highest_zoneidx,
0, free_pages))
return true;
+ /*
+ * Free pages may be well above the watermark, but if
+ * higher-order pages are unavailable, kswapd may still
+ * trigger excessive reclamation.
+ */
+ if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
+ return true;
}
/*
--
2.39.3 (Apple Git-146)
* Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Baolin Wang @ 2026-04-22 6:58 UTC
To: Barry Song (Xiaomi), akpm, linux-mm
Cc: linux-kernel, Johannes Weiner, David Hildenbrand, Michal Hocko,
    Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
    Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan

On 4/22/26 10:18 AM, Barry Song (Xiaomi) wrote:
> We may encounter cases where the system still has plenty of free
> memory, but cannot satisfy higher-order allocations. On phones, we
> have observed that bursty network transfers can cause devices to
> heat up. Baolin and Kairui have seen similar behavior on servers.
[...]

Thanks Barry for sending out the RFC patch for discussion.

Yes, we have indeed seen reports from our customers' scenarios where
fragmentation caused kswapd to be woken up and reclaim too many file
folios (even when free memory was sufficient), leading to severe I/O
contention that impacted some applications.

However, I'm concerned that this patch might also have side effects,
such as affecting system defragmentation. In some scenarios, directly
reclaiming clean pagecache to free up space might be a faster way to
defragment. At the very least, I think under defrag_mode we should be
more aggressive about defragmentation (including reclaiming some
memory by kswapd).
* Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Barry Song @ 2026-04-22 10:56 UTC
To: Baolin Wang
Cc: akpm, linux-mm, linux-kernel, Johannes Weiner, David Hildenbrand,
    Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
    Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan

On Wed, Apr 22, 2026 at 2:59 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
[...]
> However, I'm concerned that this patch might also have side effects,
> such as affecting system defragmentation. In some scenarios, directly
> reclaiming clean pagecache to free up space might be a faster way to
> defragment.

balance_pgdat() can still reclaim clean page cache even when
pgdat_balanced() returns true, provided that nr_boost_reclaim is
non-zero:

	/*
	 * If boosting is not active then only reclaim if there are no
	 * eligible zones. Note that sc.reclaim_idx is not used as
	 * buffer_heads_over_limit may have adjusted it.
	 */
	if (!nr_boost_reclaim && balanced)
		goto out;

	/* Limit the priority of boosting to avoid reclaim writeback */
	if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
		raise_priority = false;

	/*
	 * Do not writeback or swap pages for boosted reclaim. The
	 * intent is to relieve pressure not issue sub-optimal IO
	 * from reclaim context. If no pages are reclaimed, the
	 * reclaim will be aborted.
	 */
	sc.may_writepage = !nr_boost_reclaim;
	sc.may_swap = !nr_boost_reclaim;

I find that nr_boost_reclaim is almost always non-zero in bursty
network scenarios. So I guess clean page cache is still reclaimed,
but with much lower kswapd pressure.

> At the very least, I think under defrag_mode we should be more
> aggressive about defragmentation (including reclaiming some memory
> by kswapd).

I guess we can keep the current behavior if defrag_mode prefers
over-reclaiming to form contiguous pages. Is it simply an
if (defrag_mode) check?

Thanks
Barry
* Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Johannes Weiner @ 2026-04-22 15:47 UTC
To: Barry Song (Xiaomi)
Cc: akpm, linux-mm, linux-kernel, Baolin Wang, David Hildenbrand,
    Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
    Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan

Hi Barry,

On Wed, Apr 22, 2026 at 10:18:42AM +0800, Barry Song (Xiaomi) wrote:
[...]
> But by the time that check fires, we have already scanned and
> over-reclaimed far more than compact_gap(sc->order).

Do you have traces for how much it overshoots?

> If higher-order allocations continue, we may see persistently high
> kswapd CPU utilization coexisting with plenty of free memory in the
> system.
>
> We may instead want to evaluate the situation earlier, when balancing
> starts: if there is plenty of free memory, we can avoid triggering
> reclamation with an excessively large sc->nr_to_reclaim and prefer
> compaction.
[...]
> +		/*
> +		 * Free pages may be well above the watermark, but if
> +		 * higher-order pages are unavailable, kswapd may still
> +		 * trigger excessive reclamation.
> +		 */
> +		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
> +			return true;

I've tried this in the past, but it was regressing huge page requests
under memory pressure and with higher levels of concurrency:

https://lore.kernel.org/linux-mm/20250411182156.GE366747@cmpxchg.org/

The compaction gap is sized for a single allocation, but
kswapd/kcompactd are a shared resource for potentially hundreds or
thousands of incoming requests. So if there is high demand for
contiguous memory this isn't enough - kswapd gives up too early,
kcompactd efficiency drops, you get storms of direct
reclaim/compaction, and still poor allocation success rates. Continued
kswapd wakeups mean that there is ongoing unsatisfied demand. The
system has to keep moving forward.

That said, it's well possible that we're overshooting that progress
buffer due to running reclaim scans with a high order. It might be a
better idea to look into that?
* Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Barry Song @ 2026-04-22 21:19 UTC
To: Johannes Weiner
Cc: akpm, linux-mm, linux-kernel, Baolin Wang, David Hildenbrand,
    Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
    Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan,
    Wenchao Hao

On Wed, Apr 22, 2026 at 11:47 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
[...]
> Do you have traces for how much it overshoots?

I previously worked for a company that had some data. I'm no longer
engaged with them, but I'm trying to collect the data again on a new
platform I recently moved to. My impression is that when
over-reclamation occurs, the amount of free memory is actually much
higher than what compaction_suitable() would indicate. I will present
the data once the collection is complete, with help from Wenchao.

[...]
> I've tried this in the past, but it was regressing huge page requests
> under memory pressure and with higher levels of concurrency:
>
> https://lore.kernel.org/linux-mm/20250411182156.GE366747@cmpxchg.org/
>
> The compaction gap is sized for a single allocation, but
> kswapd/kcompactd are a shared resource for potentially hundreds or
> thousands of incoming requests. So if there is high demand for
> contiguous memory this isn't enough - kswapd gives up too early,
> kcompactd efficiency drops, you get storms of direct
> reclaim/compaction, and still poor allocation success rates. Continued
> kswapd wakeups mean that there is ongoing unsatisfied demand. The
> system has to keep moving forward.

I guess one possibility is to check whether the free-memory headroom
is much larger than compact_gap(), to make sure there is enough slack
for concurrent requests:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9668aa8eef..428ec7266aab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6969,6 +6969,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 		 * higher-order pages are unavailable, kswapd may still
 		 * trigger excessive reclamation.
 		 */
+		mark += 2 * num_possible_cpus() * compact_gap(order);
 		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
 			return true;
 	}

On the other hand, "ongoing unsatisfied demand" may not always make
sense. For example, network code may first try higher orders with the
kswapd flag but without the direct-reclaim flag, and then quickly fall
back to order-0 allocations if the higher-order allocation fails. In
many cases, higher orders do not improve network performance, while
kswapd performs unnecessary reclamation and may further trigger more
refaults.

> That said, it's well possible that we're overshooting that progress
> buffer due to running reclaim scans with a high order. It might be a
> better idea to look into that?

I could investigate this further, but I suspect it is mainly because
we set a very large nr_to_reclaim, assuming we want to reach the high
watermark, while we may already have been above the high watermark in
kswapd_shrink_node():

	/* Reclaim a number of pages proportional to the number of zones */
	sc->nr_to_reclaim = 0;
	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
	}

A rough look suggests another place where a high order may affect
scanning is here:

	static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
	{
		int i;
		enum zone_watermarks mark;

		if (sc->nr_reclaimed >= max(sc->nr_to_reclaim, compact_gap(sc->order)))
			return true;
		...
	}

So maybe we could instead set something like the following, if we are
actually above the high watermark and are only short of high-order
pages:

	sc->nr_to_reclaim = min(sc->nr_to_reclaim, 2 * compact_gap(sc->order));

Anyway, let me take a closer look.

Thanks
Barry