* [PATCH] mm: Restore per-memcg proactive reclaim with !CONFIG_NUMA
@ 2026-01-16 20:52 Yosry Ahmed
  2026-01-17 19:07 ` Shakeel Butt
  2026-01-19  8:00 ` Michal Hocko
  0 siblings, 2 replies; 3+ messages in thread
From: Yosry Ahmed @ 2026-01-16 20:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Johannes Weiner, Qi Zheng, Shakeel Butt, Davidlohr Bueso,
	linux-mm, linux-kernel, Yosry Ahmed, stable

Commit 2b7226af730c ("mm/memcg: make memory.reclaim interface generic")
moved the proactive reclaim logic from the memory.reclaim handler into
a generic user_proactive_reclaim() helper, so that it could also be
used for per-node proactive reclaim.

However, user_proactive_reclaim() was only defined under CONFIG_NUMA,
with a stub always returning 0 otherwise. This broke memory.reclaim on
!CONFIG_NUMA configs, causing it to report success without actually
attempting reclaim.
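
For illustration, the regressed path looks roughly like the sketch
below. This is a simplified model, not the actual mm/memcontrol.c
code: memory_reclaim() here is a hypothetical stand-in for the real
cgroup write handler, assumed to forward the user buffer to
user_proactive_reclaim() with a NULL pgdat and to treat a zero return
as success.

	/* !CONFIG_NUMA stub removed by this patch: it never reclaims. */
	static inline int user_proactive_reclaim(char *buf,
				struct mem_cgroup *memcg, pg_data_t *pgdat)
	{
		return 0;	/* reports "success" without scanning anything */
	}

	/* Hypothetical memory.reclaim write handler built on the helper. */
	static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
				      size_t nbytes, loff_t off)
	{
		struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
		int err = user_proactive_reclaim(buf, memcg, NULL);

		/* err is always 0 from the stub, so the write "succeeds". */
		return err ? err : nbytes;
	}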

Move the definition of user_proactive_reclaim() outside the
CONFIG_NUMA #ifdef, and instead define a stub for __node_reclaim() in
the !CONFIG_NUMA case. __node_reclaim() is only called from
user_proactive_reclaim() when a write is made to
/sys/devices/system/node/nodeX/reclaim, which is only available with
CONFIG_NUMA.
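
After this change, the layout in mm/vmscan.c is intended to be roughly
as follows (a condensed sketch; the real function bodies are elided):

	#ifdef CONFIG_NUMA
	static unsigned long __node_reclaim(struct pglist_data *pgdat,
					    gfp_t gfp_mask,
					    unsigned long nr_pages,
					    struct scan_control *sc)
	{
		/* ... real per-node reclaim, unchanged by this patch ... */
	}
	#else
	static unsigned long __node_reclaim(struct pglist_data *pgdat,
					    gfp_t gfp_mask,
					    unsigned long nr_pages,
					    struct scan_control *sc)
	{
		/* only reachable via nodeX/reclaim, which requires NUMA */
		return 0;
	}
	#endif

	/* Defined unconditionally, so memory.reclaim works without NUMA. */
	int user_proactive_reclaim(char *buf, struct mem_cgroup *memcg,
				   pg_data_t *pgdat)
	{
		/* ... parse buf and drive reclaim for memcg or pgdat ... */
	}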

Fixes: 2b7226af730c ("mm/memcg: make memory.reclaim interface generic")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 mm/internal.h |  8 --------
 mm/vmscan.c   | 13 +++++++++++--
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 33eb0224f461..9508dbaf47cd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -615,16 +615,8 @@ extern unsigned long highest_memmap_pfn;
 bool folio_isolate_lru(struct folio *folio);
 void folio_putback_lru(struct folio *folio);
 extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
-#ifdef CONFIG_NUMA
 int user_proactive_reclaim(char *buf,
 			   struct mem_cgroup *memcg, pg_data_t *pgdat);
-#else
-static inline int user_proactive_reclaim(char *buf,
-			   struct mem_cgroup *memcg, pg_data_t *pgdat)
-{
-	return 0;
-}
-#endif
 
 /*
  * in mm/rmap.c:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b28018ac995..d9918f24dea0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7849,6 +7849,17 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
 	return ret;
 }
 
+#else
+
+static unsigned long __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask,
+				    unsigned long nr_pages,
+				    struct scan_control *sc)
+{
+	return 0;
+}
+
+#endif
+
 enum {
 	MEMORY_RECLAIM_SWAPPINESS = 0,
 	MEMORY_RECLAIM_SWAPPINESS_MAX,
@@ -7956,8 +7967,6 @@ int user_proactive_reclaim(char *buf,
 	return 0;
 }
 
-#endif
-
 /**
  * check_move_unevictable_folios - Move evictable folios to appropriate zone
  * lru list
-- 
2.52.0.457.g6b5491de43-goog


