linux-mm.kvack.org archive mirror
* [PATCH] mm/memcontrol: restore irq wrapper for lruvec_stat_mod_folio()
@ 2026-04-13  6:48 Cao Ruichuang
  2026-04-13 16:44 ` Shakeel Butt
  0 siblings, 1 reply; 2+ messages in thread
From: Cao Ruichuang @ 2026-04-13  6:48 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt
  Cc: Muchun Song, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, cgroups, linux-mm, linux-kernel,
	Cao Ruichuang, syzbot+1a3353a77896e73a8f53

Commit c1bd09994c4d ("memcg: remove __lruvec_stat_mod_folio") removed
the local_irq_save/restore wrapper around lruvec_stat_mod_folio(), based
on the assumption that the underlying stat update path was already
IRQ-safe.

That assumption is too broad for lruvec_stat_mod_folio() callers.
This helper is not just a thin stat primitive.  It also resolves
folio -> memcg -> lruvec under a helper-managed RCU read-side section.

syzbot now reports a PREEMPT_RT warning from:

  __filemap_add_folio()
    -> lruvec_stat_mod_folio()
       -> __rcu_read_unlock()

ending in bad unlock balance / negative RCU nesting.

The PREEMPT_RT detail matters here.  The affected filemap path calls
lruvec_stat_mod_folio() under xas_lock_irq(), but on PREEMPT_RT
xas_lock_irq() maps to spin_lock_irq(), and spin_lock_irq() does not
disable hard IRQs.  Before c1bd09994c4d, lruvec_stat_mod_folio() still
provided explicit hard-IRQ masking around the folio-based memcg/lruvec
lookup path.  After that commit, those callers no longer get real
hard-IRQ masking from either the xarray lock or the helper itself.

Direct mod_lruvec_state() callers do not have the same problem surface:
they already operate on a stable lruvec under caller-provided locking or
caller-provided RCU coverage.  The narrower regression boundary is the
folio-based helper that combines ownership lookup with helper-managed
RCU.  Restore only that helper's irq wrapper instead of reverting the
lower-level lruvec state update cleanups.

This restores the previous calling contract for lruvec_stat_mod_folio()
without changing the lower-level lruvec state interfaces.

Fixes: c1bd09994c4d ("memcg: remove __lruvec_stat_mod_folio")
Link: https://syzkaller.appspot.com/bug?extid=1a3353a77896e73a8f53
Reported-by: syzbot+1a3353a77896e73a8f53@syzkaller.appspotmail.com
Signed-off-by: Cao Ruichuang <create0818@163.com>
---
 include/linux/vmstat.h | 18 +++++++++++++++++-
 mm/memcontrol.c        |  4 ++--
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 3c9c266cf78..59cf2676649 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -519,9 +519,19 @@ static inline const char *vm_event_name(enum vm_event_item item)
 void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			int val);
 
-void lruvec_stat_mod_folio(struct folio *folio,
+void __lruvec_stat_mod_folio(struct folio *folio,
 			     enum node_stat_item idx, int val);
 
+static inline void lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	__lruvec_stat_mod_folio(folio, idx, val);
+	local_irq_restore(flags);
+}
+
 static inline void mod_lruvec_page_state(struct page *page,
 					 enum node_stat_item idx, int val)
 {
@@ -536,6 +546,12 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
 	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
 
+static inline void __lruvec_stat_mod_folio(struct folio *folio,
+					 enum node_stat_item idx, int val)
+{
+	mod_node_page_state(folio_pgdat(folio), idx, val);
+}
+
 static inline void lruvec_stat_mod_folio(struct folio *folio,
 					 enum node_stat_item idx, int val)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 772bac21d15..ffe6ae885f5 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -787,7 +787,7 @@ void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 		mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
-void lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
+void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 			     int val)
 {
 	struct mem_cgroup *memcg;
@@ -807,7 +807,7 @@ void lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 	mod_lruvec_state(lruvec, idx, val);
 	rcu_read_unlock();
 }
-EXPORT_SYMBOL(lruvec_stat_mod_folio);
+EXPORT_SYMBOL(__lruvec_stat_mod_folio);
 
 void mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 {
-- 
2.39.5 (Apple Git-154)




* Re: [PATCH] mm/memcontrol: restore irq wrapper for lruvec_stat_mod_folio()
  2026-04-13  6:48 [PATCH] mm/memcontrol: restore irq wrapper for lruvec_stat_mod_folio() Cao Ruichuang
@ 2026-04-13 16:44 ` Shakeel Butt
  0 siblings, 0 replies; 2+ messages in thread
From: Shakeel Butt @ 2026-04-13 16:44 UTC (permalink / raw)
  To: Cao Ruichuang
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, cgroups, linux-mm, linux-kernel,
	syzbot+1a3353a77896e73a8f53

On Mon, Apr 13, 2026 at 02:48:33PM +0800, Cao Ruichuang wrote:
> Commit c1bd09994c4d ("memcg: remove __lruvec_stat_mod_folio") removed
> the local_irq_save/restore wrapper around lruvec_stat_mod_folio(), based
> on the assumption that the underlying stat update path was already
> IRQ-safe.

Why is that an assumption? Please explain how lruvec_stat_mod_folio() is not
safe against IRQs.

> 
> That assumption is too broad for lruvec_stat_mod_folio() callers.
> This helper is not just a thin stat primitive.  It also resolves
> folio -> memcg -> lruvec under a helper-managed RCU read-side section.
> 
> syzbot now reports a PREEMPT_RT warning from:

The syzbot link you provided has a kernel config without PREEMPT_RT.
Where does this claim come from?

> 
>   __filemap_add_folio()
>     -> lruvec_stat_mod_folio()
>        -> __rcu_read_unlock()
> 
> ending in bad unlock balance / negative RCU nesting.

If there is a bad unlock balance, how would disabling/enabling IRQs solve that
issue?



