From: Chris Mason <clm@meta.com>
To: Qi Zheng <qi.zheng@linux.dev>
Cc: Chris Mason <clm@meta.com>, <hannes@cmpxchg.org>,
<hughd@google.com>, <mhocko@suse.com>, <roman.gushchin@linux.dev>,
<shakeel.butt@linux.dev>, <muchun.song@linux.dev>,
<david@redhat.com>, <lorenzo.stoakes@oracle.com>,
<ziy@nvidia.com>, <harry.yoo@oracle.com>,
<baolin.wang@linux.alibaba.com>, <Liam.Howlett@oracle.com>,
<npache@redhat.com>, <ryan.roberts@arm.com>, <dev.jain@arm.com>,
<baohua@kernel.org>, <lance.yang@linux.dev>,
<akpm@linux-foundation.org>, <richard.weiyang@gmail.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
<cgroups@vger.kernel.org>, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: Re: [PATCH v6 4/4] mm: thp: reparent the split queue during memcg offline
Date: Tue, 13 Jan 2026 14:00:43 -0800 [thread overview]
Message-ID: <20260113220046.2274684-1-clm@meta.com> (raw)
In-Reply-To: <8703f907c4d1f7e8a2ef2bfed3036a84fa53028b.1762762324.git.zhengqi.arch@bytedance.com>
On Mon, 10 Nov 2025 16:17:58 +0800 Qi Zheng <qi.zheng@linux.dev> wrote:
> From: Qi Zheng <zhengqi.arch@bytedance.com>
>
> Similar to list_lru, the split queue is relatively independent and does
> not need to be reparented along with objcg and LRU folios (holding
> objcg lock and lru lock). So let's apply the similar mechanism as list_lru
> to reparent the split queue separately when memcg is offline.
>
Hi everyone,
I'm running all of the MM commits since v6.18 through the AI patch review
tools, and this one was flagged. I'm much less confident in this report;
AI tends to get these races wrong:
[AI review output]
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b0c6a4635c67b..cc6db20d7dca0 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1775,6 +1775,12 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
> bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
>
> void mem_cgroup_show_protected_memory(struct mem_cgroup *memcg);
> +
> +static inline bool memcg_is_dying(struct mem_cgroup *memcg)
> +{
> + return memcg ? css_is_dying(&memcg->css) : false;
> +}
[ ... ]
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 13684e5376e82..d17d3810a882c 100644
[ ... ]
> @@ -1129,8 +1140,14 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags
> {
> struct deferred_split *queue;
>
> +retry:
> queue = memcg_split_queue(nid, memcg);
> spin_lock_irqsave(&queue->split_queue_lock, *flags);
> + if (unlikely(memcg_is_dying(memcg))) {
> + spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
> + memcg = parent_mem_cgroup(memcg);
> + goto retry;
> + }
>
> return queue;
> }
There appears to be a race window that can cause a split_queue_len counter
mismatch. Between the time CSS_DYING is set (in kill_css()) and the time
reparent_deferred_split_queue() actually runs (asynchronously, from
css_offline), folios can be unqueued via __folio_unqueue_deferred_split().
In this scenario:
__folio_unqueue_deferred_split()
-> folio_split_queue_lock_irqsave()
-> split_queue_lock_irqsave() sees memcg_is_dying() == true
-> returns parent's queue (locked)
-> ds_queue->split_queue_len-- decrements parent's counter
-> list_del_init() removes folio from child's list
The folio is correctly removed from the child's list (list_del_init leaves
the entry pointing to itself), but the wrong queue's counter is
decremented. If the parent queue was empty (split_queue_len == 0), the
decrement would underflow the unsigned long counter to ULONG_MAX.
Could this be addressed by checking that the folio is actually in the
queue being unlocked, perhaps by verifying the list head matches before
decrementing split_queue_len?
[ ... ]
> @@ -3920,6 +3920,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> zswap_memcg_offline_cleanup(memcg);
>
> memcg_offline_kmem(memcg);
> + reparent_deferred_split_queue(memcg);
The race window exists because CSS_DYING is set in kill_css() before the
css_offline() callbacks run asynchronously. The comment in split_queue_lock
mentions that THPs will be "hidden from the shrinker side" during this
period, but it does not address the counter accounting when folios are
unqueued in that window.