From: Qi Zheng <qi.zheng@linux.dev>
To: Harry Yoo <harry.yoo@oracle.com>, Chris Mason <clm@meta.com>
Cc: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
	roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, david@redhat.com,
	lorenzo.stoakes@oracle.com, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
	npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
	baohua@kernel.org, lance.yang@linux.dev,
	akpm@linux-foundation.org, richard.weiyang@gmail.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: Re: [PATCH v6 4/4] mm: thp: reparent the split queue during memcg offline
Date: Wed, 14 Jan 2026 14:25:36 +0800	[thread overview]
Message-ID: <3d8dd14c-d3a7-4cae-99e3-10ebe4ad52aa@linux.dev> (raw)
In-Reply-To: <aWcrVRT_RjxdjoN7@hyeyoo>



On 1/14/26 1:36 PM, Harry Yoo wrote:
> On Tue, Jan 13, 2026 at 02:00:43PM -0800, Chris Mason wrote:
>> On Mon, 10 Nov 2025 16:17:58 +0800 Qi Zheng <qi.zheng@linux.dev> wrote:
>>
>>> From: Qi Zheng <zhengqi.arch@bytedance.com>
>>>
>>> Similar to list_lru, the split queue is relatively independent and does
>>> not need to be reparented along with objcg and LRU folios (which requires
>>> holding the objcg lock and the lru lock). So let's apply a similar
>>> mechanism to list_lru and reparent the split queue separately when the
>>> memcg is offlined.
>>>
>>
>> Hi everyone,
>>
>> I'm running all of the MM commits since v6.18 through the AI patch review
>> tools, and this one was flagged.  I'm much less confident in this report;
>> AI tends to get these races wrong:
>>
>> [AI review output]
>>
>>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>>> index b0c6a4635c67b..cc6db20d7dca0 100644
>>> --- a/include/linux/memcontrol.h
>>> +++ b/include/linux/memcontrol.h
>>> @@ -1775,6 +1775,12 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
>>>   bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
>>>
>>>   void mem_cgroup_show_protected_memory(struct mem_cgroup *memcg);
>>> +
>>> +static inline bool memcg_is_dying(struct mem_cgroup *memcg)
>>> +{
>>> +	return memcg ? css_is_dying(&memcg->css) : false;
>>> +}
>>
>> [ ... ]
>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 13684e5376e82..d17d3810a882c 100644
>>
>> [ ... ]
>>
>>> @@ -1129,8 +1140,14 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags
>>>   {
>>>   	struct deferred_split *queue;
>>>
>>> +retry:
>>>   	queue = memcg_split_queue(nid, memcg);
>>>   	spin_lock_irqsave(&queue->split_queue_lock, *flags);
>>> +	if (unlikely(memcg_is_dying(memcg))) {
>>> +		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
>>> +		memcg = parent_mem_cgroup(memcg);
>>> +		goto retry;
>>> +	}
>>>
>>>   	return queue;
>>>   }
>>
>> There appears to be a race window that can cause a split_queue_len counter
>> mismatch. Between the time CSS_DYING is set (in kill_css()) and when
>> reparent_deferred_split_queue() actually runs (asynchronously in
>> css_offline), folios can be unqueued via __folio_unqueue_deferred_split().
>>
>> In this scenario:
>>    __folio_unqueue_deferred_split()
>>      -> folio_split_queue_lock_irqsave()
>>         -> split_queue_lock_irqsave() sees memcg_is_dying() == true
>>         -> returns parent's queue (locked)
>>      -> ds_queue->split_queue_len-- decrements parent's counter
>>      -> list_del_init() removes folio from child's list
>>
>> The folio is correctly removed from the child's list (list_del_init()
>> unlinks the entry from whatever list it is on and reinitializes it), but
>> the wrong queue's counter is decremented.
> 
> Good point. Sounds quite possible to me.
> 
> I don't think there's anything that prevents the folio from being unqueued
> before it's reparented.
> 
>> If the parent queue was empty (split_queue_len == 0), this would cause an
>> unsigned long underflow to ULONG_MAX.
> 
> Although the accounting mismatch will only persist until
> reparent_deferred_split_queue() reparents the deferred split queue.

This window is very short, and the only thing affected should be the count
reported by deferred_split_count(); it does not cause a system error.

So I think we can probably leave it unfixed.
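
For reference, with this series applied the window would look roughly like
this in __folio_unqueue_deferred_split() (a sketch only, not the exact code;
the partially-mapped bookkeeping is omitted):

static bool __folio_unqueue_deferred_split(struct folio *folio)
{
	struct deferred_split *ds_queue;
	unsigned long flags;
	bool unqueued = false;

	/*
	 * Once CSS_DYING is set on the folio's memcg, this returns (and
	 * locks) the parent's queue, even though the folio is still linked
	 * into the child's split_queue.
	 */
	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
	if (!list_empty(&folio->_deferred_list)) {
		ds_queue->split_queue_len--;		/* parent's counter */
		list_del_init(&folio->_deferred_list);	/* child's list */
		unqueued = true;
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	return unqueued;
}

Assuming reparent_deferred_split_queue() splices the remaining list and
transfers split_queue_len to the parent wholesale, the counters balance out
again once the reparenting has run; only the shrinker count is off in
between.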

> 
> Ideally this should be fixed by checking if the folio has been
> reparented after acquiring the split queue lock, but since we don't reparent
> LRU pages yet ... do we need a band-aid before then?
> 
> Do we want to have an is_dying property in the split queue, as in v2? [1]
> [1] https://lore.kernel.org/linux-mm/55370bda7b2df617033ac12116c1712144bb7591.1758618527.git.zhengqi.arch@bytedance.com
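
If we do end up wanting that, the idea is roughly the following (a sketch of
the approach only, not the v2 code; the is_dying field and where it is set
are just for illustration): mark the queue itself as dying, under its
split_queue_lock, at the point it is actually reparented, instead of keying
off CSS_DYING, which is set much earlier in kill_css():

struct deferred_split {
	spinlock_t split_queue_lock;
	struct list_head split_queue;
	unsigned long split_queue_len;
	bool is_dying;	/* set under split_queue_lock by the reparent path */
};

static struct deferred_split *
split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags)
{
	struct deferred_split *queue;

retry:
	queue = memcg_split_queue(nid, memcg);
	spin_lock_irqsave(&queue->split_queue_lock, *flags);
	if (unlikely(queue->is_dying)) {
		/* Already spliced to the parent; walk up and retry. */
		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
		memcg = parent_mem_cgroup(memcg);
		goto retry;
	}

	return queue;
}

Because the reparent path would set is_dying and move the list in the same
split_queue_lock critical section, a caller that locks the child's queue
either still sees the folio on that queue (flag clear) or is redirected to
the parent after the splice, so the window above should not exist with that
variant.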
> 
>> Could this be addressed by checking that the folio is actually in the
>> queue being unlocked, perhaps by verifying the list head matches before
>> decrementing split_queue_len?
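
There is no cheap way to tell which list a queued list_head is on, but the
mismatch could at least be detected by comparing the locked queue against
the queue the folio's memcg still maps to, something like this at the
decrement site (illustrative only; how to then fix up the counters would
need more thought, since the child's counter is not protected by the lock
held here):

	/*
	 * ds_queue is what folio_split_queue_lock_irqsave() returned, which
	 * may already be the parent's queue if the folio's memcg is dying.
	 */
	struct deferred_split *folio_queue =
		memcg_split_queue(folio_nid(folio), folio_memcg(folio));

	VM_WARN_ON_ONCE(folio_queue != ds_queue);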
> 
>> [ ... ]
>>
>>> @@ -3920,6 +3920,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>>>   	zswap_memcg_offline_cleanup(memcg);
>>>
>>>   	memcg_offline_kmem(memcg);
>>> +	reparent_deferred_split_queue(memcg);
>>
>> The race window exists because CSS_DYING is set in kill_css() before
>> css_offline() callbacks run asynchronously. The comment in split_queue_lock
>> mentions that THPs will be "hidden from the shrinker side" during this
>> period, but does not address the counter accounting issue when folios
>> are unqueued.
> 



Thread overview: 9+ messages
2025-11-10  8:17 [PATCH v6 0/4] reparent the THP split queue Qi Zheng
2025-11-10  8:17 ` [PATCH v6 1/4] mm: thp: replace folio_memcg() with folio_memcg_charged() Qi Zheng
2025-11-10  8:17 ` [PATCH v6 2/4] mm: thp: introduce folio_split_queue_lock and its variants Qi Zheng
2025-11-10  8:17 ` [PATCH v6 3/4] mm: thp: use folio_batch to handle THP splitting in deferred_split_scan() Qi Zheng
2025-11-10  8:17 ` [PATCH v6 4/4] mm: thp: reparent the split queue during memcg offline Qi Zheng
2026-01-13 22:00   ` Chris Mason
2026-01-14  5:36     ` Harry Yoo
2026-01-14  6:25       ` Qi Zheng [this message]
2026-01-14  7:19         ` Muchun Song
