From: Qi Zheng <zhengqi.arch@bytedance.com>
To: David Hildenbrand <david@redhat.com>,
hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
roman.gushchin@linux.dev, shakeel.butt@linux.dev,
muchun.song@linux.dev, lorenzo.stoakes@oracle.com,
ziy@nvidia.com, harry.yoo@oracle.com,
baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
baohua@kernel.org, lance.yang@linux.dev,
akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org
Subject: Re: [PATCH v2 4/4] mm: thp: reparent the split queue during memcg offline
Date: Thu, 25 Sep 2025 14:11:49 +0800
Message-ID: <46da5d33-20d5-4b32-bca5-466474424178@bytedance.com>
In-Reply-To: <b041b58d-b0e4-4a01-a459-5449c232c437@redhat.com>

Hi David,

On 9/24/25 8:38 PM, David Hildenbrand wrote:
> On 23.09.25 11:16, Qi Zheng wrote:
>> In the future, we will reparent LRU folios during memcg offline to
>> eliminate dying memory cgroups, which requires reparenting the split
>> queue
>> to its parent.
>>
>> Similar to list_lru, the split queue is relatively independent and does
>> not need to be reparented along with objcg and LRU folios (holding
>> objcg lock and lru lock). So let's apply the same mechanism as list_lru
>> to reparent the split queue separately when memcg is offline.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>> include/linux/huge_mm.h | 2 ++
>> include/linux/mmzone.h | 1 +
>> mm/huge_memory.c | 39 +++++++++++++++++++++++++++++++++++++++
>> mm/memcontrol.c | 1 +
>> mm/mm_init.c | 1 +
>> 5 files changed, 44 insertions(+)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index f327d62fc9852..a0d4b751974d2 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -417,6 +417,7 @@ static inline int split_huge_page(struct page *page)
>> return split_huge_page_to_list_to_order(page, NULL, ret);
>> }
>> void deferred_split_folio(struct folio *folio, bool partially_mapped);
>> +void reparent_deferred_split_queue(struct mem_cgroup *memcg);
>> void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>> unsigned long address, bool freeze);
>> @@ -611,6 +612,7 @@ static inline int try_folio_split(struct folio
>> *folio, struct page *page,
>> }
>> static inline void deferred_split_folio(struct folio *folio, bool
>> partially_mapped) {}
>> +static inline void reparent_deferred_split_queue(struct mem_cgroup
>> *memcg) {}
>> #define split_huge_pmd(__vma, __pmd, __address) \
>> do { } while (0)
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index 7fb7331c57250..f3eb81fee056a 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -1346,6 +1346,7 @@ struct deferred_split {
>> spinlock_t split_queue_lock;
>> struct list_head split_queue;
>> unsigned long split_queue_len;
>> + bool is_dying;
>
> It's a bit weird to query whether the "struct deferred_split" is dying.
> Shouldn't this be a memcg property? (and in particular, not exist for
There is indeed a CSS_DYING flag. But we must set 'is_dying' under the
protection of the split_queue_lock, otherwise the folio may be added
back to the deferred_split queue of the child memcg.
> the pglist_data part where it might not make sense at all?).
Maybe:
#ifdef CONFIG_MEMCG
bool is_dying;
#endif
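
To make the locking requirement concrete, below is a rough sketch of
what the reparent path could look like. The actual hunk is not quoted
in this reply, so everything except the reparent_deferred_split_queue()
name (which is in the patch) is just an illustration, not the real
code:

void reparent_deferred_split_queue(struct mem_cgroup *memcg)
{
	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
	struct deferred_split *ds = &memcg->deferred_split_queue;
	struct deferred_split *parent_ds = &parent->deferred_split_queue;

	/* The root memcg is never offlined, so 'parent' is never NULL here. */
	spin_lock_irq(&ds->split_queue_lock);
	spin_lock_nested(&parent_ds->split_queue_lock, SINGLE_DEPTH_NESTING);

	/*
	 * Mark the child queue dying and splice its folios to the parent
	 * while both locks are held: a racing folio_split_queue_lock()
	 * either queued the folio before the splice (so it moves with the
	 * list) or sees is_dying and retries on the parent's queue.
	 */
	ds->is_dying = true;
	list_splice_tail_init(&ds->split_queue, &parent_ds->split_queue);
	parent_ds->split_queue_len += ds->split_queue_len;
	ds->split_queue_len = 0;

	spin_unlock(&parent_ds->split_queue_lock);
	spin_unlock_irq(&ds->split_queue_lock);
}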
>
>> };
>> #endif
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 48b51e6230a67..de7806f759cba 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1094,9 +1094,15 @@ static struct deferred_split
>> *folio_split_queue_lock(struct folio *folio)
>> struct deferred_split *queue;
>> memcg = folio_memcg(folio);
>> +retry:
>> queue = memcg ? &memcg->deferred_split_queue :
>> &NODE_DATA(folio_nid(folio))->deferred_split_queue;
>> spin_lock(&queue->split_queue_lock);
>> + if (unlikely(queue->is_dying == true)) {
>
> if (unlikely(queue->is_dying))
Will do.
>
>> + spin_unlock(&queue->split_queue_lock);
>> + memcg = parent_mem_cgroup(memcg);
>> + goto retry;
>> + }
>> return queue;
>> }
>> @@ -1108,9 +1114,15 @@ folio_split_queue_lock_irqsave(struct folio
>> *folio, unsigned long *flags)
>> struct deferred_split *queue;
>> memcg = folio_memcg(folio);
>> +retry:
>> queue = memcg ? &memcg->deferred_split_queue :
>> &NODE_DATA(folio_nid(folio))->deferred_split_queue;
>> spin_lock_irqsave(&queue->split_queue_lock, *flags);
>> + if (unlikely(queue->is_dying == true)) {
>
> if (unlikely(queue->is_dying))
Will do.
>
>> + spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
>> + memcg = parent_mem_cgroup(memcg);
>> + goto retry;
>> + }
>> return queue;
>> }
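
For what it's worth, callers stay oblivious to the reparenting because
everything goes through these helpers. A simplified illustration of the
add path (not the actual deferred_split_folio() code, just a sketch):

static void deferred_split_add_example(struct folio *folio)
{
	struct deferred_split *queue;
	unsigned long flags;

	/* May transparently return the parent's queue if ours is dying. */
	queue = folio_split_queue_lock_irqsave(folio, &flags);
	if (list_empty(&folio->_deferred_list)) {
		list_add_tail(&folio->_deferred_list, &queue->split_queue);
		queue->split_queue_len++;
	}
	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
}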
>
> Nothing else jumped at me, but I am not a memcg expert :)
Thanks,
Qi
>