From: Qi Zheng <zhengqi.arch@bytedance.com>
To: David Hildenbrand <david@redhat.com>,
hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
roman.gushchin@linux.dev, shakeel.butt@linux.dev,
muchun.song@linux.dev, lorenzo.stoakes@oracle.com,
ziy@nvidia.com, baolin.wang@linux.alibaba.com,
Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: Re: [PATCH 3/4] mm: thp: use folio_batch to handle THP splitting in deferred_split_scan()
Date: Mon, 22 Sep 2025 19:36:08 +0800 [thread overview]
Message-ID: <65b6c32a-7eb4-4023-94c0-968735b784f6@bytedance.com> (raw)
In-Reply-To: <40772b34-30c8-4f16-833c-34fdd7c69176@redhat.com>
Hi David,
On 9/22/25 4:43 PM, David Hildenbrand wrote:
> On 19.09.25 05:46, Qi Zheng wrote:
>> From: Muchun Song <songmuchun@bytedance.com>
>>
>> The maintenance of folio->_deferred_list is intricate because the same
>> list entry is also reused for a local on-stack list.
>>
>> Here are some peculiarities:
>>
>> 1) When a folio is removed from its split queue and added to a local
>>    on-stack list in deferred_split_scan(), the ->split_queue_len isn't
>>    updated, leading to an inconsistency between it and the actual
>>    number of folios in the split queue.
>
> deferred_split_count() will now return "0" even though there might be
> concurrent scanning going on. I assume that's okay because we are not
> returning SHRINK_EMPTY (which is a difference).
>
>>
>> 2) When the folio is split via split_folio() later, it's removed from
>> the local list while holding the split queue lock. At this time,
>> this lock protects the local list, not the split queue.
>>
>> 3) To handle the race condition with a third party freeing or migrating
>>    the preceding folio, we must ensure there's always one safe (with
>>    raised refcount) folio before it, by delaying its folio_put(). More
>>    details can be found in commit e66f3185fa04 ("mm/thp: fix deferred
>>    split queue not partially_mapped"). It's rather tricky.
>>
>> We can use the folio_batch infrastructure to handle this cleanly. In
>> this case, ->split_queue_len will be consistent with the real number
>> of folios in the split queue. If list_empty(&folio->_deferred_list)
>> returns false, it's clear that the folio must be in its split queue
>> (not in a local list anymore).
>>
>> In the future, we will reparent LRU folios during memcg offline to
>> eliminate dying memory cgroups, which requires reparenting the split
>> queue to its parent first. So this patch prepares for using
>> folio_split_queue_lock_irqsave() as the memcg may change then.
>>
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>> mm/huge_memory.c | 88 +++++++++++++++++++++++-------------------------
>> 1 file changed, 42 insertions(+), 46 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d34516a22f5bb..ab16da21c94e0 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3760,21 +3760,22 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>                  struct lruvec *lruvec;
>>                  int expected_refs;
>>
>> -                if (folio_order(folio) > 1 &&
>> -                    !list_empty(&folio->_deferred_list)) {
>> -                        ds_queue->split_queue_len--;
>> +                if (folio_order(folio) > 1) {
>> +                        if (!list_empty(&folio->_deferred_list)) {
>> +                                ds_queue->split_queue_len--;
>> +                                /*
>> +                                 * Reinitialize page_deferred_list after removing the
>> +                                 * page from the split_queue, otherwise a subsequent
>> +                                 * split will see list corruption when checking the
>> +                                 * page_deferred_list.
>> +                                 */
>> +                                list_del_init(&folio->_deferred_list);
>> +                        }
>>                          if (folio_test_partially_mapped(folio)) {
>>                                  folio_clear_partially_mapped(folio);
>>                                  mod_mthp_stat(folio_order(folio),
>>                                                MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>>                          }
>> -                        /*
>> -                         * Reinitialize page_deferred_list after removing the
>> -                         * page from the split_queue, otherwise a subsequent
>> -                         * split will see list corruption when checking the
>> -                         * page_deferred_list.
>> -                         */
>> -                        list_del_init(&folio->_deferred_list);
>>                  }
>
> BTW I am not sure about holding the split_queue_lock before freezing the
> refcount (comment above the freeze):
>
> freezing should properly sync against the folio_try_get(): one of them
> would fail.
>
> So not sure if that is still required. But I recall something nasty
> regarding that :)
I'm not sure either, need some investigation.
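My rough mental model (a simplified sketch of the semantics, loosely
based on what page_ref_freeze() and folio_try_get() do, not the real
implementation) is that both sides operate on the same atomic refcount,
so exactly one of them can win:

        /* Hypothetical simplification, for discussion only. */
        static bool ref_freeze(atomic_t *ref, int expected)
        {
                /* Succeeds only if nobody else holds a reference. */
                return atomic_cmpxchg(ref, expected, 0) == expected;
        }

        static bool try_get(atomic_t *ref)
        {
                /* Fails once the count has been frozen to zero. */
                return atomic_add_unless(ref, 1, 0);
        }

If that is the whole story, the freeze itself should not need the lock,
as you said. I will dig through the history to see what the nasty part
was.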
>
>
>>                  split_queue_unlock(ds_queue);
>>                  if (mapping) {
>> @@ -4173,40 +4174,48 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>>          struct pglist_data *pgdata = NODE_DATA(sc->nid);
>>          struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
>>          unsigned long flags;
>> -        LIST_HEAD(list);
>> -        struct folio *folio, *next, *prev = NULL;
>> -        int split = 0, removed = 0;
>> +        struct folio *folio, *next;
>> +        int split = 0, i;
>> +        struct folio_batch fbatch;
>> +        bool done;
>
> Is "done" really required? Can't we just use sc->nr_to_scan tos ee if
> there is work remaining to be done so we retry?
I think you are right, will do in the next version.
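Something like this, perhaps (an untested sketch against this patch,
also folding in your folio_batch_space() nit below):

        folio_batch_init(&fbatch);
retry:
        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
                        _deferred_list) {
                /* ... pin folios into fbatch as below ... */
                if (!--sc->nr_to_scan)
                        break;
                if (!folio_batch_space(&fbatch))
                        break;
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

        /* ... try to split each folio in fbatch ... */

        /* folios_put() also reinitializes fbatch for the next round. */
        folios_put(&fbatch);
        /*
         * The unlocked list_empty() check is only a heuristic; a stale
         * result just means one extra empty pass or an early stop,
         * which the shrinker tolerates.
         */
        if (sc->nr_to_scan && !list_empty(&ds_queue->split_queue))
                goto retry;

Then "done" can go away entirely.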
>
>>  #ifdef CONFIG_MEMCG
>>          if (sc->memcg)
>>                  ds_queue = &sc->memcg->deferred_split_queue;
>>  #endif
>> +        folio_batch_init(&fbatch);
>> +retry:
>> +        done = true;
>>          spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>>          /* Take pin on all head pages to avoid freeing them under us */
>>          list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
>>                          _deferred_list) {
>>                  if (folio_try_get(folio)) {
>> -                        list_move(&folio->_deferred_list, &list);
>> -                } else {
>> +                        folio_batch_add(&fbatch, folio);
>> +                } else if (folio_test_partially_mapped(folio)) {
>>                          /* We lost race with folio_put() */
>> -                        if (folio_test_partially_mapped(folio)) {
>> -                                folio_clear_partially_mapped(folio);
>> -                                mod_mthp_stat(folio_order(folio),
>> -                                              MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>> -                        }
>> -                        list_del_init(&folio->_deferred_list);
>> -                        ds_queue->split_queue_len--;
>> +                        folio_clear_partially_mapped(folio);
>> +                        mod_mthp_stat(folio_order(folio),
>> +                                      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>>                  }
>> +                list_del_init(&folio->_deferred_list);
>> +                ds_queue->split_queue_len--;
>>                  if (!--sc->nr_to_scan)
>>                          break;
>> +                if (folio_batch_space(&fbatch) == 0) {
>
> Nit: if (!folio_batch_space(&fbatch)) {
OK, will do.
Thanks,
Qi
>
>