linux-mm.kvack.org archive mirror
From: Jinjiang Tu <tujinjiang@huawei.com>
To: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
Cc: <akpm@linux-foundation.org>, <david@kernel.org>,
	<lorenzo.stoakes@oracle.com>, <Liam.Howlett@oracle.com>,
	<vbabka@kernel.org>, <rppt@kernel.org>, <surenb@google.com>,
	<mhocko@suse.com>, <baohua@kernel.org>, <ryan.roberts@arm.com>,
	<linux-mm@kvack.org>, <wangkefeng.wang@huawei.com>,
	<sunnanyong@huawei.com>
Subject: Re: [PATCH v3] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
Date: Sat, 21 Mar 2026 10:40:39 +0800	[thread overview]
Message-ID: <86a944b5-34bb-4f08-861b-b8d6da3db8e7@huawei.com> (raw)
In-Reply-To: <63266e52-2644-4f4e-aca5-6db64052455f@lucifer.local>


On 2026/3/20 18:13, Lorenzo Stoakes (Oracle) wrote:
> On Thu, Mar 19, 2026 at 09:25:41AM +0800, Jinjiang Tu wrote:
>> On an arm64 server, we found that the folio obtained from a migration entry
>> is not locked in softleaf_to_folio(). This issue triggers when mTHP splitting
>> races with zap_nonpresent_ptes(), and the root cause is a missing memory
>> barrier in softleaf_to_folio(). The race is as follows:
>>
>> 	CPU0                                             CPU1
>>
>> deferred_split_scan()                              zap_nonpresent_ptes()
>>    lock folio
>>    split_folio()
>>      unmap_folio()
>>        change ptes to migration entries
>>      __split_folio_to_order()                         softleaf_to_folio()
>>        set flags(including PG_locked) for tail pages    folio = pfn_folio(softleaf_to_pfn(entry))
>>        smp_wmb()                                        VM_WARN_ON_ONCE(!folio_test_locked(folio))
>>        prep_compound_page() for tail pages
>>
>> In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
>> tail pages are visible before the tail page becomes non-compound. That
>> smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(), which
>> is missing. As a result, if zap_nonpresent_ptes() accesses a migration
>> entry that stores a tail pfn, softleaf_to_folio() may see the updated
>> compound_head of the tail page before its page->flags.
>>
>> To fix it, add the missing smp_rmb() in softleaf_to_folio() and
>> softleaf_to_page() when the softleaf entry is a migration entry.
>>
>> Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> I absolutely could have sworn I replied to this before, but I looked and it
> seems like I didn't :) am I getting old or something? :P
>
> Anyway the logic looks good, thanks for this, but some nits on the
> naming/comments below.

Thanks, I will update it.

>
> With those addressed:
>
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
>
>> ---
>>
>> Change in v3:
>>   * move softleaf_is_migration() check out of softleaf_migration_entry_check()
>>
>>   include/linux/leafops.h | 28 +++++++++++++++++-----------
>>   1 file changed, 17 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/leafops.h b/include/linux/leafops.h
>> index a9ff94b744f2..dd4130b7cb7f 100644
>> --- a/include/linux/leafops.h
>> +++ b/include/linux/leafops.h
>> @@ -363,6 +363,19 @@ static inline unsigned long softleaf_to_pfn(softleaf_t entry)
>>   	return swp_offset(entry) & SWP_PFN_MASK;
>>   }
>>
>> +static inline void softleaf_migration_entry_check(softleaf_t entry,
>> +			struct folio *folio)
> I'm not sure this is correctly named, you're doing a debug-only check here
> but the barrier is a LOT more important.
>
> Maybe softleaf_migration_sync()?
>
> The fact there's a check there is implied by the VM_WARN_ON_ONCE().
>
>> +{
>> +	/* See __split_folio_to_order() comment */
> NIT: reads better as '/* See comment in __split_folio_to_order() */'.
>
> But you're referencing a 1 line comment from __split_folio_to_order();
>
> 		/* Page flags must be visible before we make the page non-compound. */
> 		smp_wmb();
>
> Which also doesn't give sufficient context in my view.
>
> So I think overall better as:
>
> /*
>   * Ensure we do not race with split, which might alter tail pages into new
>   * folios and thus result in observing an unlocked folio.
>   * This matches the write barrier in __split_folio_to_order().
>   */
>
>> +	smp_rmb();
>> +
>> +	/*
>> +	 * Any use of migration entries may only occur while the
>> +	 * corresponding page is locked
>> +	 */
>> +	VM_WARN_ON_ONCE(!folio_test_locked(folio));
>> +}
>> +
>>   /**
>>    * softleaf_to_page() - Obtains struct page for PFN encoded within leaf entry.
>>    * @entry: Leaf entry, softleaf_has_pfn(@entry) must return true.
>> @@ -374,11 +387,8 @@ static inline struct page *softleaf_to_page(softleaf_t entry)
>>   	struct page *page = pfn_to_page(softleaf_to_pfn(entry));
>>
>>   	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
>> -	/*
>> -	 * Any use of migration entries may only occur while the
>> -	 * corresponding page is locked
>> -	 */
>> -	VM_WARN_ON_ONCE(softleaf_is_migration(entry) && !PageLocked(page));
>> +	if (softleaf_is_migration(entry))
>> +		softleaf_migration_entry_check(entry, page_folio(page));
>>
>>   	return page;
>>   }
>> @@ -394,12 +404,8 @@ static inline struct folio *softleaf_to_folio(softleaf_t entry)
>>   	struct folio *folio = pfn_folio(softleaf_to_pfn(entry));
>>
>>   	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
>> -	/*
>> -	 * Any use of migration entries may only occur while the
>> -	 * corresponding folio is locked.
>> -	 */
>> -	VM_WARN_ON_ONCE(softleaf_is_migration(entry) &&
>> -			!folio_test_locked(folio));
>> +	if (softleaf_is_migration(entry))
>> +		softleaf_migration_entry_check(entry, folio);
>>
>>   	return folio;
>>   }
>> --
>> 2.43.0
>>
>>
> Cheers, Lorenzo
>



Thread overview: 11+ messages
2026-03-19  1:25 Jinjiang Tu
2026-03-19  8:49 ` David Hildenbrand (Arm)
2026-03-19 22:51 ` Andrew Morton
2026-03-20  1:52   ` Jinjiang Tu
2026-03-20  2:31     ` Andrew Morton
2026-03-20  8:10     ` David Hildenbrand (Arm)
2026-03-20  8:56       ` Jinjiang Tu
2026-03-20  9:57         ` Lorenzo Stoakes (Oracle)
2026-03-20 10:22           ` David Hildenbrand (Arm)
2026-03-20 10:13 ` Lorenzo Stoakes (Oracle)
2026-03-21  2:40   ` Jinjiang Tu [this message]
