From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Jinjiang Tu <tujinjiang@huawei.com>,
akpm@linux-foundation.org, lorenzo.stoakes@oracle.com,
Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, fengwei.yin@intel.com,
baohua@kernel.org, ryan.roberts@arm.com, linux-mm@kvack.org
Cc: wangkefeng.wang@huawei.com, sunnanyong@huawei.com
Subject: Re: [PATCH] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
Date: Mon, 2 Mar 2026 20:43:48 +0100 [thread overview]
Message-ID: <917a2a5a-f499-4759-8160-c8e7d9c0ed65@kernel.org> (raw)
In-Reply-To: <2a46b99d-7f03-4e47-b0ec-da2475d49af8@huawei.com>
On 2/26/26 03:01, Jinjiang Tu wrote:
>
> On 2026/2/25 17:15, David Hildenbrand (Arm) wrote:
>> On 2/25/26 09:12, Jinjiang Tu wrote:
>>> On an arm64 server, we found that the folio obtained from a migration
>>> entry isn't locked in softleaf_to_folio(). This issue triggers when
>>> mTHP splitting races with zap_nonpresent_ptes(), and the root cause is
>>> a missing memory barrier in softleaf_to_folio(). The race is as
>>> follows:
>>>
>>> CPU0                                      CPU1
>>>
>>> deferred_split_scan()                     zap_nonpresent_ptes()
>>>   lock folio
>>>   split_folio()
>>>     unmap_folio()
>>>       change ptes to migration entries
>>>     __split_folio_to_order()
>>>                                             softleaf_to_folio()
>>>       set flags (including PG_locked)         folio = pfn_folio(softleaf_to_pfn(entry))
>>>         for tail pages
>>>       smp_wmb()
>>>                                             VM_WARN_ON_ONCE(!folio_test_locked(folio))
>>>       prep_compound_page() for tail pages
>>>
>> In general, relying on a "struct page" for a migration entry is shaky,
>> because it can change any time from being a large folio to being a small
>> folio.
>>
>> So we generally only check properties that would be true for either the
>> old (large) or the new (smaller) folio, like folio_test_ksm() or
>> folio_test_anon().
>>
>> It's important that these properties are written for the new folio
>> before a migration entry user might look at the page.
>>
>> So it's not just about the locked state.
>>
>>> In __split_folio_to_order(), smp_wmb() guarantees that the page flags
>>> of the tail pages are visible before the tail page becomes
>>> non-compound. It should be paired with an smp_rmb() in
>>> softleaf_to_folio(), which is missing. As a result, if
>>> zap_nonpresent_ptes() accesses a migration entry that stores a tail
>>> pfn, softleaf_to_folio() may see the updated compound_head of the tail
>>> page before page->flags.
>>>
>>> Although the code has existed for a long time, this issue should only
>>> occur after mTHP splitting was supported. For THP splitting, there is
>>> only a pmd migration entry
>> When splitting, we first install a PTE table, no?
>>
>> unmap_folio() passes TTU_SPLIT_HUGE_PMD.
>>
>>> and it's impossible to access a migration entry that stores
>
> Indeed, I misunderstood the code. So the Fixes tag is incorrect.
>
>>> tail page pfn.
>>>
>>> To fix it, add the missing smp_rmb() in softleaf_to_folio() and
>>> softleaf_to_page() when the softleaf entry is a migration entry.
>>>
>>> Fixes: 7dc7c5ef6463 ("mm: allow deferred splitting of arbitrary anon large folios")
>>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
>>> ---
>>> include/linux/leafops.h | 39 ++++++++++++++++++++++++++-------------
>>> 1 file changed, 26 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/include/linux/leafops.h b/include/linux/leafops.h
>>> index a9ff94b744f2..f823f390ba6b 100644
>>> --- a/include/linux/leafops.h
>>> +++ b/include/linux/leafops.h
>>> @@ -371,14 +371,21 @@ static inline unsigned long softleaf_to_pfn(softleaf_t entry)
>>> */
>>> static inline struct page *softleaf_to_page(softleaf_t entry)
>>> {
>>> -	struct page *page = pfn_to_page(softleaf_to_pfn(entry));
>>> +	struct page *page;
>>>
>>>  	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
>>> -	/*
>>> -	 * Any use of migration entries may only occur while the
>>> -	 * corresponding page is locked
>>> -	 */
>>> -	VM_WARN_ON_ONCE(softleaf_is_migration(entry) && !PageLocked(page));
>>> +
>>> +	page = pfn_to_page(softleaf_to_pfn(entry));
>>> +	if (softleaf_is_migration(entry)) {
>>> +		/* See __split_folio_to_order() comment */
>>> +		smp_rmb();
>>> +
>>> +		/*
>>> +		 * Any use of migration entries may only occur while the
>>> +		 * corresponding page is locked
>>> +		 */
>>> +		VM_WARN_ON_ONCE(!PageLocked(page));
>>> +	}
>> Conceptually, wouldn't the smp_rmb() have to happen *after* the
>> page_folio(), like we have in softleaf_to_folio()?
>
> The comment on page_folio() says:
> * Context: No reference, nor lock is required on @page. If the caller
> * does not hold a reference, this call may race with a folio split, so
> * it should re-check the folio still contains this page after gaining
> * a reference on the folio.
> * Return: The folio which contains this page.
>
> The old large folio is locked and frozen during splitting, and
> page_folio() cannot be called with a reference held, so the result is
> unstable; the caller should recheck after gaining a reference. At that
> point splitting has finished, and folio_get() already contains a memory
> barrier, ensuring the folio flags are seen after compound_head.
Let me elaborate on what I mean:

__split_folio_to_order() does:

	/* setup flags and mappings etc. for the new folio */
	new_folio->flags.f = ...
	new_folio->mapping = ...
	smp_wmb();

	/* Now re-route page_folio(). */
	clear_compound_head(new_head);
	if (new_order) {
		prep_compound_page(new_head, new_order);
		folio_set_large_rmappable(new_folio);
	}
So I would expect the opposite direction to do:

	/* Lookup either the old or the new folio. */
	page = pfn_to_page(softleaf_to_pfn(entry));
	folio = page_folio(page); /* looks up compound head etc. */

	/* Make sure we'll see proper flags, mappings of new folio. */
	smp_rmb();

	/* Continue using new folio */
	folio_test_locked() ... folio_test_anon() ...

Instead of:

	page = pfn_to_page(softleaf_to_pfn(entry));
	smp_rmb();
	PageLocked(page) /* internally does page_folio() */
	folio = page_folio(page);
Maybe that is fine, but it sure is harder to argue about correctness?
--
Cheers,
David
Thread overview: 4+ messages
2026-02-25 8:12 Jinjiang Tu
2026-02-25 9:15 ` David Hildenbrand (Arm)
2026-02-26 2:01 ` Jinjiang Tu
2026-03-02 19:43 ` David Hildenbrand (Arm) [this message]