* [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
@ 2026-02-28 1:06 Zi Yan
2026-02-28 3:10 ` Lance Yang
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Zi Yan @ 2026-02-28 1:06 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Lorenzo Stoakes, Zi Yan, Hugh Dickins,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox, Bas van Dijk, Eero Kelly,
Andrew Battat, Adam Bratschi-Kaye, linux-mm, linux-kernel,
linux-fsdevel, stable
During a pagecache folio split, the values in the related xarray should not
be changed from the original folio at xarray split time until all
after-split folios are well formed and stored in the xarray. The current use
of xas_try_split() in __split_unmapped_folio() lets some after-split folios
show up at the wrong indices in the xarray. When these misplaced after-split
folios are unfrozen and grabbed by folio_try_get() before the correct folios
are stored via __xa_store(), they are returned to userspace at the wrong
file indices, causing data corruption.

Fix it by using the original folio in the xas_try_split() calls, so that
folio_try_get() can get the right after-split folios after the original
folio is unfrozen.

Uniform split, split_huge_page*(), is not affected, since it uses
xas_split_alloc() and xas_split() only once and stores the original folio
in the xarray.

The Fixes tag below points to the commit that introduced the code, but
folio_split() is only used from a later commit, 7460b470a131f ("mm/truncate:
use folio_split() in truncate operation").
Fixes: 00527733d0dc8 ("mm/huge_memory: add two new (not yet used) functions for folio_split()")
Reported-by: Bas van Dijk <bas@dfinity.org>
Closes: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>
---
mm/huge_memory.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 56db54fa48181..e4ed0404e8b55 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3647,6 +3647,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
const bool is_anon = folio_test_anon(folio);
int old_order = folio_order(folio);
int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
+ struct folio *origin_folio = folio;
int split_order;
/*
@@ -3672,7 +3673,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
xas_split(xas, folio, old_order);
else {
xas_set_order(xas, folio->index, split_order);
- xas_try_split(xas, folio, old_order);
+ /*
+ * Use the original folio so that a parallel
+ * folio_try_get() waits on it until the xarray is
+ * updated with the after-split folios and
+ * the original one is unfrozen.
+ */
+ xas_try_split(xas, origin_folio, old_order);
if (xas_error(xas))
return xas_error(xas);
}
--
2.51.0
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
2026-02-28 1:06 [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get() Zi Yan
@ 2026-02-28 3:10 ` Lance Yang
2026-03-02 14:28 ` David Hildenbrand (Arm)
2026-03-02 13:30 ` Lorenzo Stoakes
2026-03-02 14:40 ` David Hildenbrand (Arm)
2 siblings, 1 reply; 6+ messages in thread
From: Lance Yang @ 2026-02-28 3:10 UTC (permalink / raw)
To: Zi Yan, Andrew Morton
Cc: David Hildenbrand, Lorenzo Stoakes, Hugh Dickins, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Matthew Wilcox, Bas van Dijk, Eero Kelly, Andrew Battat,
Adam Bratschi-Kaye, linux-mm, linux-kernel, linux-fsdevel,
stable
On 2026/2/28 09:06, Zi Yan wrote:
> During a pagecache folio split, the values in the related xarray should not
> be changed from the original folio at xarray split time until all
> after-split folios are well formed and stored in the xarray. Current use
> of xas_try_split() in __split_unmapped_folio() lets some after-split folios
> show up at wrong indices in the xarray. When these misplaced after-split
> folios are unfrozen, before correct folios are stored via __xa_store(), and
> grabbed by folio_try_get(), they are returned to userspace at wrong file
> indices, causing data corruption.
>
> Fix it by using the original folio in xas_try_split() calls, so that
> folio_try_get() can get the right after-split folios after the original
> folio is unfrozen.
>
> Uniform split, split_huge_page*(), is not affected, since it uses
> xas_split_alloc() and xas_split() only once and stores the original folio
> in the xarray.
>
> The Fixes tag below points to the commit that introduced the code, but folio_split() is
> used in a later commit 7460b470a131f ("mm/truncate: use folio_split() in
> truncate operation").
>
> Fixes: 00527733d0dc8 ("mm/huge_memory: add two new (not yet used) functions for folio_split()")
> Reported-by: Bas van Dijk <bas@dfinity.org>
> Closes: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Cc: <stable@vger.kernel.org>
> ---
Thanks for the fix!
I also made a C reproducer and tested this patch - the corruption
disappeared.
Without patch: corruption in < 10 iterations
With patch: 1000 iterations, all clean
Tested-by: Lance Yang <lance.yang@linux.dev>
* Re: [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
2026-02-28 1:06 [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get() Zi Yan
2026-02-28 3:10 ` Lance Yang
@ 2026-03-02 13:30 ` Lorenzo Stoakes
2026-03-02 14:40 ` David Hildenbrand (Arm)
2 siblings, 0 replies; 6+ messages in thread
From: Lorenzo Stoakes @ 2026-03-02 13:30 UTC (permalink / raw)
To: Zi Yan
Cc: Andrew Morton, David Hildenbrand, Hugh Dickins, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, Matthew Wilcox, Bas van Dijk, Eero Kelly,
Andrew Battat, Adam Bratschi-Kaye, linux-mm, linux-kernel,
linux-fsdevel, stable
On Fri, Feb 27, 2026 at 08:06:14PM -0500, Zi Yan wrote:
> During a pagecache folio split, the values in the related xarray should not
> be changed from the original folio at xarray split time until all
> after-split folios are well formed and stored in the xarray. Current use
> of xas_try_split() in __split_unmapped_folio() lets some after-split folios
> show up at wrong indices in the xarray. When these misplaced after-split
> folios are unfrozen, before correct folios are stored via __xa_store(), and
> grabbed by folio_try_get(), they are returned to userspace at wrong file
> indices, causing data corruption.
>
> Fix it by using the original folio in xas_try_split() calls, so that
> folio_try_get() can get the right after-split folios after the original
> folio is unfrozen.
>
> Uniform split, split_huge_page*(), is not affected, since it uses
> xas_split_alloc() and xas_split() only once and stores the original folio
> in the xarray.
>
> The Fixes tag below points to the commit that introduced the code, but folio_split() is
> used in a later commit 7460b470a131f ("mm/truncate: use folio_split() in
> truncate operation").
>
> Fixes: 00527733d0dc8 ("mm/huge_memory: add two new (not yet used) functions for folio_split()")
> Reported-by: Bas van Dijk <bas@dfinity.org>
> Closes: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Cc: <stable@vger.kernel.org>
> ---
> mm/huge_memory.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 56db54fa48181..e4ed0404e8b55 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3647,6 +3647,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> const bool is_anon = folio_test_anon(folio);
> int old_order = folio_order(folio);
> int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
> + struct folio *origin_folio = folio;
NIT: 'origin' folio is a bit ambiguous; maybe old_folio, since it is of order old_order?
> int split_order;
>
> /*
> @@ -3672,7 +3673,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> xas_split(xas, folio, old_order);
Aside, but this 'if (foo) bar(); else { ... }' pattern is horrible; I think
it's justifiable to put both branches in {}... :)
> else {
> xas_set_order(xas, folio->index, split_order);
> - xas_try_split(xas, folio, old_order);
> + /*
> + * Use the original folio so that a parallel
> + * folio_try_get() waits on it until the xarray is
> + * updated with the after-split folios and
> + * the original one is unfrozen.
> + */
> + xas_try_split(xas, origin_folio, old_order);
Hmm, but won't we have already split the original folio by now? So is
origin_folio/old_folio a pointer to what was the original folio, but which
now has a weird tail page setup? :) Like:
|------------------------|
| f |
|------------------------|
^old_folio ^ split_at
|-----------|------------|
| f | f2 |
|-----------|------------|
^old_folio
|-----------|-----|------|
| f | f3 | f4 |
|-----------|-----|------|
^old_folio
etc.
So the xarray would contain:
|-----------|-----|------|
| f | f | f |
|-----------|-----|------|
Wouldn't it after this?
Oh I guess before it'd contain:
|-----------|-----|------|
| f | f4 | f4 |
|-----------|-----|------|
Right?
You're saying you'll later put the correct xas entries in post-split. Where does
that happen?
And why was it a problem when these new folios were unfrozen?
(Since the folio is a pointer to an offset in the vmemmap)
I guess if you update that later in the xas, it's ok, and everything waits on
the right thing so this is probably fine, and the f4 f4 above is probably not
fine...
I'm guessing the original folio is kept frozen during the operation?
Anyway, please help with my confusion; I'm not so familiar with this code :)
> if (xas_error(xas))
> return xas_error(xas);
> }
> --
> 2.51.0
>
Thanks, Lorenzo
* Re: [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
2026-02-28 3:10 ` Lance Yang
@ 2026-03-02 14:28 ` David Hildenbrand (Arm)
2026-03-02 15:11 ` Lance Yang
0 siblings, 1 reply; 6+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-02 14:28 UTC (permalink / raw)
To: Lance Yang, Zi Yan, Andrew Morton
Cc: Lorenzo Stoakes, Hugh Dickins, Baolin Wang, Liam R. Howlett,
Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Matthew Wilcox,
Bas van Dijk, Eero Kelly, Andrew Battat, Adam Bratschi-Kaye,
linux-mm, linux-kernel, linux-fsdevel, stable
On 2/28/26 04:10, Lance Yang wrote:
>
>
> On 2026/2/28 09:06, Zi Yan wrote:
>> During a pagecache folio split, the values in the related xarray
>> should not
>> be changed from the original folio at xarray split time until all
>> after-split folios are well formed and stored in the xarray. Current use
>> of xas_try_split() in __split_unmapped_folio() lets some after-split
>> folios
>> show up at wrong indices in the xarray. When these misplaced after-split
>> folios are unfrozen, before correct folios are stored via
>> __xa_store(), and
>> grabbed by folio_try_get(), they are returned to userspace at wrong file
>> indices, causing data corruption.
>>
>> Fix it by using the original folio in xas_try_split() calls, so that
>> folio_try_get() can get the right after-split folios after the original
>> folio is unfrozen.
>>
>> Uniform split, split_huge_page*(), is not affected, since it uses
>> xas_split_alloc() and xas_split() only once and stores the original folio
>> in the xarray.
>>
>> The Fixes tag below points to the commit that introduced the code, but
>> folio_split() is
>> used in a later commit 7460b470a131f ("mm/truncate: use folio_split() in
>> truncate operation").
>>
>> Fixes: 00527733d0dc8 ("mm/huge_memory: add two new (not yet used)
>> functions for folio_split()")
>> Reported-by: Bas van Dijk <bas@dfinity.org>
>> Closes: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-
>> sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Cc: <stable@vger.kernel.org>
>> ---
>
> Thanks for the fix!
>
> I also made a C reproducer and tested this patch - the corruption
> disappeared.
Should we link that reproducer somehow from the patch description?
--
Cheers,
David
* Re: [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
2026-02-28 1:06 [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get() Zi Yan
2026-02-28 3:10 ` Lance Yang
2026-03-02 13:30 ` Lorenzo Stoakes
@ 2026-03-02 14:40 ` David Hildenbrand (Arm)
2 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-02 14:40 UTC (permalink / raw)
To: Zi Yan, Andrew Morton
Cc: Lorenzo Stoakes, Hugh Dickins, Baolin Wang, Liam R. Howlett,
Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox, Bas van Dijk, Eero Kelly, Andrew Battat,
Adam Bratschi-Kaye, linux-mm, linux-kernel, linux-fsdevel,
stable
On 2/28/26 02:06, Zi Yan wrote:
> During a pagecache folio split, the values in the related xarray should not
> be changed from the original folio at xarray split time until all
> after-split folios are well formed and stored in the xarray. Current use
> of xas_try_split() in __split_unmapped_folio() lets some after-split folios
> show up at wrong indices in the xarray. When these misplaced after-split
> folios are unfrozen, before correct folios are stored via __xa_store(), and
> grabbed by folio_try_get(), they are returned to userspace at wrong file
> indices, causing data corruption.
>
> Fix it by using the original folio in xas_try_split() calls, so that
> folio_try_get() can get the right after-split folios after the original
> folio is unfrozen.
>
> Uniform split, split_huge_page*(), is not affected, since it uses
> xas_split_alloc() and xas_split() only once and stores the original folio
> in the xarray.
Could we make both code paths similar and store the original folio in
both cases?
IIUC, the __xa_store() is performed unconditionally after
__split_unmapped_folio().
I'm wondering, though, about the "new_folio->index >= end" case.
Wouldn't we leave some stale entries in the xarray? But that handling
has always been confusing to me :)
--
Cheers,
David
* Re: [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get()
2026-03-02 14:28 ` David Hildenbrand (Arm)
@ 2026-03-02 15:11 ` Lance Yang
0 siblings, 0 replies; 6+ messages in thread
From: Lance Yang @ 2026-03-02 15:11 UTC (permalink / raw)
To: David Hildenbrand (Arm), Zi Yan, Andrew Morton
Cc: Lorenzo Stoakes, Hugh Dickins, Baolin Wang, Liam R. Howlett,
Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Matthew Wilcox,
Bas van Dijk, Eero Kelly, Andrew Battat, Adam Bratschi-Kaye,
linux-mm, linux-kernel, linux-fsdevel, stable
On 2026/3/2 22:28, David Hildenbrand (Arm) wrote:
> On 2/28/26 04:10, Lance Yang wrote:
>>
>>
>> On 2026/2/28 09:06, Zi Yan wrote:
>>> During a pagecache folio split, the values in the related xarray
>>> should not
>>> be changed from the original folio at xarray split time until all
>>> after-split folios are well formed and stored in the xarray. Current use
>>> of xas_try_split() in __split_unmapped_folio() lets some after-split
>>> folios
>>> show up at wrong indices in the xarray. When these misplaced after-split
>>> folios are unfrozen, before correct folios are stored via
>>> __xa_store(), and
>>> grabbed by folio_try_get(), they are returned to userspace at wrong file
>>> indices, causing data corruption.
>>>
>>> Fix it by using the original folio in xas_try_split() calls, so that
>>> folio_try_get() can get the right after-split folios after the original
>>> folio is unfrozen.
>>>
>>> Uniform split, split_huge_page*(), is not affected, since it uses
>>> xas_split_alloc() and xas_split() only once and stores the original folio
>>> in the xarray.
>>>
>>> The Fixes tag below points to the commit that introduced the code, but
>>> folio_split() is
>>> used in a later commit 7460b470a131f ("mm/truncate: use folio_split() in
>>> truncate operation").
>>>
>>> Fixes: 00527733d0dc8 ("mm/huge_memory: add two new (not yet used)
>>> functions for folio_split()")
>>> Reported-by: Bas van Dijk <bas@dfinity.org>
>>> Closes: https://lore.kernel.org/all/CAKNNEtw5_kZomhkugedKMPOG-
>>> sxs5Q5OLumWJdiWXv+C9Yct0w@mail.gmail.com/
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> Cc: <stable@vger.kernel.org>
>>> ---
>>
>> Thanks for the fix!
>>
>> I also made a C reproducer and tested this patch - the corruption
>> disappeared.
>
> Should we link that reproducer somehow from the patch description?
Yes, the original reproducer provided by Bas is available here[1].
Regarding the C reproducer, Zi plans to add it to selftests in a
follow-up patch (as we discussed off-list).
[1] https://github.com/dfinity/thp-madv-remove-test
Cheers,
Lance
end of thread, other threads:[~2026-03-02 15:11 UTC | newest]
Thread overview: 6+ messages
-- links below jump to the message on this page --
2026-02-28 1:06 [PATCH] mm/huge_memory: fix a folio_split() race condition with folio_try_get() Zi Yan
2026-02-28 3:10 ` Lance Yang
2026-03-02 14:28 ` David Hildenbrand (Arm)
2026-03-02 15:11 ` Lance Yang
2026-03-02 13:30 ` Lorenzo Stoakes
2026-03-02 14:40 ` David Hildenbrand (Arm)