From: Kiryl Shutsemau <kirill@shutemov.name>
To: Matthew Wilcox <willy@infradead.org>
Cc: Chris J Arges <carges@cloudflare.com>,
akpm@linux-foundation.org, william.kucharski@oracle.com,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Subject: Re: [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups
Date: Fri, 6 Mar 2026 18:36:30 +0000
Message-ID: <aasddOvIfcMYp3sk@thinkstation>
In-Reply-To: <aasAo8qRCV9XSuax@casper.infradead.org>

On Fri, Mar 06, 2026 at 04:28:19PM +0000, Matthew Wilcox wrote:
> On Fri, Mar 06, 2026 at 02:13:26PM +0000, Kiryl Shutsemau wrote:
> > On Thu, Mar 05, 2026 at 07:24:38PM +0000, Matthew Wilcox wrote:
> > > folio_split() needs to be sure that it's the only one holding a reference
> > > to the folio. To that end, it calculates the expected refcount of the
> > > folio, and freezes it (sets the refcount to 0 if the refcount is the
> > > expected value). Once filemap_get_entry() has incremented the refcount,
> > > freezing will fail.
> > >
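
(For reference, the freeze step looks roughly like this; paraphrased
from the split path rather than quoted from it, and "extra_pins"
stands for whatever references the split has already accounted for:)

	/* Succeeds only if nobody else holds a transient reference. */
	if (!folio_ref_freeze(folio, 1 + extra_pins))
		return -EAGAIN;	/* raced with a speculative lookup */
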
> > > But of course, we can race. filemap_get_entry() can load a folio first,
> > > the entire folio_split can happen, then it calls folio_try_get() and
> > > succeeds, but it no longer covers the index we were looking for. That's
> > > what the xas_reload() is trying to prevent -- if the index is for a
> > > folio which has changed, then the xas_reload() should come back with a
> > > different folio and we goto repeat.
> > >
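
(And the lookup side is roughly the following, paraphrased from
filemap_get_entry(); details vary between kernel versions:)

	repeat:
		xas_reset(&xas);
		folio = xas_load(&xas);
		if (xas_retry(&xas, folio))
			goto repeat;
		if (!folio || xa_is_value(folio))
			goto out;
		if (!folio_try_get(folio))
			goto repeat;
		/*
		 * If the folio was split or replaced under us, the entry
		 * at this index no longer matches; drop the speculative
		 * reference and retry.
		 */
		if (unlikely(folio != xas_reload(&xas))) {
			folio_put(folio);
			goto repeat;
		}
	out:
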
> > > So how did we get through this with a reference to the wrong folio?
> >
> > What would xas_reload() return if we raced with split and index pointed
> > to a tail page before the split?
> >
> > Wouldn't it return the folio that was the head, so the check would pass?
>
> It's not supposed to return the head in this case. But check the code:
>
> 	if (!node)
> 		return xa_head(xas->xa);
> 	if (IS_ENABLED(CONFIG_XARRAY_MULTI)) {
> 		offset = (xas->xa_index >> node->shift) & XA_CHUNK_MASK;
> 		entry = xa_entry(xas->xa, node, offset);
> 		if (!xa_is_sibling(entry))
> 			return entry;
> 		offset = xa_to_sibling(entry);
> 	}
> 	return xa_entry(xas->xa, node, offset);
>
> (obviously CONFIG_XARRAY_MULTI is enabled)
>
> !node is almost certainly not true -- that's only the case if there's a
> single entry at offset 0, and we're talking about a situation where we
> have a large folio.
>
> I think we have two cases to consider; one where we've allocated a new
> node because we split an entry from order >=6 to order <6, and one where
> we just split an entry that stays at the same level in the tree.
>
> So let's say we're looking up an entry at index 1499 and first we got
> a folio that is at index 1024 order 9. So first, let's look at what
> happens if it's split into two order-8 folios. We get a reference on the
> first one, then we calculate offset as ((1499 >> 6) & 63) which is 23.
> Unless folio splitting is buggy, the original folio is in slot 16 and
> has sibling entries in 17,18,19 and the new folio is in slot 20 and has
> sibling entries in 21,22,23. So we should find a sibling entry in slot
> 23 that points to 20, then return the new folio in slot 20 which would
> mismatch the old folio that we got a refcount on.
>
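
(Spelling out the arithmetic, with XA_CHUNK_SHIFT == 6, so each slot
in this node covers 64 pages:)

	offset = (1499 >> 6) & 63 == 23
	order-9 folio at index 1024: pages 1024..1535, slots 16..23
	after the split into two order-8 folios:
		old folio: slot 16, siblings 17..19 (pages 1024..1279)
		new folio: slot 20, siblings 21..23 (pages 1280..1535)
	slot 23 is therefore a sibling entry redirecting to slot 20.
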
> Then let's consider what happens if we split the index at 1499 into an
> order-0 folio. folio_split() allocated a new node and put it at offset 23
> (and populated the new node, but we don't need to be concerned with that
> here). This time the lookup finds the new node and actually returns the
> node instead of a folio. But that's OK, because we're just checking
> for pointer equality, and there's no way this node compares equal to
> any folio we found (not least because it has a low bit set to indicate
> this is a node and not a pointer). So again the pointer equality check
> fails and we drop the speculative refcount we obtained and retry the loop.

Thanks for the analysis, it is very helpful. I am not familiar with
the xarray internals.

> Have I missed something? Maybe a memory ordering problem?

I also considered a reclaim/refault scenario, but I don't see anything
there. Maybe it is memory ordering. Who knows. I guess we need more
breadcrumbs.

The proposed change doesn't fix anything; it only hides the problem.
It would be better to downgrade the VM_BUG_ON_FOLIO() to a warning
plus a retry.
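
Something like this, as a rough sketch (untested, and assuming the
assertion in question is the folio_contains() check in the lookup
path):

	if (unlikely(!folio_contains(folio, index))) {
		/* Leave a breadcrumb instead of killing the machine. */
		VM_WARN_ON_ONCE_FOLIO(true, folio);
		folio_put(folio);
		goto repeat;
	}
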
--
Kiryl Shutsemau / Kirill A. Shutemov

Thread overview: 11+ messages
2026-03-05 18:34 [PATCH RFC 0/1] fix for large folio split race in page cache Chris J Arges
2026-03-05 18:34 ` [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups Chris J Arges
2026-03-05 19:24 ` Matthew Wilcox
2026-03-06 14:13 ` Kiryl Shutsemau
2026-03-06 16:28 ` Matthew Wilcox
2026-03-06 18:36 ` Kiryl Shutsemau [this message]
2026-03-06 18:41 ` Matthew Wilcox
2026-03-06 20:20 ` Kiryl Shutsemau
2026-03-06 20:11 ` Chris Arges
2026-03-06 20:21 ` Kiryl Shutsemau
2026-03-06 20:58 ` Chris Arges