From: Matthew Wilcox <willy@infradead.org>
To: Chris J Arges <carges@cloudflare.com>
Cc: akpm@linux-foundation.org, william.kucharski@oracle.com,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@cloudflare.com
Subject: Re: [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups
Date: Thu, 5 Mar 2026 19:24:38 +0000
Message-ID: <aanYdvdJVG6f5WL2@casper.infradead.org>
In-Reply-To: <20260305183438.1062312-2-carges@cloudflare.com>
On Thu, Mar 05, 2026 at 12:34:33PM -0600, Chris J Arges wrote:
> We have been hitting VM_BUG_ON_FOLIO(!folio_contains(folio, index)) in
> production environments. These machines are using XFS with large folio
> support enabled and are under high memory pressure.
>
> From reading the code it seems plausible that folio splits due to memory
> reclaim are racing with filemap_fault() serving mmap page faults.
>
> The existing code checks for truncation (folio->mapping != mapping) and
> retries, but there does not appear to be equivalent handling for the
> split case. The result is:
>
> kernel BUG at mm/filemap.c:3519!
> VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio)
This didn't occur to me as a possibility because filemap_get_entry()
is _supposed_ to take care of it. But if this patch fixes it, then
we need to understand why it works.
folio_split() needs to be sure that it's the only one holding a reference
to the folio. To that end, it calculates the expected refcount of the
folio, and freezes it (sets the refcount to 0 if the refcount is the
expected value). Once filemap_get_entry() has incremented the refcount,
freezing will fail.
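For reference, the freeze-vs-get exclusion can be sketched as a minimal
userspace model (every name here is made up for illustration; the real
kernel helpers are folio_ref_freeze() and folio_try_get(), whose exact
semantics differ in detail):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical stand-in for a folio's reference count. */
struct fake_folio {
	atomic_int refcount;
};

/* Models the cmpxchg in folio_ref_freeze(): succeed, and set the
 * count to 0, only if the count is exactly the expected value. */
static bool fake_ref_freeze(struct fake_folio *folio, int expected)
{
	return atomic_compare_exchange_strong(&folio->refcount,
					      &expected, 0);
}

/* Models folio_try_get(): only succeeds while the count is non-zero;
 * a frozen (zero) count means the split owns the folio. */
static bool fake_try_get(struct fake_folio *folio)
{
	int old = atomic_load(&folio->refcount);

	do {
		if (old == 0)
			return false;
	} while (!atomic_compare_exchange_weak(&folio->refcount,
					       &old, old + 1));
	return true;
}
```

The point of the sketch: once a lookup has bumped the count, the
split's freeze sees an unexpected value and bails out.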
But of course, we can race. filemap_get_entry() can load a folio first,
the entire folio_split() can happen, and then folio_try_get() succeeds
on a folio which no longer covers the index we were looking for. That's
what the xas_reload() is trying to prevent -- if the entry for the index
has changed, then the xas_reload() should come back with a different
folio and we goto repeat.
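The load / try_get / reload dance above can be modelled in userspace to
show why the reload check normally saves us. In this sketch "slot"
stands in for the XArray entry, re-reading it stands in for
xas_reload(), and a hook models a split landing in the race window;
every identifier is illustrative, not the kernel's:

```c
#include <stddef.h>
#include <assert.h>

struct fake_folio { int unused; };

static struct fake_folio *slot;		/* stands in for the XArray entry */
static struct fake_folio *replacement;	/* folio the "split" installs */

/* Demo "split": replace the slot's folio exactly once. */
static void fake_split_once(void)
{
	if (replacement) {
		slot = replacement;
		replacement = NULL;
	}
}

static void (*race_hook)(void);	/* runs in the race window, if set */

static struct fake_folio *fake_get_entry(void)
{
	struct fake_folio *folio;

repeat:
	folio = slot;			/* xas_load() */
	if (!folio)
		return NULL;
	if (race_hook)			/* a split can land right here */
		race_hook();
	if (folio != slot)		/* xas_reload(): entry changed */
		goto repeat;		/* real code also drops the ref */
	return folio;
}
```

If the swap happens in the window, the re-check mismatches and the
lookup retries, coming back with the folio that now owns the index.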
So how did we get through this with a reference to the wrong folio?
Thread overview:
2026-03-05 18:34 [PATCH RFC 0/1] fix for large folio split race in page cache Chris J Arges
2026-03-05 18:34 ` [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups Chris J Arges
2026-03-05 19:24 ` Matthew Wilcox [this message]