From: Barry Song <21cnbao@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, loongarch@lists.linux.dev,
linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC PATCH 0/2] mm: continue using per-VMA lock when retrying page faults after I/O
Date: Fri, 28 Nov 2025 04:29:16 +0800 [thread overview]
Message-ID: <CAGsJ_4zWGYiu1wv=D7bV5zd0h8TEHTCARhyu_9_gL36PiNvbHQ@mail.gmail.com> (raw)
In-Reply-To: <aSip2mWX13sqPW_l@casper.infradead.org>
On Fri, Nov 28, 2025 at 3:43 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> [dropping individuals, leaving only mailing lists. please don't send
> this kind of thing to so many people in future]
>
> On Thu, Nov 27, 2025 at 12:22:16PM +0800, Barry Song wrote:
> > On Thu, Nov 27, 2025 at 12:09 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Thu, Nov 27, 2025 at 09:14:36AM +0800, Barry Song wrote:
> > > > There is no need to always fall back to mmap_lock if the per-VMA
> > > > lock was released only to wait for pagecache or swapcache to
> > > > become ready.
> > >
> > > Something I've been wondering about is removing all the "drop the MM
> > > locks while we wait for I/O" gunk. It's a nice amount of code removed:
> >
> > I think the point is that page fault handlers should avoid holding the VMA
> > lock or mmap_lock for too long while waiting for I/O. Otherwise, those
> > writers and readers will be stuck for a while.
>
> There's a usecase some of us have been discussing off-list for a few
> weeks that our current strategy pessimises. It's a process with
> thousands (maybe tens of thousands) of threads. It has much more mapped
> files than it has memory that cgroups will allow it to use. So on a
> page fault, we drop the vma lock, allocate a page of ram, kick off the
> read, sleep waiting for the folio to come uptodate, once it is return,
> expecting the page to still be there when we reenter filemap_fault.
> But it's under so much memory pressure that it's already been reclaimed
> by the time we get back to it. So all the threads just batter the
> storage re-reading data.
Is this entirely caused by re-entering the page fault, though? Under extreme
memory pressure, even if we do map the pages, can't they still be reclaimed
again very quickly?
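If I'm reading the current flow the same way you are, it is roughly the
shape below (very rough sketch, NOT the real filemap_fault; the lock
juggling, readahead and refcounting are all elided, and the names are
approximate):

#include <linux/mm.h>
#include <linux/pagemap.h>

static vm_fault_t fault_flow_sketch(struct vm_fault *vmf)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	struct folio *folio;

	folio = filemap_get_folio(mapping, vmf->pgoff);
	if (IS_ERR(folio)) {
		/*
		 * First pass: nothing usable in the pagecache.  Today we
		 * release the VMA (or mmap) lock, kick off the read, sleep
		 * for the I/O, and ask the caller to retry the whole fault
		 * (all of that is elided here).
		 */
		return VM_FAULT_RETRY;
	}

	/*
	 * Second pass: under severe memory pressure the folio we slept on
	 * may already have been reclaimed before we re-enter, so the
	 * lookup above fails again and the same I/O gets reissued.
	 */
	folio_lock(folio);
	vmf->page = folio_file_page(folio, vmf->pgoff);
	return VM_FAULT_LOCKED;
}

If so, the re-read looks more like reclaim racing with the retry window
than a problem with retrying per se.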
>
> If we don't drop the vma lock, we can insert the pages in the page table
> and return, maybe getting some work done before this thread is
> descheduled.
If we need to protect the page from being reclaimed too early, the fix
belongs in LRU management, not in page fault handling.
Also, I gave an example where we can avoid dropping the VMA lock when the
folio is already uptodate; in that case what remains is likely just waiting
for the PTE mapping to complete.
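To make that concrete, something along these lines (illustrative only; the
real refcount, readahead and mlock handling are more involved):

static vm_fault_t uptodate_fast_path_sketch(struct vm_fault *vmf)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	struct folio *folio;

	folio = filemap_get_folio(mapping, vmf->pgoff);
	if (IS_ERR(folio))
		return VM_FAULT_RETRY;	/* needs I/O: the contended case */

	if (folio_test_uptodate(folio) && folio_trylock(folio)) {
		/* No I/O to wait for: map it while still holding the per-VMA lock. */
		vmf->page = folio_file_page(folio, vmf->pgoff);
		return VM_FAULT_LOCKED;
	}

	folio_put(folio);
	return VM_FAULT_RETRY;
}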
>
> This use case also manages to get utterly hung-up trying to do reclaim
> today with the mmap_lock held. So it manifests somewhat similarly to
> your problem (everybody ends up blocked on mmap_lock) but it has a
> rather different root cause.
>
> > I agree there’s room for improvement, but merely removing the "drop the MM
> > locks while waiting for I/O" code is unlikely to improve performance.
>
> I'm not sure it'd hurt performance. The "drop mmap locks for I/O" code
> was written before the VMA locking code was written. I don't know that
> it's actually helping these days.
I am concerned that other write paths may still need to modify the VMA, for
example during splitting. Tail latency has long been a significant issue for
Android users, and we have observed it even with folio_lock, which has much
finer granularity than the VMA lock.
>
> > The change would be much more complex, so I’d prefer to land the current
> > patchset first. At least this way, we avoid falling back to mmap_lock and
> > causing contention or priority inversion, with minimal changes.
>
> Uh, this is an RFC patchset. I'm giving you my comment, which is that I
> don't think this is the right direction to go in. Any talk of "landing"
> these patches is extremely premature.
While I agree that there are other approaches worth exploring, I remain
entirely unconvinced that this patchset is the wrong direction. With the
current retry logic, it substantially reduces mmap_lock acquisitions and is
clear low-hanging fruit.
Also, I am not referring to landing the RFC itself, but to a subsequent
non-RFC patchset that retries under the per-VMA lock.
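Just so we are talking about the same thing, that follow-up would be
roughly the shape below (hand-wavy sketch of an arch fault handler, not the
actual patch; retry bookkeeping, extra fault flags and fatal-signal
handling are all elided):

static vm_fault_t vma_locked_fault_sketch(struct mm_struct *mm,
					  unsigned long addr,
					  unsigned int flags,
					  struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	vm_fault_t fault;

retry_vma:
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		return VM_FAULT_RETRY;	/* caller falls back to the mmap_lock path */

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

	if (fault & VM_FAULT_RETRY) {
		/*
		 * The per-VMA lock was only released to wait for pagecache
		 * or swapcache I/O; look the VMA up again and retry under
		 * the per-VMA lock instead of taking mmap_lock.
		 */
		goto retry_vma;
	}

	return fault;
}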
Thanks
Barry
Thread overview: 15+ messages
2025-11-27 1:14 Barry Song
2025-11-27 1:14 ` [RFC PATCH 1/2] mm/filemap: Retry fault by VMA lock if the lock was released for I/O Barry Song
2025-11-27 10:52 ` Pedro Falcato
2025-11-27 11:39 ` Barry Song
2025-11-27 16:26 ` Pedro Falcato
2025-11-27 1:14 ` [RFC PATCH 2/2] mm/swapin: Retry swapin " Barry Song
2025-11-27 4:09 ` [RFC PATCH 0/2] mm: continue using per-VMA lock when retrying page faults after I/O Matthew Wilcox
2025-11-27 4:22 ` Barry Song
2025-11-27 4:42 ` Barry Song
2025-11-27 19:43 ` Matthew Wilcox
2025-11-27 20:29 ` Barry Song [this message]
2025-11-27 21:52 ` Barry Song
2025-11-30 0:28 ` Suren Baghdasaryan
2025-11-30 2:56 ` Barry Song
2025-11-30 5:38 ` Shakeel Butt