From: Barry Song <21cnbao@gmail.com>
To: lsf-pc@lists.linux-foundation.org
Cc: Linux-MM <linux-mm@kvack.org>,
	Suren Baghdasaryan <surenb@google.com>,
	 Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Matthew Wilcox <willy@infradead.org>,
	 David Hildenbrand <david@kernel.org>,
	Oven <liyangouwen1@oppo.com>
Subject: [LSF/MM/BPF TOPIC] Do not hold mmap_lock following folio_lock failure in page faults
Date: Sat, 21 Feb 2026 06:12:52 +0800
Message-ID: <CAGsJ_4wFMm4WBM8pi-+wmgsUAu-+W2gC0wNR_UrdecY3-+5uhQ@mail.gmail.com>

Currently, page faults use per-VMA locks whenever possible.
However, when a page fault encounters an I/O read (e.g., in
filemap_fault() or do_swap_page()) and fails to acquire the
folio_lock, the handler releases the per-VMA lock and requests
a retry. On retry, the fault handler unconditionally acquires
the mmap_lock in read mode.
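
For context, a heavily simplified sketch of the current flow
(illustrative only; the real logic is spread across
filemap_fault()/do_swap_page() and the arch fault handlers, and
error/signal handling is omitted):

        /* fault path, running under the per-VMA lock */
        if (!folio_trylock(folio)) {
                if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
                        /*
                         * Cannot wait for the read I/O here: drop the
                         * per-VMA lock and ask the caller to retry.
                         */
                        vma_end_read(vmf->vma);
                        return VM_FAULT_RETRY;
                }
                ...
        }

        /* arch fault handler, after the per-VMA-lock attempt asked for retry */
        mmap_read_lock(mm);     /* unconditional fallback to mmap_lock */
        fault = handle_mm_fault(vma, addr, flags, regs);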

This can happen frequently and sometimes causes UI jank due to
mmap_lock contention. One proposal is to use the per-VMA lock
in the page fault retry path instead of taking the mmap_lock
[1]. Oven reported that this can significantly improve
performance by reducing mmap_lock wait time [2].
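
The rough shape of that direction, shown here only as an
illustration (the actual series in [1] also has to deal with
FAULT_FLAG_TRIED, signals, VMA re-validation and so on):

        /* arch fault handler retry path */
        vma = lock_vma_under_rcu(mm, addr);
        if (vma) {
                /* retry under the per-VMA lock instead of mmap_lock */
                fault = handle_mm_fault(vma, addr,
                                        flags | FAULT_FLAG_VMA_LOCK, regs);
        } else {
                /* fall back to mmap_lock only when the VMA lookup fails */
                mmap_read_lock(mm);
                ...
        }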

Matthew appears to suggest removing the page fault retry
entirely and instead waiting for the I/O while holding the
per-VMA lock, noting that under severe memory pressure with
thousands of threads, folios still under read I/O may be
reclaimed by the LRU and reallocated, causing the page fault
to be retried over and over [3]. However, holding the per-VMA
lock across the I/O wait raises its own concern: writers on
that VMA could see long delays.
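
Concretely, the concern is about something along these lines
(not current mainline behaviour; purely to illustrate the
trade-off):

        if (!folio_trylock(folio)) {
                /*
                 * Sleep for the read I/O with the per-VMA lock still
                 * held. A writer on this VMA (mmap/munmap/mprotect
                 * going through vma_start_write()) now has to wait
                 * for the whole I/O to complete.
                 */
                folio_lock(folio);
        }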

On the other hand, retrying with the per-VMA lock may reduce
mmap_lock contention, shortening the retry window and giving
the LRU fewer opportunities to reclaim folios still waiting
for I/O. I hope to have constructed such cases and collected
additional data by the time of the LSF/MM/BPF discussion.

Another potential optimization is to avoid the page fault
retry without risking long I/O waits that block writers. This
applies when folio_lock fails but the folio is already
up-to-date: the concurrent fault holding the lock is only
completing the PTE mapping, so the current fault can simply
wait for it to finish rather than retrying under the mmap_lock.
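
A hypothetical sketch of that fast path (not in mainline;
re-validation of the folio after the wait is omitted):

        if (!folio_trylock(folio)) {
                if (folio_test_uptodate(folio)) {
                        /*
                         * The lock holder is only finishing the PTE
                         * mapping; no I/O is involved, so the wait
                         * should be short.
                         */
                        folio_wait_locked(folio);
                        /* re-check the folio and proceed to map it */
                } else if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
                        vma_end_read(vmf->vma);
                        return VM_FAULT_RETRY;
                }
        }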

I would appreciate feedback at LSF/MM/BPF on approaches to
eliminating mmap_lock acquisition in page faults triggered by
folio_lock failure, covering both the I/O-wait and the
PTE-mapping-wait cases, with the goal of mainline inclusion.

[1] https://lore.kernel.org/linux-mm/20251127011438.6918-1-21cnbao@gmail.com/
[2] https://lore.kernel.org/linux-mm/cccf352a-1a68-430d-83fa-a14bb5e37464@oppo.com/
[3] https://lore.kernel.org/linux-mm/aSip2mWX13sqPW_l@casper.infradead.org/

Thanks
Barry

