linux-mm.kvack.org archive mirror
* [PATCH] mm: update stale locking comment in do_anonymous_page()
@ 2026-03-31 14:29 Aditya Sharma
  2026-04-01 10:52 ` David Hildenbrand (Arm)
  2026-04-05 17:18 ` [PATCH v2] mm/memory: update stale locking comments for fault handlers Aditya Sharma
  0 siblings, 2 replies; 5+ messages in thread
From: Aditya Sharma @ 2026-03-31 14:29 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, david, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	linux-kernel, Aditya Sharma

The comment above do_anonymous_page() dates back to 2005 and describes
the pre-per-VMA-lock world where mmap_lock was always held on entry.
Since the introduction of CONFIG_PER_VMA_LOCK in 6.4, the fault
handler also has a fast path that enters holding only a per-VMA read
lock, with mmap_lock not held at all.

Update the comment to describe both entry contexts accurately.

Signed-off-by: Aditya Sharma <adi.sharma@zohomail.in>
---
 mm/memory.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c65e82c86..cc8dbbaea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5210,9 +5210,25 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 }
 
 /*
- * We enter with non-exclusive mmap_lock (to exclude vma changes,
- * but allow concurrent faults), and pte mapped but not yet locked.
- * We return with mmap_lock still held, but pte unmapped and unlocked.
+ * We enter in one of two locking contexts:
+ *
+ * 1) VMA lock path (FAULT_FLAG_VMA_LOCK set):
+ *    Entered holding a read lock on the faulting VMA (vma_start_read),
+ *    but NOT holding mmap_lock. This is the fast path introduced with
+ *    per-VMA locking (CONFIG_PER_VMA_LOCK). If this function cannot
+ *    complete the fault (e.g. needs to wait on I/O or encounters a
+ *    condition requiring the mm lock), it must return VM_FAULT_RETRY
+ *    and the caller will fall back to the mmap_lock path below.
+ *
+ * 2) mmap_lock path (FAULT_FLAG_VMA_LOCK not set):
+ *    Entered holding a non-exclusive (read) lock on mmap_lock, which
+ *    excludes VMA tree modifications but allows concurrent faults on
+ *    other VMAs. No per-VMA lock is held.
+ *
+ * In both cases, on entry the pte is mapped but not yet locked.
+ * On return, the pte is unmapped and unlocked. Whichever lock was
+ * held on entry (mmap_lock or the VMA read lock) is still held;
+ * releasing it is the caller's responsibility.
  */
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 {
-- 
2.34.1





Thread overview: 5+ messages
2026-03-31 14:29 [PATCH] mm: update stale locking comment in do_anonymous_page() Aditya Sharma
2026-04-01 10:52 ` David Hildenbrand (Arm)
2026-04-01 16:42   ` Aditya Sharma
2026-04-01 18:47     ` David Hildenbrand (Arm)
2026-04-05 17:18 ` [PATCH v2] mm/memory: update stale locking comments for fault handlers Aditya Sharma
