From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Shakeel Butt <shakeel.butt@linux.dev>,
Jann Horn <jannh@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Boqun Feng <boqun.feng@gmail.com>,
Waiman Long <longman@redhat.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Clark Williams <clrkwllms@kernel.org>,
Steven Rostedt <rostedt@goodmis.org>
Subject: [PATCH v4 08/10] mm/vma: improve and document __is_vma_write_locked()
Date: Fri, 23 Jan 2026 20:12:18 +0000
Message-ID: <ef6c415c2d2c03f529dca124ccaed66bc2f60edc.1769198904.git.lorenzo.stoakes@oracle.com>
In-Reply-To: <cover.1769198904.git.lorenzo.stoakes@oracle.com>
We don't actually need to return the mm sequence number via an output
parameter; instead, we can separate that out into a new function,
__vma_raw_mm_seqnum(), and have any callers which need the sequence number
invoke that directly.
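For example, the caller pattern changes roughly as follows (this simply
mirrors the vma_start_write() hunk below):

	/* Before: the check also passes the sequence number back out. */
	unsigned int mm_lock_seq;

	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;
	__vma_start_write(vma, mm_lock_seq, TASK_UNINTERRUPTIBLE);

	/* After: the check and the raw read are separate helpers. */
	if (__is_vma_write_locked(vma))
		return;
	__vma_start_write(vma, __vma_raw_mm_seqnum(vma), TASK_UNINTERRUPTIBLE);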
Accessing the raw sequence number requires that we hold the exclusive mmap
lock, so that we know we cannot race vma_end_write_all(). Move the assert
into __vma_raw_mm_seqnum() to make this requirement clear.
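In other words, the raw value is only meaningful in a context along these
lines (an illustrative sketch only, not code from this series; mm and vma
are assumed to be in scope):

	unsigned int seq;

	mmap_write_lock(mm);
	/*
	 * The exclusive lock excludes vma_end_write_all(), so
	 * mm->mm_lock_seq.sequence is stable for the duration.
	 */
	seq = __vma_raw_mm_seqnum(vma);
	/* ... modify VMAs belonging to mm ... */
	mmap_write_unlock(mm);	/* the sequence may advance again once dropped */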
While we're here, also convert all of the VM_BUG_ON_VMA() invocations to
VM_WARN_ON_ONCE_VMA(), in line with the convention that we do not oops the
kernel when we can avoid it.
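The practical difference, using the vma_assert_write_locked() hunk below as
an example, is that an unexpected state now warns once rather than oopsing:

	/* Before: an inconsistent state BUGs (oopses) the kernel. */
	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);

	/* After: warn once and carry on. */
	VM_WARN_ON_ONCE_VMA(!__is_vma_write_locked(vma), vma);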
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
include/linux/mmap_lock.h | 44 ++++++++++++++++++++++-----------------
1 file changed, 25 insertions(+), 19 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 678f90080fa6..23bde4bd5a85 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -258,17 +258,30 @@ static inline void vma_end_read(struct vm_area_struct *vma)
vma_refcount_put(vma);
}
-/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static inline bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
+static inline unsigned int __vma_raw_mm_seqnum(struct vm_area_struct *vma)
{
+ const struct mm_struct *mm = vma->vm_mm;
+
+ /* We must hold an exclusive write lock for this access to be valid. */
mmap_assert_write_locked(vma->vm_mm);
+ return mm->mm_lock_seq.sequence;
+}
+/*
+ * Determine whether a VMA is write-locked. Must be invoked ONLY if the mmap
+ * write lock is held.
+ *
+ * Returns true if write-locked, otherwise false.
+ *
+ * Note that mm_lock_seq is updated only if the VMA is NOT write-locked.
+ */
+static inline bool __is_vma_write_locked(struct vm_area_struct *vma)
+{
/*
* current task is holding mmap_write_lock, both vma->vm_lock_seq and
* mm->mm_lock_seq can't be concurrently modified.
*/
- *mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
- return (vma->vm_lock_seq == *mm_lock_seq);
+ return vma->vm_lock_seq == __vma_raw_mm_seqnum(vma);
}
int __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq,
@@ -281,12 +294,10 @@ int __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq,
*/
static inline void vma_start_write(struct vm_area_struct *vma)
{
- unsigned int mm_lock_seq;
-
- if (__is_vma_write_locked(vma, &mm_lock_seq))
+ if (__is_vma_write_locked(vma))
return;
- __vma_start_write(vma, mm_lock_seq, TASK_UNINTERRUPTIBLE);
+ __vma_start_write(vma, __vma_raw_mm_seqnum(vma), TASK_UNINTERRUPTIBLE);
}
/**
@@ -305,30 +316,25 @@ static inline void vma_start_write(struct vm_area_struct *vma)
static inline __must_check
int vma_start_write_killable(struct vm_area_struct *vma)
{
- unsigned int mm_lock_seq;
-
- if (__is_vma_write_locked(vma, &mm_lock_seq))
+ if (__is_vma_write_locked(vma))
return 0;
- return __vma_start_write(vma, mm_lock_seq, TASK_KILLABLE);
+
+ return __vma_start_write(vma, __vma_raw_mm_seqnum(vma), TASK_KILLABLE);
}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
- unsigned int mm_lock_seq;
-
- VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
+ VM_WARN_ON_ONCE_VMA(!__is_vma_write_locked(vma), vma);
}
static inline void vma_assert_locked(struct vm_area_struct *vma)
{
- unsigned int mm_lock_seq;
-
/*
* See the comment describing the vm_area_struct->vm_refcnt field for
* details of possible refcnt values.
*/
- VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
- !__is_vma_write_locked(vma, &mm_lock_seq), vma);
+ VM_WARN_ON_ONCE_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
+ !__is_vma_write_locked(vma), vma);
}
static inline bool vma_is_attached(struct vm_area_struct *vma)
--
2.52.0