* [PATCH RESEND 0/3] add and use vma_assert_stabilised() helper
@ 2026-01-16 13:36 Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 13:36 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Liam R . Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shakeel Butt,
Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
Sometimes we wish to assert that a VMA is stable, that is - the VMA cannot
be changed underneath us. This will be the case if EITHER the VMA lock or
the mmap lock is held.
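For example (purely illustrative, assuming mm and addr are in scope - either
of these gives a caller a stable view of a VMA):

	struct vm_area_struct *vma;

	/* Stable via the mmap lock (read or write)... */
	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	/* ...the VMA cannot be changed underneath us here... */
	mmap_read_unlock(mm);

	/* ...or stable via the VMA read lock alone. */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		/* ...nor here. */
		vma_end_read(vma);
	}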
We already open-code this in two places - anon_vma_name() in mm/madvise.c
and vma_flag_set_atomic() in include/linux/mm.h.
This series adds a number of prerequisite predicates and adds
vma_assert_stabilised(), which can be used in these callsites instead.
However the asserts implemented there are subtly wrong - if CONFIG_PER_VMA_LOCK
is not enabled and the mmap lock is not held, then we don't actually
assert anything.
Since this is an assert that only fires when CONFIG_DEBUG_VM is set and the
test bots will largely be running with CONFIG_PER_VMA_LOCK set, this is
likely in practice not a real-world issue.
In any case, this series additionally fixes this issue.
As part of this change we also reduce duplication of code in VMA lock
asserts.
This change also lays the foundation for future series to add this assert
in further appropriate places to account for us now living in a world where
a VMA may be stabilised by either lock.
REVIEWER NOTE: The prior-to-resend version of the series was sent with
insufficient caffeination + my having inevitably got seasonally unwell so
isn't worth looking at :) Treat this one as the only version of the series!
Lorenzo Stoakes (3):
locking: add rwsem_is_write_locked(), update non-lockdep asserts
mm/vma: add vma_is_*_locked() helpers
mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers
include/linux/mm.h | 4 +--
include/linux/mmap_lock.h | 65 +++++++++++++++++++++++++++++++++------
include/linux/rwsem.h | 20 +++++++++---
mm/madvise.c | 4 +--
4 files changed, 73 insertions(+), 20 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 13:36 [PATCH RESEND 0/3] add and use vma_assert_stabilised() helper Lorenzo Stoakes
@ 2026-01-16 13:36 ` Lorenzo Stoakes
2026-01-16 15:08 ` Zi Yan
2026-01-16 15:12 ` Peter Zijlstra
2026-01-16 13:36 ` [PATCH RESEND 2/3] mm/vma: add vma_is_*_locked() helpers Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers Lorenzo Stoakes
2 siblings, 2 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 13:36 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Liam R . Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shakeel Butt,
Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
As part of adding some additional lock asserts in mm, we wish to be able to
determine if a read/write semaphore is write-locked, so add
rwsem_is_write_locked() to do the write-lock equivalent of
rwsem_is_locked().
While we're here, update rwsem_assert_[write_]held_nolockdep() to utilise
the rwsem_is_[write_]locked() helpers directly to reduce code duplication,
and also update rwsem_is_locked() to take a const rwsem and return a
boolean.
This patch also updates the CONFIG_PREEMPT_RT helpers to do the same thing
there.
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
include/linux/rwsem.h | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index f1aaf676a874..b25b7944ad99 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -70,19 +70,24 @@ struct rw_semaphore {
#define RWSEM_WRITER_LOCKED (1UL << 0)
#define __RWSEM_COUNT_INIT(name) .count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
+static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
{
return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE;
}
+static inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
+{
+ return atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED;
+}
+
static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
{
- WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
+ WARN_ON(!rwsem_is_locked(sem));
}
static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
{
- WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
+ WARN_ON(!rwsem_is_write_locked(sem));
}
/* Common initializer macros and functions */
@@ -174,11 +179,16 @@ do { \
__init_rwsem((sem), #sem, &__key); \
} while (0)
-static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
+static __always_inline bool rwsem_is_locked(const struct rw_semaphore *sem)
{
return rw_base_is_locked(&sem->rwbase);
}
+static __always_inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
+{
+ return rw_base_is_write_locked(&sem->rwbase);
+}
+
static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
{
WARN_ON(!rwsem_is_locked(sem));
@@ -186,7 +196,7 @@ static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphor
static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
{
- WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
+ WARN_ON(!rwsem_is_write_locked(sem));
}
static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
--
2.52.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH RESEND 2/3] mm/vma: add vma_is_*_locked() helpers
2026-01-16 13:36 [PATCH RESEND 0/3] add and use vma_assert_stabilised() helper Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
@ 2026-01-16 13:36 ` Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers Lorenzo Stoakes
2 siblings, 0 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 13:36 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Liam R . Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shakeel Butt,
Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
Add vma_is_read_locked(), vma_is_write_locked() and vma_is_locked() helpers
and utilise them in vma_assert_locked() and vma_assert_write_locked().
We need to examine mmap lock state in order to correctly determine VMA write
lock state, so also add mmap_is_locked() and mmap_is_write_locked() to provide
an explicit means of checking mmap_lock state.
These functions will intentionally not be defined if CONFIG_PER_VMA_LOCK is
not set, as they would not make any sense in a context where VMA locks do
not exist.
We are careful in invoking __is_vma_write_locked() - this function asserts
that the mmap write lock is held, so we check that this lock is held before
invoking it, allowing vma_is_write_locked() to be used in situations where we
don't want an assert failure.
While we're here, we also update __is_vma_write_locked() to accept a const
vm_area_struct pointer so we can consistently have const VMA parameters for
these helpers.
As part of this change we also move mmap_lock_is_contended() up in
include/linux/mmap_lock.h so we group predicates based on mmap lock state
together.
This lays the groundwork for a subsequent change that allows for asserting
that either the mmap lock or VMA lock is held.
Suggested-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
include/linux/mmap_lock.h | 50 +++++++++++++++++++++++++++++----------
1 file changed, 38 insertions(+), 12 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index b50416fbba20..9f6932ffaaa0 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -66,6 +66,22 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
#endif /* CONFIG_TRACING */
+
+static inline bool mmap_lock_is_contended(struct mm_struct *mm)
+{
+ return rwsem_is_contended(&mm->mmap_lock);
+}
+
+static inline bool mmap_is_locked(const struct mm_struct *mm)
+{
+ return rwsem_is_locked(&mm->mmap_lock);
+}
+
+static inline bool mmap_is_write_locked(const struct mm_struct *mm)
+{
+ return rwsem_is_write_locked(&mm->mmap_lock);
+}
+
static inline void mmap_assert_locked(const struct mm_struct *mm)
{
rwsem_assert_held(&mm->mmap_lock);
@@ -183,7 +199,8 @@ static inline void vma_end_read(struct vm_area_struct *vma)
}
/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static inline bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
+static inline bool __is_vma_write_locked(const struct vm_area_struct *vma,
+ unsigned int *mm_lock_seq)
{
mmap_assert_write_locked(vma->vm_mm);
@@ -236,19 +253,33 @@ int vma_start_write_killable(struct vm_area_struct *vma)
return __vma_start_write(vma, mm_lock_seq, TASK_KILLABLE);
}
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline bool vma_is_read_locked(const struct vm_area_struct *vma)
+{
+ return refcount_read(&vma->vm_refcnt) > 1;
+}
+
+static inline bool vma_is_write_locked(struct vm_area_struct *vma)
{
unsigned int mm_lock_seq;
- VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
+ /* __is_vma_write_locked() requires the mmap write lock. */
+ return mmap_is_write_locked(vma->vm_mm) &&
+ __is_vma_write_locked(vma, &mm_lock_seq);
}
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline bool vma_is_locked(struct vm_area_struct *vma)
{
- unsigned int mm_lock_seq;
+ return vma_is_read_locked(vma) || vma_is_write_locked(vma);
+}
+
+static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+{
+ VM_BUG_ON_VMA(!vma_is_write_locked(vma), vma);
+}
- VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
- !__is_vma_write_locked(vma, &mm_lock_seq), vma);
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+ VM_BUG_ON_VMA(!vma_is_locked(vma), vma);
}
static inline bool vma_is_attached(struct vm_area_struct *vma)
@@ -432,9 +463,4 @@ static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
up_read_non_owner(&mm->mmap_lock);
}
-static inline int mmap_lock_is_contended(struct mm_struct *mm)
-{
- return rwsem_is_contended(&mm->mmap_lock);
-}
-
#endif /* _LINUX_MMAP_LOCK_H */
--
2.52.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers
2026-01-16 13:36 [PATCH RESEND 0/3] add and use vma_assert_stabilised() helper Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 2/3] mm/vma: add vma_is_*_locked() helpers Lorenzo Stoakes
@ 2026-01-16 13:36 ` Lorenzo Stoakes
2026-01-16 20:45 ` Zi Yan
2026-01-16 20:47 ` Zi Yan
2 siblings, 2 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 13:36 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Liam R . Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shakeel Butt,
Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
Sometimes we wish to assert that a VMA is stable, that is - the VMA cannot
be changed underneath us. This will be the case if EITHER the VMA lock or
the mmap lock is held.
In order to be able to do so this patch adds a vma_is_stabilised()
predicate.
We specify this differently based on whether CONFIG_PER_VMA_LOCK is
enabled - if it is then naturally we check whether either a VMA lock or the
mmap lock is held, otherwise we need only check the mmap lock.
Note that we only trigger the assert if CONFIG_DEBUG_VM is set, as failing to
hold either lock would indicate a programming error, so a release kernel
runtime assert doesn't make much sense.
There are a couple places in the kernel where we already do this check -
the anon_vma_name() helper in mm/madvise.c and vma_flag_set_atomic() in
include/linux/mm.h, which we update to use vma_assert_stabilised().
These were in fact implemented incorrectly - if neither the mmap lock nor
the VMA lock were held, these asserts did not fire.
However since these asserts are debug-only, and a large number of test
configurations will have CONFIG_PER_VMA_LOCK set, it has likely had no
real-world impact.
This change corrects this mistake at any rate.
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
include/linux/mm.h | 4 +---
include/linux/mmap_lock.h | 23 ++++++++++++++++++++++-
mm/madvise.c | 4 +---
3 files changed, 24 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 44a2a9c0a92f..8707059f4d37 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1008,9 +1008,7 @@ static inline void vma_flag_set_atomic(struct vm_area_struct *vma,
{
unsigned long *bitmap = ACCESS_PRIVATE(&vma->flags, __vma_flags);
- /* mmap read lock/VMA read lock must be held. */
- if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
- vma_assert_locked(vma);
+ vma_assert_stabilised(vma);
if (__vma_flag_atomic_valid(vma, bit))
set_bit((__force int)bit, bitmap);
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 9f6932ffaaa0..711885cb5372 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -66,7 +66,6 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
#endif /* CONFIG_TRACING */
-
static inline bool mmap_lock_is_contended(struct mm_struct *mm)
{
return rwsem_is_contended(&mm->mmap_lock);
@@ -272,6 +271,11 @@ static inline bool vma_is_locked(struct vm_area_struct *vma)
return vma_is_read_locked(vma) || vma_is_write_locked(vma);
}
+static inline bool vma_is_stabilised(struct vm_area_struct *vma)
+{
+ return vma_is_locked(vma) || mmap_is_locked(vma->vm_mm);
+}
+
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
VM_BUG_ON_VMA(!vma_is_write_locked(vma), vma);
@@ -358,6 +362,11 @@ static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
return NULL;
}
+static inline bool vma_is_stabilised(struct vm_area_struct *vma)
+{
+ return mmap_is_locked(vma->vm_mm);
+}
+
static inline void vma_assert_locked(struct vm_area_struct *vma)
{
mmap_assert_locked(vma->vm_mm);
@@ -463,4 +472,16 @@ static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
up_read_non_owner(&mm->mmap_lock);
}
+/**
+ * vma_assert_stabilised() - assert that this VMA cannot be changed from
+ * underneath us either by having a VMA or mmap lock held.
+ * @vma: The VMA whose stability we wish to assess.
+ *
+ * Note that this will only trigger an assert if CONFIG_DEBUG_VM is set.
+ */
+static inline void vma_assert_stabilised(struct vm_area_struct *vma)
+{
+ VM_BUG_ON_VMA(!vma_is_stabilised(vma), vma);
+}
+
#endif /* _LINUX_MMAP_LOCK_H */
diff --git a/mm/madvise.c b/mm/madvise.c
index 4bf4c8c38fd3..1f3040688f04 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -109,9 +109,7 @@ void anon_vma_name_free(struct kref *kref)
struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
{
- if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
- vma_assert_locked(vma);
-
+ vma_assert_stabilised(vma);
return vma->anon_name;
}
--
2.52.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
@ 2026-01-16 15:08 ` Zi Yan
2026-01-16 16:29 ` Lorenzo Stoakes
2026-01-16 15:12 ` Peter Zijlstra
1 sibling, 1 reply; 15+ messages in thread
From: Zi Yan @ 2026-01-16 15:08 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
On 16 Jan 2026, at 8:36, Lorenzo Stoakes wrote:
> As part of adding some additional lock asserts in mm, we wish to be able to
> determine if a read/write semaphore is write-locked, so add
> rwsem_is_write_locked() to do the write-lock equivalent of
> rwsem_is_locked().
>
> While we're here, update rwsem_assert_[write_]held_nolockdep() to utilise
> the rwsem_is_[write_]locked() helpers directly to reduce code duplication,
> and also update rwsem_is_locked() to take a const rwsem and return a
> boolean.
>
> This patch also updates the CONFIG_PREEMPT_RT helpers to do the same thing
> there.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/rwsem.h | 20 +++++++++++++++-----
> 1 file changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
> index f1aaf676a874..b25b7944ad99 100644
> --- a/include/linux/rwsem.h
> +++ b/include/linux/rwsem.h
> @@ -70,19 +70,24 @@ struct rw_semaphore {
> #define RWSEM_WRITER_LOCKED (1UL << 0)
> #define __RWSEM_COUNT_INIT(name) .count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
>
> -static inline int rwsem_is_locked(struct rw_semaphore *sem)
> +static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> {
> return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE;
> }
>
> +static inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
> +{
> + return atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED;
> +}
> +
> static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> {
> - WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
> + WARN_ON(!rwsem_is_locked(sem));
> }
>
> static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> {
> - WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
> + WARN_ON(!rwsem_is_write_locked(sem));
> }
>
> /* Common initializer macros and functions */
> @@ -174,11 +179,16 @@ do { \
> __init_rwsem((sem), #sem, &__key); \
> } while (0)
>
> -static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
> +static __always_inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> {
> return rw_base_is_locked(&sem->rwbase);
> }
>
> +static __always_inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
> +{
> + return rw_base_is_write_locked(&sem->rwbase);
> +}
> +
> static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> {
> WARN_ON(!rwsem_is_locked(sem));
> @@ -186,7 +196,7 @@ static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphor
>
> static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> {
> - WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
> + WARN_ON(!rwsem_is_write_locked(sem));
I thought it was wrong since rwsem_is_write_locked() at the top reads ->count
instead of ->rwbase, until I saw that there is another rwsem_is_write_locked()
definition above.
> }
>
> static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
> --
Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
2026-01-16 15:08 ` Zi Yan
@ 2026-01-16 15:12 ` Peter Zijlstra
2026-01-16 15:50 ` Lorenzo Stoakes
1 sibling, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2026-01-16 15:12 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt
On Fri, Jan 16, 2026 at 01:36:45PM +0000, Lorenzo Stoakes wrote:
> As part of adding some additional lock asserts in mm, we wish to be able to
> determine if a read/write semaphore is write-locked, so add
> rwsem_is_write_locked() to do the write-lock equivalent of
> rwsem_is_locked().
>
> While we're here, update rwsem_assert_[write_]held_nolockdep() to utilise
> the rwsem_is_[write_]locked() helpers directly to reduce code duplication,
> and also update rwsem_is_locked() to take a const rwsem and return a
> boolean.
There is a long history of abuse of _is_locked() primitives. I don't
suppose you read the email thread that led to
rwsem_assert_held_*_nolockdep() by any chance?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 15:12 ` Peter Zijlstra
@ 2026-01-16 15:50 ` Lorenzo Stoakes
2026-01-16 15:57 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 15:50 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt
On Fri, Jan 16, 2026 at 04:12:15PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 16, 2026 at 01:36:45PM +0000, Lorenzo Stoakes wrote:
> > As part of adding some additional lock asserts in mm, we wish to be able to
> > determine if a read/write semaphore is write-locked, so add
> > rwsem_is_write_locked() to do the write-lock equivalent of
> > rwsem_is_locked().
> >
> > While we're here, update rwsem_assert_[write_]held_nolockdep() to utilise
> > the rwsem_is_[write_]locked() helpers directly to reduce code duplication,
> > and also update rwsem_is_locked() to take a const rwsem and return a
> > boolean.
>
> There is a long history of abuse of _is_locked() primitives. I don't
> suppose you read the email thread that led to
> rwsem_assert_held_*_nolockdep() by any chance?
No, but we need to be able to assert that one of two locks is held, and we
don't want checking one lock and finding it unheld to trigger an assert when
the other one is actually held.
At any rate we already have the rwsem_is_locked() function from 2011 or so, so
I don't think it's overly egregious to add a write equivalent.
We have a specific need for that, as the VMA locking logic asserts the mmap
write lock is held before you can check the VMA write lock is held.
I suppose we could rely on the mmap write lock assert in
__is_vma_write_locked(), because if it's not held we want to assert anyway,
but then we lose all the neat and structured design here...
Would kind of suck for vma_is_read_locked() to not assert but
vma_is_write_locked() to assert.
Really I don't think __is_vma_write_locked() should be asserting like that
anyway, it's a footgun to make a predicate check like that assert
IMO... but that might speak more broadly to the overly complicated
implementation of VMA locks we have now.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 15:50 ` Lorenzo Stoakes
@ 2026-01-16 15:57 ` Sebastian Andrzej Siewior
2026-01-16 16:21 ` Lorenzo Stoakes
0 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-16 15:57 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Peter Zijlstra, Andrew Morton, David Hildenbrand,
Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Shakeel Butt, Jann Horn,
linux-mm, linux-kernel, linux-rt-devel, Ingo Molnar, Will Deacon,
Boqun Feng, Waiman Long, Clark Williams, Steven Rostedt
On 2026-01-16 15:50:24 [+0000], Lorenzo Stoakes wrote:
> No, but we need to be able to assert that one of two locks are held and we
> don't want the failure of one being held to cause an assert when the other
> isn't.
But why don't you use the lockdep based check? That assert only ensures
that it is locked at the time you did the check. This does not mean you
are owner - it could be owned by another task which is unrelated to your
cause.
Sebastian
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 15:57 ` Sebastian Andrzej Siewior
@ 2026-01-16 16:21 ` Lorenzo Stoakes
2026-01-16 16:41 ` Sebastian Andrzej Siewior
2026-01-17 2:30 ` Boqun Feng
0 siblings, 2 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 16:21 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: Peter Zijlstra, Andrew Morton, David Hildenbrand,
Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Shakeel Butt, Jann Horn,
linux-mm, linux-kernel, linux-rt-devel, Ingo Molnar, Will Deacon,
Boqun Feng, Waiman Long, Clark Williams, Steven Rostedt
On Fri, Jan 16, 2026 at 04:57:43PM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-01-16 15:50:24 [+0000], Lorenzo Stoakes wrote:
> > No, but we need to be able to assert that one of two locks are held and we
> > don't want the failure of one being held to cause an assert when the other
> > isn't.
>
> But why don't you use the lockdep based check? That assert only ensures
Not sure what you mean, the checks I'm adding don't exist yet.
> that it is locked at the time you did the check. This does not mean you
> are owner - it could be owned by another task which is unrelated to your
> cause.
Yup I'm aware that lockdep tests more than a simple assert.
I wasn't aware this was possible with the lockdep primitives, mea culpa.
Also this came out of a previous discussion where I added a similar
predicate vma_is_detached() and Suren suggested similar for the locks.
Anyway, I went and looked and yes I see there's lockdep_is_held() for
instance.
However, I'd STILL need to do what I'm doing here to account for
CONFIG_DEBUG_VM && !CONFIG_LOCKDEP configurations right?
So I'll respin later with if (IS_ENABLED(CONFIG_LOCKDEP)) ...
And sprinkle with some lockdep_is_held() and see how that works.
I mean rwsem_is_locked() is already defined, so naming is going to be a
thing now, but I guess:
static inline bool rwsem_is_locked_nolockdep(const struct rw_semaphore *sem)
{
	return rw_base_is_locked(&sem->rwbase);
}

static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
{
	if (IS_ENABLED(CONFIG_LOCKDEP))
		return lockdep_is_held(sem);

	return rwsem_is_locked_nolockdep(sem);
}
And obviously equivalent for the write case is what's necessary now right?
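i.e. perhaps something like this for the write side (rough sketch only,
mirroring the above - lockdep_is_held_type(sem, 0) is, I believe, the
write-held variant of lockdep_is_held(), and rw_base_is_write_locked() is the
existing helper):

static inline bool rwsem_is_write_locked_nolockdep(const struct rw_semaphore *sem)
{
	return rw_base_is_write_locked(&sem->rwbase);
}

static inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
{
	if (IS_ENABLED(CONFIG_LOCKDEP))
		return lockdep_is_held_type(sem, 0);

	return rwsem_is_write_locked_nolockdep(sem);
}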
Or am I misunderstanding you?
>
> Sebastian
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 15:08 ` Zi Yan
@ 2026-01-16 16:29 ` Lorenzo Stoakes
0 siblings, 0 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 16:29 UTC (permalink / raw)
To: Zi Yan
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
On Fri, Jan 16, 2026 at 10:08:00AM -0500, Zi Yan wrote:
> On 16 Jan 2026, at 8:36, Lorenzo Stoakes wrote:
>
> > As part of adding some additional lock asserts in mm, we wish to be able to
> > determine if a read/write semaphore is write-locked, so add
> > rwsem_is_write_locked() to do the write-lock equivalent of
> > rwsem_is_locked().
> >
> > While we're here, update rwsem_assert_[write_]held_nolockdep() to utilise
> > the rwsem_is_[write_]locked() helpers directly to reduce code duplication,
> > and also update rwsem_is_locked() to take a const rwsem and return a
> > boolean.
> >
> > This patch also updates the CONFIG_PREEMPT_RT helpers to do the same thing
> > there.
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > ---
> > include/linux/rwsem.h | 20 +++++++++++++++-----
> > 1 file changed, 15 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
> > index f1aaf676a874..b25b7944ad99 100644
> > --- a/include/linux/rwsem.h
> > +++ b/include/linux/rwsem.h
> > @@ -70,19 +70,24 @@ struct rw_semaphore {
> > #define RWSEM_WRITER_LOCKED (1UL << 0)
> > #define __RWSEM_COUNT_INIT(name) .count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
> >
> > -static inline int rwsem_is_locked(struct rw_semaphore *sem)
> > +static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> > {
> > return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE;
> > }
> >
> > +static inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
> > +{
> > + return atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED;
> > +}
> > +
> > static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> > {
> > - WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
> > + WARN_ON(!rwsem_is_locked(sem));
> > }
> >
> > static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> > {
> > - WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
> > + WARN_ON(!rwsem_is_write_locked(sem));
> > }
> >
> > /* Common initializer macros and functions */
> > @@ -174,11 +179,16 @@ do { \
> > __init_rwsem((sem), #sem, &__key); \
> > } while (0)
> >
> > -static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
> > +static __always_inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> > {
> > return rw_base_is_locked(&sem->rwbase);
> > }
> >
> > +static __always_inline bool rwsem_is_write_locked(const struct rw_semaphore *sem)
> > +{
> > + return rw_base_is_write_locked(&sem->rwbase);
> > +}
> > +
> > static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> > {
> > WARN_ON(!rwsem_is_locked(sem));
> > @@ -186,7 +196,7 @@ static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphor
> >
> > static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> > {
> > - WARN_ON(!rw_base_is_write_locked(&sem->rwbase));
> > + WARN_ON(!rwsem_is_write_locked(sem));
>
> I thought it was wrong since rwsem_is_write_locked() at the top reads ->count
> instead of ->rwbase until I see there is another rwsem_is_write_locked() above.
:)
>
>
> > }
> >
> > static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
> > --
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
Thanks!
>
>
> --
> Best Regards,
> Yan, Zi
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 16:21 ` Lorenzo Stoakes
@ 2026-01-16 16:41 ` Sebastian Andrzej Siewior
2026-01-16 16:56 ` Lorenzo Stoakes
2026-01-17 2:30 ` Boqun Feng
1 sibling, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-16 16:41 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Peter Zijlstra, Andrew Morton, David Hildenbrand,
Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Shakeel Butt, Jann Horn,
linux-mm, linux-kernel, linux-rt-devel, Ingo Molnar, Will Deacon,
Boqun Feng, Waiman Long, Clark Williams, Steven Rostedt
On 2026-01-16 16:21:29 [+0000], Lorenzo Stoakes wrote:
> On Fri, Jan 16, 2026 at 04:57:43PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2026-01-16 15:50:24 [+0000], Lorenzo Stoakes wrote:
> > > No, but we need to be able to assert that one of two locks are held and we
> > > don't want the failure of one being held to cause an assert when the other
> > > isn't.
> >
> > But why don't you use the lockdep based check? That assert only ensures
>
> Not sure what you mean, the checks I'm adding don't exist yet.
The checks you add are not lockdep.
> > that it is locked at the time you did the check. This does not mean you
> > are owner - it could be owned by another task which is unrelated to your
> > cause.
>
> Yup I'm aware that lockdep tests more than a simple assert.
>
> I wasn't aware this was possible with the lockdep primitives, mea culpa.
>
> Also this came out of a previous discussion where I added a similar
> predicate vma_is_detached() and Suren suggested similar for the locks.
>
> Anyway, I went and looked and yes I see there's lockdep_is_held() for
> instance.
>
> However, I'd STILL need to do what I'm doing here to account for
> CONFIG_DEBUG_VM && !CONFIG_LOCKDEP configurations right?
Without CONFIG_LOCKDEP the locking view is not really accurate, so maybe
it is not worth doing it. The complaints are that lockdep is too slow, but
the other checks just amount to "is it locked by someone? - fine".
> So I'll respin later with if (IS_ENABLED(CONFIG_LOCKDEP)) ...
> And sprinkle with some lockdep_is_held() and see how that works.
>
> I mean rwsem_is_locked() is already specified, so naming is going to be a
> thing now but I guess:
>
> static inline bool rwsem_is_locked_nolockdep(const struct rw_semaphore *sem)
> {
> return rw_base_is_locked(&sem->rwbase);
> }
>
> static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> {
> if (IS_ENABLED(CONFIG_LOCKDEP))
> return lockdep_is_held(sem);
>
> return rwsem_is_locked_nolockdep(sem);
> }
>
> And obviously equivalent for the write case is what's necessary now right?
I would drop the rwsem_is_locked.* and just go with
lockdep_assert_held()/lockdep_assert_held_write(). Unless you want that
verification in production because it is important and false positives
(as in held by another thread and not the caller) are zero.
> Or am I misunderstanding you?
Sebastian
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 16:41 ` Sebastian Andrzej Siewior
@ 2026-01-16 16:56 ` Lorenzo Stoakes
0 siblings, 0 replies; 15+ messages in thread
From: Lorenzo Stoakes @ 2026-01-16 16:56 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: Peter Zijlstra, Andrew Morton, David Hildenbrand,
Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Shakeel Butt, Jann Horn,
linux-mm, linux-kernel, linux-rt-devel, Ingo Molnar, Will Deacon,
Boqun Feng, Waiman Long, Clark Williams, Steven Rostedt
On Fri, Jan 16, 2026 at 05:41:39PM +0100, Sebastian Andrzej Siewior wrote:
> On 2026-01-16 16:21:29 [+0000], Lorenzo Stoakes wrote:
> > On Fri, Jan 16, 2026 at 04:57:43PM +0100, Sebastian Andrzej Siewior wrote:
> > > On 2026-01-16 15:50:24 [+0000], Lorenzo Stoakes wrote:
> > > > No, but we need to be able to assert that one of two locks are held and we
> > > > don't want the failure of one being held to cause an assert when the other
> > > > isn't.
> > >
> > > But why don't you use the lockdep based check? That assert only ensures
> >
> > Not sure what you mean, the checks I'm adding don't exist yet.
>
> The checks you add are not lockdep.
I understand that thanks (?)
I'm not sure responding point by point is productive here, so let me
summarise:
We often run code locally without lockdep, testing isn't always ideal
across mm, and these asserts are gated by CONFIG_DEBUG_VM anyway, so yes I
want a non-lockdep version also.
Note that existing mm lock asserts already work this way, so that's
consistent (though mmap asserts are also at runtime...)
I can't just use existing asserts because I need to test that EITHER one
lock OR the other is held. If there's a way to do that with lockdep in a
way other than what I have suggested, I am more than happy to hear it?
If not I'll respin this with both a lockdep + not-lockdep version.
What you're suggesting, just using the existing lockdep asserts, won't work
unless I'm missing something, because of this _either_ lock requirement.
But if there is an existing solution that you can point me at I'd be more
than happy to use it.
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers
2026-01-16 13:36 ` [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers Lorenzo Stoakes
@ 2026-01-16 20:45 ` Zi Yan
2026-01-16 20:47 ` Zi Yan
1 sibling, 0 replies; 15+ messages in thread
From: Zi Yan @ 2026-01-16 20:45 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
On 16 Jan 2026, at 8:36, Lorenzo Stoakes wrote:
> Sometimes we wish to assert that a VMA is stable, that is - the VMA cannot
> be changed underneath us. This will be the case if EITHER the VMA lock or
> the mmap lock is held.
>
> In order to be able to do so this patch adds a vma_is_stabilised()
> predicate.
>
> We specify this differently based on whether CONFIG_PER_VMA_LOCK is
> specified - if it is then naturally we check both whether a VMA lock is
> held or an mmap lock held, otherwise we need only check the mmap lock.
>
> Note that we only trigger the assert is CONFIG_DEBUG_VM is set, as having
> this lock unset would indicate a programmatic error, so a release kernel
> runtime assert doesn't make much sense.
>
> There are a couple places in the kernel where we already do this check -
> the anon_vma_name() helper in mm/madvise.c and vma_flag_set_atomic() in
> include/linux/mm.h, which we update to use vma_assert_stabilised().
>
> These were in fact implemented incorrectly - if neither the mmap lock nor
> the VMA lock were held, these asserts did not fire.
>
> However since these asserts are debug-only, and a large number of test
> configurations will have CONFIG_PER_VMA_LOCK set, it has likely had no
> real-world impact.
>
> This change corrects this mistake at any rate.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/mm.h | 4 +---
> include/linux/mmap_lock.h | 23 ++++++++++++++++++++++-
> mm/madvise.c | 4 +---
> 3 files changed, 24 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 44a2a9c0a92f..8707059f4d37 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1008,9 +1008,7 @@ static inline void vma_flag_set_atomic(struct vm_area_struct *vma,
> {
> unsigned long *bitmap = ACCESS_PRIVATE(&vma->flags, __vma_flags);
>
> - /* mmap read lock/VMA read lock must be held. */
> - if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
Ideally, this should have been converted to use mmap_is_locked(vma->vm_mm)
in Patch 2. But this is a bug fix here, so that churn is not necessary.
> - vma_assert_locked(vma);
> + vma_assert_stabilised(vma);
>
> if (__vma_flag_atomic_valid(vma, bit))
> set_bit((__force int)bit, bitmap);
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 9f6932ffaaa0..711885cb5372 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -66,7 +66,6 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
>
> #endif /* CONFIG_TRACING */
>
> -
> static inline bool mmap_lock_is_contended(struct mm_struct *mm)
> {
> return rwsem_is_contended(&mm->mmap_lock);
> @@ -272,6 +271,11 @@ static inline bool vma_is_locked(struct vm_area_struct *vma)
> return vma_is_read_locked(vma) || vma_is_write_locked(vma);
> }
>
> +static inline bool vma_is_stabilised(struct vm_area_struct *vma)
> +{
> + return vma_is_locked(vma) || mmap_is_locked(vma->vm_mm);
> +}
> +
> static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> {
> VM_BUG_ON_VMA(!vma_is_write_locked(vma), vma);
> @@ -358,6 +362,11 @@ static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> return NULL;
> }
>
> +static inline bool vma_is_stabilised(struct vm_area_struct *vma)
> +{
> + return mmap_is_locked(vma->vm_mm);
> +}
> +
> static inline void vma_assert_locked(struct vm_area_struct *vma)
> {
> mmap_assert_locked(vma->vm_mm);
> @@ -463,4 +472,16 @@ static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
> up_read_non_owner(&mm->mmap_lock);
> }
>
> +/**
> + * vma_assert_stabilised() - assert that this VMA cannot be changed from
> + * underneath us either by having a VMA or mmap lock held.
> + * @vma: The VMA whose stability we wish to assess.
> + *
> + * Note that this will only trigger an assert if CONFIG_DEBUG_VM is set.
> + */
> +static inline void vma_assert_stabilised(struct vm_area_struct *vma)
> +{
> + VM_BUG_ON_VMA(!vma_is_stabilised(vma), vma);
> +}
> +
> #endif /* _LINUX_MMAP_LOCK_H */
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 4bf4c8c38fd3..1f3040688f04 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -109,9 +109,7 @@ void anon_vma_name_free(struct kref *kref)
>
> struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> {
> - if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
> - vma_assert_locked(vma);
> -
> + vma_assert_stabilised(vma);
> return vma->anon_name;
> }
>
LGTM.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers
2026-01-16 13:36 ` [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers Lorenzo Stoakes
2026-01-16 20:45 ` Zi Yan
@ 2026-01-16 20:47 ` Zi Yan
1 sibling, 0 replies; 15+ messages in thread
From: Zi Yan @ 2026-01-16 20:47 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, David Hildenbrand, Liam R . Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Shakeel Butt, Jann Horn, linux-mm, linux-kernel, linux-rt-devel,
Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
Waiman Long, Sebastian Andrzej Siewior, Clark Williams,
Steven Rostedt
Reply again using the right email. Sorry for the noise.
On 16 Jan 2026, at 8:36, Lorenzo Stoakes wrote:
> Sometimes we wish to assert that a VMA is stable, that is - the VMA cannot
> be changed underneath us. This will be the case if EITHER the VMA lock or
> the mmap lock is held.
>
> In order to be able to do so this patch adds a vma_is_stabilised()
> predicate.
>
> We specify this differently based on whether CONFIG_PER_VMA_LOCK is
> specified - if it is then naturally we check both whether a VMA lock is
> held or an mmap lock held, otherwise we need only check the mmap lock.
>
> Note that we only trigger the assert is CONFIG_DEBUG_VM is set, as having
> this lock unset would indicate a programmatic error, so a release kernel
> runtime assert doesn't make much sense.
>
> There are a couple places in the kernel where we already do this check -
> the anon_vma_name() helper in mm/madvise.c and vma_flag_set_atomic() in
> include/linux/mm.h, which we update to use vma_assert_stabilised().
>
> These were in fact implemented incorrectly - if neither the mmap lock nor
> the VMA lock were held, these asserts did not fire.
>
> However since these asserts are debug-only, and a large number of test
> configurations will have CONFIG_PER_VMA_LOCK set, it has likely had no
> real-world impact.
>
> This change corrects this mistake at any rate.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/mm.h | 4 +---
> include/linux/mmap_lock.h | 23 ++++++++++++++++++++++-
> mm/madvise.c | 4 +---
> 3 files changed, 24 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 44a2a9c0a92f..8707059f4d37 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1008,9 +1008,7 @@ static inline void vma_flag_set_atomic(struct vm_area_struct *vma,
> {
> unsigned long *bitmap = ACCESS_PRIVATE(&vma->flags, __vma_flags);
>
> - /* mmap read lock/VMA read lock must be held. */
> - if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
> - vma_assert_locked(vma);
Ideally, this should have been converted to use mmap_is_locked(vma->vm_mm)
in Patch 2. But this is a bug fix here, so that churn is not necessary.
> + vma_assert_stabilised(vma);
>
> if (__vma_flag_atomic_valid(vma, bit))
> set_bit((__force int)bit, bitmap);
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 9f6932ffaaa0..711885cb5372 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -66,7 +66,6 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
>
> #endif /* CONFIG_TRACING */
>
> -
> static inline bool mmap_lock_is_contended(struct mm_struct *mm)
> {
> return rwsem_is_contended(&mm->mmap_lock);
> @@ -272,6 +271,11 @@ static inline bool vma_is_locked(struct vm_area_struct *vma)
> return vma_is_read_locked(vma) || vma_is_write_locked(vma);
> }
>
> +static inline bool vma_is_stabilised(struct vm_area_struct *vma)
> +{
> + return vma_is_locked(vma) || mmap_is_locked(vma->vm_mm);
> +}
> +
> static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> {
> VM_BUG_ON_VMA(!vma_is_write_locked(vma), vma);
> @@ -358,6 +362,11 @@ static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> return NULL;
> }
>
> +static inline bool vma_is_stabilised(struct vm_area_struct *vma)
> +{
> + return mmap_is_locked(vma->vm_mm);
> +}
> +
> static inline void vma_assert_locked(struct vm_area_struct *vma)
> {
> mmap_assert_locked(vma->vm_mm);
> @@ -463,4 +472,16 @@ static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
> up_read_non_owner(&mm->mmap_lock);
> }
>
> +/**
> + * vma_assert_stabilised() - assert that this VMA cannot be changed from
> + * underneath us either by having a VMA or mmap lock held.
> + * @vma: The VMA whose stability we wish to assess.
> + *
> + * Note that this will only trigger an assert if CONFIG_DEBUG_VM is set.
> + */
> +static inline void vma_assert_stabilised(struct vm_area_struct *vma)
> +{
> + VM_BUG_ON_VMA(!vma_is_stabilised(vma), vma);
> +}
> +
> #endif /* _LINUX_MMAP_LOCK_H */
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 4bf4c8c38fd3..1f3040688f04 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -109,9 +109,7 @@ void anon_vma_name_free(struct kref *kref)
>
> struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> {
> - if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
> - vma_assert_locked(vma);
> -
> + vma_assert_stabilised(vma);
> return vma->anon_name;
> }
>
LGTM.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts
2026-01-16 16:21 ` Lorenzo Stoakes
2026-01-16 16:41 ` Sebastian Andrzej Siewior
@ 2026-01-17 2:30 ` Boqun Feng
1 sibling, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-01-17 2:30 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Sebastian Andrzej Siewior, Peter Zijlstra, Andrew Morton,
David Hildenbrand, Liam R . Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shakeel Butt,
Jann Horn, linux-mm, linux-kernel, linux-rt-devel, Ingo Molnar,
Will Deacon, Waiman Long, Clark Williams, Steven Rostedt
On Fri, Jan 16, 2026 at 04:21:29PM +0000, Lorenzo Stoakes wrote:
> On Fri, Jan 16, 2026 at 04:57:43PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2026-01-16 15:50:24 [+0000], Lorenzo Stoakes wrote:
> > > No, but we need to be able to assert that one of two locks are held and we
> > > don't want the failure of one being held to cause an assert when the other
> > > isn't.
> >
> > But why don't you use the lockdep based check? That assert only ensures
>
> Not sure what you mean, the checks I'm adding don't exist yet.
>
> > that it is locked at the time you did the check. This does not mean you
> > are owner - it could be owned by another task which is unrelated to your
> > cause.
>
> Yup I'm aware that lockdep tests more than a simple assert.
>
> I wasn't aware this was possible with the lockdep primitives, mea culpa.
>
> Also this came out of a previous discussion where I added a similar
> predicate vma_is_detached() and Suren suggested similar for the locks.
>
> Anyway, I went and looked and yes I see there's lockdep_is_held() for
> instance.
>
> However, I'd STILL need to do what I'm doing here to account for
> CONFIG_DEBUG_VM && !CONFIG_LOCKDEP configurations right?
>
There is an idea about a lightweight lockdep where we only retain
tracking of the held locks in a per-task stack and skip the whole
dependency checking. That would provide lock holding
information without the full cost of LOCKDEP, but it requires some
work and I'm not sure whether it fulfills what you need for DEBUG_VM
tests (each task_struct would gain some extra space and lock/unlock
would do extra book-keeping).
> So I'll respin later with if (IS_ENABLED(CONFIG_LOCKDEP)) ...
> And sprinkle with some lockdep_is_held() and see how that works.
>
Yeah, for LOCKDEP=y cases, please do use lockdep_is_held() or
lockdep_is_held_type(), those would provide the accurate information.
> I mean rwsem_is_locked() is already specified, so naming is going to be a
> thing now but I guess:
>
> static inline bool rwsem_is_locked_nolockdep(const struct rw_semaphore *sem)
> {
> return rw_base_is_locked(&sem->rwbase);
> }
>
> static inline bool rwsem_is_locked(const struct rw_semaphore *sem)
> {
> if (IS_ENABLED(CONFIG_LOCKDEP))
> return lockdep_is_held(sem);
>
> return rwsem_is_locked_nolockdep(sem);
> }
>
> And obviously equivalent for the write case is what's necessary now right?
>
Assuming we want CONFIG_LOCKDEP=n cases to work without extra
book-keeping, I think we could use rwsem_owner() for the write cases, and
name the function rwsem_is_write_held(), which tells you whether
the current thread is the owner of the write lock (we are lucky here
because rwsem is one of those locks that remember their owners ;-)). This
would cover the use case of MM without introducing another is_locked()
function. Peter & Sebastian, how do you like (or not hate ;-)) that
idea?
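Roughly something like this, I mean (just a sketch - rwsem_owner() currently
lives in kernel/locking/rwsem.c so it would need to be made visible to the
header first, and rwsem_is_write_locked() is the helper from this series):

static inline bool rwsem_is_write_held(struct rw_semaphore *sem)
{
	/*
	 * Write-locked *and* owned by us: for a write-locked rwsem the
	 * recorded owner is the writer holding it, so comparing against
	 * current tells us whether the caller is that writer.
	 */
	return rwsem_is_write_locked(sem) && rwsem_owner(sem) == current;
}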
Regards,
Boqun
> Or am I misunderstanding you?
>
> >
> > Sebastian
>
> Thanks, Lorenzo
^ permalink raw reply [flat|nested] 15+ messages in thread
Thread overview: 15+ messages
2026-01-16 13:36 [PATCH RESEND 0/3] add and use vma_assert_stabilised() helper Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 1/3] locking: add rwsem_is_write_locked(), update non-lockdep asserts Lorenzo Stoakes
2026-01-16 15:08 ` Zi Yan
2026-01-16 16:29 ` Lorenzo Stoakes
2026-01-16 15:12 ` Peter Zijlstra
2026-01-16 15:50 ` Lorenzo Stoakes
2026-01-16 15:57 ` Sebastian Andrzej Siewior
2026-01-16 16:21 ` Lorenzo Stoakes
2026-01-16 16:41 ` Sebastian Andrzej Siewior
2026-01-16 16:56 ` Lorenzo Stoakes
2026-01-17 2:30 ` Boqun Feng
2026-01-16 13:36 ` [PATCH RESEND 2/3] mm/vma: add vma_is_*_locked() helpers Lorenzo Stoakes
2026-01-16 13:36 ` [PATCH RESEND 3/3] mm: add + use vma_is_stabilised(), vma_assert_stabilised() helpers Lorenzo Stoakes
2026-01-16 20:45 ` Zi Yan
2026-01-16 20:47 ` Zi Yan