* [PATCH v10 00/18] reimplement per-vma lock as a refcount
@ 2025-02-13 22:46 Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 01/18] mm: introduce vma_start_read_locked{_nested} helpers Suren Baghdasaryan
` (18 more replies)
0 siblings, 19 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason to
keep the split. Merging the vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned to minimize
cacheline sharing;
2. changes vm_area_struct initialization to mark new vma as detached until
it is inserted into vma tree;
3. replaces vm_lock and vma->detached flag with a reference counter (a minimal
model of this scheme is sketched after this list);
4. regroups vm_area_struct members to fit them into 3 cachelines;
5. changes vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for their
reuse and to minimize call_rcu() calls.
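As a quick orientation for reviewers, here is a minimal userspace model of
the idea behind items 2 and 3: a single counter encodes both "attached to
the vma tree" and "read-locked". This is a sketch only; the model_* helper
names are invented for it, and the real series uses refcount_t with extra
acquire/release ordering, an upper limit and a writer path that waits for
readers, none of which are modeled here.

/*
 *   refcnt == 0  : vma is detached (not in the vma tree)
 *   refcnt == 1  : vma is attached, no readers
 *   refcnt  > 1  : vma is attached and read-locked by (refcnt - 1) readers
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct vma_model {
	atomic_int refcnt;
};

static void model_attach(struct vma_model *vma)
{
	/* 0 -> 1: only valid on a detached vma */
	atomic_store_explicit(&vma->refcnt, 1, memory_order_release);
}

static bool model_detach(struct vma_model *vma)
{
	int expected = 1;

	/* 1 -> 0 succeeds only when no readers hold the vma */
	return atomic_compare_exchange_strong(&vma->refcnt, &expected, 0);
}

static bool model_read_trylock(struct vma_model *vma)
{
	int old = atomic_load_explicit(&vma->refcnt, memory_order_relaxed);

	/* increment only if the vma is still attached (refcnt != 0) */
	do {
		if (old == 0)
			return false;	/* detached: fall back to mmap_lock */
	} while (!atomic_compare_exchange_weak(&vma->refcnt, &old, old + 1));

	return true;
}

static void model_read_unlock(struct vma_model *vma)
{
	atomic_fetch_sub_explicit(&vma->refcnt, 1, memory_order_release);
}

int main(void)
{
	struct vma_model vma = { .refcnt = 0 };	/* starts detached */

	printf("read-lock while detached: %d\n", model_read_trylock(&vma)); /* 0 */
	model_attach(&vma);
	printf("read-lock while attached: %d\n", model_read_trylock(&vma)); /* 1 */
	printf("detach with reader held:  %d\n", model_detach(&vma));       /* 0 */
	model_read_unlock(&vma);
	printf("detach with no readers:   %d\n", model_detach(&vma));       /* 1 */
	return 0;
}

The key property is that detaching (and, in the real code, write-locking)
only succeeds once the count is back down to "attached, no readers", which
is what lets one counter replace both vm_lock and the detached flag.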
Pagefault microbenchmarks show performance improvement:
Hmean faults/cpu-1 507926.5547 ( 0.00%) 506519.3692 * -0.28%*
Hmean faults/cpu-4 479119.7051 ( 0.00%) 481333.6802 * 0.46%*
Hmean faults/cpu-7 452880.2961 ( 0.00%) 455845.6211 * 0.65%*
Hmean faults/cpu-12 347639.1021 ( 0.00%) 352004.2254 * 1.26%*
Hmean faults/cpu-21 200061.2238 ( 0.00%) 229597.0317 * 14.76%*
Hmean faults/cpu-30 145251.2001 ( 0.00%) 164202.5067 * 13.05%*
Hmean faults/cpu-48 106848.4434 ( 0.00%) 120641.5504 * 12.91%*
Hmean faults/cpu-56 92472.3835 ( 0.00%) 103464.7916 * 11.89%*
Hmean faults/sec-1 507566.1468 ( 0.00%) 506139.0811 * -0.28%*
Hmean faults/sec-4 1880478.2402 ( 0.00%) 1886795.6329 * 0.34%*
Hmean faults/sec-7 3106394.3438 ( 0.00%) 3140550.7485 * 1.10%*
Hmean faults/sec-12 4061358.4795 ( 0.00%) 4112477.0206 * 1.26%*
Hmean faults/sec-21 3988619.1169 ( 0.00%) 4577747.1436 * 14.77%*
Hmean faults/sec-30 3909839.5449 ( 0.00%) 4311052.2787 * 10.26%*
Hmean faults/sec-48 4761108.4691 ( 0.00%) 5283790.5026 * 10.98%*
Hmean faults/sec-56 4885561.4590 ( 0.00%) 5415839.4045 * 10.85%*
Changes since v9 [4]:
PATCH [4/18]
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
per Lorenzo Stoakes
- Rename vma_iter_store() into vma_iter_store_new(), per Lorenzo Stoakes
- Expand changelog, per Lorenzo Stoakes
- Update vma tests to check for vma detached state correctness,
per Lorenzo Stoakes
PATCH [5/18]
- Add Reviewed-by, per Lorenzo Stoakes
PATCH [6/18]
- Add Acked-by, per Lorenzo Stoakes
PATCH [7/18]
- Refactor the code, per Lorenzo Stoakes
- Remove Vlastimil's Acked-by since code is changed
PATCH [8/18]
- Drop inline for mmap_init_lock(), per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
PATCH [9/18]
- Add Reviewed-by, per Lorenzo Stoakes
PATCH [10/18]
- New patch to add refcount_add_not_zero_acquire/refcount_set_release
- Add Acked-by #slab, per Vlastimil Babka
PATCH [11/18]
- Change refcount limit to be used with xxx_acquire functions
PATCH [12/18]
- Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
per Hillf Danton
- Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
per Mateusz Guzik
- Update changelog, per Wei Yang
- Change vma_start_read() to return EAGAIN if vma got isolated and changed
lock_vma_under_rcu() back to detect this condition, per Wei Yang
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
per Lorenzo Stoakes
- Remove Vlastimil's Reviewed-by since code is changed
PATCH [13/18]
- Update vm_area_struct for tests, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
PATCH [14/18]
- Minimize duplicated code, per Lorenzo Stoakes
PATCH [15/18]
- Add Reviewed-by, per Lorenzo Stoakes
PATCH [17/18]
- Use refcount_set_release() in vma_mark_attached(), per Will Deacon
PATCH [18/18]
- Update documentation, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20250111042604.3230628-1-surenb@google.com/
Patchset applies over mm-unstable
Suren Baghdasaryan (18):
mm: introduce vma_start_read_locked{_nested} helpers
mm: move per-vma lock into vm_area_struct
mm: mark vma as detached until it's added into vma tree
mm: introduce vma_iter_store_attached() to use with attached vmas
mm: mark vmas detached upon exit
types: move struct rcuwait into types.h
mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
mm: move mmap_init_lock() out of the header file
mm: uninline the main body of vma_start_write()
refcount: provide ops for cases when object's memory can be reused
refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire
mm: replace vm_lock and detached flag with a reference count
mm: move lesser used vma_area_struct members into the last cacheline
mm/debug: print vm_refcnt state when dumping the vma
mm: remove extra vma_numab_state_init() call
mm: prepare lock_vma_under_rcu() for vma reuse possibility
mm: make vma cache SLAB_TYPESAFE_BY_RCU
docs/mm: document latest changes to vm_lock
Documentation/RCU/whatisRCU.rst | 10 +
Documentation/core-api/refcount-vs-atomic.rst | 37 +++-
Documentation/mm/process_addrs.rst | 44 +++--
include/linux/mm.h | 176 ++++++++++++++----
include/linux/mm_types.h | 75 ++++----
include/linux/mmap_lock.h | 6 -
include/linux/rcuwait.h | 13 +-
include/linux/refcount.h | 125 +++++++++++++
include/linux/slab.h | 15 +-
include/linux/types.h | 12 ++
kernel/fork.c | 129 ++++++-------
mm/debug.c | 6 +
mm/init-mm.c | 1 +
mm/memory.c | 106 ++++++++++-
mm/mmap.c | 3 +-
mm/nommu.c | 4 +-
mm/userfaultfd.c | 38 ++--
mm/vma.c | 27 ++-
mm/vma.h | 15 +-
tools/include/linux/refcount.h | 5 +
tools/testing/vma/linux/atomic.h | 6 +
tools/testing/vma/vma.c | 42 ++++-
tools/testing/vma/vma_internal.h | 127 ++++++-------
23 files changed, 702 insertions(+), 320 deletions(-)
base-commit: 47aa60e930fe7fc2a945e4406e3ad1dfa73bb47c
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 01/18] mm: introduce vma_start_read_locked{_nested} helpers
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 02/18] mm: move per-vma lock into vm_area_struct Suren Baghdasaryan
` (17 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Liam R. Howlett
Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock with
these new helpers.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
include/linux/mm.h | 24 ++++++++++++++++++++++++
mm/userfaultfd.c | 22 +++++-----------------
2 files changed, 29 insertions(+), 17 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 838097542939..16b3cd3de29a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -735,6 +735,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
return true;
}
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+ mmap_assert_locked(vma->vm_mm);
+ down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+ mmap_assert_locked(vma->vm_mm);
+ down_read(&vma->vm_lock->lock);
+}
+
static inline void vma_end_read(struct vm_area_struct *vma)
{
rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index af3dfc3633db..4527c385935b 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
mmap_read_lock(mm);
vma = find_vma_and_prepare_anon(mm, address);
- if (!IS_ERR(vma)) {
- /*
- * We cannot use vma_start_read() as it may fail due to
- * false locked (see comment in vma_start_read()). We
- * can avoid that by directly locking vm_lock under
- * mmap_lock, which guarantees that nobody can lock the
- * vma for write (vma_start_write()) under us.
- */
- down_read(&vma->vm_lock->lock);
- }
+ if (!IS_ERR(vma))
+ vma_start_read_locked(vma);
mmap_read_unlock(mm);
return vma;
@@ -1491,14 +1483,10 @@ static int uffd_move_lock(struct mm_struct *mm,
mmap_read_lock(mm);
err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
if (!err) {
- /*
- * See comment in uffd_lock_vma() as to why not using
- * vma_start_read() here.
- */
- down_read(&(*dst_vmap)->vm_lock->lock);
+ vma_start_read_locked(*dst_vmap);
if (*dst_vmap != *src_vmap)
- down_read_nested(&(*src_vmap)->vm_lock->lock,
- SINGLE_DEPTH_NESTING);
+ vma_start_read_locked_nested(*src_vmap,
+ SINGLE_DEPTH_NESTING);
}
mmap_read_unlock(mm);
return err;
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 02/18] mm: move per-vma lock into vm_area_struct
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 01/18] mm: introduce vma_start_read_locked{_nested} helpers Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 03/18] mm: mark vma as detached until it's added into vma tree Suren Baghdasaryan
` (16 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Liam R. Howlett
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason to
keep the split. Merging the vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset. Move
vm_lock back into vm_area_struct, aligning it at the cacheline boundary
and changing the cache to be cacheline-aligned as well. With a kernel
compiled using defconfig, this causes VMA memory consumption to grow from
160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:
slabinfo before:
<name> ... <objsize> <objperslab> <pagesperslab> : ...
vma_lock ... 40 102 1 : ...
vm_area_struct ... 160 51 2 : ...
slabinfo after moving vm_lock:
<name> ... <objsize> <objperslab> <pagesperslab> : ...
vm_area_struct ... 256 32 2 : ...
Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64 pages,
which is 5.5MB per 100000 VMAs. Note that the size of this structure is
dependent on the kernel configuration and typically the original size is
higher than 160 bytes. Therefore these calculations are close to the
worst case scenario. A more realistic vm_area_struct usage before this
change is:
<name> ... <objsize> <objperslab> <pagesperslab> : ...
vma_lock ... 40 102 1 : ...
vm_area_struct ... 176 46 2 : ...
Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64 pages,
which is 3.9MB per 100000 VMAs. This memory consumption growth can be
addressed later by optimizing the vm_lock.
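For reference, the page figures above follow directly from the slabinfo
numbers, assuming 4KB pages and ignoring slab bookkeeping overhead:
  defconfig before: ceil(1000/51)  = 20 slabs * 2 pages = 40 pages (vm_area_struct)
                  + ceil(1000/102) = 10 slabs * 1 page  = 10 pages (vma_lock)  -> 50 pages
  realistic before: ceil(1000/46)  = 22 slabs * 2 pages = 44 pages + 10 pages  -> 54 pages
  after:            ceil(1000/32)  = 32 slabs * 2 pages                        -> 64 pages
The growth of 14 (or 10) pages per 1000 VMAs scales to 1400 (or 1000) pages
per 100000 VMAs, i.e. roughly 5.5MB (or 3.9MB).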
[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
include/linux/mm.h | 28 ++++++++++--------
include/linux/mm_types.h | 6 ++--
kernel/fork.c | 49 ++++----------------------------
tools/testing/vma/vma_internal.h | 33 +++++----------------
4 files changed, 32 insertions(+), 84 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 16b3cd3de29a..e75fae95b48d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
#endif /* CONFIG_NUMA_BALANCING */
#ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+ init_rwsem(&vma->vm_lock.lock);
+ vma->vm_lock_seq = UINT_MAX;
+}
+
/*
* Try to read-lock a vma. The function is allowed to occasionally yield false
* locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
return false;
- if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+ if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
return false;
/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
* This pairs with RELEASE semantics in vma_end_write_all().
*/
if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
- up_read(&vma->vm_lock->lock);
+ up_read(&vma->vm_lock.lock);
return false;
}
return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
{
mmap_assert_locked(vma->vm_mm);
- down_read_nested(&vma->vm_lock->lock, subclass);
+ down_read_nested(&vma->vm_lock.lock, subclass);
}
/*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
static inline void vma_start_read_locked(struct vm_area_struct *vma)
{
mmap_assert_locked(vma->vm_mm);
- down_read(&vma->vm_lock->lock);
+ down_read(&vma->vm_lock.lock);
}
static inline void vma_end_read(struct vm_area_struct *vma)
{
rcu_read_lock(); /* keeps vma alive till the end of up_read */
- up_read(&vma->vm_lock->lock);
+ up_read(&vma->vm_lock.lock);
rcu_read_unlock();
}
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
if (__is_vma_write_locked(vma, &mm_lock_seq))
return;
- down_write(&vma->vm_lock->lock);
+ down_write(&vma->vm_lock.lock);
/*
* We should use WRITE_ONCE() here because we can have concurrent reads
* from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
* we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
*/
WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
- up_write(&vma->vm_lock->lock);
+ up_write(&vma->vm_lock.lock);
}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
static inline void vma_assert_locked(struct vm_area_struct *vma)
{
- if (!rwsem_is_locked(&vma->vm_lock->lock))
+ if (!rwsem_is_locked(&vma->vm_lock.lock))
vma_assert_write_locked(vma);
}
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
#else /* CONFIG_PER_VMA_LOCK */
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
static inline bool vma_start_read(struct vm_area_struct *vma)
{ return false; }
static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
extern const struct vm_operations_struct vma_dummy_vm_ops;
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
{
memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
INIT_LIST_HEAD(&vma->anon_vma_chain);
vma_mark_detached(vma, false);
vma_numab_state_init(vma);
+ vma_lock_init(vma);
}
/* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8efafef4637e..8a645bcb2b31 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -740,8 +740,6 @@ struct vm_area_struct {
* slowpath.
*/
unsigned int vm_lock_seq;
- /* Unstable RCU readers are allowed to read this. */
- struct vma_lock *vm_lock;
#endif
/*
@@ -794,6 +792,10 @@ struct vm_area_struct {
struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+ /* Unstable RCU readers are allowed to read this. */
+ struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
} __randomize_layout;
#ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index 735405a9c5f3..bdbabe73fb29 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
/* SLAB cache for mm_struct structures (tsk->mm) */
static struct kmem_cache *mm_cachep;
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
- vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
- if (!vma->vm_lock)
- return false;
-
- init_rwsem(&vma->vm_lock->lock);
- vma->vm_lock_seq = UINT_MAX;
-
- return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
- kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
{
struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
return NULL;
vma_init(vma, mm);
- if (!vma_lock_alloc(vma)) {
- kmem_cache_free(vm_area_cachep, vma);
- return NULL;
- }
return vma;
}
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
* will be reinitialized.
*/
data_race(memcpy(new, orig, sizeof(*new)));
- if (!vma_lock_alloc(new)) {
- kmem_cache_free(vm_area_cachep, new);
- return NULL;
- }
+ vma_lock_init(new);
INIT_LIST_HEAD(&new->anon_vma_chain);
vma_numab_state_init(new);
dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
{
vma_numab_state_free(vma);
free_anon_vma_name(vma);
- vma_lock_free(vma);
kmem_cache_free(vm_area_cachep, vma);
}
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
vm_rcu);
/* The vma should not be locked while being destroyed. */
- VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+ VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
__vm_area_free(vma);
}
#endif
@@ -3200,11 +3163,9 @@ void __init proc_caches_init(void)
sizeof(struct fs_struct), 0,
SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
NULL);
-
- vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
- vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+ vm_area_cachep = KMEM_CACHE(vm_area_struct,
+ SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+ SLAB_ACCOUNT);
mmap_init();
nsproxy_cache_init();
}
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index bb273927af0f..4506e6fb3c6f 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -275,10 +275,10 @@ struct vm_area_struct {
/*
* Can only be written (using WRITE_ONCE()) while holding both:
* - mmap_lock (in write mode)
- * - vm_lock->lock (in write mode)
+ * - vm_lock.lock (in write mode)
* Can be read reliably while holding one of:
* - mmap_lock (in read or write mode)
- * - vm_lock->lock (in read or write mode)
+ * - vm_lock.lock (in read or write mode)
* Can be read unreliably (using READ_ONCE()) for pessimistic bailout
* while holding nothing (except RCU to keep the VMA struct allocated).
*
@@ -287,7 +287,7 @@ struct vm_area_struct {
* slowpath.
*/
unsigned int vm_lock_seq;
- struct vma_lock *vm_lock;
+ struct vma_lock vm_lock;
#endif
/*
@@ -464,17 +464,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
return mas_find(&vmi->mas, ULONG_MAX);
}
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
{
- vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
- if (!vma->vm_lock)
- return false;
-
- init_rwsem(&vma->vm_lock->lock);
+ init_rwsem(&vma->vm_lock.lock);
vma->vm_lock_seq = UINT_MAX;
-
- return true;
}
static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -497,6 +490,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
vma_mark_detached(vma, false);
+ vma_lock_init(vma);
}
static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -507,10 +501,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
return NULL;
vma_init(vma, mm);
- if (!vma_lock_alloc(vma)) {
- free(vma);
- return NULL;
- }
return vma;
}
@@ -523,10 +513,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
return NULL;
memcpy(new, orig, sizeof(*new));
- if (!vma_lock_alloc(new)) {
- free(new);
- return NULL;
- }
+ vma_lock_init(new);
INIT_LIST_HEAD(&new->anon_vma_chain);
return new;
@@ -696,14 +683,8 @@ static inline void mpol_put(struct mempolicy *)
{
}
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
- free(vma->vm_lock);
-}
-
static inline void __vm_area_free(struct vm_area_struct *vma)
{
- vma_lock_free(vma);
free(vma);
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 03/18] mm: mark vma as detached until it's added into vma tree
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 01/18] mm: introduce vma_start_read_locked{_nested} helpers Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 02/18] mm: move per-vma lock into vm_area_struct Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
` (15 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Liam R. Howlett
The current implementation does not set the detached flag when a VMA is
first allocated. This does not represent the real state of the VMA, which
is detached until it is added into the mm's VMA tree. Fix this by marking
new VMAs as detached and resetting the detached flag only after the VMA is
added into a tree.
Introduce vma_mark_attached() to make the API more readable and to
simplify possible future cleanup when vma->vm_mm might be used to indicate
a detached vma and vma_mark_attached() will need an additional mm parameter.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
include/linux/mm.h | 27 ++++++++++++++++++++-------
kernel/fork.c | 4 ++++
mm/memory.c | 2 +-
mm/vma.c | 6 +++---
mm/vma.h | 2 ++
tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
6 files changed, 42 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e75fae95b48d..cd5ee61e98f2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,12 +821,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
vma_assert_write_locked(vma);
}
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+ vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
{
/* When detaching vma should be write-locked */
- if (detached)
- vma_assert_write_locked(vma);
- vma->detached = detached;
+ vma_assert_write_locked(vma);
+ vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+ return vma->detached;
}
static inline void release_fault_lock(struct vm_fault *vmf)
@@ -857,8 +866,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
static inline void vma_start_write(struct vm_area_struct *vma) {}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
- bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
unsigned long address)
@@ -891,7 +900,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
- vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+ /* vma is not locked, can't use vma_mark_detached() */
+ vma->detached = true;
+#endif
vma_numab_state_init(vma);
vma_lock_init(vma);
}
@@ -1086,6 +1098,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
if (unlikely(mas_is_err(&vmi->mas)))
return -ENOMEM;
+ vma_mark_attached(vma);
return 0;
}
diff --git a/kernel/fork.c b/kernel/fork.c
index bdbabe73fb29..5bf3e407c795 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
data_race(memcpy(new, orig, sizeof(*new)));
vma_lock_init(new);
INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+ /* vma is not locked, can't use vma_mark_detached() */
+ new->detached = true;
+#endif
vma_numab_state_init(new);
dup_anon_vma_name(orig, new);
diff --git a/mm/memory.c b/mm/memory.c
index a8d6dbd03668..e600a5ff3c7a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6414,7 +6414,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
goto inval;
/* Check if the VMA got isolated after we found it */
- if (vma->detached) {
+ if (is_vma_detached(vma)) {
vma_end_read(vma);
count_vm_vma_lock_event(VMA_LOCK_MISS);
/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index 39146c19f316..498507d8a262 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -341,7 +341,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
if (vp->remove) {
again:
- vma_mark_detached(vp->remove, true);
+ vma_mark_detached(vp->remove);
if (vp->file) {
uprobe_munmap(vp->remove, vp->remove->vm_start,
vp->remove->vm_end);
@@ -1238,7 +1238,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
mas_set(mas_detach, 0);
mas_for_each(mas_detach, vma, ULONG_MAX)
- vma_mark_detached(vma, false);
+ vma_mark_attached(vma);
__mt_destroy(mas_detach->tree);
}
@@ -1313,7 +1313,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
if (error)
goto munmap_gather_failed;
- vma_mark_detached(next, true);
+ vma_mark_detached(next);
nrpages = vma_pages(next);
vms->nr_pages += nrpages;
diff --git a/mm/vma.h b/mm/vma.h
index e55e68abfbe3..bffb56afce5f 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -205,6 +205,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
if (unlikely(mas_is_err(&vmi->mas)))
return -ENOMEM;
+ vma_mark_attached(vma);
return 0;
}
@@ -437,6 +438,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
mas_store_prealloc(&vmi->mas, vma);
+ vma_mark_attached(vma);
}
static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 4506e6fb3c6f..f93f7f74f97b 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -471,12 +471,16 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
}
static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+ vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
{
/* When detaching vma should be write-locked */
- if (detached)
- vma_assert_write_locked(vma);
- vma->detached = detached;
+ vma_assert_write_locked(vma);
+ vma->detached = true;
}
extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -489,7 +493,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
- vma_mark_detached(vma, false);
+ /* vma is not locked, can't use vma_mark_detached() */
+ vma->detached = true;
vma_lock_init(vma);
}
@@ -515,6 +520,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
memcpy(new, orig, sizeof(*new));
vma_lock_init(new);
INIT_LIST_HEAD(&new->anon_vma_chain);
+ /* vma is not locked, can't use vma_mark_detached() */
+ new->detached = true;
return new;
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (2 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 03/18] mm: mark vma as detached until it's added into vma tree Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-21 11:07 ` Vlastimil Babka
2025-02-21 16:11 ` Liam R. Howlett
2025-02-13 22:46 ` [PATCH v10 05/18] mm: mark vmas detached upon exit Suren Baghdasaryan
` (14 subsequent siblings)
18 siblings, 2 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
vma_iter_store() can be used both when adding a new vma and when updating
an existing one. However, for existing vmas we do not need to mark them
attached, as they are already marked that way. With vma->detached being a
separate flag, double-marking a vma as attached or detached is not an
issue because the flag will simply be overwritten with the same value.
However, once we fold this flag into the refcount later in this series,
re-attaching or re-detaching a vma becomes an issue, since these
operations will increment or decrement a refcount.
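To make that concrete, here is a tiny standalone illustration (not kernel
code; the 0/1/>1 encoding mirrors the scheme described in the cover letter)
of what double-attaching would do once the attached state lives in a
refcount:

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	/* 0 = detached, 1 = attached, >1 = attached and read-locked */
	atomic_int refcnt = 0;

	atomic_fetch_add(&refcnt, 1);	/* attach on the initial insert into the tree */
	atomic_fetch_add(&refcnt, 1);	/* "attach" again during an in-place update   */

	/* refcnt is now 2: the vma appears read-locked even though nobody
	 * holds it, so code that expects count == 1 for an idle attached
	 * vma would misbehave. */
	printf("refcnt = %d\n", atomic_load(&refcnt));
	return 0;
}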
Introduce vma_iter_store_new() and vma_iter_store_overwrite() to replace
vma_iter_store() and avoid re-attaching a vma during vma update. Add
assertions in vma_mark_attached()/vma_mark_detached() to catch invalid
usage. Update vma tests to check for vma detached state correctness.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Changes since v9 [1]:
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
per Lorenzo Stoakes
- Rename vma_iter_store() into vma_iter_store_new(), per Lorenzo Stoakes
- Expand changelog, per Lorenzo Stoakes
- Update vma tests to check for vma detached state correctness,
per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-5-surenb@google.com/
include/linux/mm.h | 14 +++++++++++
mm/nommu.c | 4 +--
mm/vma.c | 12 ++++-----
mm/vma.h | 11 +++++++--
tools/testing/vma/vma.c | 42 +++++++++++++++++++++++++-------
tools/testing/vma/vma_internal.h | 10 ++++++++
6 files changed, 74 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cd5ee61e98f2..1b8e72888124 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,8 +821,19 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
vma_assert_write_locked(vma);
}
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+ WARN_ON_ONCE(vma->detached);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+ WARN_ON_ONCE(!vma->detached);
+}
+
static inline void vma_mark_attached(struct vm_area_struct *vma)
{
+ vma_assert_detached(vma);
vma->detached = false;
}
@@ -830,6 +841,7 @@ static inline void vma_mark_detached(struct vm_area_struct *vma)
{
/* When detaching vma should be write-locked */
vma_assert_write_locked(vma);
+ vma_assert_attached(vma);
vma->detached = true;
}
@@ -866,6 +878,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
static inline void vma_start_write(struct vm_area_struct *vma) {}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{ mmap_assert_write_locked(vma->vm_mm); }
+static inline void vma_assert_attached(struct vm_area_struct *vma) {}
+static inline void vma_assert_detached(struct vm_area_struct *vma) {}
static inline void vma_mark_attached(struct vm_area_struct *vma) {}
static inline void vma_mark_detached(struct vm_area_struct *vma) {}
diff --git a/mm/nommu.c b/mm/nommu.c
index baa79abdaf03..8b31d8396297 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1191,7 +1191,7 @@ unsigned long do_mmap(struct file *file,
setup_vma_to_mm(vma, current->mm);
current->mm->map_count++;
/* add the VMA to the tree */
- vma_iter_store(&vmi, vma);
+ vma_iter_store_new(&vmi, vma);
/* we flush the region from the icache only when the first executable
* mapping of it is made */
@@ -1356,7 +1356,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
setup_vma_to_mm(vma, mm);
setup_vma_to_mm(new, mm);
- vma_iter_store(vmi, new);
+ vma_iter_store_new(vmi, new);
mm->map_count++;
return 0;
diff --git a/mm/vma.c b/mm/vma.c
index 498507d8a262..f72b73f57451 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -320,7 +320,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
* us to insert it before dropping the locks
* (it may either follow vma or precede it).
*/
- vma_iter_store(vmi, vp->insert);
+ vma_iter_store_new(vmi, vp->insert);
mm->map_count++;
}
@@ -700,7 +700,7 @@ static int commit_merge(struct vma_merge_struct *vmg)
vmg->__adjust_middle_start ? vmg->middle : NULL);
vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff);
vmg_adjust_set_range(vmg);
- vma_iter_store(vmg->vmi, vmg->target);
+ vma_iter_store_overwrite(vmg->vmi, vmg->target);
vma_complete(&vp, vmg->vmi, vma->vm_mm);
@@ -1707,7 +1707,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
return -ENOMEM;
vma_start_write(vma);
- vma_iter_store(&vmi, vma);
+ vma_iter_store_new(&vmi, vma);
vma_link_file(vma);
mm->map_count++;
validate_mm(mm);
@@ -2386,7 +2386,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
/* Lock the VMA since it is modified after insertion into VMA tree */
vma_start_write(vma);
- vma_iter_store(vmi, vma);
+ vma_iter_store_new(vmi, vma);
map->mm->map_count++;
vma_link_file(vma);
@@ -2862,7 +2862,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
anon_vma_interval_tree_pre_update_vma(vma);
vma->vm_end = address;
/* Overwrite old entry in mtree. */
- vma_iter_store(&vmi, vma);
+ vma_iter_store_overwrite(&vmi, vma);
anon_vma_interval_tree_post_update_vma(vma);
perf_event_mmap(vma);
@@ -2942,7 +2942,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
vma->vm_start = address;
vma->vm_pgoff -= grow;
/* Overwrite old entry in mtree. */
- vma_iter_store(&vmi, vma);
+ vma_iter_store_overwrite(&vmi, vma);
anon_vma_interval_tree_post_update_vma(vma);
perf_event_mmap(vma);
diff --git a/mm/vma.h b/mm/vma.h
index bffb56afce5f..55be77ff042f 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -413,9 +413,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
}
/* Store a VMA with preallocated memory */
-static inline void vma_iter_store(struct vma_iterator *vmi,
- struct vm_area_struct *vma)
+static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
+ struct vm_area_struct *vma)
{
+ vma_assert_attached(vma);
#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
@@ -438,7 +439,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
mas_store_prealloc(&vmi->mas, vma);
+}
+
+static inline void vma_iter_store_new(struct vma_iterator *vmi,
+ struct vm_area_struct *vma)
+{
vma_mark_attached(vma);
+ vma_iter_store_overwrite(vmi, vma);
}
static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index c7ffa71841ca..11f761769b5b 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -74,10 +74,22 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
ret->vm_end = end;
ret->vm_pgoff = pgoff;
ret->__vm_flags = flags;
+ vma_assert_detached(ret);
return ret;
}
+/* Helper function to allocate a VMA and link it to the tree. */
+static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
+{
+ int res;
+
+ res = vma_link(mm, vma);
+ if (!res)
+ vma_assert_attached(vma);
+ return res;
+}
+
/* Helper function to allocate a VMA and link it to the tree. */
static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
unsigned long start,
@@ -90,7 +102,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
if (vma == NULL)
return NULL;
- if (vma_link(mm, vma)) {
+ if (attach_vma(mm, vma)) {
vm_area_free(vma);
return NULL;
}
@@ -108,6 +120,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
/* Helper function which provides a wrapper around a merge new VMA operation. */
static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
{
+ struct vm_area_struct *vma;
/*
* For convenience, get prev and next VMAs. Which the new VMA operation
* requires.
@@ -116,7 +129,11 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
vmg->prev = vma_prev(vmg->vmi);
vma_iter_next_range(vmg->vmi);
- return vma_merge_new_range(vmg);
+ vma = vma_merge_new_range(vmg);
+ if (vma)
+ vma_assert_attached(vma);
+
+ return vma;
}
/*
@@ -125,7 +142,12 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
*/
static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg)
{
- return vma_merge_existing_range(vmg);
+ struct vm_area_struct *vma;
+
+ vma = vma_merge_existing_range(vmg);
+ if (vma)
+ vma_assert_attached(vma);
+ return vma;
}
/*
@@ -260,8 +282,8 @@ static bool test_simple_merge(void)
.pgoff = 1,
};
- ASSERT_FALSE(vma_link(&mm, vma_left));
- ASSERT_FALSE(vma_link(&mm, vma_right));
+ ASSERT_FALSE(attach_vma(&mm, vma_left));
+ ASSERT_FALSE(attach_vma(&mm, vma_right));
vma = merge_new(&vmg);
ASSERT_NE(vma, NULL);
@@ -285,7 +307,7 @@ static bool test_simple_modify(void)
struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
VMA_ITERATOR(vmi, &mm, 0x1000);
- ASSERT_FALSE(vma_link(&mm, init_vma));
+ ASSERT_FALSE(attach_vma(&mm, init_vma));
/*
* The flags will not be changed, the vma_modify_flags() function
@@ -351,7 +373,7 @@ static bool test_simple_expand(void)
.pgoff = 0,
};
- ASSERT_FALSE(vma_link(&mm, vma));
+ ASSERT_FALSE(attach_vma(&mm, vma));
ASSERT_FALSE(expand_existing(&vmg));
@@ -372,7 +394,7 @@ static bool test_simple_shrink(void)
struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
VMA_ITERATOR(vmi, &mm, 0);
- ASSERT_FALSE(vma_link(&mm, vma));
+ ASSERT_FALSE(attach_vma(&mm, vma));
ASSERT_FALSE(vma_shrink(&vmi, vma, 0, 0x1000, 0));
@@ -1522,11 +1544,11 @@ static bool test_copy_vma(void)
vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
vma_new = copy_vma(&vma, 0, 0x2000, 0, &need_locks);
-
ASSERT_NE(vma_new, vma);
ASSERT_EQ(vma_new->vm_start, 0);
ASSERT_EQ(vma_new->vm_end, 0x2000);
ASSERT_EQ(vma_new->vm_pgoff, 0);
+ vma_assert_attached(vma_new);
cleanup_mm(&mm, &vmi);
@@ -1535,6 +1557,7 @@ static bool test_copy_vma(void)
vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, flags);
vma_new = copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks);
+ vma_assert_attached(vma_new);
ASSERT_EQ(vma_new, vma_next);
@@ -1576,6 +1599,7 @@ static bool test_expand_only_mode(void)
ASSERT_EQ(vma->vm_pgoff, 3);
ASSERT_TRUE(vma_write_started(vma));
ASSERT_EQ(vma_iter_addr(&vmi), 0x3000);
+ vma_assert_attached(vma);
cleanup_mm(&mm, &vmi);
return true;
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index f93f7f74f97b..34277842156c 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -470,6 +470,16 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
vma->vm_lock_seq = UINT_MAX;
}
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+ WARN_ON_ONCE(vma->detached);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+ WARN_ON_ONCE(!vma->detached);
+}
+
static inline void vma_assert_write_locked(struct vm_area_struct *);
static inline void vma_mark_attached(struct vm_area_struct *vma)
{
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 05/18] mm: mark vmas detached upon exit
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (3 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-21 16:14 ` Liam R. Howlett
2025-02-13 22:46 ` [PATCH v10 06/18] types: move struct rcuwait into types.h Suren Baghdasaryan
` (13 subsequent siblings)
18 siblings, 1 reply; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
When exit_mmap() removes vmas belonging to an exiting task, it does not
mark them as detached since they can't be reached by other tasks and they
will be freed shortly. Once we introduce vma reuse, all vmas will have to
be in a detached state before they are freed, to ensure that a vma is in a
consistent state when it is reused. Add the missing vma_mark_detached()
before freeing the vma.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-6-surenb@google.com/
mm/vma.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/vma.c b/mm/vma.c
index f72b73f57451..a16a83d0253f 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -427,10 +427,12 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
if (vma->vm_file)
fput(vma->vm_file);
mpol_put(vma_policy(vma));
- if (unreachable)
+ if (unreachable) {
+ vma_mark_detached(vma);
__vm_area_free(vma);
- else
+ } else {
vm_area_free(vma);
+ }
}
/*
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 06/18] types: move struct rcuwait into types.h
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (4 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 05/18] mm: mark vmas detached upon exit Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail Suren Baghdasaryan
` (12 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Liam R. Howlett
Move the rcuwait struct definition into types.h so that rcuwait can be
used without including rcuwait.h, which includes other headers. Without
this change mm_types.h can't use rcuwait due to the following circular
dependency:
mm_types.h -> rcuwait.h -> signal.h -> mm_types.h
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Add Acked-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-7-surenb@google.com/
include/linux/rcuwait.h | 13 +------------
include/linux/types.h | 12 ++++++++++++
2 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 27343424225c..9ad134a04b41 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -4,18 +4,7 @@
#include <linux/rcupdate.h>
#include <linux/sched/signal.h>
-
-/*
- * rcuwait provides a way of blocking and waking up a single
- * task in an rcu-safe manner.
- *
- * The only time @task is non-nil is when a user is blocked (or
- * checking if it needs to) on a condition, and reset as soon as we
- * know that the condition has succeeded and are awoken.
- */
-struct rcuwait {
- struct task_struct __rcu *task;
-};
+#include <linux/types.h>
#define __RCUWAIT_INITIALIZER(name) \
{ .task = NULL, }
diff --git a/include/linux/types.h b/include/linux/types.h
index 1c509ce8f7f6..a3d2182c2686 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -248,5 +248,17 @@ typedef void (*swap_func_t)(void *a, void *b, int size);
typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
typedef int (*cmp_func_t)(const void *a, const void *b);
+/*
+ * rcuwait provides a way of blocking and waking up a single
+ * task in an rcu-safe manner.
+ *
+ * The only time @task is non-nil is when a user is blocked (or
+ * checking if it needs to) on a condition, and reset as soon as we
+ * know that the condition has succeeded and are awoken.
+ */
+struct rcuwait {
+ struct task_struct __rcu *task;
+};
+
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_TYPES_H */
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (5 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 06/18] types: move struct rcuwait into types.h Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-21 11:27 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 08/18] mm: move mmap_init_lock() out of the header file Suren Baghdasaryan
` (11 subsequent siblings)
18 siblings, 1 reply; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
With the upcoming replacement of vm_lock with vm_refcnt, we need to handle
the possibility of vma_start_read_locked()/vma_start_read_locked_nested()
failing due to refcount overflow. Prepare for this possibility by changing
these APIs and adjusting their users.
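For context, the failure comes from the limited increment introduced later
in the series (__refcount_inc_not_zero_limited_acquire()): once the count
reaches its limit, taking another read lock has to fail rather than
overflow. Below is a rough userspace model of that behaviour, with invented
names and an artificially small limit so the failure path is visible:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define REF_LIMIT 4	/* artificially small so the failure path triggers */

/* increment ref only if it is non-zero and below the limit */
static bool inc_not_zero_limited(atomic_int *ref, int limit)
{
	int old = atomic_load(ref);

	do {
		if (old == 0 || old >= limit)
			return false;
	} while (!atomic_compare_exchange_weak(ref, &old, old + 1));

	return true;
}

int main(void)
{
	atomic_int ref = 1;	/* attached vma, no readers yet */

	for (int i = 1; i <= 5; i++)
		printf("reader %d: %s\n", i,
		       inc_not_zero_limited(&ref, REF_LIMIT) ?
		       "read-locked" : "failed");
	return 0;
}

In the kernel the limit is far larger, but callers of the locked variants
still have to cope with the rare failure, hence the bool return value below.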
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Lokesh Gidra <lokeshgidra@google.com>
---
Changes since v9 [1]:
- Refactor the code, per Lorenzo Stoakes
- Remove Vlastimil's Acked-by since code is changed
[1] https://lore.kernel.org/all/20250111042604.3230628-8-surenb@google.com/
include/linux/mm.h | 6 ++++--
mm/userfaultfd.c | 30 +++++++++++++++++++++++-------
2 files changed, 27 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1b8e72888124..7fa7c39162fd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,10 +747,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
* not be used in such cases because it might fail due to mm_lock_seq overflow.
* This functionality is used to obtain vma read lock and drop the mmap read lock.
*/
-static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
{
mmap_assert_locked(vma->vm_mm);
down_read_nested(&vma->vm_lock.lock, subclass);
+ return true;
}
/*
@@ -759,10 +760,11 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
* not be used in such cases because it might fail due to mm_lock_seq overflow.
* This functionality is used to obtain vma read lock and drop the mmap read lock.
*/
-static inline void vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct vm_area_struct *vma)
{
mmap_assert_locked(vma->vm_mm);
down_read(&vma->vm_lock.lock);
+ return true;
}
static inline void vma_end_read(struct vm_area_struct *vma)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4527c385935b..867898c4e30b 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,8 +84,12 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
mmap_read_lock(mm);
vma = find_vma_and_prepare_anon(mm, address);
- if (!IS_ERR(vma))
- vma_start_read_locked(vma);
+ if (!IS_ERR(vma)) {
+ bool locked = vma_start_read_locked(vma);
+
+ if (!locked)
+ vma = ERR_PTR(-EAGAIN);
+ }
mmap_read_unlock(mm);
return vma;
@@ -1482,12 +1486,24 @@ static int uffd_move_lock(struct mm_struct *mm,
mmap_read_lock(mm);
err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
- if (!err) {
- vma_start_read_locked(*dst_vmap);
- if (*dst_vmap != *src_vmap)
- vma_start_read_locked_nested(*src_vmap,
- SINGLE_DEPTH_NESTING);
+ if (err)
+ goto out;
+
+ if (!vma_start_read_locked(*dst_vmap)) {
+ err = -EAGAIN;
+ goto out;
}
+
+ /* Nothing further to do if both vmas are locked. */
+ if (*dst_vmap == *src_vmap)
+ goto out;
+
+ if (!vma_start_read_locked_nested(*src_vmap, SINGLE_DEPTH_NESTING)) {
+ /* Undo dst_vmap locking if src_vmap failed to lock */
+ vma_end_read(*dst_vmap);
+ err = -EAGAIN;
+ }
+out:
mmap_read_unlock(mm);
return err;
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 08/18] mm: move mmap_init_lock() out of the header file
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (6 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 09/18] mm: uninline the main body of vma_start_write() Suren Baghdasaryan
` (10 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
mmap_init_lock() is used only from mm_init() in fork.c, therefore it does
not have to reside in the header file. This move lets us avoid including
additional headers in mmap_lock.h later, when mmap_init_lock() will need
to initialize an rcuwait object.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Drop inline for mmap_init_lock(), per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-9-surenb@google.com/
include/linux/mmap_lock.h | 6 ------
kernel/fork.c | 6 ++++++
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff6..4706c6769902 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -122,12 +122,6 @@ static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int
#endif /* CONFIG_PER_VMA_LOCK */
-static inline void mmap_init_lock(struct mm_struct *mm)
-{
- init_rwsem(&mm->mmap_lock);
- mm_lock_seqcount_init(mm);
-}
-
static inline void mmap_write_lock(struct mm_struct *mm)
{
__mmap_lock_trace_start_locking(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index 5bf3e407c795..f1af413e5aa4 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1230,6 +1230,12 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
#endif
}
+static void mmap_init_lock(struct mm_struct *mm)
+{
+ init_rwsem(&mm->mmap_lock);
+ mm_lock_seqcount_init(mm);
+}
+
static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
struct user_namespace *user_ns)
{
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 09/18] mm: uninline the main body of vma_start_write()
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (7 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 08/18] mm: move mmap_init_lock() out of the header file Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 10/18] refcount: provide ops for cases when object's memory can be reused Suren Baghdasaryan
` (9 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths, and uninlining it should
limit future code size growth.
No functional changes.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-10-surenb@google.com/
include/linux/mm.h | 12 +++---------
mm/memory.c | 14 ++++++++++++++
2 files changed, 17 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7fa7c39162fd..557d66e187ff 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
return (vma->vm_lock_seq == *mm_lock_seq);
}
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
/*
* Begin writing to a VMA.
* Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
if (__is_vma_write_locked(vma, &mm_lock_seq))
return;
- down_write(&vma->vm_lock.lock);
- /*
- * We should use WRITE_ONCE() here because we can have concurrent reads
- * from the early lockless pessimistic check in vma_start_read().
- * We don't really care about the correctness of that early check, but
- * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
- */
- WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
- up_write(&vma->vm_lock.lock);
+ __vma_start_write(vma, mm_lock_seq);
}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index e600a5ff3c7a..3d9c5141193f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6393,6 +6393,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
#endif
#ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+ down_write(&vma->vm_lock.lock);
+ /*
+ * We should use WRITE_ONCE() here because we can have concurrent reads
+ * from the early lockless pessimistic check in vma_start_read().
+ * We don't really care about the correctness of that early check, but
+ * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+ */
+ WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+ up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
/*
* Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
* stable and not isolated. If the VMA is not found or is being modified the
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 10/18] refcount: provide ops for cases when object's memory can be reused
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (8 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 09/18] mm: uninline the main body of vma_start_write() Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 11/18] refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire Suren Baghdasaryan
` (8 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Will Deacon
For speculative lookups where a successful inc_not_zero() pins the
object, but where we still need to double check if the object acquired
is indeed the one we set out to acquire (identity check), this
validation needs to happen *after* the increment.
Similarly, when a new object is initialized and its memory might have
been previously occupied by another object, all stores to initialize the
object should happen *before* refcount initialization.
Notably, SLAB_TYPESAFE_BY_RCU is one such example where this ordering
is required for reference counting.
Add refcount_{add|inc}_not_zero_acquire() to guarantee the proper ordering
between acquiring a reference count on an object and performing the
identity check for that object.
Add refcount_set_release() to guarantee proper ordering between stores
initializing object attributes and the store initializing the refcount.
refcount_set_release() should be done after all other object attributes
are initialized. Once refcount_set_release() is called, the object should
be considered visible to other tasks even if it was not yet added into an
object collection normally used to discover it. This is because other
tasks might have discovered the object previously occupying the same
memory, and after memory reuse they can succeed in taking a refcount on
the new object and start using it.
Object reuse example to consider:

consumer:
        obj = lookup(collection, key);
        if (!refcount_inc_not_zero_acquire(&obj->ref))
                return;
        if (READ_ONCE(obj->key) != key) { /* identity check */
                put_ref(obj);
                return;
        }
        use(obj->value);

producer:
        remove(collection, obj->key);
        if (!refcount_dec_and_test(&obj->ref))
                return;
        obj->key = KEY_INVALID;
        free(obj);
        obj = malloc(); /* obj is reused */
        obj->key = new_key;
        obj->value = new_value;
        refcount_set_release(obj->ref, 1);
        add(collection, new_key, obj);

refcount_{add|inc}_not_zero_acquire() is required to prevent the following
reordering when refcount_inc_not_zero() is used instead:

consumer:
        obj = lookup(collection, key);
        if (READ_ONCE(obj->key) != key) { /* reordered identity check */
                put_ref(obj);
                return;
        }

producer:
        remove(collection, obj->key);
        if (!refcount_dec_and_test(&obj->ref))
                return;
        obj->key = KEY_INVALID;
        free(obj);
        obj = malloc(); /* obj is reused */
        obj->key = new_key;
        obj->value = new_value;
        refcount_set_release(obj->ref, 1);
        add(collection, new_key, obj);

consumer (continued):
        if (!refcount_inc_not_zero(&obj->ref))
                return;
        use(obj->value); /* USING WRONG OBJECT */

refcount_set_release() is required to prevent the following reordering
when refcount_set() is used instead:

consumer:
        obj = lookup(collection, key);

producer:
        remove(collection, obj->key);
        if (!refcount_dec_and_test(&obj->ref))
                return;
        obj->key = KEY_INVALID;
        free(obj);
        obj = malloc(); /* obj is reused */
        obj->key = new_key; /* new_key == old_key */
        refcount_set(obj->ref, 1);

consumer (continued):
        if (!refcount_inc_not_zero_acquire(&obj->ref))
                return;
        if (READ_ONCE(obj->key) != key) { /* pass since new_key == old_key */
                put_ref(obj);
                return;
        }
        use(obj->value); /* USING STALE obj->value */

producer (continued):
        obj->value = new_value; /* reordered store */
        add(collection, key, obj);
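To make the intended pairing concrete, below is a minimal editorial sketch of
a SLAB_TYPESAFE_BY_RCU cache user combining these helpers. The obj type,
obj_cache, obj_tryget() and obj_put() are hypothetical and not part of this
patch; the caller is assumed to hold rcu_read_lock() while it dereferences a
candidate object so the memory stays valid while the refcount is taken.

#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct obj {
        unsigned long key;
        refcount_t ref;
        /* ... payload ... */
};

/* Cache assumed to be created with SLAB_TYPESAFE_BY_RCU. */
static struct kmem_cache *obj_cache;

static void obj_put(struct obj *obj)
{
        if (refcount_dec_and_test(&obj->ref))
                kmem_cache_free(obj_cache, obj);
}

/* Reader side: pin the object first, verify its identity second. */
static struct obj *obj_tryget(struct obj *candidate, unsigned long key)
{
        /* Acquire ordering keeps the key check after the increment. */
        if (!refcount_inc_not_zero_acquire(&candidate->ref))
                return NULL;            /* object is being freed */
        if (READ_ONCE(candidate->key) != key) {
                /* Memory was reused for another object; back off. */
                obj_put(candidate);
                return NULL;
        }
        return candidate;
}

/* Writer side: initialize everything before publishing the refcount. */
static struct obj *obj_alloc(unsigned long key)
{
        struct obj *new = kmem_cache_alloc(obj_cache, GFP_KERNEL);

        if (!new)
                return NULL;
        new->key = key;
        /* Release ordering makes the stores above visible before the ref. */
        refcount_set_release(&new->ref, 1);
        return new;
}

A real user would additionally remove the object from its collection before
the final obj_put(), exactly as in the producer flow above.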
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz> #slab
---
Documentation/RCU/whatisRCU.rst | 10 ++
Documentation/core-api/refcount-vs-atomic.rst | 37 +++++-
include/linux/refcount.h | 106 ++++++++++++++++++
include/linux/slab.h | 9 ++
4 files changed, 156 insertions(+), 6 deletions(-)
diff --git a/Documentation/RCU/whatisRCU.rst b/Documentation/RCU/whatisRCU.rst
index 1ef5784c1b84..53faeed7c190 100644
--- a/Documentation/RCU/whatisRCU.rst
+++ b/Documentation/RCU/whatisRCU.rst
@@ -971,6 +971,16 @@ unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
initialized after each and every call to kmem_cache_alloc(), which renders
reference-free spinlock acquisition completely unsafe. Therefore, when
using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
+If using refcount_t, the specialized refcount_{add|inc}_not_zero_acquire()
+and refcount_set_release() APIs should be used to ensure correct operation
+ordering when verifying object identity and when initializing newly
+allocated objects. Acquire fence in refcount_{add|inc}_not_zero_acquire()
+ensures that identity checks happen *after* reference count is taken.
+refcount_set_release() should be called after a newly allocated object is
+fully initialized and release fence ensures that new values are visible
+*before* refcount can be successfully taken by other users. Once
+refcount_set_release() is called, the object should be considered visible
+by other tasks.
(Those willing to initialize their locks in a kmem_cache constructor
may also use locking, including cache-friendly sequence locking.)
diff --git a/Documentation/core-api/refcount-vs-atomic.rst b/Documentation/core-api/refcount-vs-atomic.rst
index 79a009ce11df..9551a7bbfd38 100644
--- a/Documentation/core-api/refcount-vs-atomic.rst
+++ b/Documentation/core-api/refcount-vs-atomic.rst
@@ -86,7 +86,19 @@ Memory ordering guarantee changes:
* none (both fully unordered)
-case 2) - increment-based ops that return no value
+case 2) - non-"Read/Modify/Write" (RMW) ops with release ordering
+-------------------------------------------
+
+Function changes:
+
+ * atomic_set_release() --> refcount_set_release()
+
+Memory ordering guarantee changes:
+
+ * none (both provide RELEASE ordering)
+
+
+case 3) - increment-based ops that return no value
--------------------------------------------------
Function changes:
@@ -98,7 +110,7 @@ Memory ordering guarantee changes:
* none (both fully unordered)
-case 3) - decrement-based RMW ops that return no value
+case 4) - decrement-based RMW ops that return no value
------------------------------------------------------
Function changes:
@@ -110,7 +122,7 @@ Memory ordering guarantee changes:
* fully unordered --> RELEASE ordering
-case 4) - increment-based RMW ops that return a value
+case 5) - increment-based RMW ops that return a value
-----------------------------------------------------
Function changes:
@@ -126,7 +138,20 @@ Memory ordering guarantees changes:
result of obtaining pointer to the object!
-case 5) - generic dec/sub decrement-based RMW ops that return a value
+case 6) - increment-based RMW ops with acquire ordering that return a value
+-----------------------------------------------------
+
+Function changes:
+
+ * atomic_inc_not_zero() --> refcount_inc_not_zero_acquire()
+ * no atomic counterpart --> refcount_add_not_zero_acquire()
+
+Memory ordering guarantees changes:
+
+ * fully ordered --> ACQUIRE ordering on success
+
+
+case 7) - generic dec/sub decrement-based RMW ops that return a value
---------------------------------------------------------------------
Function changes:
@@ -139,7 +164,7 @@ Memory ordering guarantees changes:
* fully ordered --> RELEASE ordering + ACQUIRE ordering on success
-case 6) other decrement-based RMW ops that return a value
+case 8) other decrement-based RMW ops that return a value
---------------------------------------------------------
Function changes:
@@ -154,7 +179,7 @@ Memory ordering guarantees changes:
.. note:: atomic_add_unless() only provides full order on success.
-case 7) - lock-based RMW
+case 9) - lock-based RMW
------------------------
Function changes:
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..4589d2e7bfea 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -87,6 +87,15 @@
* The decrements dec_and_test() and sub_and_test() also provide acquire
* ordering on success.
*
+ * refcount_{add|inc}_not_zero_acquire() and refcount_set_release() provide
+ * acquire and release ordering for cases when the memory occupied by the
+ * object might be reused to store another object. This is important for the
+ * cases where secondary validation is required to detect such reuse, e.g.
+ * SLAB_TYPESAFE_BY_RCU. The secondary validation checks have to happen after
+ * the refcount is taken, hence acquire order is necessary. Similarly, when the
+ * object is initialized, all stores to its attributes should be visible before
+ * the refcount is set, otherwise a stale attribute value might be used by
+ * another task which succeeds in taking a refcount to the new object.
*/
#ifndef _LINUX_REFCOUNT_H
@@ -125,6 +134,31 @@ static inline void refcount_set(refcount_t *r, int n)
atomic_set(&r->refs, n);
}
+/**
+ * refcount_set_release - set a refcount's value with release ordering
+ * @r: the refcount
+ * @n: value to which the refcount will be set
+ *
+ * This function should be used when memory occupied by the object might be
+ * reused to store another object -- consider SLAB_TYPESAFE_BY_RCU.
+ *
+ * Provides release memory ordering which will order previous memory operations
+ * against this store. This ensures all updates to this object are visible
+ * once the refcount is set and stale values from the object previously
+ * occupying this memory are overwritten with new ones.
+ *
+ * This function should be called only after new object is fully initialized.
+ * After this call the object should be considered visible to other tasks even
+ * if it was not yet added into an object collection normally used to discover
+ * it. This is because other tasks might have discovered the object previously
+ * occupying the same memory and after memory reuse they can succeed in taking
+ * refcount to the new object and start using it.
+ */
+static inline void refcount_set_release(refcount_t *r, int n)
+{
+ atomic_set_release(&r->refs, n);
+}
+
/**
* refcount_read - get a refcount's value
* @r: the refcount
@@ -178,6 +212,52 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
return __refcount_add_not_zero(i, r, NULL);
}
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero_acquire(int i, refcount_t *r, int *oldp)
+{
+ int old = refcount_read(r);
+
+ do {
+ if (!old)
+ break;
+ } while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + i));
+
+ if (oldp)
+ *oldp = old;
+
+ if (unlikely(old < 0 || old + i < 0))
+ refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF);
+
+ return old;
+}
+
+/**
+ * refcount_add_not_zero_acquire - add a value to a refcount with acquire ordering unless it is 0
+ *
+ * @i: the value to add to the refcount
+ * @r: the refcount
+ *
+ * Will saturate at REFCOUNT_SATURATED and WARN.
+ *
+ * This function should be used when memory occupied by the object might be
+ * reused to store another object -- consider SLAB_TYPESAFE_BY_RCU.
+ *
+ * Provides acquire memory ordering on success, it is assumed the caller has
+ * guaranteed the object memory to be stable (RCU, etc.). It does provide a
+ * control dependency and thereby orders future stores. See the comment on top.
+ *
+ * Use of this function is not recommended for the normal reference counting
+ * use case in which references are taken and released one at a time. In these
+ * cases, refcount_inc_not_zero_acquire() should instead be used to increment a
+ * reference count.
+ *
+ * Return: false if the passed refcount is 0, true otherwise
+ */
+static inline __must_check bool refcount_add_not_zero_acquire(int i, refcount_t *r)
+{
+ return __refcount_add_not_zero_acquire(i, r, NULL);
+}
+
static inline __signed_wrap
void __refcount_add(int i, refcount_t *r, int *oldp)
{
@@ -236,6 +316,32 @@ static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
return __refcount_inc_not_zero(r, NULL);
}
+static inline __must_check bool __refcount_inc_not_zero_acquire(refcount_t *r, int *oldp)
+{
+ return __refcount_add_not_zero_acquire(1, r, oldp);
+}
+
+/**
+ * refcount_inc_not_zero_acquire - increment a refcount with acquire ordering unless it is 0
+ * @r: the refcount to increment
+ *
+ * Similar to refcount_inc_not_zero(), but provides acquire memory ordering on
+ * success.
+ *
+ * This function should be used when memory occupied by the object might be
+ * reused to store another object -- consider SLAB_TYPESAFE_BY_RCU.
+ *
+ * Provides acquire memory ordering on success, it is assumed the caller has
+ * guaranteed the object memory to be stable (RCU, etc.). It does provide a
+ * control dependency and thereby orders future stores. See the comment on top.
+ *
+ * Return: true if the increment was successful, false otherwise
+ */
+static inline __must_check bool refcount_inc_not_zero_acquire(refcount_t *r)
+{
+ return __refcount_inc_not_zero_acquire(r, NULL);
+}
+
static inline void __refcount_inc(refcount_t *r, int *oldp)
{
__refcount_add(1, r, oldp);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 09eedaecf120..ad902a2d692b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -136,6 +136,15 @@ enum _slab_flag_bits {
* rcu_read_lock before reading the address, then rcu_read_unlock after
* taking the spinlock within the structure expected at that address.
*
+ * Note that object identity check has to be done *after* acquiring a
+ * reference, therefore user has to ensure proper ordering for loads.
+ * Similarly, when initializing objects allocated with SLAB_TYPESAFE_BY_RCU,
+ * the newly allocated object has to be fully initialized *before* its
+ * refcount gets initialized and proper ordering for stores is required.
+ * refcount_{add|inc}_not_zero_acquire() and refcount_set_release() are
+ * designed with the proper fences required for reference counting objects
+ * allocated with SLAB_TYPESAFE_BY_RCU.
+ *
* Note that it is not possible to acquire a lock within a structure
* allocated with SLAB_TYPESAFE_BY_RCU without first acquiring a reference
* as described above. The reason is that SLAB_TYPESAFE_BY_RCU pages
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 11/18] refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (9 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 10/18] refcount: provide ops for cases when object's memory can be reused Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
` (7 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
Introduce functions that increase a refcount but with a top limit above which
they will fail to do so (the limit is inclusive). Setting the limit to
INT_MAX indicates no limit.
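As an illustrative sketch (obj, obj_get_limited() and LIMIT are hypothetical
names, not part of this patch), a caller that wants to cap the number of
concurrent references could do:

/* Hypothetical helper; obj and LIMIT are illustrative only. */
static int obj_get_limited(struct obj *obj)
{
        int oldcnt;

        if (!__refcount_inc_not_zero_limited_acquire(&obj->ref, &oldcnt, LIMIT)) {
                /*
                 * Either the object was already dead (oldcnt == 0) or taking
                 * one more reference would push the count above LIMIT (the
                 * limit is inclusive).
                 */
                return -EAGAIN;
        }
        return 0;
}

This is the form vma_start_read() uses in the next patch, with VMA_REF_LIMIT
as the cap so that a vma with the writer bit set cannot be read-locked.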
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Changes since v9 [1]:
- Change refcount limit to be used with xxx_acquire functions
[1] https://lore.kernel.org/all/20250111042604.3230628-11-surenb@google.com/
include/linux/refcount.h | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 4589d2e7bfea..80dc023ac2bf 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -213,13 +213,20 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
}
static inline __must_check __signed_wrap
-bool __refcount_add_not_zero_acquire(int i, refcount_t *r, int *oldp)
+bool __refcount_add_not_zero_limited_acquire(int i, refcount_t *r, int *oldp,
+ int limit)
{
int old = refcount_read(r);
do {
if (!old)
break;
+
+ if (i > limit - old) {
+ if (oldp)
+ *oldp = old;
+ return false;
+ }
} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + i));
if (oldp)
@@ -231,6 +238,18 @@ bool __refcount_add_not_zero_acquire(int i, refcount_t *r, int *oldp)
return old;
}
+static inline __must_check bool
+__refcount_inc_not_zero_limited_acquire(refcount_t *r, int *oldp, int limit)
+{
+ return __refcount_add_not_zero_limited_acquire(1, r, oldp, limit);
+}
+
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero_acquire(int i, refcount_t *r, int *oldp)
+{
+ return __refcount_add_not_zero_limited_acquire(i, r, oldp, INT_MAX);
+}
+
/**
* refcount_add_not_zero_acquire - add a value to a refcount with acquire ordering unless it is 0
*
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (10 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 11/18] refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-20 18:53 ` Heiko Carstens
2025-02-21 15:18 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline Suren Baghdasaryan
` (6 subsequent siblings)
18 siblings, 2 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
rw_semaphore is a sizable structure of 40 bytes and consumes
considerable space in each vm_area_struct. However, vma_lock has
two important properties that allow us to replace the rw_semaphore
with a simpler structure:
1. Readers never wait. They try to take the vma_lock and fall back to
mmap_lock if that fails.
2. Only one writer at a time will ever try to write-lock a vma_lock
because writers first take mmap_lock in write mode.
Because of these requirements, full rw_semaphore functionality is not
needed and we can replace rw_semaphore and the vma->detached flag with
a refcount (vm_refcnt).
When a vma is in the detached state, vm_refcnt is 0 and only a call to
vma_mark_attached() can take it out of this state. Note that unlike
before, we now enforce that both vma_mark_attached() and vma_mark_detached()
are done only after the vma has been write-locked. vma_mark_attached()
changes vm_refcnt to 1 to indicate that the vma has been attached to the
vma tree. When a reader takes the read lock, it increments vm_refcnt,
unless the top usable bit of vm_refcnt (0x40000000) is set, indicating the
presence of a writer. When the writer takes the write lock, it sets the top
usable bit to indicate its presence. If there are readers, the writer will
wait using the newly introduced mm->vma_writer_wait. Since all writers take
mmap_lock in write mode first, there can be only one writer at a time. The
last reader to release the lock will signal the writer to wake up.
The refcount might overflow if there are many competing readers, in which
case read-locking will fail. Readers are expected to handle such failures.
In summary:
1. all readers increment the vm_refcnt;
2. writer sets top usable (writer) bit of vm_refcnt;
3. readers cannot increment the vm_refcnt if the writer bit is set;
4. in the presence of readers, writer must wait for the vm_refcnt to drop
to 1 (plus the VMA_LOCK_OFFSET writer bit), indicating an attached vma
with no readers;
5. vm_refcnt overflow is handled by the readers.
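As an editorial illustration of the encoding described above (derived from
this patch, not normative), the vm_refcnt values map to vma states roughly
as follows, where VMA_LOCK_OFFSET is the writer bit 0x40000000:

        vm_refcnt value            vma state
        0                          detached, unlocked
        1                          attached, unlocked
        1 + N                      attached, read-locked by N readers
        VMA_LOCK_OFFSET + 1        attached, write-locked, no readers
        VMA_LOCK_OFFSET + 1 + N    attached, write-locked, N readers draining

vma_mark_detached() drops the attach reference before it waits, so a writer
detaching the vma waits for the count to reach VMA_LOCK_OFFSET rather than
VMA_LOCK_OFFSET + 1.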
While this vm_lock replacement does not yet result in a smaller
vm_area_struct (it stays at 256 bytes due to cacheline alignment), it
allows for further size optimization by structure member regrouping
to bring the size of vm_area_struct down to 192 bytes (3 cachelines).
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Changes since v9 [1]:
- Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
per Hillf Danton
- Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
per Mateusz Guzik
- Update changelog, per Wei Yang
- Change vma_start_read() to return EAGAIN if vma got isolated and changed
lock_vma_under_rcu() back to detect this condition, per Wei Yang
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
per Lorenzo Stoakes
- Remove Vlastimil's Reviewed-by since code is changed
[1] https://lore.kernel.org/all/20250111042604.3230628-12-surenb@google.com/
include/linux/mm.h | 128 ++++++++++++++++++++-----------
include/linux/mm_types.h | 22 +++---
kernel/fork.c | 13 ++--
mm/init-mm.c | 1 +
mm/memory.c | 91 +++++++++++++++++++---
tools/testing/vma/linux/atomic.h | 5 ++
tools/testing/vma/vma_internal.h | 63 ++++++++-------
7 files changed, 218 insertions(+), 105 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 557d66e187ff..11a042c27aee 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -32,6 +32,7 @@
#include <linux/memremap.h>
#include <linux/slab.h>
#include <linux/cacheinfo.h>
+#include <linux/rcuwait.h>
struct mempolicy;
struct anon_vma;
@@ -697,19 +698,54 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
#endif /* CONFIG_NUMA_BALANCING */
#ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_lock_init(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
{
- init_rwsem(&vma->vm_lock.lock);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ static struct lock_class_key lockdep_key;
+
+ lockdep_init_map(&vma->vmlock_dep_map, "vm_lock", &lockdep_key, 0);
+#endif
+ if (reset_refcnt)
+ refcount_set(&vma->vm_refcnt, 0);
vma->vm_lock_seq = UINT_MAX;
}
+static inline bool is_vma_writer_only(int refcnt)
+{
+ /*
+ * With a writer and no readers, refcnt is VMA_LOCK_OFFSET if the vma
+ * is detached and (VMA_LOCK_OFFSET + 1) if it is attached. Waiting on
+ * a detached vma happens only in vma_mark_detached() and is a rare
+ * case, therefore most of the time there will be no unnecessary wakeup.
+ */
+ return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1;
+}
+
+static inline void vma_refcount_put(struct vm_area_struct *vma)
+{
+ /* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */
+ struct mm_struct *mm = vma->vm_mm;
+ int oldcnt;
+
+ rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+ if (!__refcount_dec_and_test(&vma->vm_refcnt, &oldcnt)) {
+
+ if (is_vma_writer_only(oldcnt - 1))
+ rcuwait_wake_up(&mm->vma_writer_wait);
+ }
+}
+
/*
* Try to read-lock a vma. The function is allowed to occasionally yield false
* locked result to avoid performance overhead, in which case we fall back to
* using mmap_lock. The function should never yield false unlocked result.
+ * Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
+ * detached.
*/
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
{
+ int oldcnt;
+
/*
* Check before locking. A race might cause false locked result.
* We can use READ_ONCE() for the mm_lock_seq here, and don't need
@@ -718,15 +754,25 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
* need ordering is below.
*/
if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
- return false;
+ return NULL;
- if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
- return false;
+ /*
+ * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited_acquire()
+ * will fail because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET.
+ * Acquire fence is required here to avoid reordering against later
+ * vm_lock_seq check and checks inside lock_vma_under_rcu().
+ */
+ if (unlikely(!__refcount_inc_not_zero_limited_acquire(&vma->vm_refcnt, &oldcnt,
+ VMA_REF_LIMIT))) {
+ /* return EAGAIN if vma got detached from under us */
+ return oldcnt ? NULL : ERR_PTR(-EAGAIN);
+ }
+ rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
/*
- * Overflow might produce false locked result.
+ * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
* False unlocked result is impossible because we modify and check
- * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
+ * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
* modification invalidates all existing locks.
*
* We must use ACQUIRE semantics for the mm_lock_seq so that if we are
@@ -735,10 +781,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
* This pairs with RELEASE semantics in vma_end_write_all().
*/
if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
- up_read(&vma->vm_lock.lock);
- return false;
+ vma_refcount_put(vma);
+ return NULL;
}
- return true;
+
+ return vma;
}
/*
@@ -749,8 +796,14 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
*/
static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
{
+ int oldcnt;
+
mmap_assert_locked(vma->vm_mm);
- down_read_nested(&vma->vm_lock.lock, subclass);
+ if (unlikely(!__refcount_inc_not_zero_limited_acquire(&vma->vm_refcnt, &oldcnt,
+ VMA_REF_LIMIT)))
+ return false;
+
+ rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
return true;
}
@@ -762,16 +815,12 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
*/
static inline bool vma_start_read_locked(struct vm_area_struct *vma)
{
- mmap_assert_locked(vma->vm_mm);
- down_read(&vma->vm_lock.lock);
- return true;
+ return vma_start_read_locked_nested(vma, 0);
}
static inline void vma_end_read(struct vm_area_struct *vma)
{
- rcu_read_lock(); /* keeps vma alive till the end of up_read */
- up_read(&vma->vm_lock.lock);
- rcu_read_unlock();
+ vma_refcount_put(vma);
}
/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
@@ -813,38 +862,35 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
static inline void vma_assert_locked(struct vm_area_struct *vma)
{
- if (!rwsem_is_locked(&vma->vm_lock.lock))
- vma_assert_write_locked(vma);
+ unsigned int mm_lock_seq;
+
+ VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt) <= 1 &&
+ !__is_vma_write_locked(vma, &mm_lock_seq), vma);
}
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
static inline void vma_assert_attached(struct vm_area_struct *vma)
{
- WARN_ON_ONCE(vma->detached);
+ WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
}
static inline void vma_assert_detached(struct vm_area_struct *vma)
{
- WARN_ON_ONCE(!vma->detached);
+ WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
}
static inline void vma_mark_attached(struct vm_area_struct *vma)
{
- vma_assert_detached(vma);
- vma->detached = false;
-}
-
-static inline void vma_mark_detached(struct vm_area_struct *vma)
-{
- /* When detaching vma should be write-locked */
vma_assert_write_locked(vma);
- vma_assert_attached(vma);
- vma->detached = true;
+ vma_assert_detached(vma);
+ refcount_set(&vma->vm_refcnt, 1);
}
-static inline bool is_vma_detached(struct vm_area_struct *vma)
-{
- return vma->detached;
-}
+void vma_mark_detached(struct vm_area_struct *vma);
static inline void release_fault_lock(struct vm_fault *vmf)
{
@@ -867,9 +913,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
#else /* CONFIG_PER_VMA_LOCK */
-static inline void vma_lock_init(struct vm_area_struct *vma) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
- { return false; }
+static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
+static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
+ { return NULL; }
static inline void vma_end_read(struct vm_area_struct *vma) {}
static inline void vma_start_write(struct vm_area_struct *vma) {}
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -910,12 +956,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
- /* vma is not locked, can't use vma_mark_detached() */
- vma->detached = true;
-#endif
vma_numab_state_init(vma);
- vma_lock_init(vma);
+ vma_lock_init(vma, false);
}
/* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8a645bcb2b31..48ddfedfff83 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
#include <linux/workqueue.h>
#include <linux/seqlock.h>
#include <linux/percpu_counter.h>
+#include <linux/types.h>
#include <asm/mmu.h>
@@ -639,9 +640,8 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
}
#endif
-struct vma_lock {
- struct rw_semaphore lock;
-};
+#define VMA_LOCK_OFFSET 0x40000000
+#define VMA_REF_LIMIT (VMA_LOCK_OFFSET - 1)
struct vma_numab_state {
/*
@@ -719,19 +719,13 @@ struct vm_area_struct {
};
#ifdef CONFIG_PER_VMA_LOCK
- /*
- * Flag to indicate areas detached from the mm->mm_mt tree.
- * Unstable RCU readers are allowed to read this.
- */
- bool detached;
-
/*
* Can only be written (using WRITE_ONCE()) while holding both:
* - mmap_lock (in write mode)
- * - vm_lock->lock (in write mode)
+ * - vm_refcnt bit at VMA_LOCK_OFFSET is set
* Can be read reliably while holding one of:
* - mmap_lock (in read or write mode)
- * - vm_lock->lock (in read or write mode)
+ * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
* Can be read unreliably (using READ_ONCE()) for pessimistic bailout
* while holding nothing (except RCU to keep the VMA struct allocated).
*
@@ -794,7 +788,10 @@ struct vm_area_struct {
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_PER_VMA_LOCK
/* Unstable RCU readers are allowed to read this. */
- struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+ refcount_t vm_refcnt ____cacheline_aligned_in_smp;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ struct lockdep_map vmlock_dep_map;
+#endif
#endif
} __randomize_layout;
@@ -929,6 +926,7 @@ struct mm_struct {
* by mmlist_lock
*/
#ifdef CONFIG_PER_VMA_LOCK
+ struct rcuwait vma_writer_wait;
/*
* This field has lock-like semantics, meaning it is sometimes
* accessed with ACQUIRE/RELEASE semantics.
diff --git a/kernel/fork.c b/kernel/fork.c
index f1af413e5aa4..48a0038f606f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -463,12 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
* will be reinitialized.
*/
data_race(memcpy(new, orig, sizeof(*new)));
- vma_lock_init(new);
+ vma_lock_init(new, true);
INIT_LIST_HEAD(&new->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
- /* vma is not locked, can't use vma_mark_detached() */
- new->detached = true;
-#endif
vma_numab_state_init(new);
dup_anon_vma_name(orig, new);
@@ -477,6 +473,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
void __vm_area_free(struct vm_area_struct *vma)
{
+ /* The vma should be detached while being destroyed. */
+ vma_assert_detached(vma);
vma_numab_state_free(vma);
free_anon_vma_name(vma);
kmem_cache_free(vm_area_cachep, vma);
@@ -488,8 +486,6 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
vm_rcu);
- /* The vma should not be locked while being destroyed. */
- VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
__vm_area_free(vma);
}
#endif
@@ -1234,6 +1230,9 @@ static void mmap_init_lock(struct mm_struct *mm)
{
init_rwsem(&mm->mmap_lock);
mm_lock_seqcount_init(mm);
+#ifdef CONFIG_PER_VMA_LOCK
+ rcuwait_init(&mm->vma_writer_wait);
+#endif
}
static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 6af3ad675930..4600e7605cab 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,6 +40,7 @@ struct mm_struct init_mm = {
.arg_lock = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
.mmlist = LIST_HEAD_INIT(init_mm.mmlist),
#ifdef CONFIG_PER_VMA_LOCK
+ .vma_writer_wait = __RCUWAIT_INITIALIZER(init_mm.vma_writer_wait),
.mm_lock_seq = SEQCNT_ZERO(init_mm.mm_lock_seq),
#endif
.user_ns = &init_user_ns,
diff --git a/mm/memory.c b/mm/memory.c
index 3d9c5141193f..528407c0d7cf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6393,9 +6393,47 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
#endif
#ifdef CONFIG_PER_VMA_LOCK
+static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching)
+{
+ unsigned int tgt_refcnt = VMA_LOCK_OFFSET;
+
+ /* Additional refcnt if the vma is attached. */
+ if (!detaching)
+ tgt_refcnt++;
+
+ /*
+ * If vma is detached then only vma_mark_attached() can raise the
+ * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached().
+ */
+ if (!refcount_add_not_zero(VMA_LOCK_OFFSET, &vma->vm_refcnt))
+ return false;
+
+ rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
+ rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
+ refcount_read(&vma->vm_refcnt) == tgt_refcnt,
+ TASK_UNINTERRUPTIBLE);
+ lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
+
+ return true;
+}
+
+static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
+{
+ *detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
+ rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+}
+
void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
{
- down_write(&vma->vm_lock.lock);
+ bool locked;
+
+ /*
+ * __vma_enter_locked() returns false immediately if the vma is not
+ * attached, otherwise it waits until refcnt is indicating that vma
+ * is attached with no readers.
+ */
+ locked = __vma_enter_locked(vma, false);
+
/*
* We should use WRITE_ONCE() here because we can have concurrent reads
* from the early lockless pessimistic check in vma_start_read().
@@ -6403,10 +6441,40 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
* we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
*/
WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
- up_write(&vma->vm_lock.lock);
+
+ if (locked) {
+ bool detached;
+
+ __vma_exit_locked(vma, &detached);
+ WARN_ON_ONCE(detached); /* vma should remain attached */
+ }
}
EXPORT_SYMBOL_GPL(__vma_start_write);
+void vma_mark_detached(struct vm_area_struct *vma)
+{
+ vma_assert_write_locked(vma);
+ vma_assert_attached(vma);
+
+ /*
+ * We are the only writer, so no need to use vma_refcount_put().
+ * The condition below is unlikely because the vma has been already
+ * write-locked and readers can increment vm_refcnt only temporarily
+ * before they check vm_lock_seq, realize the vma is locked and drop
+ * back the vm_refcnt. That is a narrow window for observing a raised
+ * vm_refcnt.
+ */
+ if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+ /* Wait until vma is detached with no readers. */
+ if (__vma_enter_locked(vma, true)) {
+ bool detached;
+
+ __vma_exit_locked(vma, &detached);
+ WARN_ON_ONCE(!detached);
+ }
+ }
+}
+
/*
* Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
* stable and not isolated. If the VMA is not found or is being modified the
@@ -6424,15 +6492,18 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
if (!vma)
goto inval;
- if (!vma_start_read(vma))
- goto inval;
+ vma = vma_start_read(vma);
+ if (IS_ERR_OR_NULL(vma)) {
+ /* Check if the VMA got isolated after we found it */
+ if (PTR_ERR(vma) == -EAGAIN) {
+ vma_end_read(vma);
+ count_vm_vma_lock_event(VMA_LOCK_MISS);
+ /* The area was replaced with another one */
+ goto retry;
+ }
- /* Check if the VMA got isolated after we found it */
- if (is_vma_detached(vma)) {
- vma_end_read(vma);
- count_vm_vma_lock_event(VMA_LOCK_MISS);
- /* The area was replaced with another one */
- goto retry;
+ /* Failed to lock the VMA */
+ goto inval;
}
/*
* At this point, we have a stable reference to a VMA: The VMA is
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
index 3e1b6adc027b..788c597c4fde 100644
--- a/tools/testing/vma/linux/atomic.h
+++ b/tools/testing/vma/linux/atomic.h
@@ -9,4 +9,9 @@
#define atomic_set(x, y) uatomic_set(x, y)
#define U8_MAX UCHAR_MAX
+#ifndef atomic_cmpxchg_relaxed
+#define atomic_cmpxchg_relaxed uatomic_cmpxchg
+#define atomic_cmpxchg_release uatomic_cmpxchg
+#endif /* atomic_cmpxchg_relaxed */
+
#endif /* _LINUX_ATOMIC_H */
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 34277842156c..ba838097d3f6 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -25,7 +25,7 @@
#include <linux/maple_tree.h>
#include <linux/mm.h>
#include <linux/rbtree.h>
-#include <linux/rwsem.h>
+#include <linux/refcount.h>
extern unsigned long stack_guard_gap;
#ifdef CONFIG_MMU
@@ -135,10 +135,6 @@ typedef __bitwise unsigned int vm_fault_t;
*/
#define pr_warn_once pr_err
-typedef struct refcount_struct {
- atomic_t refs;
-} refcount_t;
-
struct kref {
refcount_t refcount;
};
@@ -233,15 +229,12 @@ struct mm_struct {
unsigned long flags; /* Must use atomic bitops to access */
};
-struct vma_lock {
- struct rw_semaphore lock;
-};
-
-
struct file {
struct address_space *f_mapping;
};
+#define VMA_LOCK_OFFSET 0x40000000
+
struct vm_area_struct {
/* The first cache line has the info for VMA tree walking. */
@@ -269,16 +262,13 @@ struct vm_area_struct {
};
#ifdef CONFIG_PER_VMA_LOCK
- /* Flag to indicate areas detached from the mm->mm_mt tree */
- bool detached;
-
/*
* Can only be written (using WRITE_ONCE()) while holding both:
* - mmap_lock (in write mode)
- * - vm_lock.lock (in write mode)
+ * - vm_refcnt bit at VMA_LOCK_OFFSET is set
* Can be read reliably while holding one of:
* - mmap_lock (in read or write mode)
- * - vm_lock.lock (in read or write mode)
+ * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
* Can be read unreliably (using READ_ONCE()) for pessimistic bailout
* while holding nothing (except RCU to keep the VMA struct allocated).
*
@@ -287,7 +277,6 @@ struct vm_area_struct {
* slowpath.
*/
unsigned int vm_lock_seq;
- struct vma_lock vm_lock;
#endif
/*
@@ -340,6 +329,10 @@ struct vm_area_struct {
struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+ /* Unstable RCU readers are allowed to read this. */
+ refcount_t vm_refcnt;
+#endif
} __randomize_layout;
struct vm_fault {};
@@ -464,33 +457,40 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
return mas_find(&vmi->mas, ULONG_MAX);
}
-static inline void vma_lock_init(struct vm_area_struct *vma)
-{
- init_rwsem(&vma->vm_lock.lock);
- vma->vm_lock_seq = UINT_MAX;
-}
-
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
static inline void vma_assert_attached(struct vm_area_struct *vma)
{
- WARN_ON_ONCE(vma->detached);
+ WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
}
static inline void vma_assert_detached(struct vm_area_struct *vma)
{
- WARN_ON_ONCE(!vma->detached);
+ WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
}
static inline void vma_assert_write_locked(struct vm_area_struct *);
static inline void vma_mark_attached(struct vm_area_struct *vma)
{
- vma->detached = false;
+ vma_assert_write_locked(vma);
+ vma_assert_detached(vma);
+ refcount_set(&vma->vm_refcnt, 1);
}
static inline void vma_mark_detached(struct vm_area_struct *vma)
{
- /* When detaching vma should be write-locked */
vma_assert_write_locked(vma);
- vma->detached = true;
+ vma_assert_attached(vma);
+ /* We are the only writer, so no need to use vma_refcount_put(). */
+ if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+ /*
+ * Reader must have temporarily raised vm_refcnt but it will
+ * drop it without using the vma since vma is write-locked.
+ */
+ }
}
extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -503,9 +503,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
- /* vma is not locked, can't use vma_mark_detached() */
- vma->detached = true;
- vma_lock_init(vma);
+ vma->vm_lock_seq = UINT_MAX;
}
static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -528,10 +526,9 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
return NULL;
memcpy(new, orig, sizeof(*new));
- vma_lock_init(new);
+ refcount_set(&new->vm_refcnt, 0);
+ new->vm_lock_seq = UINT_MAX;
INIT_LIST_HEAD(&new->anon_vma_chain);
- /* vma is not locked, can't use vma_mark_detached() */
- new->detached = true;
return new;
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (11 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-21 15:20 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 14/18] mm/debug: print vm_refcnt state when dumping the vma Suren Baghdasaryan
` (5 subsequent siblings)
18 siblings, 1 reply; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
Move several vm_area_struct members which are rarely or never used
during page fault handling into the last cacheline to better pack
vm_area_struct. As a result, vm_area_struct fits into 3 cachelines as
opposed to 4. New typical vm_area_struct layout:
struct vm_area_struct {
        union {
                struct {
                        long unsigned int vm_start;              /*     0     8 */
                        long unsigned int vm_end;                /*     8     8 */
                };                                               /*     0    16 */
                freeptr_t vm_freeptr;                            /*     0     8 */
        };                                                       /*     0    16 */
        struct mm_struct * vm_mm;                                /*    16     8 */
        pgprot_t vm_page_prot;                                   /*    24     8 */
        union {
                const vm_flags_t vm_flags;                       /*    32     8 */
                vm_flags_t __vm_flags;                           /*    32     8 */
        };                                                       /*    32     8 */
        unsigned int vm_lock_seq;                                /*    40     4 */

        /* XXX 4 bytes hole, try to pack */

        struct list_head anon_vma_chain;                         /*    48    16 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        struct anon_vma * anon_vma;                              /*    64     8 */
        const struct vm_operations_struct * vm_ops;              /*    72     8 */
        long unsigned int vm_pgoff;                              /*    80     8 */
        struct file * vm_file;                                   /*    88     8 */
        void * vm_private_data;                                  /*    96     8 */
        atomic_long_t swap_readahead_info;                       /*   104     8 */
        struct mempolicy * vm_policy;                            /*   112     8 */
        struct vma_numab_state * numab_state;                    /*   120     8 */
        /* --- cacheline 2 boundary (128 bytes) --- */
        refcount_t vm_refcnt (__aligned__(64));                  /*   128     4 */

        /* XXX 4 bytes hole, try to pack */

        struct {
                struct rb_node rb (__aligned__(8));              /*   136    24 */
                long unsigned int rb_subtree_last;               /*   160     8 */
        } __attribute__((__aligned__(8))) shared;                /*   136    32 */
        struct anon_vma_name * anon_name;                        /*   168     8 */
        struct vm_userfaultfd_ctx vm_userfaultfd_ctx;            /*   176     8 */

        /* size: 192, cachelines: 3, members: 18 */
        /* sum members: 176, holes: 2, sum holes: 8 */
        /* padding: 8 */
        /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
} __attribute__((__aligned__(64)));
Memory consumption per 1000 VMAs becomes 48 pages:
slabinfo after vm_area_struct changes:
<name> ... <objsize> <objperslab> <pagesperslab> : ...
vm_area_struct ... 192 42 2 : ...
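For reference, the 48-page figure follows directly from the slabinfo numbers
above:

        1000 VMAs / 42 objects per slab ≈ 24 slabs
        24 slabs * 2 pages per slab     = 48 pages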
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Update vm_area_struct for tests, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-13-surenb@google.com/
include/linux/mm_types.h | 38 +++++++++++++++-----------------
tools/testing/vma/vma_internal.h | 37 +++++++++++++++----------------
2 files changed, 36 insertions(+), 39 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 48ddfedfff83..63ab51699120 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -735,17 +735,6 @@ struct vm_area_struct {
*/
unsigned int vm_lock_seq;
#endif
-
- /*
- * For areas with an address space and backing store,
- * linkage into the address_space->i_mmap interval tree.
- *
- */
- struct {
- struct rb_node rb;
- unsigned long rb_subtree_last;
- } shared;
-
/*
* A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
* list, after a COW of one of the file pages. A MAP_SHARED vma
@@ -765,14 +754,6 @@ struct vm_area_struct {
struct file * vm_file; /* File we map to (can be NULL). */
void * vm_private_data; /* was vm_pte (shared mem) */
-#ifdef CONFIG_ANON_VMA_NAME
- /*
- * For private and shared anonymous mappings, a pointer to a null
- * terminated string containing the name given to the vma, or NULL if
- * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
- */
- struct anon_vma_name *anon_name;
-#endif
#ifdef CONFIG_SWAP
atomic_long_t swap_readahead_info;
#endif
@@ -785,7 +766,6 @@ struct vm_area_struct {
#ifdef CONFIG_NUMA_BALANCING
struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
- struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_PER_VMA_LOCK
/* Unstable RCU readers are allowed to read this. */
refcount_t vm_refcnt ____cacheline_aligned_in_smp;
@@ -793,6 +773,24 @@ struct vm_area_struct {
struct lockdep_map vmlock_dep_map;
#endif
#endif
+ /*
+ * For areas with an address space and backing store,
+ * linkage into the address_space->i_mmap interval tree.
+ *
+ */
+ struct {
+ struct rb_node rb;
+ unsigned long rb_subtree_last;
+ } shared;
+#ifdef CONFIG_ANON_VMA_NAME
+ /*
+ * For private and shared anonymous mappings, a pointer to a null
+ * terminated string containing the name given to the vma, or NULL if
+ * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
+ */
+ struct anon_vma_name *anon_name;
+#endif
+ struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} __randomize_layout;
#ifdef CONFIG_NUMA
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index ba838097d3f6..b385170fbb8f 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -279,16 +279,6 @@ struct vm_area_struct {
unsigned int vm_lock_seq;
#endif
- /*
- * For areas with an address space and backing store,
- * linkage into the address_space->i_mmap interval tree.
- *
- */
- struct {
- struct rb_node rb;
- unsigned long rb_subtree_last;
- } shared;
-
/*
* A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
* list, after a COW of one of the file pages. A MAP_SHARED vma
@@ -308,14 +298,6 @@ struct vm_area_struct {
struct file * vm_file; /* File we map to (can be NULL). */
void * vm_private_data; /* was vm_pte (shared mem) */
-#ifdef CONFIG_ANON_VMA_NAME
- /*
- * For private and shared anonymous mappings, a pointer to a null
- * terminated string containing the name given to the vma, or NULL if
- * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
- */
- struct anon_vma_name *anon_name;
-#endif
#ifdef CONFIG_SWAP
atomic_long_t swap_readahead_info;
#endif
@@ -328,11 +310,28 @@ struct vm_area_struct {
#ifdef CONFIG_NUMA_BALANCING
struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
- struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_PER_VMA_LOCK
/* Unstable RCU readers are allowed to read this. */
refcount_t vm_refcnt;
#endif
+ /*
+ * For areas with an address space and backing store,
+ * linkage into the address_space->i_mmap interval tree.
+ *
+ */
+ struct {
+ struct rb_node rb;
+ unsigned long rb_subtree_last;
+ } shared;
+#ifdef CONFIG_ANON_VMA_NAME
+ /*
+ * For private and shared anonymous mappings, a pointer to a null
+ * terminated string containing the name given to the vma, or NULL if
+ * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
+ */
+ struct anon_vma_name *anon_name;
+#endif
+ struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} __randomize_layout;
struct vm_fault {};
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 14/18] mm/debug: print vm_refcnt state when dumping the vma
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (12 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 15/18] mm: remove extra vma_numab_state_init() call Suren Baghdasaryan
` (4 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
vm_refcnt encodes a number of useful states:
- whether the vma is attached or detached
- the number of current vma readers
- the presence of a vma writer
Let's include it in the vma dump.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes since v9 [1]:
- Minimized duplicate code, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-14-surenb@google.com/
mm/debug.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/mm/debug.c b/mm/debug.c
index e1282b85a877..2d1bd67d957b 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -181,11 +181,17 @@ void dump_vma(const struct vm_area_struct *vma)
pr_emerg("vma %px start %px end %px mm %px\n"
"prot %lx anon_vma %px vm_ops %px\n"
"pgoff %lx file %px private_data %px\n"
+#ifdef CONFIG_PER_VMA_LOCK
+ "refcnt %x\n"
+#endif
"flags: %#lx(%pGv)\n",
vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
(unsigned long)pgprot_val(vma->vm_page_prot),
vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
vma->vm_file, vma->vm_private_data,
+#ifdef CONFIG_PER_VMA_LOCK
+ refcount_read(&vma->vm_refcnt),
+#endif
vma->vm_flags, &vma->vm_flags);
}
EXPORT_SYMBOL(dump_vma);
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v10 15/18] mm: remove extra vma_numab_state_init() call
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (13 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 14/18] mm/debug: print vm_refcnt state when dumping the vma Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 16/18] mm: prepare lock_vma_under_rcu() for vma reuse possibility Suren Baghdasaryan
` (3 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
vma_init() already memsets the whole vm_area_struct to 0, so there is
no need for an additional vma_numab_state_init() call.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-15-surenb@google.com/
include/linux/mm.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 11a042c27aee..327cf5944569 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -956,7 +956,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_mm = mm;
vma->vm_ops = &vma_dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
- vma_numab_state_init(vma);
vma_lock_init(vma, false);
}
--
2.48.1.601.g30ceb7b040-goog
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v10 16/18] mm: prepare lock_vma_under_rcu() for vma reuse possibility
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (14 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 15/18] mm: remove extra vma_numab_state_init() call Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 17/18] mm: make vma cache SLAB_TYPESAFE_BY_RCU Suren Baghdasaryan
` (2 subsequent siblings)
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
Once we make the vma cache SLAB_TYPESAFE_BY_RCU, it will be possible for a
vma to be reused and attached to another mm after lock_vma_under_rcu()
locks the vma. lock_vma_under_rcu() should ensure that vma_start_read()
uses the original mm and, after locking the vma, it should verify that
vma->vm_mm has not changed from under us.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
include/linux/mm.h | 12 ++++++++----
mm/memory.c | 7 ++++---
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 327cf5944569..88693568c9ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -739,10 +739,13 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
* Try to read-lock a vma. The function is allowed to occasionally yield false
* locked result to avoid performance overhead, in which case we fall back to
* using mmap_lock. The function should never yield false unlocked result.
+ * False locked result is possible if mm_lock_seq overflows or if vma gets
+ * reused and attached to a different mm before we lock it.
* Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
* detached.
*/
-static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
+static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ struct vm_area_struct *vma)
{
int oldcnt;
@@ -753,7 +756,7 @@ static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
* we don't rely on for anything - the mm_lock_seq read against which we
* need ordering is below.
*/
- if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+ if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
return NULL;
/*
@@ -780,7 +783,7 @@ static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
* after it has been unlocked.
* This pairs with RELEASE semantics in vma_end_write_all().
*/
- if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+ if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
vma_refcount_put(vma);
return NULL;
}
@@ -914,7 +917,8 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
#else /* CONFIG_PER_VMA_LOCK */
static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline struct vm_area_struct *vma_start_read(struct vm_area_struct *vma)
+static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ struct vm_area_struct *vma)
{ return NULL; }
static inline void vma_end_read(struct vm_area_struct *vma) {}
static inline void vma_start_write(struct vm_area_struct *vma) {}
diff --git a/mm/memory.c b/mm/memory.c
index 528407c0d7cf..6378a873e7c1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6492,7 +6492,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
if (!vma)
goto inval;
- vma = vma_start_read(vma);
+ vma = vma_start_read(mm, vma);
if (IS_ERR_OR_NULL(vma)) {
/* Check if the VMA got isolated after we found it */
if (PTR_ERR(vma) == -EAGAIN) {
@@ -6512,8 +6512,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
* fields are accessible for RCU readers.
*/
- /* Check since vm_start/vm_end might change before we lock the VMA */
- if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+ /* Check if the vma we locked is the right one. */
+ if (unlikely(vma->vm_mm != mm ||
+ address < vma->vm_start || address >= vma->vm_end))
goto inval_end_read;
rcu_read_unlock();
--
2.48.1.601.g30ceb7b040-goog
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v10 17/18] mm: make vma cache SLAB_TYPESAFE_BY_RCU
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (15 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 16/18] mm: prepare lock_vma_under_rcu() for vma reuse possibility Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 18/18] docs/mm: document latest changes to vm_lock Suren Baghdasaryan
2025-02-14 19:34 ` [PATCH v10 00/18] reimplement per-vma lock as a refcount Shivank Garg
18 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb
To enable SLAB_TYPESAFE_BY_RCU for the vma cache, we need to ensure that
object reuse before the RCU grace period is over will be detected by
lock_vma_under_rcu().
The current checks are sufficient as long as the vma is detached before it
is freed. The only place this is not currently happening is in exit_mmap().
Add the missing vma_mark_detached() in exit_mmap().
Another issue which might trick lock_vma_under_rcu() during vma reuse
is vm_area_dup(), which copies the entire content of the vma into a new
one, overwriting the new vma's vm_refcnt and temporarily making it appear
attached. This might trick a racing lock_vma_under_rcu() into operating on
a reused vma if it found the vma before it got reused. To prevent this
situation, we should ensure that vm_refcnt stays in the detached state (0)
when it is copied and advances to the attached state only after the vma is
added into the vma tree. Introduce vm_area_init_from(), which preserves the
new vma's vm_refcnt, and use it in vm_area_dup(). Since all vmas are in the
detached state with no current readers when they are freed,
lock_vma_under_rcu() will not be able to take vm_refcnt after the vma got
detached, even if the vma is reused. vma_mark_attached() is modified to
include a release fence to ensure that all stores to the vma happen before
vm_refcnt gets initialized.
Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate
vm_area_struct reuse and will minimize the number of call_rcu() calls.
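To make the reuse argument concrete, below is a minimal, self-contained
userspace sketch of the pattern this relies on; struct obj, obj_tryget()
and friends are illustrative names, not the kernel API. Memory is
type-stable, so a successful reference acquisition must be followed by an
identity re-check, and attaching must publish the object's fields with
release semantics first:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical object living in a type-stable pool: its memory may be
 * recycled for another instance of the same type at any time (the
 * SLAB_TYPESAFE_BY_RCU situation), but never for a different type.
 */
struct obj {
        atomic_uint refcnt;     /* 0 == detached/free for reuse */
        void *owner;            /* identity field, like vma->vm_mm */
};

/* Attach: publish the fields before the refcount makes the object live. */
static void obj_attach(struct obj *o, void *owner)
{
        o->owner = owner;
        /* release fence: prior stores are visible before refcnt becomes 1 */
        atomic_store_explicit(&o->refcnt, 1, memory_order_release);
}

/* Take a reference only if the object is still live (refcnt > 0). */
static bool obj_tryget(struct obj *o)
{
        unsigned int old = atomic_load(&o->refcnt);

        do {
                if (old == 0)
                        return false;
        } while (!atomic_compare_exchange_weak(&o->refcnt, &old, old + 1));
        return true;
}

/*
 * Even after obj_tryget() succeeds the object may be a recycled instance,
 * so the caller re-checks the identity it looked the object up by; this is
 * the same reason lock_vma_under_rcu() re-checks vma->vm_mm and the range.
 */
static struct obj *lookup_validated(struct obj *o, void *expected_owner)
{
        if (!obj_tryget(o))
                return NULL;                    /* freed: treat as not found */
        if (o->owner != expected_owner) {       /* reused for someone else */
                atomic_fetch_sub(&o->refcnt, 1);
                return NULL;
        }
        return o;
}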
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes since v9 [1]:
- Use refcount_set_release() in vma_mark_attached(), per Will Deacon
[1] https://lore.kernel.org/all/20250111042604.3230628-17-surenb@google.com/
include/linux/mm.h | 4 +-
include/linux/mm_types.h | 13 ++++--
include/linux/slab.h | 6 ---
kernel/fork.c | 73 ++++++++++++++++++++------------
mm/mmap.c | 3 +-
mm/vma.c | 11 ++---
mm/vma.h | 2 +-
tools/include/linux/refcount.h | 5 +++
tools/testing/vma/linux/atomic.h | 1 +
tools/testing/vma/vma_internal.h | 9 +---
10 files changed, 71 insertions(+), 56 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 88693568c9ef..7b21b48627b0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
struct vm_area_struct *vm_area_alloc(struct mm_struct *);
struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
void vm_area_free(struct vm_area_struct *);
-/* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);
#ifndef CONFIG_MMU
extern struct rb_root nommu_region_tree;
@@ -890,7 +888,7 @@ static inline void vma_mark_attached(struct vm_area_struct *vma)
{
vma_assert_write_locked(vma);
vma_assert_detached(vma);
- refcount_set(&vma->vm_refcnt, 1);
+ refcount_set_release(&vma->vm_refcnt, 1);
}
void vma_mark_detached(struct vm_area_struct *vma);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 63ab51699120..689b2a746189 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -584,6 +584,12 @@ static inline void *folio_get_private(struct folio *folio)
typedef unsigned long vm_flags_t;
+/*
+ * freeptr_t represents a SLUB freelist pointer, which might be encoded
+ * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
+ */
+typedef struct { unsigned long v; } freeptr_t;
+
/*
* A region containing a mapping of a non-memory backed file under NOMMU
* conditions. These are held in a global tree and are pinned by the VMAs that
@@ -687,6 +693,9 @@ struct vma_numab_state {
*
* Only explicitly marked struct members may be accessed by RCU readers before
* getting a stable reference.
+ *
+ * WARNING: when adding new members, please update vm_area_init_from() to copy
+ * them during vm_area_struct content duplication.
*/
struct vm_area_struct {
/* The first cache line has the info for VMA tree walking. */
@@ -697,9 +706,7 @@ struct vm_area_struct {
unsigned long vm_start;
unsigned long vm_end;
};
-#ifdef CONFIG_PER_VMA_LOCK
- struct rcu_head vm_rcu; /* Used for deferred freeing. */
-#endif
+ freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
};
/*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index ad902a2d692b..f8924fd6ea26 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -243,12 +243,6 @@ enum _slab_flag_bits {
#define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED
#endif
-/*
- * freeptr_t represents a SLUB freelist pointer, which might be encoded
- * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
- */
-typedef struct { unsigned long v; } freeptr_t;
-
/*
* ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
*
diff --git a/kernel/fork.c b/kernel/fork.c
index 48a0038f606f..364b2d4fd3ef 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -449,6 +449,42 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
return vma;
}
+static void vm_area_init_from(const struct vm_area_struct *src,
+ struct vm_area_struct *dest)
+{
+ dest->vm_mm = src->vm_mm;
+ dest->vm_ops = src->vm_ops;
+ dest->vm_start = src->vm_start;
+ dest->vm_end = src->vm_end;
+ dest->anon_vma = src->anon_vma;
+ dest->vm_pgoff = src->vm_pgoff;
+ dest->vm_file = src->vm_file;
+ dest->vm_private_data = src->vm_private_data;
+ vm_flags_init(dest, src->vm_flags);
+ memcpy(&dest->vm_page_prot, &src->vm_page_prot,
+ sizeof(dest->vm_page_prot));
+ /*
+ * src->shared.rb may be modified concurrently when called from
+ * dup_mmap(), but the clone will reinitialize it.
+ */
+ data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared)));
+ memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx,
+ sizeof(dest->vm_userfaultfd_ctx));
+#ifdef CONFIG_ANON_VMA_NAME
+ dest->anon_name = src->anon_name;
+#endif
+#ifdef CONFIG_SWAP
+ memcpy(&dest->swap_readahead_info, &src->swap_readahead_info,
+ sizeof(dest->swap_readahead_info));
+#endif
+#ifndef CONFIG_MMU
+ dest->vm_region = src->vm_region;
+#endif
+#ifdef CONFIG_NUMA
+ dest->vm_policy = src->vm_policy;
+#endif
+}
+
struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
{
struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
@@ -458,11 +494,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
- /*
- * orig->shared.rb may be modified concurrently, but the clone
- * will be reinitialized.
- */
- data_race(memcpy(new, orig, sizeof(*new)));
+ vm_area_init_from(orig, new);
vma_lock_init(new, true);
INIT_LIST_HEAD(&new->anon_vma_chain);
vma_numab_state_init(new);
@@ -471,7 +503,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
return new;
}
-void __vm_area_free(struct vm_area_struct *vma)
+void vm_area_free(struct vm_area_struct *vma)
{
/* The vma should be detached while being destroyed. */
vma_assert_detached(vma);
@@ -480,25 +512,6 @@ void __vm_area_free(struct vm_area_struct *vma)
kmem_cache_free(vm_area_cachep, vma);
}
-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
-{
- struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
- vm_rcu);
-
- __vm_area_free(vma);
-}
-#endif
-
-void vm_area_free(struct vm_area_struct *vma)
-{
-#ifdef CONFIG_PER_VMA_LOCK
- call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
- __vm_area_free(vma);
-#endif
-}
-
static void account_kernel_stack(struct task_struct *tsk, int account)
{
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
@@ -3156,6 +3169,11 @@ void __init mm_cache_init(void)
void __init proc_caches_init(void)
{
+ struct kmem_cache_args args = {
+ .use_freeptr_offset = true,
+ .freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+ };
+
sighand_cachep = kmem_cache_create("sighand_cache",
sizeof(struct sighand_struct), 0,
SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
@@ -3172,8 +3190,9 @@ void __init proc_caches_init(void)
sizeof(struct fs_struct), 0,
SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
NULL);
- vm_area_cachep = KMEM_CACHE(vm_area_struct,
- SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+ vm_area_cachep = kmem_cache_create("vm_area_struct",
+ sizeof(struct vm_area_struct), &args,
+ SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
SLAB_ACCOUNT);
mmap_init();
nsproxy_cache_init();
diff --git a/mm/mmap.c b/mm/mmap.c
index 6401a1d73f4a..15d6cd7cc845 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1305,7 +1305,8 @@ void exit_mmap(struct mm_struct *mm)
do {
if (vma->vm_flags & VM_ACCOUNT)
nr_accounted += vma_pages(vma);
- remove_vma(vma, /* unreachable = */ true);
+ vma_mark_detached(vma);
+ remove_vma(vma);
count++;
cond_resched();
vma = vma_next(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index a16a83d0253f..c7abef5177cc 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -420,19 +420,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
/*
* Close a vm structure and free it.
*/
-void remove_vma(struct vm_area_struct *vma, bool unreachable)
+void remove_vma(struct vm_area_struct *vma)
{
might_sleep();
vma_close(vma);
if (vma->vm_file)
fput(vma->vm_file);
mpol_put(vma_policy(vma));
- if (unreachable) {
- vma_mark_detached(vma);
- __vm_area_free(vma);
- } else {
- vm_area_free(vma);
- }
+ vm_area_free(vma);
}
/*
@@ -1218,7 +1213,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
/* Remove and clean up vmas */
mas_set(mas_detach, 0);
mas_for_each(mas_detach, vma, ULONG_MAX)
- remove_vma(vma, /* unreachable = */ false);
+ remove_vma(vma);
vm_unacct_memory(vms->nr_accounted);
validate_mm(mm);
diff --git a/mm/vma.h b/mm/vma.h
index 55be77ff042f..7356ca5a22d3 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -218,7 +218,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
unsigned long start, size_t len, struct list_head *uf,
bool unlock);
-void remove_vma(struct vm_area_struct *vma, bool unreachable);
+void remove_vma(struct vm_area_struct *vma);
void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
struct vm_area_struct *prev, struct vm_area_struct *next);
diff --git a/tools/include/linux/refcount.h b/tools/include/linux/refcount.h
index 36cb29bc57c2..1ace03e1a4f8 100644
--- a/tools/include/linux/refcount.h
+++ b/tools/include/linux/refcount.h
@@ -60,6 +60,11 @@ static inline void refcount_set(refcount_t *r, unsigned int n)
atomic_set(&r->refs, n);
}
+static inline void refcount_set_release(refcount_t *r, unsigned int n)
+{
+ atomic_set_release(&r->refs, n);
+}
+
static inline unsigned int refcount_read(const refcount_t *r)
{
return atomic_read(&r->refs);
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
index 788c597c4fde..683383d0f2bf 100644
--- a/tools/testing/vma/linux/atomic.h
+++ b/tools/testing/vma/linux/atomic.h
@@ -7,6 +7,7 @@
#define atomic_inc(x) uatomic_inc(x)
#define atomic_read(x) uatomic_read(x)
#define atomic_set(x, y) uatomic_set(x, y)
+#define atomic_set_release(x, y) uatomic_set(x, y)
#define U8_MAX UCHAR_MAX
#ifndef atomic_cmpxchg_relaxed
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index b385170fbb8f..572ab2cea763 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -476,7 +476,7 @@ static inline void vma_mark_attached(struct vm_area_struct *vma)
{
vma_assert_write_locked(vma);
vma_assert_detached(vma);
- refcount_set(&vma->vm_refcnt, 1);
+ refcount_set_release(&vma->vm_refcnt, 1);
}
static inline void vma_mark_detached(struct vm_area_struct *vma)
@@ -696,14 +696,9 @@ static inline void mpol_put(struct mempolicy *)
{
}
-static inline void __vm_area_free(struct vm_area_struct *vma)
-{
- free(vma);
-}
-
static inline void vm_area_free(struct vm_area_struct *vma)
{
- __vm_area_free(vma);
+ free(vma);
}
static inline void lru_add_drain(void)
--
2.48.1.601.g30ceb7b040-goog
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v10 18/18] docs/mm: document latest changes to vm_lock
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (16 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 17/18] mm: make vma cache SLAB_TYPESAFE_BY_RCU Suren Baghdasaryan
@ 2025-02-13 22:46 ` Suren Baghdasaryan
2025-02-21 15:44 ` Vlastimil Babka
2025-02-14 19:34 ` [PATCH v10 00/18] reimplement per-vma lock as a refcount Shivank Garg
18 siblings, 1 reply; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-13 22:46 UTC (permalink / raw)
To: akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team, surenb, Liam R. Howlett
Change the documentation to reflect that vm_lock is integrated into vma
and replaced with vm_refcnt.
Document newly introduced vma_start_read_locked{_nested} functions.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Updated documentation, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes
[1] https://lore.kernel.org/all/20250111042604.3230628-18-surenb@google.com/
Documentation/mm/process_addrs.rst | 44 ++++++++++++++++++------------
1 file changed, 26 insertions(+), 18 deletions(-)
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..e6756e78b476 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases where the user already holds the mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention, but the caller should still check their return
+values in case they fail for other reasons.
+
+VMA read locks increment :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.
VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
lock, releasing or downgrading the mmap write lock also releases the VMA write
lock so there is no :c:func:`!vma_end_write` function.
-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the vma sequence number used for serialisation is updated.
This ensures the semantics we require - VMA write locks provide exclusive write
access to the VMA.
@@ -738,7 +743,7 @@ Implementation details
The VMA lock mechanism is designed to be a lightweight means of avoiding the use
of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
:c:struct:`!struct mm_struct` and the VMA.
Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to
keep VMAs locked across entirely separate write operations. It also maintains
correct lock ordering.
-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.
-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
are also RCU safe, so the whole read lock operation is guaranteed to function
correctly.
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of
+the mm. During this entire operation the mmap write lock is held.
This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
until these are finished and mutual exclusion is achieved.
-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, the VMA's sequence number
+will indicate its write-locked state until the mmap write lock is dropped or downgraded.
-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
fast RCU-based per-VMA lock acquisition (especially on page fault, though
utilised elsewhere) with minimal complexity around lock ordering.
--
2.48.1.601.g30ceb7b040-goog
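For readers who want the documented protocol in one place, here is a
hedged, self-contained userspace sketch of the read/write scheme described
above. The VMA_LOCK_OFFSET value and the "refcnt == 1 means attached with
no readers" convention come from the patch 12 description earlier in the
thread, the busy-wait stands in for sleeping on mm->vma_writer_wait, and
none of this is the actual kernel code:

#include <stdatomic.h>
#include <stdbool.h>

#define VMA_LOCK_OFFSET 0x40000000u     /* writer bit, per patch 12 */

struct vma_lock_sketch {
        atomic_uint refcnt;     /* 0 = detached, 1 = attached, +1 per reader */
        unsigned int seq;       /* per-VMA sequence number */
};

/* Reader: bump the refcount unless a writer is advertised, then check seq. */
static bool read_trylock(struct vma_lock_sketch *v, unsigned int mm_seq)
{
        unsigned int old = atomic_load(&v->refcnt);

        do {
                if (old == 0 || (old & VMA_LOCK_OFFSET))
                        return false;   /* detached, or writer present */
        } while (!atomic_compare_exchange_weak(&v->refcnt, &old, old + 1));

        if (v->seq == mm_seq) {         /* VMA is write-locked: back off */
                atomic_fetch_sub(&v->refcnt, 1);
                return false;
        }
        return true;
}

/* Writer: runs under the mmap write lock, so there is only ever one. */
static void write_lock(struct vma_lock_sketch *v, unsigned int mm_seq)
{
        /* Advertise the writer; new readers now fail their increment. */
        atomic_fetch_add(&v->refcnt, VMA_LOCK_OFFSET);

        /* Wait for existing readers: only "attached" + writer bit remain. */
        while (atomic_load(&v->refcnt) != 1 + VMA_LOCK_OFFSET)
                ;       /* the kernel sleeps on mm->vma_writer_wait here */

        /* Record the write lock in the sequence number, then drop the bit. */
        v->seq = mm_seq;
        atomic_fetch_sub(&v->refcnt, VMA_LOCK_OFFSET);
}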
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 00/18] reimplement per-vma lock as a refcount
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
` (17 preceding siblings ...)
2025-02-13 22:46 ` [PATCH v10 18/18] docs/mm: document latest changes to vm_lock Suren Baghdasaryan
@ 2025-02-14 19:34 ` Shivank Garg
18 siblings, 0 replies; 30+ messages in thread
From: Shivank Garg @ 2025-02-14 19:34 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, vbabka, hannes, mjguzik, oliver.sang, mgorman, david,
peterx, oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/14/2025 4:16 AM, Suren Baghdasaryan wrote:
> Back when per-vma locks were introduces, vm_lock was moved out of
> vm_area_struct in [1] because of the performance regression caused by
> false cacheline sharing. Recent investigation [2] revealed that the
> regressions is limited to a rather old Broadwell microarchitecture and
> even there it can be mitigated by disabling adjacent cacheline
> prefetching, see [3].
> Splitting single logical structure into multiple ones leads to more
> complicated management, extra pointer dereferences and overall less
> maintainable code. When that split-away part is a lock, it complicates
> things even further. With no performance benefits, there are no reasons
> for this split. Merging the vm_lock back into vm_area_struct also allows
> vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
> This patchset:
> 1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
> boundary and changing the cache to be cacheline-aligned to minimize
> cacheline sharing;
> 2. changes vm_area_struct initialization to mark new vma as detached until
> it is inserted into vma tree;
> 3. replaces vm_lock and vma->detached flag with a reference counter;
> 4. regroups vm_area_struct members to fit them into 3 cachelines;
> 5. changes vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for their
> reuse and to minimize call_rcu() calls.
>
> Pagefault microbenchmarks show performance improvement:
> Hmean faults/cpu-1 507926.5547 ( 0.00%) 506519.3692 * -0.28%*
> Hmean faults/cpu-4 479119.7051 ( 0.00%) 481333.6802 * 0.46%*
> Hmean faults/cpu-7 452880.2961 ( 0.00%) 455845.6211 * 0.65%*
> Hmean faults/cpu-12 347639.1021 ( 0.00%) 352004.2254 * 1.26%*
> Hmean faults/cpu-21 200061.2238 ( 0.00%) 229597.0317 * 14.76%*
> Hmean faults/cpu-30 145251.2001 ( 0.00%) 164202.5067 * 13.05%*
> Hmean faults/cpu-48 106848.4434 ( 0.00%) 120641.5504 * 12.91%*
> Hmean faults/cpu-56 92472.3835 ( 0.00%) 103464.7916 * 11.89%*
> Hmean faults/sec-1 507566.1468 ( 0.00%) 506139.0811 * -0.28%*
> Hmean faults/sec-4 1880478.2402 ( 0.00%) 1886795.6329 * 0.34%*
> Hmean faults/sec-7 3106394.3438 ( 0.00%) 3140550.7485 * 1.10%*
> Hmean faults/sec-12 4061358.4795 ( 0.00%) 4112477.0206 * 1.26%*
> Hmean faults/sec-21 3988619.1169 ( 0.00%) 4577747.1436 * 14.77%*
> Hmean faults/sec-30 3909839.5449 ( 0.00%) 4311052.2787 * 10.26%*
> Hmean faults/sec-48 4761108.4691 ( 0.00%) 5283790.5026 * 10.98%*
> Hmean faults/sec-56 4885561.4590 ( 0.00%) 5415839.4045 * 10.85%*
I tested this patch series on an AMD EPYC Zen 5 system
(2-socket, 64-core per socket with SMT enabled, 4 NUMA nodes)
using mmtests' PFT (config-workload-pft-threads) on mm-unstable.
I see a significant performance improvement at higher thread counts:
mm-unstable mm-unstable
-6.14-rc2-vanilla1 -6.14-rc2-v10-per-vma-lock
Hmean faults/cpu-1 1933589.0920 ( 0.00%) 1950506.1985 ( 0.87%)
Hmean faults/cpu-4 722834.4269 ( 0.00%) 657946.3257 ( -8.98%)
Hmean faults/cpu-7 373210.8410 ( 0.00%) 358995.9493 ( -3.81%)
Hmean faults/cpu-12 216267.7580 ( 0.00%) 211032.8119 ( -2.42%)
Hmean faults/cpu-21 153080.2758 ( 0.00%) 150207.3115 ( -1.88%)
Hmean faults/cpu-30 143142.8874 ( 0.00%) 142904.3981 ( -0.17%)
Hmean faults/cpu-48 135825.2524 ( 0.00%) 158502.4303 * 16.70%*
Hmean faults/cpu-79 111892.4921 ( 0.00%) 141725.6864 * 26.66%*
Hmean faults/cpu-110 96905.8995 ( 0.00%) 114238.6961 * 17.89%*
Hmean faults/cpu-128 89136.8524 ( 0.00%) 107620.7035 * 20.74%*
Hmean faults/sec-1 1933283.3273 ( 0.00%) 1950224.2371 ( 0.88%)
Hmean faults/sec-4 2859235.5825 ( 0.00%) 2611293.1103 ( -8.67%)
Hmean faults/sec-7 2580415.8792 ( 0.00%) 2497936.1104 ( -3.20%)
Hmean faults/sec-12 2560172.2303 ( 0.00%) 2516697.9056 ( -1.70%)
Hmean faults/sec-21 3080686.9599 ( 0.00%) 3038393.3328 ( -1.37%)
Hmean faults/sec-30 4174290.0462 ( 0.00%) 4168318.6202 ( -0.14%)
Hmean faults/sec-48 6318251.1880 ( 0.00%) 7323087.0849 * 15.90%*
Hmean faults/sec-79 8502378.1341 ( 0.00%) 10761979.4193 * 26.58%*
Hmean faults/sec-110 10131823.3341 ( 0.00%) 12318722.2392 * 21.58%*
Hmean faults/sec-128 10584693.5966 ( 0.00%) 13354652.5141 * 26.17%*
The slight degradation at 4 and 7 threads can be ignored due to high variance:
HCoeffVar faults/cpu-4 8.7568 ( 0.00%) 11.4420 ( -30.66%)
HCoeffVar faults/cpu-7 3.3204 ( 0.00%) 3.4852 ( -4.96%)
Please consider my:
Tested-by: Shivank Garg <shivankg@amd.com>
Best Regards,
Shivank Garg
>
> Changes since v9 [4]:
> PATCH [4/18]
> - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
> per Lorenzo Stoakes
> - Rename vma_iter_store() into vma_iter_store_new(), per Lorenzo Stoakes
> - Expand changelog, per Lorenzo Stoakes
> - Update vma tests to check for vma detached state correctness,
> per Lorenzo Stoakes
>
> PATCH [5/18]
> - Add Reviewed-by, per Lorenzo Stoakes
>
> PATCH [6/18]
> - Add Acked-by, per Lorenzo Stoakes
>
> PATCH [7/18]
> - Refactor the code, per Lorenzo Stoakes
> - Remove Vlastimil's Acked-by since code is changed
>
> PATCH [8/18]
> - Drop inline for mmap_init_lock(), per Lorenzo Stoakes
> - Add Reviewed-by, per Lorenzo Stoakes
>
> PATCH [9/18]
> - Add Reviewed-by, per Lorenzo Stoakes
>
> PATCH [10/18]
> - New patch to add refcount_add_not_zero_acquire/refcount_set_release
> - Add Acked-by #slab, per Vlastimil Babka
>
> PATCH [11/18]
> - Change refcount limit to be used with xxx_acquire functions
>
> PATCH [12/18]
> - Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
> per Hillf Danton
> - Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
> per Mateusz Guzik
> - Update changelog, per Wei Yang
> - Change vma_start_read() to return EAGAIN if vma got isolated and changed
> lock_vma_under_rcu() back to detect this condition, per Wei Yang
> - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
> per Lorenzo Stoakes
> - Remove Vlastimil's Reviewed-by since code is changed
>
> PATCH [13/18]
> - Update vm_area_struct for tests, per Lorenzo Stoakes
> - Add Reviewed-by, per Lorenzo Stoakes
>
> PATCH [14/18]
> - Minimized duplicate code, per Lorenzo Stoakes
>
> PATCH [15/18]
> - Add Reviewed-by, per Lorenzo Stoakes
>
> PATCH [17/18]
> - Use refcount_set_release() in vma_mark_attached(), per Will Deacon
>
> PATCH [18/18]
> - Updated documenation, per Lorenzo Stoakes
> - Add Reviewed-by, per Lorenzo Stoakes
>
> [1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
> [2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
> [3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
> [4] https://lore.kernel.org/all/20250111042604.3230628-1-surenb@google.com/
>
> Patchset applies over mm-unstable
>
> Suren Baghdasaryan (18):
> mm: introduce vma_start_read_locked{_nested} helpers
> mm: move per-vma lock into vm_area_struct
> mm: mark vma as detached until it's added into vma tree
> mm: introduce vma_iter_store_attached() to use with attached vmas
> mm: mark vmas detached upon exit
> types: move struct rcuwait into types.h
> mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
> mm: move mmap_init_lock() out of the header file
> mm: uninline the main body of vma_start_write()
> refcount: provide ops for cases when object's memory can be reused
> refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire
> mm: replace vm_lock and detached flag with a reference count
> mm: move lesser used vma_area_struct members into the last cacheline
> mm/debug: print vm_refcnt state when dumping the vma
> mm: remove extra vma_numab_state_init() call
> mm: prepare lock_vma_under_rcu() for vma reuse possibility
> mm: make vma cache SLAB_TYPESAFE_BY_RCU
> docs/mm: document latest changes to vm_lock
>
> Documentation/RCU/whatisRCU.rst | 10 +
> Documentation/core-api/refcount-vs-atomic.rst | 37 +++-
> Documentation/mm/process_addrs.rst | 44 +++--
> include/linux/mm.h | 176 ++++++++++++++----
> include/linux/mm_types.h | 75 ++++----
> include/linux/mmap_lock.h | 6 -
> include/linux/rcuwait.h | 13 +-
> include/linux/refcount.h | 125 +++++++++++++
> include/linux/slab.h | 15 +-
> include/linux/types.h | 12 ++
> kernel/fork.c | 129 ++++++-------
> mm/debug.c | 6 +
> mm/init-mm.c | 1 +
> mm/memory.c | 106 ++++++++++-
> mm/mmap.c | 3 +-
> mm/nommu.c | 4 +-
> mm/userfaultfd.c | 38 ++--
> mm/vma.c | 27 ++-
> mm/vma.h | 15 +-
> tools/include/linux/refcount.h | 5 +
> tools/testing/vma/linux/atomic.h | 6 +
> tools/testing/vma/vma.c | 42 ++++-
> tools/testing/vma/vma_internal.h | 127 ++++++-------
> 23 files changed, 702 insertions(+), 320 deletions(-)
>
>
> base-commit: 47aa60e930fe7fc2a945e4406e3ad1dfa73bb47c
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count
2025-02-13 22:46 ` [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
@ 2025-02-20 18:53 ` Heiko Carstens
2025-02-20 19:03 ` Suren Baghdasaryan
2025-02-21 15:18 ` Vlastimil Babka
1 sibling, 1 reply; 30+ messages in thread
From: Heiko Carstens @ 2025-02-20 18:53 UTC (permalink / raw)
To: Suren Baghdasaryan
Cc: akpm, peterz, willy, liam.howlett, lorenzo.stoakes,
david.laight.linux, mhocko, vbabka, hannes, mjguzik, oliver.sang,
mgorman, david, peterx, oleg, dave, paulmck, brauner, dhowells,
hdanton, hughd, lokeshgidra, minchan, jannh, shakeel.butt,
souravpanda, pasha.tatashin, klarasmodin, richard.weiyang,
corbet, linux-doc, linux-mm, linux-kernel, kernel-team
On Thu, Feb 13, 2025 at 02:46:49PM -0800, Suren Baghdasaryan wrote:
...
> While this vm_lock replacement does not yet result in a smaller
> vm_area_struct (it stays at 256 bytes due to cacheline alignment), it
> allows for further size optimization by structure member regrouping
> to bring the size of vm_area_struct below 192 bytes.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
> Changes since v9 [1]:
> - Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
> per Hillf Danton
> - Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
> per Mateusz Guzik
> - Update changelog, per Wei Yang
> - Change vma_start_read() to return EAGAIN if vma got isolated and changed
> lock_vma_under_rcu() back to detect this condition, per Wei Yang
> - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
> per Lorenzo Stoakes
> - Remove Vlastimil's Reviewed-by since code is changed
This causes crashes (NULL pointer deref) with linux-next when running
the ltp test suite; mtest06 (mmap1) test case.
The bug seems to be quite obvious:
> @@ -6424,15 +6492,18 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> if (!vma)
> goto inval;
>
> - if (!vma_start_read(vma))
> - goto inval;
> + vma = vma_start_read(vma);
> + if (IS_ERR_OR_NULL(vma)) {
^^^^^^^^^^^^^^^^^^^
> + /* Check if the VMA got isolated after we found it */
> + if (PTR_ERR(vma) == -EAGAIN) {
> + vma_end_read(vma);
^^^^^^^^^^^^^^^^
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count
2025-02-20 18:53 ` Heiko Carstens
@ 2025-02-20 19:03 ` Suren Baghdasaryan
2025-02-20 20:05 ` Suren Baghdasaryan
0 siblings, 1 reply; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-20 19:03 UTC (permalink / raw)
To: Heiko Carstens
Cc: akpm, peterz, willy, liam.howlett, lorenzo.stoakes,
david.laight.linux, mhocko, vbabka, hannes, mjguzik, oliver.sang,
mgorman, david, peterx, oleg, dave, paulmck, brauner, dhowells,
hdanton, hughd, lokeshgidra, minchan, jannh, shakeel.butt,
souravpanda, pasha.tatashin, klarasmodin, richard.weiyang,
corbet, linux-doc, linux-mm, linux-kernel, kernel-team
On Thu, Feb 20, 2025 at 10:53 AM Heiko Carstens <hca@linux.ibm.com> wrote:
>
> On Thu, Feb 13, 2025 at 02:46:49PM -0800, Suren Baghdasaryan wrote:
> ...
> > While this vm_lock replacement does not yet result in a smaller
> > vm_area_struct (it stays at 256 bytes due to cacheline alignment), it
> > allows for further size optimization by structure member regrouping
> > to bring the size of vm_area_struct below 192 bytes.
> >
> > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > Suggested-by: Matthew Wilcox <willy@infradead.org>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > Changes since v9 [1]:
> > - Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
> > per Hillf Danton
> > - Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
> > per Mateusz Guzik
> > - Update changelog, per Wei Yang
> > - Change vma_start_read() to return EAGAIN if vma got isolated and changed
> > lock_vma_under_rcu() back to detect this condition, per Wei Yang
> > - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
> > per Lorenzo Stoakes
> > - Remove Vlastimil's Reviewed-by since code is changed
>
> This causes crashes (NULL pointer deref) with linux-next when running
> the ltp test suite; mtest06 (mmap1) test case.
>
> The bug seems to be quite obvious:
>
> > @@ -6424,15 +6492,18 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > if (!vma)
> > goto inval;
> >
> > - if (!vma_start_read(vma))
> > - goto inval;
> > + vma = vma_start_read(vma);
> > + if (IS_ERR_OR_NULL(vma)) {
> ^^^^^^^^^^^^^^^^^^^
> > + /* Check if the VMA got isolated after we found it */
> > + if (PTR_ERR(vma) == -EAGAIN) {
> > + vma_end_read(vma);
> ^^^^^^^^^^^^^^^^
Doh! Thanks for reporting! I'll post a fix shortly.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count
2025-02-20 19:03 ` Suren Baghdasaryan
@ 2025-02-20 20:05 ` Suren Baghdasaryan
0 siblings, 0 replies; 30+ messages in thread
From: Suren Baghdasaryan @ 2025-02-20 20:05 UTC (permalink / raw)
To: Heiko Carstens
Cc: akpm, peterz, willy, liam.howlett, lorenzo.stoakes,
david.laight.linux, mhocko, vbabka, hannes, mjguzik, oliver.sang,
david, peterx, oleg, dave, paulmck, brauner, dhowells, hdanton,
hughd, lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On Thu, Feb 20, 2025 at 11:03 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Thu, Feb 20, 2025 at 10:53 AM Heiko Carstens <hca@linux.ibm.com> wrote:
> >
> > On Thu, Feb 13, 2025 at 02:46:49PM -0800, Suren Baghdasaryan wrote:
> > ...
> > > While this vm_lock replacement does not yet result in a smaller
> > > vm_area_struct (it stays at 256 bytes due to cacheline alignment), it
> > > allows for further size optimization by structure member regrouping
> > > to bring the size of vm_area_struct below 192 bytes.
> > >
> > > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > > Suggested-by: Matthew Wilcox <willy@infradead.org>
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > ---
> > > Changes since v9 [1]:
> > > - Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
> > > per Hillf Danton
> > > - Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
> > > per Mateusz Guzik
> > > - Update changelog, per Wei Yang
> > > - Change vma_start_read() to return EAGAIN if vma got isolated and changed
> > > lock_vma_under_rcu() back to detect this condition, per Wei Yang
> > > - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
> > > per Lorenzo Stoakes
> > > - Remove Vlastimil's Reviewed-by since code is changed
> >
> > This causes crashes (NULL pointer deref) with linux-next when running
> > the ltp test suite; mtest06 (mmap1) test case.
> >
> > The bug seems to be quite obvious:
> >
> > > @@ -6424,15 +6492,18 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > > if (!vma)
> > > goto inval;
> > >
> > > - if (!vma_start_read(vma))
> > > - goto inval;
> > > + vma = vma_start_read(vma);
> > > + if (IS_ERR_OR_NULL(vma)) {
> > ^^^^^^^^^^^^^^^^^^^
> > > + /* Check if the VMA got isolated after we found it */
> > > + if (PTR_ERR(vma) == -EAGAIN) {
> > > + vma_end_read(vma);
> > ^^^^^^^^^^^^^^^^
>
> Doh! Thanks for reporting! I'll post a fix shortly.
The fix is posted at:
https://lore.kernel.org/all/20250220200208.323769-1-surenb@google.com/
Quite embarrassing... I missed removing that extra call. This change
was made only in v10, so the problem never appeared before.
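For readers following along: the crash comes from calling vma_end_read()
on the -EAGAIN error pointer returned by vma_start_read(), not on a locked
VMA. Going by the note above that the fix simply removes that extra call,
the corrected branch in lock_vma_under_rcu() presumably looks roughly like
this sketch (inferred from the hunk quoted above, not copied from the
posted patch; the VMA_LOCK_MISS accounting and the retry label are assumed
from the function's pre-existing flow):

        vma = vma_start_read(mm, vma);
        if (IS_ERR_OR_NULL(vma)) {
                /* Check if the VMA got isolated after we found it */
                if (PTR_ERR(vma) == -EAGAIN) {
                        /*
                         * vma is an error pointer, not a read-locked VMA,
                         * so there is no reference to drop here; the extra
                         * vma_end_read() call was the bug.
                         */
                        count_vm_vma_lock_event(VMA_LOCK_MISS);
                        goto retry;
                }
                goto inval;
        }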
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas
2025-02-13 22:46 ` [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
@ 2025-02-21 11:07 ` Vlastimil Babka
2025-02-21 16:11 ` Liam R. Howlett
1 sibling, 0 replies; 30+ messages in thread
From: Vlastimil Babka @ 2025-02-21 11:07 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> vma_iter_store() functions can be used both when adding a new vma and
> when updating an existing one. However for existing ones we do not need
> to mark them attached as they are already marked that way. With
> vma->detached being a separate flag, double-marking a vmas as attached
> or detached is not an issue because the flag will simply be overwritten
> with the same value. However once we fold this flag into the refcount
> later in this series, re-attaching or re-detaching a vma becomes an
> issue since these operations will be incrementing/decrementing a
> refcount.
> Introduce vma_iter_store_new() and vma_iter_store_overwrite() to replace
> vma_iter_store() and avoid re-attaching a vma during vma update. Add
> assertions in vma_mark_attached()/vma_mark_detached() to catch invalid
> usage. Update vma tests to check for vma detached state correctness.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> Changes since v9 [1]:
> - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
> per Lorenzo Stoakes
Maybe later we can reduce the paranoia to VM_WARN_ON_ONCE()?
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
2025-02-13 22:46 ` [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail Suren Baghdasaryan
@ 2025-02-21 11:27 ` Vlastimil Babka
0 siblings, 0 replies; 30+ messages in thread
From: Vlastimil Babka @ 2025-02-21 11:27 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> With upcoming replacement of vm_lock with vm_refcnt, we need to handle a
> possibility of vma_start_read_locked/vma_start_read_locked_nested failing
> due to refcount overflow. Prepare for such possibility by changing these
> APIs and adjusting their users.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Cc: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count
2025-02-13 22:46 ` [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
2025-02-20 18:53 ` Heiko Carstens
@ 2025-02-21 15:18 ` Vlastimil Babka
1 sibling, 0 replies; 30+ messages in thread
From: Vlastimil Babka @ 2025-02-21 15:18 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> rw_semaphore is a sizable structure of 40 bytes and consumes
> considerable space for each vm_area_struct. However vma_lock has
> two important specifics which can be used to replace rw_semaphore
> with a simpler structure:
> 1. Readers never wait. They try to take the vma_lock and fall back to
> mmap_lock if that fails.
> 2. Only one writer at a time will ever try to write-lock a vma_lock
> because writers first take mmap_lock in write mode.
> Because of these requirements, full rw_semaphore functionality is not
> needed and we can replace rw_semaphore and the vma->detached flag with
> a refcount (vm_refcnt).
>
> When vma is in detached state, vm_refcnt is 0 and only a call to
> vma_mark_attached() can take it out of this state. Note that unlike
> before, now we enforce both vma_mark_attached() and vma_mark_detached()
> to be done only after vma has been write-locked. vma_mark_attached()
> changes vm_refcnt to 1 to indicate that it has been attached to the vma
> tree. When a reader takes read lock, it increments vm_refcnt, unless the
> top usable bit of vm_refcnt (0x40000000) is set, indicating presence of
> a writer. When writer takes write lock, it sets the top usable bit to
> indicate its presence. If there are readers, writer will wait using newly
> introduced mm->vma_writer_wait. Since all writers take mmap_lock in write
> mode first, there can be only one writer at a time. The last reader to
> release the lock will signal the writer to wake up.
> refcount might overflow if there are many competing readers, in which case
> read-locking will fail. Readers are expected to handle such failures.
>
> In summary:
> 1. all readers increment the vm_refcnt;
> 2. writer sets top usable (writer) bit of vm_refcnt;
> 3. readers cannot increment the vm_refcnt if the writer bit is set;
> 4. in the presence of readers, writer must wait for the vm_refcnt to drop
> to 1 (plus the VMA_LOCK_OFFSET writer bit), indicating an attached vma
> with no readers;
> 5. vm_refcnt overflow is handled by the readers.
>
> While this vm_lock replacement does not yet result in a smaller
> vm_area_struct (it stays at 256 bytes due to cacheline alignment), it
> allows for further size optimization by structure member regrouping
> to bring the size of vm_area_struct below 192 bytes.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
with the fix,
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline
2025-02-13 22:46 ` [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline Suren Baghdasaryan
@ 2025-02-21 15:20 ` Vlastimil Babka
0 siblings, 0 replies; 30+ messages in thread
From: Vlastimil Babka @ 2025-02-21 15:20 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> Move several vma_area_struct members which are rarely or never used
> during page fault handling into the last cacheline to better pack
> vm_area_struct. As a result vm_area_struct will fit into 3 as opposed
> to 4 cachelines. New typical vm_area_struct layout:
>
> struct vm_area_struct {
> union {
> struct {
> long unsigned int vm_start; /* 0 8 */
> long unsigned int vm_end; /* 8 8 */
> }; /* 0 16 */
> freeptr_t vm_freeptr; /* 0 8 */
> }; /* 0 16 */
> struct mm_struct * vm_mm; /* 16 8 */
> pgprot_t vm_page_prot; /* 24 8 */
> union {
> const vm_flags_t vm_flags; /* 32 8 */
> vm_flags_t __vm_flags; /* 32 8 */
> }; /* 32 8 */
> unsigned int vm_lock_seq; /* 40 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head anon_vma_chain; /* 48 16 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> struct anon_vma * anon_vma; /* 64 8 */
> const struct vm_operations_struct * vm_ops; /* 72 8 */
> long unsigned int vm_pgoff; /* 80 8 */
> struct file * vm_file; /* 88 8 */
> void * vm_private_data; /* 96 8 */
> atomic_long_t swap_readahead_info; /* 104 8 */
> struct mempolicy * vm_policy; /* 112 8 */
> struct vma_numab_state * numab_state; /* 120 8 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> refcount_t vm_refcnt (__aligned__(64)); /* 128 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct {
> struct rb_node rb (__aligned__(8)); /* 136 24 */
> long unsigned int rb_subtree_last; /* 160 8 */
> } __attribute__((__aligned__(8))) shared; /* 136 32 */
> struct anon_vma_name * anon_name; /* 168 8 */
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 8 */
>
> /* size: 192, cachelines: 3, members: 18 */
> /* sum members: 176, holes: 2, sum holes: 8 */
> /* padding: 8 */
> /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> } __attribute__((__aligned__(64)));
>
> Memory consumption per 1000 VMAs becomes 48 pages:
>
> slabinfo after vm_area_struct changes:
> <name> ... <objsize> <objperslab> <pagesperslab> : ...
> vm_area_struct ... 192 42 2 : ...
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 18/18] docs/mm: document latest changes to vm_lock
2025-02-13 22:46 ` [PATCH v10 18/18] docs/mm: document latest changes to vm_lock Suren Baghdasaryan
@ 2025-02-21 15:44 ` Vlastimil Babka
0 siblings, 0 replies; 30+ messages in thread
From: Vlastimil Babka @ 2025-02-21 15:44 UTC (permalink / raw)
To: Suren Baghdasaryan, akpm
Cc: peterz, willy, liam.howlett, lorenzo.stoakes, david.laight.linux,
mhocko, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
On 2/13/25 23:46, Suren Baghdasaryan wrote:
> Change the documentation to reflect that vm_lock is integrated into vma
> and replaced with vm_refcnt.
> Document newly introduced vma_start_read_locked{_nested} functions.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas
2025-02-13 22:46 ` [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
2025-02-21 11:07 ` Vlastimil Babka
@ 2025-02-21 16:11 ` Liam R. Howlett
1 sibling, 0 replies; 30+ messages in thread
From: Liam R. Howlett @ 2025-02-21 16:11 UTC (permalink / raw)
To: Suren Baghdasaryan
Cc: akpm, peterz, willy, lorenzo.stoakes, david.laight.linux, mhocko,
vbabka, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
* Suren Baghdasaryan <surenb@google.com> [250213 17:47]:
> vma_iter_store() functions can be used both when adding a new vma and
> when updating an existing one. However for existing ones we do not need
> to mark them attached as they are already marked that way. With
> vma->detached being a separate flag, double-marking a vmas as attached
> or detached is not an issue because the flag will simply be overwritten
> with the same value. However once we fold this flag into the refcount
> later in this series, re-attaching or re-detaching a vma becomes an
> issue since these operations will be incrementing/decrementing a
> refcount.
> Introduce vma_iter_store_new() and vma_iter_store_overwrite() to replace
> vma_iter_store() and avoid re-attaching a vma during vma update. Add
> assertions in vma_mark_attached()/vma_mark_detached() to catch invalid
> usage. Update vma tests to check for vma detached state correctness.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> ---
> Changes since v9 [1]:
> - Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
> per Lorenzo Stoakes
> - Rename vma_iter_store() into vma_iter_store_new(), per Lorenzo Stoakes
> - Expand changelog, per Lorenzo Stoakes
> - Update vma tests to check for vma detached state correctness,
> per Lorenzo Stoakes
>
> [1] https://lore.kernel.org/all/20250111042604.3230628-5-surenb@google.com/
>
> include/linux/mm.h | 14 +++++++++++
> mm/nommu.c | 4 +--
> mm/vma.c | 12 ++++-----
> mm/vma.h | 11 +++++++--
> tools/testing/vma/vma.c | 42 +++++++++++++++++++++++++-------
> tools/testing/vma/vma_internal.h | 10 ++++++++
> 6 files changed, 74 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cd5ee61e98f2..1b8e72888124 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -821,8 +821,19 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
> vma_assert_write_locked(vma);
> }
>
> +static inline void vma_assert_attached(struct vm_area_struct *vma)
> +{
> + WARN_ON_ONCE(vma->detached);
> +}
> +
> +static inline void vma_assert_detached(struct vm_area_struct *vma)
> +{
> + WARN_ON_ONCE(!vma->detached);
> +}
> +
> static inline void vma_mark_attached(struct vm_area_struct *vma)
> {
> + vma_assert_detached(vma);
> vma->detached = false;
> }
>
> @@ -830,6 +841,7 @@ static inline void vma_mark_detached(struct vm_area_struct *vma)
> {
> /* When detaching vma should be write-locked */
> vma_assert_write_locked(vma);
> + vma_assert_attached(vma);
> vma->detached = true;
> }
>
> @@ -866,6 +878,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
> static inline void vma_start_write(struct vm_area_struct *vma) {}
> static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> { mmap_assert_write_locked(vma->vm_mm); }
> +static inline void vma_assert_attached(struct vm_area_struct *vma) {}
> +static inline void vma_assert_detached(struct vm_area_struct *vma) {}
> static inline void vma_mark_attached(struct vm_area_struct *vma) {}
> static inline void vma_mark_detached(struct vm_area_struct *vma) {}
>
> diff --git a/mm/nommu.c b/mm/nommu.c
> index baa79abdaf03..8b31d8396297 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -1191,7 +1191,7 @@ unsigned long do_mmap(struct file *file,
> setup_vma_to_mm(vma, current->mm);
> current->mm->map_count++;
> /* add the VMA to the tree */
> - vma_iter_store(&vmi, vma);
> + vma_iter_store_new(&vmi, vma);
>
> /* we flush the region from the icache only when the first executable
> * mapping of it is made */
> @@ -1356,7 +1356,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>
> setup_vma_to_mm(vma, mm);
> setup_vma_to_mm(new, mm);
> - vma_iter_store(vmi, new);
> + vma_iter_store_new(vmi, new);
> mm->map_count++;
> return 0;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 498507d8a262..f72b73f57451 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -320,7 +320,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
> * us to insert it before dropping the locks
> * (it may either follow vma or precede it).
> */
> - vma_iter_store(vmi, vp->insert);
> + vma_iter_store_new(vmi, vp->insert);
> mm->map_count++;
> }
>
> @@ -700,7 +700,7 @@ static int commit_merge(struct vma_merge_struct *vmg)
> vmg->__adjust_middle_start ? vmg->middle : NULL);
> vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff);
> vmg_adjust_set_range(vmg);
> - vma_iter_store(vmg->vmi, vmg->target);
> + vma_iter_store_overwrite(vmg->vmi, vmg->target);
>
> vma_complete(&vp, vmg->vmi, vma->vm_mm);
>
> @@ -1707,7 +1707,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> return -ENOMEM;
>
> vma_start_write(vma);
> - vma_iter_store(&vmi, vma);
> + vma_iter_store_new(&vmi, vma);
> vma_link_file(vma);
> mm->map_count++;
> validate_mm(mm);
> @@ -2386,7 +2386,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
>
> /* Lock the VMA since it is modified after insertion into VMA tree */
> vma_start_write(vma);
> - vma_iter_store(vmi, vma);
> + vma_iter_store_new(vmi, vma);
> map->mm->map_count++;
> vma_link_file(vma);
>
> @@ -2862,7 +2862,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> anon_vma_interval_tree_pre_update_vma(vma);
> vma->vm_end = address;
> /* Overwrite old entry in mtree. */
> - vma_iter_store(&vmi, vma);
> + vma_iter_store_overwrite(&vmi, vma);
> anon_vma_interval_tree_post_update_vma(vma);
>
> perf_event_mmap(vma);
> @@ -2942,7 +2942,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
> vma->vm_start = address;
> vma->vm_pgoff -= grow;
> /* Overwrite old entry in mtree. */
> - vma_iter_store(&vmi, vma);
> + vma_iter_store_overwrite(&vmi, vma);
> anon_vma_interval_tree_post_update_vma(vma);
>
> perf_event_mmap(vma);
> diff --git a/mm/vma.h b/mm/vma.h
> index bffb56afce5f..55be77ff042f 100644
> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -413,9 +413,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
> }
>
> /* Store a VMA with preallocated memory */
> -static inline void vma_iter_store(struct vma_iterator *vmi,
> - struct vm_area_struct *vma)
> +static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
> + struct vm_area_struct *vma)
> {
> + vma_assert_attached(vma);
>
> #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
> if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
> @@ -438,7 +439,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
>
> __mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
> mas_store_prealloc(&vmi->mas, vma);
> +}
> +
> +static inline void vma_iter_store_new(struct vma_iterator *vmi,
> + struct vm_area_struct *vma)
> +{
> vma_mark_attached(vma);
> + vma_iter_store_overwrite(vmi, vma);
> }
>
> static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
> diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> index c7ffa71841ca..11f761769b5b 100644
> --- a/tools/testing/vma/vma.c
> +++ b/tools/testing/vma/vma.c
> @@ -74,10 +74,22 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
> ret->vm_end = end;
> ret->vm_pgoff = pgoff;
> ret->__vm_flags = flags;
> + vma_assert_detached(ret);
>
> return ret;
> }
>
> +/* Helper function to allocate a VMA and link it to the tree. */
> +static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
> +{
> + int res;
> +
> + res = vma_link(mm, vma);
> + if (!res)
> + vma_assert_attached(vma);
> + return res;
> +}
> +
> /* Helper function to allocate a VMA and link it to the tree. */
> static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
> unsigned long start,
> @@ -90,7 +102,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
> if (vma == NULL)
> return NULL;
>
> - if (vma_link(mm, vma)) {
> + if (attach_vma(mm, vma)) {
> vm_area_free(vma);
> return NULL;
> }
> @@ -108,6 +120,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
> /* Helper function which provides a wrapper around a merge new VMA operation. */
> static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
> {
> + struct vm_area_struct *vma;
> /*
> * For convenience, get prev and next VMAs. Which the new VMA operation
> * requires.
> @@ -116,7 +129,11 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
> vmg->prev = vma_prev(vmg->vmi);
> vma_iter_next_range(vmg->vmi);
>
> - return vma_merge_new_range(vmg);
> + vma = vma_merge_new_range(vmg);
> + if (vma)
> + vma_assert_attached(vma);
> +
> + return vma;
> }
>
> /*
> @@ -125,7 +142,12 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
> */
> static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg)
> {
> - return vma_merge_existing_range(vmg);
> + struct vm_area_struct *vma;
> +
> + vma = vma_merge_existing_range(vmg);
> + if (vma)
> + vma_assert_attached(vma);
> + return vma;
> }
>
> /*
> @@ -260,8 +282,8 @@ static bool test_simple_merge(void)
> .pgoff = 1,
> };
>
> - ASSERT_FALSE(vma_link(&mm, vma_left));
> - ASSERT_FALSE(vma_link(&mm, vma_right));
> + ASSERT_FALSE(attach_vma(&mm, vma_left));
> + ASSERT_FALSE(attach_vma(&mm, vma_right));
>
> vma = merge_new(&vmg);
> ASSERT_NE(vma, NULL);
> @@ -285,7 +307,7 @@ static bool test_simple_modify(void)
> struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
> VMA_ITERATOR(vmi, &mm, 0x1000);
>
> - ASSERT_FALSE(vma_link(&mm, init_vma));
> + ASSERT_FALSE(attach_vma(&mm, init_vma));
>
> /*
> * The flags will not be changed, the vma_modify_flags() function
> @@ -351,7 +373,7 @@ static bool test_simple_expand(void)
> .pgoff = 0,
> };
>
> - ASSERT_FALSE(vma_link(&mm, vma));
> + ASSERT_FALSE(attach_vma(&mm, vma));
>
> ASSERT_FALSE(expand_existing(&vmg));
>
> @@ -372,7 +394,7 @@ static bool test_simple_shrink(void)
> struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
> VMA_ITERATOR(vmi, &mm, 0);
>
> - ASSERT_FALSE(vma_link(&mm, vma));
> + ASSERT_FALSE(attach_vma(&mm, vma));
>
> ASSERT_FALSE(vma_shrink(&vmi, vma, 0, 0x1000, 0));
>
> @@ -1522,11 +1544,11 @@ static bool test_copy_vma(void)
>
> vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
> vma_new = copy_vma(&vma, 0, 0x2000, 0, &need_locks);
> -
> ASSERT_NE(vma_new, vma);
> ASSERT_EQ(vma_new->vm_start, 0);
> ASSERT_EQ(vma_new->vm_end, 0x2000);
> ASSERT_EQ(vma_new->vm_pgoff, 0);
> + vma_assert_attached(vma_new);
>
> cleanup_mm(&mm, &vmi);
>
> @@ -1535,6 +1557,7 @@ static bool test_copy_vma(void)
> vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
> vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, flags);
> vma_new = copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks);
> + vma_assert_attached(vma_new);
>
> ASSERT_EQ(vma_new, vma_next);
>
> @@ -1576,6 +1599,7 @@ static bool test_expand_only_mode(void)
> ASSERT_EQ(vma->vm_pgoff, 3);
> ASSERT_TRUE(vma_write_started(vma));
> ASSERT_EQ(vma_iter_addr(&vmi), 0x3000);
> + vma_assert_attached(vma);
>
> cleanup_mm(&mm, &vmi);
> return true;
> diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> index f93f7f74f97b..34277842156c 100644
> --- a/tools/testing/vma/vma_internal.h
> +++ b/tools/testing/vma/vma_internal.h
> @@ -470,6 +470,16 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> vma->vm_lock_seq = UINT_MAX;
> }
>
> +static inline void vma_assert_attached(struct vm_area_struct *vma)
> +{
> + WARN_ON_ONCE(vma->detached);
> +}
> +
> +static inline void vma_assert_detached(struct vm_area_struct *vma)
> +{
> + WARN_ON_ONCE(!vma->detached);
> +}
> +
> static inline void vma_assert_write_locked(struct vm_area_struct *);
> static inline void vma_mark_attached(struct vm_area_struct *vma)
> {
> --
> 2.48.1.601.g30ceb7b040-goog
>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v10 05/18] mm: mark vmas detached upon exit
2025-02-13 22:46 ` [PATCH v10 05/18] mm: mark vmas detached upon exit Suren Baghdasaryan
@ 2025-02-21 16:14 ` Liam R. Howlett
0 siblings, 0 replies; 30+ messages in thread
From: Liam R. Howlett @ 2025-02-21 16:14 UTC (permalink / raw)
To: Suren Baghdasaryan
Cc: akpm, peterz, willy, lorenzo.stoakes, david.laight.linux, mhocko,
vbabka, hannes, mjguzik, oliver.sang, mgorman, david, peterx,
oleg, dave, paulmck, brauner, dhowells, hdanton, hughd,
lokeshgidra, minchan, jannh, shakeel.butt, souravpanda,
pasha.tatashin, klarasmodin, richard.weiyang, corbet, linux-doc,
linux-mm, linux-kernel, kernel-team
* Suren Baghdasaryan <surenb@google.com> [250213 17:47]:
> When exit_mmap() removes vmas belonging to an exiting task, it does not
> mark them as detached since they can't be reached by other tasks and they
> will be freed shortly. Once we introduce vma reuse, all vmas will have to
> be in a detached state before they are freed, to ensure that a vma is in
> a consistent state when it is reused. Add the missing vma_mark_detached()
> before freeing the vma.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
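
As a minimal illustration (a userspace-only sketch with hypothetical
names, not the kernel code) of why every free path, including the exit
path, must leave the vma detached once freed objects can be recycled:

#include <assert.h>
#include <stdbool.h>

struct toy_vma {
	bool detached;
};

/* A recycled object is only usable if it was left in the detached state. */
static void toy_vma_prepare_reuse(struct toy_vma *vma)
{
	assert(vma->detached);
	vma->detached = false;	/* attach to its new tree */
}

int main(void)
{
	struct toy_vma vma = { .detached = false };	/* attached, task exiting */

	/*
	 * Without the fix, the exit path freed the vma while it was still
	 * marked attached. Once freed objects can be handed out again, the
	 * next user would inherit that stale state and the assertion in
	 * toy_vma_prepare_reuse() would fire.
	 */
	vma.detached = true;		/* the added vma_mark_detached() */
	toy_vma_prepare_reuse(&vma);	/* starts from a consistent state */
	return 0;
}
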
> ---
> Changes since v9 [1]:
> - Add Reviewed-by, per Lorenzo Stoakes
>
> [1] https://lore.kernel.org/all/20250111042604.3230628-6-surenb@google.com/
>
> mm/vma.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index f72b73f57451..a16a83d0253f 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -427,10 +427,12 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
> if (vma->vm_file)
> fput(vma->vm_file);
> mpol_put(vma_policy(vma));
> - if (unreachable)
> + if (unreachable) {
> + vma_mark_detached(vma);
> __vm_area_free(vma);
> - else
> + } else {
> vm_area_free(vma);
> + }
> }
>
> /*
> --
> 2.48.1.601.g30ceb7b040-goog
>
^ permalink raw reply [flat|nested] 30+ messages in thread
end of thread
Thread overview: 30+ messages
2025-02-13 22:46 [PATCH v10 00/18] reimplement per-vma lock as a refcount Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 01/18] mm: introduce vma_start_read_locked{_nested} helpers Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 02/18] mm: move per-vma lock into vm_area_struct Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 03/18] mm: mark vma as detached until it's added into vma tree Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 04/18] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
2025-02-21 11:07 ` Vlastimil Babka
2025-02-21 16:11 ` Liam R. Howlett
2025-02-13 22:46 ` [PATCH v10 05/18] mm: mark vmas detached upon exit Suren Baghdasaryan
2025-02-21 16:14 ` Liam R. Howlett
2025-02-13 22:46 ` [PATCH v10 06/18] types: move struct rcuwait into types.h Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 07/18] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail Suren Baghdasaryan
2025-02-21 11:27 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 08/18] mm: move mmap_init_lock() out of the header file Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 09/18] mm: uninline the main body of vma_start_write() Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 10/18] refcount: provide ops for cases when object's memory can be reused Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 11/18] refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 12/18] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
2025-02-20 18:53 ` Heiko Carstens
2025-02-20 19:03 ` Suren Baghdasaryan
2025-02-20 20:05 ` Suren Baghdasaryan
2025-02-21 15:18 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline Suren Baghdasaryan
2025-02-21 15:20 ` Vlastimil Babka
2025-02-13 22:46 ` [PATCH v10 14/18] mm/debug: print vm_refcnt state when dumping the vma Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 15/18] mm: remove extra vma_numab_state_init() call Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 16/18] mm: prepare lock_vma_under_rcu() for vma reuse possibility Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 17/18] mm: make vma cache SLAB_TYPESAFE_BY_RCU Suren Baghdasaryan
2025-02-13 22:46 ` [PATCH v10 18/18] docs/mm: document latest changes to vm_lock Suren Baghdasaryan
2025-02-21 15:44 ` Vlastimil Babka
2025-02-14 19:34 ` [PATCH v10 00/18] reimplement per-vma lock as a refcount Shivank Garg