From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org,
liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
mjguzik@gmail.com, oliver.sang@intel.com,
mgorman@techsingularity.net, david@redhat.com,
peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net,
paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com,
hdanton@sina.com, hughd@google.com, lokeshgidra@google.com,
minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
souravpanda@google.com, pasha.tatashin@soleen.com,
klarasmodin@gmail.com, richard.weiyang@gmail.com,
corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com,
surenb@google.com
Subject: [PATCH v8 00/16] move per-vma lock into vm_area_struct
Date: Wed, 8 Jan 2025 18:30:09 -0800
Message-ID: <20250109023025.2242447-1-surenb@google.com>
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to the rather old Broadwell microarchitecture,
and even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When the split-away part is a lock, it complicates
things even further. Since the split yields no performance benefit,
there is no reason to keep it. Merging the vm_lock back into
vm_area_struct also allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU
later in this patchset.
This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned to minimize
cacheline sharing;
2. changes vm_area_struct initialization to mark a new vma as detached
until it is inserted into the vma tree;
3. replaces vm_lock and the vma->detached flag with a reference counter;
4. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow
object reuse and to minimize call_rcu() calls.
Pagefault microbenchmarks show performance improvement:
Hmean faults/cpu-1 507926.5547 ( 0.00%) 506519.3692 * -0.28%*
Hmean faults/cpu-4 479119.7051 ( 0.00%) 481333.6802 * 0.46%*
Hmean faults/cpu-7 452880.2961 ( 0.00%) 455845.6211 * 0.65%*
Hmean faults/cpu-12 347639.1021 ( 0.00%) 352004.2254 * 1.26%*
Hmean faults/cpu-21 200061.2238 ( 0.00%) 229597.0317 * 14.76%*
Hmean faults/cpu-30 145251.2001 ( 0.00%) 164202.5067 * 13.05%*
Hmean faults/cpu-48 106848.4434 ( 0.00%) 120641.5504 * 12.91%*
Hmean faults/cpu-56 92472.3835 ( 0.00%) 103464.7916 * 11.89%*
Hmean faults/sec-1 507566.1468 ( 0.00%) 506139.0811 * -0.28%*
Hmean faults/sec-4 1880478.2402 ( 0.00%) 1886795.6329 * 0.34%*
Hmean faults/sec-7 3106394.3438 ( 0.00%) 3140550.7485 * 1.10%*
Hmean faults/sec-12 4061358.4795 ( 0.00%) 4112477.0206 * 1.26%*
Hmean faults/sec-21 3988619.1169 ( 0.00%) 4577747.1436 * 14.77%*
Hmean faults/sec-30 3909839.5449 ( 0.00%) 4311052.2787 * 10.26%*
Hmean faults/sec-48 4761108.4691 ( 0.00%) 5283790.5026 * 10.98%*
Hmean faults/sec-56 4885561.4590 ( 0.00%) 5415839.4045 * 10.85%*
Changes since v7 [4]:
- Removed additional parameter for vma_iter_store() and introduced
vma_iter_store_attached() instead, per Vlastimil Babka and
Liam R. Howlett
- Fixed coding style nits, per Vlastimil Babka
- Added Reviewed-bys and Acked-bys, per Vlastimil Babka
- Added Reviewed-bys and Acked-bys, per Liam R. Howlett
- Added Acked-by, per Davidlohr Bueso
- Removed unnecessary patch changing nommu.c
- Folded a fixup patch [5] into the patch it was fixing
- Changed calculation in __refcount_add_not_zero_limited() to avoid
overflow, to change the limit to be inclusive and to use INT_MAX to
indicate no limits, per Vlastimil Babka and Matthew Wilcox
- Folded a fixup patch [6] into the patch it was fixing
- Added vm_refcnt rules summary in the changelog, per Liam R. Howlett
- Changed writers to not increment vm_refcnt and adjusted VMA_REF_LIMIT
to not reserve one count for a writer, per Liam R. Howlett
- Changed vma_refcount_put() to wake up writers only when the last reader
is leaving, per Liam R. Howlett
- Fixed rwsem_acquire_read() parameters when read-locking a vma to match
the way down_read_trylock() does lockdep, per Vlastimil Babka
- Folded vma_lockdep_init() into vma_lock_init() for simplicity
- Brought back vma_copy() to keep vm_refcount at 0 during reuse,
per Vlastimil Babka
What I did not include in this patchset:
- Liam's suggestion to change dump_vma() output, since it's unclear to me
what it should look like. The patch is for debug only and not critical to
the rest of the series; we can change the output later or even drop the
patch if necessary.
[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20241226170710.1159679-1-surenb@google.com/
[5] https://lore.kernel.org/all/20250107030415.721474-1-surenb@google.com/
[6] https://lore.kernel.org/all/20241226200335.1250078-1-surenb@google.com/
Patchset applies over mm-unstable after reverting v7
(current SHA range: 588f0086398e - fb2270654630)
Suren Baghdasaryan (16):
mm: introduce vma_start_read_locked{_nested} helpers
mm: move per-vma lock into vm_area_struct
mm: mark vma as detached until it's added into vma tree
mm: introduce vma_iter_store_attached() to use with attached vmas
mm: mark vmas detached upon exit
types: move struct rcuwait into types.h
mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
mm: move mmap_init_lock() out of the header file
mm: uninline the main body of vma_start_write()
refcount: introduce __refcount_{add|inc}_not_zero_limited
mm: replace vm_lock and detached flag with a reference count
mm/debug: print vm_refcnt state when dumping the vma
mm: remove extra vma_numab_state_init() call
mm: prepare lock_vma_under_rcu() for vma reuse possibility
mm: make vma cache SLAB_TYPESAFE_BY_RCU
docs/mm: document latest changes to vm_lock
Documentation/mm/process_addrs.rst | 44 +++++----
include/linux/mm.h | 152 ++++++++++++++++++++++-------
include/linux/mm_types.h | 36 ++++---
include/linux/mmap_lock.h | 6 --
include/linux/rcuwait.h | 13 +--
include/linux/refcount.h | 20 +++-
include/linux/slab.h | 6 --
include/linux/types.h | 12 +++
kernel/fork.c | 128 +++++++++++-------------
mm/debug.c | 12 +++
mm/init-mm.c | 1 +
mm/memory.c | 94 +++++++++++++++---
mm/mmap.c | 3 +-
mm/userfaultfd.c | 32 +++---
mm/vma.c | 23 ++---
mm/vma.h | 15 ++-
tools/testing/vma/linux/atomic.h | 5 +
tools/testing/vma/vma_internal.h | 93 ++++++++----------
18 files changed, 435 insertions(+), 260 deletions(-)
--
2.47.1.613.gc27f4b7a9f-goog