Subject: [PATCH 0/4] move per-vma lock into vm_area_struct
Date: 2024-11-11 20:55 UTC
From: Suren Baghdasaryan
  To: akpm
  Cc: willy, liam.howlett, lorenzo.stoakes, mhocko, vbabka, hannes,
	mjguzik, oliver.sang, mgorman, david, peterx, oleg, dave,
	paulmck, brauner, dhowells, hdanton, hughd, minchan, jannh,
	shakeel.butt, souravpanda, pasha.tatashin, linux-mm,
	linux-kernel, kernel-team, surenb

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to the rather old Broadwell microarchitecture,
and even there it can be mitigated by disabling adjacent-cacheline
prefetching; see [3].
This patchset moves vm_lock back into vm_area_struct, aligning it on a
cacheline boundary and making the vm_area_struct slab cache
cacheline-aligned as well (see the sketch after the slabinfo output
below). This grows per-VMA memory consumption from 160 (vm_area_struct)
+ 40 (vm_lock) bytes to 256 bytes:

    slabinfo before:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vma_lock         ...     40  102    1 : ...
     vm_area_struct   ...    160   51    2 : ...

    slabinfo after moving vm_lock:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vm_area_struct   ...    256   32    2 : ...
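
For illustration, here is a minimal sketch of the two alignment
changes, using hypothetical names (example_vma, example_lock) standing
in for the real structures; this is not the patch code itself:

#include <linux/cache.h>	/* ____cacheline_aligned_in_smp, cache_line_size() */
#include <linux/init.h>
#include <linux/rwsem.h>
#include <linux/slab.h>

struct example_lock {
	struct rw_semaphore lock;	/* stand-in for the per-vma lock */
};

struct example_vma {
	unsigned long vm_start;		/* hot, mostly-read fields first */
	unsigned long vm_end;
	/* ... */
	/* push the lock onto its own cacheline to avoid false sharing */
	struct example_lock vm_lock ____cacheline_aligned_in_smp;
};

static struct kmem_cache *example_vma_cachep;

static int __init example_vma_cache_init(void)
{
	/*
	 * Align objects to the cacheline size too; without this the
	 * in-struct alignment would be defeated once objects are
	 * packed back to back within a slab.
	 */
	example_vma_cachep = kmem_cache_create("example_vma",
					       sizeof(struct example_vma),
					       cache_line_size(), 0, NULL);
	return example_vma_cachep ? 0 : -ENOMEM;
}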

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
pages, an extra 5.5MB per 100000 VMAs.
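(Derived from the slabinfo numbers above, assuming 4KB pages: before,
1000 VMAs take ceil(1000/51) * 2 = 40 pages of vm_area_struct slabs
plus ceil(1000/102) * 1 = 10 pages of vma_lock slabs, i.e. 50 pages;
after, ceil(1000/32) * 2 = 64 pages. The extra 14 pages per 1000 VMAs
amount to 14 * 4KB * 100 = ~5.5MB per 100000 VMAs.)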
To minimize memory overhead, the vm_lock implementation is changed from
an rw_semaphore (40 bytes) to an atomic (8 bytes; a sketch of such an
atomic-based lock follows the struct layout below), and several
vm_area_struct members are moved into the last cacheline, resulting in
a less fragmented structure:

struct vm_area_struct {
	union {
		struct {
			long unsigned int vm_start;      /*     0     8 */
			long unsigned int vm_end;        /*     8     8 */
		};                                       /*     0    16 */
		struct callback_head vm_rcu ;            /*     0    16 */
	} __attribute__((__aligned__(8)));               /*     0    16 */
	struct mm_struct *         vm_mm;                /*    16     8 */
	pgprot_t                   vm_page_prot;         /*    24     8 */
	union {
		const vm_flags_t   vm_flags;             /*    32     8 */
		vm_flags_t         __vm_flags;           /*    32     8 */
	};                                               /*    32     8 */
	bool                       detached;             /*    40     1 */

	/* XXX 3 bytes hole, try to pack */

	unsigned int               vm_lock_seq;          /*    44     4 */
	struct list_head           anon_vma_chain;       /*    48    16 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	struct anon_vma *          anon_vma;             /*    64     8 */
	const struct vm_operations_struct  * vm_ops;     /*    72     8 */
	long unsigned int          vm_pgoff;             /*    80     8 */
	struct file *              vm_file;              /*    88     8 */
	void *                     vm_private_data;      /*    96     8 */
	atomic_long_t              swap_readahead_info;  /*   104     8 */
	struct mempolicy *         vm_policy;            /*   112     8 */

	/* XXX 8 bytes hole, try to pack */

	/* --- cacheline 2 boundary (128 bytes) --- */
	struct vma_lock       vm_lock (__aligned__(64)); /*   128     4 */

	/* XXX 4 bytes hole, try to pack */

	struct {
		struct rb_node     rb (__aligned__(8));  /*   136    24 */
		long unsigned int  rb_subtree_last;      /*   160     8 */
	} __attribute__((__aligned__(8))) shared;        /*   136    32 */
	struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;   /*   168     0 */

	/* size: 192, cachelines: 3, members: 17 */
	/* sum members: 153, holes: 3, sum holes: 15 */
	/* padding: 24 */
	/* forced alignments: 3, forced holes: 2, sum forced holes: 12 */
} __attribute__((__aligned__(64)));
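
To illustrate the rw_semaphore-to-atomic conversion, below is a minimal
reader-side sketch with hypothetical names; it is not the patch itself,
and the real implementation additionally has to handle writer waiting
and wakeups:

#include <linux/atomic.h>

#define EXAMPLE_LOCK_WR		(1U << 31)	/* writer bit */

struct example_atomic_lock {
	atomic_t count;	/* 0: unlocked, >0: readers, WR bit: writer */
};

/* Page-fault path: succeed only while no writer owns or claims the lock. */
static inline bool example_read_trylock(struct example_atomic_lock *vl)
{
	int c = atomic_read(&vl->count);

	while (!(c & EXAMPLE_LOCK_WR)) {
		/* on failure, c is refreshed with the current value */
		if (atomic_try_cmpxchg_acquire(&vl->count, &c, c + 1))
			return true;
	}
	return false;	/* writer involved: caller falls back to mmap_lock */
}

static inline void example_read_unlock(struct example_atomic_lock *vl)
{
	/* release ordering pairs with the acquire in the trylock above */
	atomic_dec_return_release(&vl->count);
}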

Memory consumption per 1000 VMAs becomes 48 pages, saving 2 pages compared
to the 50 pages in the baseline:

    slabinfo after vm_area_struct changes:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vm_area_struct   ...    192   42    2 : ...
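
(Derived the same way as above: ceil(1000/42) slabs * 2 pages/slab = 48
pages per 1000 VMAs.)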

Performance measurements using the pft test on x86 show no considerable
difference; on a Pixel 6 running Android, the change results in a 3-5%
improvement in faults per second.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Suren Baghdasaryan (4):
  mm: introduce vma_start_read_locked{_nested} helpers
  mm: move per-vma lock into vm_area_struct
  mm: replace rw_semaphore with atomic_t in vma_lock
  mm: move lesser used vm_area_struct members into the last cacheline

 include/linux/mm.h        | 163 +++++++++++++++++++++++++++++++++++---
 include/linux/mm_types.h  |  59 +++++++++-----
 include/linux/mmap_lock.h |   3 +
 kernel/fork.c             |  50 ++----------
 mm/init-mm.c              |   2 +
 mm/userfaultfd.c          |  14 ++--
 6 files changed, 205 insertions(+), 86 deletions(-)


base-commit: 931086f2a88086319afb57cd3925607e8cda0a9f
-- 
2.47.0.277.g8800431eea-goog


