From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com,
vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
peterz@infradead.org, ldufour@linux.ibm.com,
laurent.dufour@fr.ibm.com, paulmck@kernel.org, riel@surriel.com,
luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
david@redhat.com, dhowells@redhat.com, hughd@google.com,
bigeasy@linutronix.de, kent.overstreet@linux.dev,
rientjes@google.com, axelrasmussen@google.com,
joelaf@google.com, minchan@google.com, surenb@google.com,
kernel-team@android.com, linux-mm@kvack.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
linux-kernel@vger.kernel.org
Subject: [RFC PATCH 00/28] per-VMA locks proposal
Date: Mon, 29 Aug 2022 21:25:03 +0000
Message-ID: <20220829212531.3184856-1-surenb@google.com>
This is a proof of concept for the per-VMA locks idea that came out of the
SPF [1] discussion at LSF/MM this year [2], which concluded with the
suggestion that “a reader/writer semaphore could be put into the VMA
itself; that would have the effect of using the VMA as a sort of range
lock. There would still be contention at the VMA level, but it would be an
improvement.” This patchset implements that suggested approach.
When handling a page fault, we look up the VMA containing the faulting
address under RCU protection and try to acquire its lock. If that fails we
fall back to using mmap_lock, similar to how SPF handled this situation.
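As a rough illustration of this fast path (a sketch only: it uses the
find_and_lock_anon_vma helper added later in this series, but its exact
signature and the vma_read_unlock helper here are assumptions, not the
actual patch code):

/*
 * Sketch: attempt the fault under the per-VMA lock and tell the caller
 * whether it still needs the mmap_lock slow path.
 */
static bool try_vma_locked_fault(struct mm_struct *mm, unsigned long addr,
				 unsigned int flags, struct pt_regs *regs,
				 vm_fault_t *ret)
{
	struct vm_area_struct *vma;

	/* RCU-protected VMA lookup followed by a per-VMA lock attempt. */
	vma = find_and_lock_anon_vma(mm, addr);
	if (!vma)
		return false;		/* fall back to mmap_lock */

	*ret = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_read_unlock(vma);		/* hypothetical per-VMA unlock helper */

	/*
	 * VM_FAULT_RETRY means the handler bailed out (e.g. a swap fault,
	 * which this series does not handle under the VMA lock); retry
	 * under mmap_lock.
	 */
	return !(*ret & VM_FAULT_RETRY);
}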
One notable way the implementation deviates from the proposal is the way
VMAs are marked as locked. During some mm updates, multiple VMAs need to
stay locked until the end of the update (e.g. vma_merge, split_vma, etc.).
Tracking all the locked VMAs, avoiding recursive locks, and handling other
complications would make the code more complex. Therefore we provide a way
to "mark" VMAs as locked and then unmark all locked VMAs at once. This is
done using two sequence numbers - one in the vm_area_struct and one in the
mm_struct. A VMA is considered locked when these sequence numbers are
equal. To mark a VMA as locked, we set the sequence number in its
vm_area_struct equal to the sequence number in the mm_struct. To unlock
all VMAs, we increment the mm_struct's sequence number. This provides an
efficient way to track locked VMAs and to drop the locks on all of them at
the end of the update.
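A rough sketch of that scheme (hypothetical field and helper names, and it
omits the interaction with the per-VMA reader/writer semaphore; the real
helpers are added in the "add per-VMA lock and helper functions" patch):

/* Sketch only: hypothetical names illustrating the mark/unmark scheme. */
static inline void vma_mark_locked(struct vm_area_struct *vma)
{
	/* The updater is assumed to hold mmap_lock for write. */
	vma->vm_lock_seq = vma->vm_mm->mm_lock_seq;
}

static inline bool vma_is_locked(struct vm_area_struct *vma)
{
	return vma->vm_lock_seq == vma->vm_mm->mm_lock_seq;
}

static inline void vma_mark_unlocked_all(struct mm_struct *mm)
{
	/* Bumping the mm-wide sequence number unmarks all locked VMAs at once. */
	mm->mm_lock_seq++;
}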
The patchset implements per-VMA locking only for anonymous pages that are
not in swap. If the initial proposal is considered acceptable, support for
swapped and file-backed page faults will be added.
Performance benchmarks show similar, although slightly smaller, benefits
than the SPF patchset (~75% of SPF's gains). Still, given its lower
complexity, this approach might be more desirable.
The patchset applies cleanly over 6.0-rc3.
The tree for testing is posted at [3].
[1] https://lore.kernel.org/all/20220128131006.67712-1-michel@lespinasse.org/
[2] https://lwn.net/Articles/893906/
[3] https://github.com/surenbaghdasaryan/linux/tree/per_vma_lock_rfc
Laurent Dufour (2):
powerpc/mm: try VMA lock-based page fault handling first
powerpc/mm: define ARCH_SUPPORTS_PER_VMA_LOCK
Michel Lespinasse (1):
mm: rcu safe VMA freeing
Suren Baghdasaryan (25):
mm: introduce CONFIG_PER_VMA_LOCK
mm: introduce __find_vma to be used without mmap_lock protection
mm: move mmap_lock assert function definitions
mm: add per-VMA lock and helper functions to control it
mm: mark VMA as locked whenever vma->vm_flags are modified
kernel/fork: mark VMAs as locked before copying pages during fork
mm/khugepaged: mark VMA as locked while collapsing a hugepage
mm/mempolicy: mark VMA as locked when changing protection policy
mm/mmap: mark VMAs as locked in vma_adjust
mm/mmap: mark VMAs as locked before merging or splitting them
mm/mremap: mark VMA as locked while remapping it to a new address
range
mm: conditionally mark VMA as locked in free_pgtables and
unmap_page_range
mm: mark VMAs as locked before isolating them
mm/mmap: mark adjacent VMAs as locked if they can grow into unmapped
area
kernel/fork: assert no VMA readers during its destruction
mm/mmap: prevent pagefault handler from racing with mmu_notifier
registration
mm: add FAULT_FLAG_VMA_LOCK flag
mm: disallow do_swap_page to handle page faults under VMA lock
mm: introduce per-VMA lock statistics
mm: introduce find_and_lock_anon_vma to be used from arch-specific
code
x86/mm: try VMA lock-based page fault handling first
x86/mm: define ARCH_SUPPORTS_PER_VMA_LOCK
arm64/mm: try VMA lock-based page fault handling first
arm64/mm: define ARCH_SUPPORTS_PER_VMA_LOCK
kernel/fork: throttle call_rcu() calls in vm_area_free
arch/arm64/Kconfig | 1 +
arch/arm64/mm/fault.c | 36 +++++++++
arch/powerpc/mm/fault.c | 41 ++++++++++
arch/powerpc/platforms/powernv/Kconfig | 1 +
arch/powerpc/platforms/pseries/Kconfig | 1 +
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 36 +++++++++
drivers/gpu/drm/i915/i915_gpu_error.c | 4 +-
fs/proc/task_mmu.c | 1 +
fs/userfaultfd.c | 6 ++
include/linux/mm.h | 104 ++++++++++++++++++++++++-
include/linux/mm_types.h | 33 ++++++--
include/linux/mmap_lock.h | 37 ++++++---
include/linux/vm_event_item.h | 6 ++
include/linux/vmstat.h | 6 ++
kernel/fork.c | 75 +++++++++++++++++-
mm/Kconfig | 13 ++++
mm/Kconfig.debug | 8 ++
mm/init-mm.c | 6 ++
mm/internal.h | 4 +-
mm/khugepaged.c | 1 +
mm/madvise.c | 1 +
mm/memory.c | 82 ++++++++++++++++---
mm/mempolicy.c | 6 +-
mm/mlock.c | 2 +
mm/mmap.c | 60 ++++++++++----
mm/mprotect.c | 1 +
mm/mremap.c | 1 +
mm/nommu.c | 2 +
mm/oom_kill.c | 3 +-
mm/vmstat.c | 6 ++
31 files changed, 531 insertions(+), 54 deletions(-)
--
2.37.2.672.g94769d06f0-goog