* [PATCH v6 0/3] per-vma locks in userfaultfd
@ 2024-02-13 21:57 Lokesh Gidra
2024-02-13 21:57 ` [PATCH v6 1/3] userfaultfd: move userfaultfd_ctx struct to header file Lokesh Gidra
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Lokesh Gidra @ 2024-02-13 21:57 UTC (permalink / raw)
To: akpm
Cc: lokeshgidra, linux-fsdevel, linux-mm, linux-kernel, selinux,
surenb, kernel-team, aarcange, peterx, david, axelrasmussen,
bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt,
Liam.Howlett
Performing userfaultfd operations (such as copy and move) in the
mmap_lock (read-mode) critical section causes significant contention
on the lock when operations requiring the lock in write-mode are
taking place concurrently. We can use per-vma locks instead to
significantly reduce this contention.
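(For context, a minimal userspace sketch of one such operation: a
UFFDIO_COPY ioctl resolving a fault by copying a page into the
faulting range. The kernel-side handling of this ioctl is what runs
in the mmap_lock read-mode critical section that this series moves
to per-vma locks. This is illustrative, not code from the series;
'uffd', 'src_buf', 'dst_addr' and 'page_size' are assumed to be set
up by the caller.)

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static int resolve_fault(int uffd, void *src_buf,
                             unsigned long dst_addr,
                             unsigned long page_size)
    {
            struct uffdio_copy copy = {
                    .dst = dst_addr,
                    .src = (unsigned long)src_buf,
                    .len = page_size,
                    .mode = 0,
            };

            do {
                    if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
                            return 0;
                    /* EAGAIN: mappings changed under us; retry */
            } while (errno == EAGAIN);

            return -1;
    }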
Android runtime's Garbage Collector uses userfaultfd for concurrent
compaction. mmap_lock contention during compaction can cause a
jittery experience for the user. In one such reproducible scenario,
we observed the following improvements with this patch-set:
- Wall clock time of the compaction phase came down from ~3s to <500ms
- Uninterruptible sleep time (across all threads in the process)
during compaction was ~10ms (none of it in mmap_lock), down from >20s
Changes since v5 [5]:
- Use abstract function names (like uffd_mfill_lock/uffd_mfill_unlock)
to avoid using too many #ifdef's, per Suren Baghdasaryan and Liam
Howlett
- Apply 'unlikely' (as before) to anon_vma-related checks, per Liam Howlett
- Eliminate redundant ptr->err->ptr conversion, per Liam Howlett
- Use 'int' instead of 'long' for error return type, per Liam Howlett
Changes since v4 [4]:
- Fix possible deadlock in find_and_lock_vmas() which may arise if
lock_vma() is used for both src and dst vmas.
- Ensure we lock the vma only once if src and dst vmas are the same.
- Fix error handling in move_pages() after successfully locking vmas.
- Introduce a helper function for finding the dst vma and preparing its
anon_vma when done in the mmap_lock critical section, per Liam Howlett.
- Introduce a helper function for finding the dst and src vmas when done
in the mmap_lock critical section.
Changes since v3 [3]:
- Rename function names to clearly reflect which lock is being taken,
per Liam Howlett.
- Have separate functions and abstractions in mm/userfaultfd.c to avoid
confusion around which lock is being acquired/released, per Liam Howlett.
- Prepare anon_vma for all private vmas, anonymous or file-backed,
per Jann Horn.
Changes since v2 [2]:
- Implement and use lock_vma(), which falls back to an mmap_lock
critical section to lock the VMA with its per-vma lock if
lock_vma_under_rcu() fails, per Liam R. Howlett. This helps simplify
the code and also avoids performing the entire userfaultfd operation
under mmap_lock.
Changes since v1 [1]:
- Rebase patches on the 'mm-unstable' branch
[1] https://lore.kernel.org/all/20240126182647.2748949-1-lokeshgidra@google.com/
[2] https://lore.kernel.org/all/20240129193512.123145-1-lokeshgidra@google.com/
[3] https://lore.kernel.org/all/20240206010919.1109005-1-lokeshgidra@google.com/
[4] https://lore.kernel.org/all/20240208212204.2043140-1-lokeshgidra@google.com/
[5] https://lore.kernel.org/all/20240213001920.3551772-1-lokeshgidra@google.com/
Lokesh Gidra (3):
userfaultfd: move userfaultfd_ctx struct to header file
userfaultfd: protect mmap_changing with rw_sem in userfaultfd_ctx
userfaultfd: use per-vma locks in userfaultfd operations
fs/userfaultfd.c | 86 ++-----
include/linux/userfaultfd_k.h | 75 ++++--
mm/userfaultfd.c | 438 +++++++++++++++++++++++++---------
3 files changed, 405 insertions(+), 194 deletions(-)
--
2.43.0.687.g38aa6559b0-goog
^ permalink raw reply [flat|nested] 9+ messages in thread* [PATCH v6 1/3] userfaultfd: move userfaultfd_ctx struct to header file 2024-02-13 21:57 [PATCH v6 0/3] per-vma locks in userfaultfd Lokesh Gidra @ 2024-02-13 21:57 ` Lokesh Gidra 2024-02-13 21:57 ` [PATCH v6 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx Lokesh Gidra ` (2 subsequent siblings) 3 siblings, 0 replies; 9+ messages in thread From: Lokesh Gidra @ 2024-02-13 21:57 UTC (permalink / raw) To: akpm Cc: lokeshgidra, linux-fsdevel, linux-mm, linux-kernel, selinux, surenb, kernel-team, aarcange, peterx, david, axelrasmussen, bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt, Liam.Howlett Moving the struct to userfaultfd_k.h to be accessible from mm/userfaultfd.c. There are no other changes in the struct. This is required to prepare for using per-vma locks in userfaultfd operations. Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org> --- fs/userfaultfd.c | 39 ----------------------------------- include/linux/userfaultfd_k.h | 39 +++++++++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+), 39 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 05c8e8a05427..58331b83d648 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -50,45 +50,6 @@ static struct ctl_table vm_userfaultfd_table[] = { static struct kmem_cache *userfaultfd_ctx_cachep __ro_after_init; -/* - * Start with fault_pending_wqh and fault_wqh so they're more likely - * to be in the same cacheline. - * - * Locking order: - * fd_wqh.lock - * fault_pending_wqh.lock - * fault_wqh.lock - * event_wqh.lock - * - * To avoid deadlocks, IRQs must be disabled when taking any of the above locks, - * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's - * also taken in IRQ context. - */ -struct userfaultfd_ctx { - /* waitqueue head for the pending (i.e. not read) userfaults */ - wait_queue_head_t fault_pending_wqh; - /* waitqueue head for the userfaults */ - wait_queue_head_t fault_wqh; - /* waitqueue head for the pseudo fd to wakeup poll/read */ - wait_queue_head_t fd_wqh; - /* waitqueue head for events */ - wait_queue_head_t event_wqh; - /* a refile sequence protected by fault_pending_wqh lock */ - seqcount_spinlock_t refile_seq; - /* pseudo fd refcounting */ - refcount_t refcount; - /* userfaultfd syscall flags */ - unsigned int flags; - /* features requested from the userspace */ - unsigned int features; - /* released */ - bool released; - /* memory mappings are changing because of non-cooperative event */ - atomic_t mmap_changing; - /* mm with one ore more vmas attached to this userfaultfd_ctx */ - struct mm_struct *mm; -}; - struct userfaultfd_fork_ctx { struct userfaultfd_ctx *orig; struct userfaultfd_ctx *new; diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index e4056547fbe6..691d928ee864 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -36,6 +36,45 @@ #define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK) #define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS) +/* + * Start with fault_pending_wqh and fault_wqh so they're more likely + * to be in the same cacheline. + * + * Locking order: + * fd_wqh.lock + * fault_pending_wqh.lock + * fault_wqh.lock + * event_wqh.lock + * + * To avoid deadlocks, IRQs must be disabled when taking any of the above locks, + * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's + * also taken in IRQ context. 
+ */ +struct userfaultfd_ctx { + /* waitqueue head for the pending (i.e. not read) userfaults */ + wait_queue_head_t fault_pending_wqh; + /* waitqueue head for the userfaults */ + wait_queue_head_t fault_wqh; + /* waitqueue head for the pseudo fd to wakeup poll/read */ + wait_queue_head_t fd_wqh; + /* waitqueue head for events */ + wait_queue_head_t event_wqh; + /* a refile sequence protected by fault_pending_wqh lock */ + seqcount_spinlock_t refile_seq; + /* pseudo fd refcounting */ + refcount_t refcount; + /* userfaultfd syscall flags */ + unsigned int flags; + /* features requested from the userspace */ + unsigned int features; + /* released */ + bool released; + /* memory mappings are changing because of non-cooperative event */ + atomic_t mmap_changing; + /* mm with one ore more vmas attached to this userfaultfd_ctx */ + struct mm_struct *mm; +}; + extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason); /* A combined operation mode + behavior flags. */ -- 2.43.0.687.g38aa6559b0-goog ^ permalink raw reply [flat|nested] 9+ messages in thread
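(An aside on the locking-order comment that patch 1 moves verbatim:
the documented nesting implies acquisitions like the sketch below.
This is illustrative only, not code from the series; IRQs are
disabled at the outermost lock because fd_wqh.lock can also be taken
from IRQ context via aio_poll().)

    spin_lock_irq(&ctx->fd_wqh.lock);
    spin_lock(&ctx->fault_pending_wqh.lock);
    spin_lock(&ctx->fault_wqh.lock);
    /* ... e.g. refile a userfault from fault_pending_wqh to fault_wqh ... */
    spin_unlock(&ctx->fault_wqh.lock);
    spin_unlock(&ctx->fault_pending_wqh.lock);
    spin_unlock_irq(&ctx->fd_wqh.lock);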
* [PATCH v6 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx 2024-02-13 21:57 [PATCH v6 0/3] per-vma locks in userfaultfd Lokesh Gidra 2024-02-13 21:57 ` [PATCH v6 1/3] userfaultfd: move userfaultfd_ctx struct to header file Lokesh Gidra @ 2024-02-13 21:57 ` Lokesh Gidra 2024-02-13 21:57 ` [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations Lokesh Gidra 2024-02-14 15:17 ` [PATCH v6 0/3] per-vma locks in userfaultfd Liam R. Howlett 3 siblings, 0 replies; 9+ messages in thread From: Lokesh Gidra @ 2024-02-13 21:57 UTC (permalink / raw) To: akpm Cc: lokeshgidra, linux-fsdevel, linux-mm, linux-kernel, selinux, surenb, kernel-team, aarcange, peterx, david, axelrasmussen, bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt, Liam.Howlett Increments and loads to mmap_changing are always in mmap_lock critical section. This ensures that if userspace requests event notification for non-cooperative operations (e.g. mremap), userfaultfd operations don't occur concurrently. This can be achieved by using a separate read-write semaphore in userfaultfd_ctx such that increments are done in write-mode and loads in read-mode, thereby eliminating the dependency on mmap_lock for this purpose. This is a preparatory step before we replace mmap_lock usage with per-vma locks in fill/move ioctls. Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org> --- fs/userfaultfd.c | 40 ++++++++++++---------- include/linux/userfaultfd_k.h | 31 ++++++++++-------- mm/userfaultfd.c | 62 ++++++++++++++++++++--------------- 3 files changed, 75 insertions(+), 58 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 58331b83d648..c00a021bcce4 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) ctx->flags = octx->flags; ctx->features = octx->features; ctx->released = false; + init_rwsem(&ctx->map_changing_lock); atomic_set(&ctx->mmap_changing, 0); ctx->mm = vma->vm_mm; mmgrab(ctx->mm); userfaultfd_ctx_get(octx); + down_write(&octx->map_changing_lock); atomic_inc(&octx->mmap_changing); + up_write(&octx->map_changing_lock); fctx->orig = octx; fctx->new = ctx; list_add_tail(&fctx->list, fcs); @@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma, if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { vm_ctx->ctx = ctx; userfaultfd_ctx_get(ctx); + down_write(&ctx->map_changing_lock); atomic_inc(&ctx->mmap_changing); + up_write(&ctx->map_changing_lock); } else { /* Drop uffd context if remap feature not enabled */ vma_start_write(vma); @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma, return true; userfaultfd_ctx_get(ctx); + down_write(&ctx->map_changing_lock); atomic_inc(&ctx->mmap_changing); + up_write(&ctx->map_changing_lock); mmap_read_unlock(mm); msg_init(&ewq.msg); @@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start, return -ENOMEM; userfaultfd_ctx_get(ctx); + down_write(&ctx->map_changing_lock); atomic_inc(&ctx->mmap_changing); + up_write(&ctx->map_changing_lock); unmap_ctx->ctx = ctx; unmap_ctx->start = start; unmap_ctx->end = end; @@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP) flags |= MFILL_ATOMIC_WP; if (mmget_not_zero(ctx->mm)) { - ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src, - uffdio_copy.len, &ctx->mmap_changing, - flags); + ret = 
mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src, + uffdio_copy.len, flags); mmput(ctx->mm); } else { return -ESRCH; @@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx, goto out; if (mmget_not_zero(ctx->mm)) { - ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start, - uffdio_zeropage.range.len, - &ctx->mmap_changing); + ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start, + uffdio_zeropage.range.len); mmput(ctx->mm); } else { return -ESRCH; @@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, return -EINVAL; if (mmget_not_zero(ctx->mm)) { - ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start, - uffdio_wp.range.len, mode_wp, - &ctx->mmap_changing); + ret = mwriteprotect_range(ctx, uffdio_wp.range.start, + uffdio_wp.range.len, mode_wp); mmput(ctx->mm); } else { return -ESRCH; @@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg) flags |= MFILL_ATOMIC_WP; if (mmget_not_zero(ctx->mm)) { - ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start, - uffdio_continue.range.len, - &ctx->mmap_changing, flags); + ret = mfill_atomic_continue(ctx, uffdio_continue.range.start, + uffdio_continue.range.len, flags); mmput(ctx->mm); } else { return -ESRCH; @@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long goto out; if (mmget_not_zero(ctx->mm)) { - ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start, - uffdio_poison.range.len, - &ctx->mmap_changing, 0); + ret = mfill_atomic_poison(ctx, uffdio_poison.range.start, + uffdio_poison.range.len, 0); mmput(ctx->mm); } else { return -ESRCH; @@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, if (mmget_not_zero(mm)) { mmap_read_lock(mm); - /* Re-check after taking mmap_lock */ + /* Re-check after taking map_changing_lock */ + down_read(&ctx->map_changing_lock); if (likely(!atomic_read(&ctx->mmap_changing))) ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, uffdio_move.len, uffdio_move.mode); else ret = -EAGAIN; - + up_read(&ctx->map_changing_lock); mmap_read_unlock(mm); mmput(mm); } else { @@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags) ctx->flags = flags; ctx->features = 0; ctx->released = false; + init_rwsem(&ctx->map_changing_lock); atomic_set(&ctx->mmap_changing, 0); ctx->mm = current->mm; /* prevent the mm struct to be freed */ diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index 691d928ee864..3210c3552976 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -69,6 +69,13 @@ struct userfaultfd_ctx { unsigned int features; /* released */ bool released; + /* + * Prevents userfaultfd operations (fill/move/wp) from happening while + * some non-cooperative event(s) is taking place. Increments are done + * in write-mode. Whereas, userfaultfd operations, which includes + * reading mmap_changing, is done under read-mode. 
+ */ + struct rw_semaphore map_changing_lock; /* memory mappings are changing because of non-cooperative event */ atomic_t mmap_changing; /* mm with one ore more vmas attached to this userfaultfd_ctx */ @@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd, unsigned long dst_addr, struct page *page, bool newly_allocated, uffd_flags_t flags); -extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, +extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, unsigned long src_start, unsigned long len, - atomic_t *mmap_changing, uffd_flags_t flags); -extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, + uffd_flags_t flags); +extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, unsigned long dst_start, - unsigned long len, - atomic_t *mmap_changing); -extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start, - unsigned long len, atomic_t *mmap_changing, - uffd_flags_t flags); -extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, - unsigned long len, atomic_t *mmap_changing, - uffd_flags_t flags); -extern int mwriteprotect_range(struct mm_struct *dst_mm, - unsigned long start, unsigned long len, - bool enable_wp, atomic_t *mmap_changing); + unsigned long len); +extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start, + unsigned long len, uffd_flags_t flags); +extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, + unsigned long len, uffd_flags_t flags); +extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, + unsigned long len, bool enable_wp); extern long uffd_wp_range(struct vm_area_struct *vma, unsigned long start, unsigned long len, bool enable_wp); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 9cc93cc1330b..74aad0831e40 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) * called with mmap_lock held, it will release mmap_lock before returning. */ static __always_inline ssize_t mfill_atomic_hugetlb( + struct userfaultfd_ctx *ctx, struct vm_area_struct *dst_vma, unsigned long dst_start, unsigned long src_start, unsigned long len, - atomic_t *mmap_changing, uffd_flags_t flags) { struct mm_struct *dst_mm = dst_vma->vm_mm; @@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( * feature is not supported. */ if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); return -EINVAL; } @@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( cond_resched(); if (unlikely(err == -ENOENT)) { + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); BUG_ON(!folio); @@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb( goto out; } mmap_read_lock(dst_mm); + down_read(&ctx->map_changing_lock); /* * If memory mappings are changing because of non-cooperative * operation (e.g. 
mremap) running in parallel, bail out and * request the user to retry later */ - if (mmap_changing && atomic_read(mmap_changing)) { + if (atomic_read(&ctx->mmap_changing)) { err = -EAGAIN; break; } @@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( } out_unlock: + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); out: if (folio) @@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb( } #else /* !CONFIG_HUGETLB_PAGE */ /* fail at build time if gcc attempts to use this */ -extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma, +extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx, + struct vm_area_struct *dst_vma, unsigned long dst_start, unsigned long src_start, unsigned long len, - atomic_t *mmap_changing, uffd_flags_t flags); #endif /* CONFIG_HUGETLB_PAGE */ @@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd, return err; } -static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, +static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, unsigned long dst_start, unsigned long src_start, unsigned long len, - atomic_t *mmap_changing, uffd_flags_t flags) { + struct mm_struct *dst_mm = ctx->mm; struct vm_area_struct *dst_vma; ssize_t err; pmd_t *dst_pmd; @@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, * operation (e.g. mremap) running in parallel, bail out and * request the user to retry later */ + down_read(&ctx->map_changing_lock); err = -EAGAIN; - if (mmap_changing && atomic_read(mmap_changing)) + if (atomic_read(&ctx->mmap_changing)) goto out_unlock; /* @@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, * If this is a HUGETLB vma, pass off to appropriate routine */ if (is_vm_hugetlb_page(dst_vma)) - return mfill_atomic_hugetlb(dst_vma, dst_start, src_start, - len, mmap_changing, flags); + return mfill_atomic_hugetlb(ctx, dst_vma, dst_start, + src_start, len, flags); if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma)) goto out_unlock; @@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, if (unlikely(err == -ENOENT)) { void *kaddr; + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); BUG_ON(!folio); @@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, } out_unlock: + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); out: if (folio) @@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm, return copied ? 
copied : err; } -ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start, +ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start, unsigned long src_start, unsigned long len, - atomic_t *mmap_changing, uffd_flags_t flags) + uffd_flags_t flags) { - return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing, + return mfill_atomic(ctx, dst_start, src_start, len, uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY)); } -ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start, - unsigned long len, atomic_t *mmap_changing) +ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx, + unsigned long start, + unsigned long len) { - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, + return mfill_atomic(ctx, start, 0, len, uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE)); } -ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start, - unsigned long len, atomic_t *mmap_changing, - uffd_flags_t flags) +ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start, + unsigned long len, uffd_flags_t flags) { - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, + return mfill_atomic(ctx, start, 0, len, uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE)); } -ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start, - unsigned long len, atomic_t *mmap_changing, - uffd_flags_t flags) +ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start, + unsigned long len, uffd_flags_t flags) { - return mfill_atomic(dst_mm, start, 0, len, mmap_changing, + return mfill_atomic(ctx, start, 0, len, uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON)); } @@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma, return ret; } -int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, - unsigned long len, bool enable_wp, - atomic_t *mmap_changing) +int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start, + unsigned long len, bool enable_wp) { + struct mm_struct *dst_mm = ctx->mm; unsigned long end = start + len; unsigned long _start, _end; struct vm_area_struct *dst_vma; @@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, * operation (e.g. mremap) running in parallel, bail out and * request the user to retry later */ + down_read(&ctx->map_changing_lock); err = -EAGAIN; - if (mmap_changing && atomic_read(mmap_changing)) + if (atomic_read(&ctx->mmap_changing)) goto out_unlock; err = -ENOENT; @@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start, err = 0; } out_unlock: + up_read(&ctx->map_changing_lock); mmap_read_unlock(dst_mm); return err; } -- 2.43.0.687.g38aa6559b0-goog ^ permalink raw reply [flat|nested] 9+ messages in thread
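(The essence of the patch, condensed from the diff above: every
non-cooperative event increments mmap_changing inside a write-mode
critical section of the new map_changing_lock, while every
userfaultfd operation reads it, and keeps working, under the read
mode. An operation that observes mmap_changing == 0 is therefore
guaranteed that no event completes until it drops the read lock.
A pattern sketch, not a standalone compilable unit:)

    /* event side, e.g. dup_userfaultfd() or userfaultfd_remove(): */
    down_write(&ctx->map_changing_lock);
    atomic_inc(&ctx->mmap_changing);
    up_write(&ctx->map_changing_lock);

    /* operation side, e.g. mfill_atomic(): */
    down_read(&ctx->map_changing_lock);
    err = -EAGAIN;
    if (atomic_read(&ctx->mmap_changing))
            goto out_unlock;
    /* ... perform the operation while still holding the read lock ... */
    out_unlock:
            up_read(&ctx->map_changing_lock);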
* [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations 2024-02-13 21:57 [PATCH v6 0/3] per-vma locks in userfaultfd Lokesh Gidra 2024-02-13 21:57 ` [PATCH v6 1/3] userfaultfd: move userfaultfd_ctx struct to header file Lokesh Gidra 2024-02-13 21:57 ` [PATCH v6 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx Lokesh Gidra @ 2024-02-13 21:57 ` Lokesh Gidra 2024-02-14 22:12 ` Ryan Roberts 2024-02-14 15:17 ` [PATCH v6 0/3] per-vma locks in userfaultfd Liam R. Howlett 3 siblings, 1 reply; 9+ messages in thread From: Lokesh Gidra @ 2024-02-13 21:57 UTC (permalink / raw) To: akpm Cc: lokeshgidra, linux-fsdevel, linux-mm, linux-kernel, selinux, surenb, kernel-team, aarcange, peterx, david, axelrasmussen, bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt, Liam.Howlett All userfaultfd operations, except write-protect, opportunistically use per-vma locks to lock vmas. On failure, attempt again inside mmap_lock critical section. Write-protect operation requires mmap_lock as it iterates over multiple vmas. Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> --- fs/userfaultfd.c | 13 +- include/linux/userfaultfd_k.h | 5 +- mm/userfaultfd.c | 380 ++++++++++++++++++++++++++-------- 3 files changed, 296 insertions(+), 102 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index c00a021bcce4..60dcfafdc11a 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, return -EINVAL; if (mmget_not_zero(mm)) { - mmap_read_lock(mm); - - /* Re-check after taking map_changing_lock */ - down_read(&ctx->map_changing_lock); - if (likely(!atomic_read(&ctx->mmap_changing))) - ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, - uffdio_move.len, uffdio_move.mode); - else - ret = -EAGAIN; - up_read(&ctx->map_changing_lock); - mmap_read_unlock(mm); + ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src, + uffdio_move.len, uffdio_move.mode); mmput(mm); } else { return -ESRCH; diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index 3210c3552976..05d59f74fc88 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma, /* move_pages */ void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2); void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2); -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, - unsigned long dst_start, unsigned long src_start, - unsigned long len, __u64 flags); +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start, + unsigned long src_start, unsigned long len, __u64 flags); int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 74aad0831e40..4744d6a96f96 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -20,19 +20,11 @@ #include "internal.h" static __always_inline -struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, - unsigned long dst_start, - unsigned long len) +bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end) { - /* - * Make sure that the dst range is both valid and fully within a - * single existing vma. 
- */ - struct vm_area_struct *dst_vma; - - dst_vma = find_vma(dst_mm, dst_start); - if (!range_in_vma(dst_vma, dst_start, dst_start + len)) - return NULL; + /* Make sure that the dst range is fully within dst_vma. */ + if (dst_end > dst_vma->vm_end) + return false; /* * Check the vma is registered in uffd, this is required to @@ -40,11 +32,122 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, * time. */ if (!dst_vma->vm_userfaultfd_ctx.ctx) - return NULL; + return false; + + return true; +} + +static __always_inline +struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm, + unsigned long addr) +{ + struct vm_area_struct *vma; + + mmap_assert_locked(mm); + vma = vma_lookup(mm, addr); + if (!vma) + vma = ERR_PTR(-ENOENT); + else if (!(vma->vm_flags & VM_SHARED) && + unlikely(anon_vma_prepare(vma))) + vma = ERR_PTR(-ENOMEM); + + return vma; +} + +#ifdef CONFIG_PER_VMA_LOCK +/* + * lock_vma() - Lookup and lock vma corresponding to @address. + * @mm: mm to search vma in. + * @address: address that the vma should contain. + * + * Should be called without holding mmap_lock. vma should be unlocked after use + * with unlock_vma(). + * + * Return: A locked vma containing @address, -ENOENT if no vma is found, or + * -ENOMEM if anon_vma couldn't be allocated. + */ +static struct vm_area_struct *lock_vma(struct mm_struct *mm, + unsigned long address) +{ + struct vm_area_struct *vma; + + vma = lock_vma_under_rcu(mm, address); + if (vma) { + /* + * lock_vma_under_rcu() only checks anon_vma for private + * anonymous mappings. But we need to ensure it is assigned in + * private file-backed vmas as well. + */ + if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma)) + vma_end_read(vma); + else + return vma; + } + + mmap_read_lock(mm); + vma = find_vma_and_prepare_anon(mm, address); + if (!IS_ERR(vma)) { + /* + * We cannot use vma_start_read() as it may fail due to + * false locked (see comment in vma_start_read()). We + * can avoid that by directly locking vm_lock under + * mmap_lock, which guarantees that nobody can lock the + * vma for write (vma_start_write()) under us. + */ + down_read(&vma->vm_lock->lock); + } + + mmap_read_unlock(mm); + return vma; +} + +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm, + unsigned long dst_start, + unsigned long len) +{ + struct vm_area_struct *dst_vma; + dst_vma = lock_vma(dst_mm, dst_start); + if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len)) + return dst_vma; + + vma_end_read(dst_vma); + return ERR_PTR(-ENOENT); +} + +static void uffd_mfill_unlock(struct vm_area_struct *vma) +{ + vma_end_read(vma); +} + +#else + +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm, + unsigned long dst_start, + unsigned long len) +{ + struct vm_area_struct *dst_vma; + + mmap_read_lock(dst_mm); + dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start); + if (IS_ERR(dst_vma)) + goto out_unlock; + + if (validate_dst_vma(dst_vma, dst_start + len)) + return dst_vma; + + dst_vma = ERR_PTR(-ENOENT); +out_unlock: + mmap_read_unlock(dst_mm); return dst_vma; } +static void uffd_mfill_unlock(struct vm_area_struct *vma) +{ + mmap_read_unlock(vma->vm_mm); +} +#endif + /* Check if dst_addr is outside of file's size. Must be called with ptl held. */ static bool mfill_file_over_size(struct vm_area_struct *dst_vma, unsigned long dst_addr) @@ -350,7 +453,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) #ifdef CONFIG_HUGETLB_PAGE /* * mfill_atomic processing for HUGETLB vmas. 
Note that this routine is - * called with mmap_lock held, it will release mmap_lock before returning. + * called with either vma-lock or mmap_lock held, it will release the lock + * before returning. */ static __always_inline ssize_t mfill_atomic_hugetlb( struct userfaultfd_ctx *ctx, @@ -361,7 +465,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb( uffd_flags_t flags) { struct mm_struct *dst_mm = dst_vma->vm_mm; - int vm_shared = dst_vma->vm_flags & VM_SHARED; ssize_t err; pte_t *dst_pte; unsigned long src_addr, dst_addr; @@ -380,7 +483,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( */ if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { up_read(&ctx->map_changing_lock); - mmap_read_unlock(dst_mm); + uffd_mfill_unlock(dst_vma); return -EINVAL; } @@ -403,24 +506,28 @@ static __always_inline ssize_t mfill_atomic_hugetlb( * retry, dst_vma will be set to NULL and we must lookup again. */ if (!dst_vma) { + dst_vma = uffd_mfill_lock(dst_mm, dst_start, len); + if (IS_ERR(dst_vma)) { + err = PTR_ERR(dst_vma); + goto out; + } + err = -ENOENT; - dst_vma = find_dst_vma(dst_mm, dst_start, len); - if (!dst_vma || !is_vm_hugetlb_page(dst_vma)) - goto out_unlock; + if (!is_vm_hugetlb_page(dst_vma)) + goto out_unlock_vma; err = -EINVAL; if (vma_hpagesize != vma_kernel_pagesize(dst_vma)) - goto out_unlock; - - vm_shared = dst_vma->vm_flags & VM_SHARED; - } + goto out_unlock_vma; - /* - * If not shared, ensure the dst_vma has a anon_vma. - */ - err = -ENOMEM; - if (!vm_shared) { - if (unlikely(anon_vma_prepare(dst_vma))) + /* + * If memory mappings are changing because of non-cooperative + * operation (e.g. mremap) running in parallel, bail out and + * request the user to retry later + */ + down_read(&ctx->map_changing_lock); + err = -EAGAIN; + if (atomic_read(&ctx->mmap_changing)) goto out_unlock; } @@ -465,7 +572,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( if (unlikely(err == -ENOENT)) { up_read(&ctx->map_changing_lock); - mmap_read_unlock(dst_mm); + uffd_mfill_unlock(dst_vma); BUG_ON(!folio); err = copy_folio_from_user(folio, @@ -474,17 +581,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb( err = -EFAULT; goto out; } - mmap_read_lock(dst_mm); - down_read(&ctx->map_changing_lock); - /* - * If memory mappings are changing because of non-cooperative - * operation (e.g. mremap) running in parallel, bail out and - * request the user to retry later - */ - if (atomic_read(&ctx->mmap_changing)) { - err = -EAGAIN; - break; - } dst_vma = NULL; goto retry; @@ -505,7 +601,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb( out_unlock: up_read(&ctx->map_changing_lock); - mmap_read_unlock(dst_mm); +out_unlock_vma: + uffd_mfill_unlock(dst_vma); out: if (folio) folio_put(folio); @@ -597,7 +694,15 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, copied = 0; folio = NULL; retry: - mmap_read_lock(dst_mm); + /* + * Make sure the vma is not shared, that the dst range is + * both valid and fully within a single existing vma. + */ + dst_vma = uffd_mfill_lock(dst_mm, dst_start, len); + if (IS_ERR(dst_vma)) { + err = PTR_ERR(dst_vma); + goto out; + } /* * If memory mappings are changing because of non-cooperative @@ -609,15 +714,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, if (atomic_read(&ctx->mmap_changing)) goto out_unlock; - /* - * Make sure the vma is not shared, that the dst range is - * both valid and fully within a single existing vma. 
- */ - err = -ENOENT; - dst_vma = find_dst_vma(dst_mm, dst_start, len); - if (!dst_vma) - goto out_unlock; - err = -EINVAL; /* * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but @@ -647,16 +743,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE)) goto out_unlock; - /* - * Ensure the dst_vma has a anon_vma or this page - * would get a NULL anon_vma when moved in the - * dst_vma. - */ - err = -ENOMEM; - if (!(dst_vma->vm_flags & VM_SHARED) && - unlikely(anon_vma_prepare(dst_vma))) - goto out_unlock; - while (src_addr < src_start + len) { pmd_t dst_pmdval; @@ -699,7 +785,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, void *kaddr; up_read(&ctx->map_changing_lock); - mmap_read_unlock(dst_mm); + uffd_mfill_unlock(dst_vma); BUG_ON(!folio); kaddr = kmap_local_folio(folio, 0); @@ -730,7 +816,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx, out_unlock: up_read(&ctx->map_changing_lock); - mmap_read_unlock(dst_mm); + uffd_mfill_unlock(dst_vma); out: if (folio) folio_put(folio); @@ -1267,27 +1353,136 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx, if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma)) return -EINVAL; + return 0; +} + +static __always_inline +int find_vmas_mm_locked(struct mm_struct *mm, + unsigned long dst_start, + unsigned long src_start, + struct vm_area_struct **dst_vmap, + struct vm_area_struct **src_vmap) +{ + struct vm_area_struct *vma; + + mmap_assert_locked(mm); + vma = find_vma_and_prepare_anon(mm, dst_start); + if (IS_ERR(vma)) + return PTR_ERR(vma); + + *dst_vmap = vma; + /* Skip finding src_vma if src_start is in dst_vma */ + if (src_start >= vma->vm_start && src_start < vma->vm_end) + goto out_success; + + vma = vma_lookup(mm, src_start); + if (!vma) + return -ENOENT; +out_success: + *src_vmap = vma; + return 0; +} + +#ifdef CONFIG_PER_VMA_LOCK +static int uffd_move_lock(struct mm_struct *mm, + unsigned long dst_start, + unsigned long src_start, + struct vm_area_struct **dst_vmap, + struct vm_area_struct **src_vmap) +{ + struct vm_area_struct *vma; + int err; + + vma = lock_vma(mm, dst_start); + if (IS_ERR(vma)) + return PTR_ERR(vma); + + *dst_vmap = vma; /* - * Ensure the dst_vma has a anon_vma or this page - * would get a NULL anon_vma when moved in the - * dst_vma. + * Skip finding src_vma if src_start is in dst_vma. This also ensures + * that we don't lock the same vma twice. */ - if (unlikely(anon_vma_prepare(dst_vma))) - return -ENOMEM; + if (src_start >= vma->vm_start && src_start < vma->vm_end) { + *src_vmap = vma; + return 0; + } - return 0; + /* + * Using lock_vma() to get src_vma can lead to following deadlock: + * + * Thread1 Thread2 + * ------- ------- + * vma_start_read(dst_vma) + * mmap_write_lock(mm) + * vma_start_write(src_vma) + * vma_start_read(src_vma) + * mmap_read_lock(mm) + * vma_start_write(dst_vma) + */ + *src_vmap = lock_vma_under_rcu(mm, src_start); + if (likely(*src_vmap)) + return 0; + + /* Undo any locking and retry in mmap_lock critical section */ + vma_end_read(*dst_vmap); + + mmap_read_lock(mm); + err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap); + if (!err) { + /* + * See comment in lock_vma() as to why not using + * vma_start_read() here. 
+ */ + down_read(&(*dst_vmap)->vm_lock->lock); + if (*dst_vmap != *src_vmap) + down_read(&(*src_vmap)->vm_lock->lock); + } + mmap_read_unlock(mm); + return err; +} + +static void uffd_move_unlock(struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma) +{ + vma_end_read(src_vma); + if (src_vma != dst_vma) + vma_end_read(dst_vma); } +#else + +static int uffd_move_lock(struct mm_struct *mm, + unsigned long dst_start, + unsigned long src_start, + struct vm_area_struct **dst_vmap, + struct vm_area_struct **src_vmap) +{ + int err; + + mmap_read_lock(mm); + err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap); + if (err) + mmap_read_unlock(mm); + return err; +} + +static void uffd_move_unlock(struct vm_area_struct *dst_vma, + struct vm_area_struct *src_vma) +{ + mmap_assert_locked(src_vma->vm_mm); + mmap_read_unlock(dst_vma->vm_mm); +} +#endif + /** * move_pages - move arbitrary anonymous pages of an existing vma * @ctx: pointer to the userfaultfd context - * @mm: the address space to move pages * @dst_start: start of the destination virtual memory range * @src_start: start of the source virtual memory range * @len: length of the virtual memory range * @mode: flags from uffdio_move.mode * - * Must be called with mmap_lock held for read. + * It will either use the mmap_lock in read mode or per-vma locks * * move_pages() remaps arbitrary anonymous pages atomically in zero * copy. It only works on non shared anonymous pages because those can @@ -1355,10 +1550,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx, * could be obtained. This is the only additional complexity added to * the rmap code to provide this anonymous page remapping functionality. */ -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, - unsigned long dst_start, unsigned long src_start, - unsigned long len, __u64 mode) +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start, + unsigned long src_start, unsigned long len, __u64 mode) { + struct mm_struct *mm = ctx->mm; struct vm_area_struct *src_vma, *dst_vma; unsigned long src_addr, dst_addr; pmd_t *src_pmd, *dst_pmd; @@ -1376,28 +1571,34 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, WARN_ON_ONCE(dst_start + len <= dst_start)) goto out; + err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma); + if (err) + goto out; + + /* Re-check after taking map_changing_lock */ + err = -EAGAIN; + down_read(&ctx->map_changing_lock); + if (likely(atomic_read(&ctx->mmap_changing))) + goto out_unlock; /* * Make sure the vma is not shared, that the src and dst remap * ranges are both valid and fully within a single existing * vma. 
*/ - src_vma = find_vma(mm, src_start); - if (!src_vma || (src_vma->vm_flags & VM_SHARED)) - goto out; - if (src_start < src_vma->vm_start || - src_start + len > src_vma->vm_end) - goto out; + err = -EINVAL; + if (src_vma->vm_flags & VM_SHARED) + goto out_unlock; + if (src_start + len > src_vma->vm_end) + goto out_unlock; - dst_vma = find_vma(mm, dst_start); - if (!dst_vma || (dst_vma->vm_flags & VM_SHARED)) - goto out; - if (dst_start < dst_vma->vm_start || - dst_start + len > dst_vma->vm_end) - goto out; + if (dst_vma->vm_flags & VM_SHARED) + goto out_unlock; + if (dst_start + len > dst_vma->vm_end) + goto out_unlock; err = validate_move_areas(ctx, src_vma, dst_vma); if (err) - goto out; + goto out_unlock; for (src_addr = src_start, dst_addr = dst_start; src_addr < src_start + len;) { @@ -1514,6 +1715,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, moved += step_size; } +out_unlock: + up_read(&ctx->map_changing_lock); + uffd_move_unlock(dst_vma, src_vma); out: VM_WARN_ON(moved < 0); VM_WARN_ON(err > 0); -- 2.43.0.687.g38aa6559b0-goog ^ permalink raw reply [flat|nested] 9+ messages in thread
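(Condensed shape of the resulting mfill path, for readers skimming
the diff above: the vma is now locked opportunistically with its
per-vma lock, falling back to an mmap_lock critical section only for
the lookup, and the mmap_changing check from patch 2 happens under
the vma lock. A sketch of the control flow, not the literal code:)

    dst_vma = uffd_mfill_lock(dst_mm, dst_start, len); /* vma lock held */
    if (IS_ERR(dst_vma))
            return PTR_ERR(dst_vma);

    down_read(&ctx->map_changing_lock);
    if (atomic_read(&ctx->mmap_changing)) {
            err = -EAGAIN;  /* non-cooperative event in flight; retry */
    } else {
            err = 0;
            /* ... copy/zeropage/continue/poison into dst_vma ... */
    }
    up_read(&ctx->map_changing_lock);
    uffd_mfill_unlock(dst_vma); /* vma_end_read() or mmap_read_unlock() */
    return err;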
* Re: [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations 2024-02-13 21:57 ` [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations Lokesh Gidra @ 2024-02-14 22:12 ` Ryan Roberts 2024-02-14 22:20 ` Suren Baghdasaryan 0 siblings, 1 reply; 9+ messages in thread From: Ryan Roberts @ 2024-02-14 22:12 UTC (permalink / raw) To: Lokesh Gidra, akpm Cc: linux-fsdevel, linux-mm, linux-kernel, selinux, surenb, kernel-team, aarcange, peterx, david, axelrasmussen, bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt, Liam.Howlett On 13/02/2024 21:57, Lokesh Gidra wrote: > All userfaultfd operations, except write-protect, opportunistically use > per-vma locks to lock vmas. On failure, attempt again inside mmap_lock > critical section. > > Write-protect operation requires mmap_lock as it iterates over multiple > vmas. Hi, I'm seeing the below OOPS when running on arm64 against mm-unstable. It can be reliably reproduced by running the `uffd-unit-tests` mm selftest. Bisecting mm-unstable leads to this patch: # bad: [649936c3db47f8f75b9b927e4edf5922a0f240a6] mm: add swappiness= arg to memory.reclaim git bisect bad 649936c3db47f8f75b9b927e4edf5922a0f240a6 # good: [54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478] Linux 6.8-rc3 git bisect good 54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478 # good: [10794cd18bb46c91c75fac44e551201cfe006baf] mm: zswap: warn when referencing a dead entry git bisect good 10794cd18bb46c91c75fac44e551201cfe006baf # good: [cb769b427edc7f46c7c764daa3421725bffdf315] mm-memcg-use-larger-batches-for-proactive-reclaim-v4 git bisect good cb769b427edc7f46c7c764daa3421725bffdf315 # good: [cfc5c1be4010c9972bc3f3d991235e8ea6928672] mm/z3fold: remove unneeded spinlock git bisect good cfc5c1be4010c9972bc3f3d991235e8ea6928672 # good: [31094bce101651acb4747ab25d614bc893d65c89] kasan/test: avoid gcc warning for intentional overflow git bisect good 31094bce101651acb4747ab25d614bc893d65c89 # bad: [55be0b2cd1fbf00e036e2e48ee0999599135af66] zram: do not allocate physically contiguous strm buffers git bisect bad 55be0b2cd1fbf00e036e2e48ee0999599135af66 # good: [b11ca4a0a13024c0175dde56f9bd848803eddcd2] mm/mglru: improve swappiness handling git bisect good b11ca4a0a13024c0175dde56f9bd848803eddcd2 # good: [22e7ccd57a1220afcd7c4da1f3005fd04d70014e] userfaultfd: move userfaultfd_ctx struct to header file git bisect good 22e7ccd57a1220afcd7c4da1f3005fd04d70014e # bad: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations git bisect bad 0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d # good: [8459e1c7acbe4442c6c0eef59825da1339e0a3cf] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx git bisect good 8459e1c7acbe4442c6c0eef59825da1339e0a3cf # first bad commit: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations This is the oops: [ 21.280142] mm ffff049340ed0000 task_size 281474976710656 [ 21.280142] get_unmapped_area ffffb3dbdc725e48 [ 21.280142] mmap_base 281474842492928 mmap_legacy_base 0 [ 21.280142] pgd ffff04935829a000 mm_users 3 mm_count 4 pgtables_bytes 98304 map_count 21 [ 21.280142] hiwater_rss 2167 hiwater_vm 8234 total_vm 4a44 locked_vm 0 [ 21.280142] pinned_vm 0 data_vm 4835 exec_vm 1c6 stack_vm 21 [ 21.280142] start_code aaaaaaaa0000 end_code aaaaaaab1b28 start_data aaaaaaac2a60 end_data aaaaaaac3410 [ 21.280142] start_brk aaaaaaac5000 brk aaaaaaae6000 start_stack fffffffff6b0 [ 21.280142] arg_start fffffffff8c7 arg_end fffffffff8e2 env_start fffffffff8e2 env_end 
ffffffffffdd [ 21.280142] binfmt ffffb3dbdf2f8cf8 flags 82008d [ 21.280142] ioctx_table 0000000000000000 [ 21.280142] owner ffff049311f02280 exe_file ffff049355314c00 [ 21.280142] notifier_subscriptions 0000000000000000 [ 21.280142] numa_next_scan 4294897864 numa_scan_offset 0 numa_scan_seq 0 [ 21.280142] tlb_flush_pending 0 [ 21.280142] def_flags: 0x0() [ 21.283302] kernel BUG at include/linux/mmap_lock.h:66! [ 21.283481] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP [ 21.283720] Modules linked in: [ 21.283867] CPU: 3 PID: 1226 Comm: uffd-unit-tests Not tainted 6.8.0-rc3-00297-g0a0d05338f13 #19 [ 21.284306] Hardware name: linux,dummy-virt (DT) [ 21.284495] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) [ 21.284833] pc : move_pages_huge_pmd+0x4d0/0x8a8 [ 21.285072] lr : move_pages_huge_pmd+0x4d0/0x8a8 [ 21.285289] sp : ffff800088573b30 [ 21.285439] x29: ffff800088573b30 x28: ffff049358292d78 x27: fffffc0000000000 [ 21.285770] x26: ffff049358292d78 x25: 0000fffff5e00000 x24: ffff049359db3730 [ 21.286097] x23: 0000fffff6000000 x22: ffff049340ed0000 x21: ffff049359db3678 [ 21.286422] x20: 0000000000000002 x19: fffffc124d60a480 x18: 0000000000000006 [ 21.286747] x17: 626c740a30207165 x16: 735f6e6163735f61 x15: 6d756e2030207465 [ 21.287084] x14: 7366666f5f6e6163 x13: 2928307830203a73 x12: 67616c665f666564 [ 21.287417] x11: 0a3020676e69646e x10: ffffb3dbdf28bff8 x9 : ffffb3dbdc620f80 [ 21.287745] x8 : 00000000ffffefff x7 : ffffb3dbdf28bff8 x6 : 80000000fffff000 [ 21.288082] x5 : ffff04933ffdcd08 x4 : 0000000000000000 x3 : 0000000000000000 [ 21.288364] x2 : 0000000000000000 x1 : ffff049340106780 x0 : 0000000000000328 [ 21.288712] Call trace: [ 21.288832] move_pages_huge_pmd+0x4d0/0x8a8 [ 21.289058] move_pages+0x8b8/0x13d8 [ 21.289205] userfaultfd_ioctl+0x11e0/0x1e90 [ 21.289411] __arm64_sys_ioctl+0xb4/0x100 [ 21.289579] invoke_syscall+0x50/0x128 [ 21.289738] el0_svc_common.constprop.0+0x48/0xf0 [ 21.289942] do_el0_svc+0x24/0x38 [ 21.290078] el0_svc+0x34/0xb8 [ 21.290219] el0t_64_sync_handler+0x100/0x130 [ 21.290387] el0t_64_sync+0x190/0x198 [ 21.290535] Code: 17ffff7e a90363f7 a9046bf9 97fde2a7 (d4210000) [ 21.290776] ---[ end trace 0000000000000000 ]--- [ 21.290966] note: uffd-unit-tests[1226] exited with irqs disabled [ 21.291254] note: uffd-unit-tests[1226] exited with preempt_count 2 [ 21.292864] ------------[ cut here ]------------ [ 21.293020] WARNING: CPU: 3 PID: 0 at kernel/context_tracking.c:128 ct_kernel_exit.constprop.0+0x108/0x120 [ 21.293711] Modules linked in: [ 21.294070] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G D 6.8.0-rc3-00297-g0a0d05338f13 #19 [ 21.295447] Hardware name: linux,dummy-virt (DT) [ 21.295850] pstate: 214003c5 (nzCv DAIF +PAN -UAO -TCO +DIT -SSBS BTYPE=--) [ 21.296152] pc : ct_kernel_exit.constprop.0+0x108/0x120 [ 21.296685] lr : ct_idle_enter+0x10/0x20 [ 21.296848] sp : ffff8000801b3dc0 [ 21.296983] x29: ffff8000801b3dc0 x28: 0000000000000000 x27: 0000000000000000 [ 21.297255] x26: 0000000000000000 x25: ffff049301496780 x24: 0000000000000000 [ 21.297533] x23: 0000000000000000 x22: ffff049301496780 x21: ffffb3dbdf209ae0 [ 21.297844] x20: ffffb3dbdf209a20 x19: ffff04933ffeace0 x18: ffff800088573648 [ 21.298152] x17: ffffb3dbdf65def0 x16: 00000000d72145a7 x15: 00000000f1a17b35 [ 21.298439] x14: 0000000000000004 x13: ffffb3dbdf22c3a8 x12: 0000000000000000 [ 21.298738] x11: ffff0493541472a8 x10: 2b58349b7608bfc4 x9 : ffffb3dbdc57f43c [ 21.299031] x8 : ffff049301497858 x7 : 00000000000001c1 x6 : 000000001eeeb49c [ 21.299329] x5 : 
4000000000000002 x4 : ffff50b7618d7000 x3 : ffff8000801b3dc0 [ 21.299615] x2 : ffffb3dbde713ce0 x1 : 4000000000000000 x0 : ffffb3dbde713ce0 [ 21.299900] Call trace: [ 21.299998] ct_kernel_exit.constprop.0+0x108/0x120 [ 21.300460] ct_idle_enter+0x10/0x20 [ 21.300690] default_idle_call+0x3c/0x170 [ 21.300861] do_idle+0x218/0x278 [ 21.300992] cpu_startup_entry+0x3c/0x50 [ 21.301148] secondary_start_kernel+0x130/0x158 [ 21.301329] __secondary_switched+0xb8/0xc0 [ 21.301641] ---[ end trace 0000000000000000 ]--- Can this series be removed from mm-unstable until fixed, please? Thanks, Ryan > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > --- > fs/userfaultfd.c | 13 +- > include/linux/userfaultfd_k.h | 5 +- > mm/userfaultfd.c | 380 ++++++++++++++++++++++++++-------- > 3 files changed, 296 insertions(+), 102 deletions(-) > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > index c00a021bcce4..60dcfafdc11a 100644 > --- a/fs/userfaultfd.c > +++ b/fs/userfaultfd.c > @@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, > return -EINVAL; > > if (mmget_not_zero(mm)) { > - mmap_read_lock(mm); > - > - /* Re-check after taking map_changing_lock */ > - down_read(&ctx->map_changing_lock); > - if (likely(!atomic_read(&ctx->mmap_changing))) > - ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, > - uffdio_move.len, uffdio_move.mode); > - else > - ret = -EAGAIN; > - up_read(&ctx->map_changing_lock); > - mmap_read_unlock(mm); > + ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src, > + uffdio_move.len, uffdio_move.mode); > mmput(mm); > } else { > return -ESRCH; > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h > index 3210c3552976..05d59f74fc88 100644 > --- a/include/linux/userfaultfd_k.h > +++ b/include/linux/userfaultfd_k.h > @@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma, > /* move_pages */ > void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2); > void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2); > -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, > - unsigned long dst_start, unsigned long src_start, > - unsigned long len, __u64 flags); > +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start, > + unsigned long src_start, unsigned long len, __u64 flags); > int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, > struct vm_area_struct *dst_vma, > struct vm_area_struct *src_vma, > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c > index 74aad0831e40..4744d6a96f96 100644 > --- a/mm/userfaultfd.c > +++ b/mm/userfaultfd.c > @@ -20,19 +20,11 @@ > #include "internal.h" > > static __always_inline > -struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, > - unsigned long dst_start, > - unsigned long len) > +bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end) > { > - /* > - * Make sure that the dst range is both valid and fully within a > - * single existing vma. > - */ > - struct vm_area_struct *dst_vma; > - > - dst_vma = find_vma(dst_mm, dst_start); > - if (!range_in_vma(dst_vma, dst_start, dst_start + len)) > - return NULL; > + /* Make sure that the dst range is fully within dst_vma. */ > + if (dst_end > dst_vma->vm_end) > + return false; > > /* > * Check the vma is registered in uffd, this is required to > @@ -40,11 +32,122 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, > * time. 
>  	 */
>  	if (!dst_vma->vm_userfaultfd_ctx.ctx)
> -		return NULL;
> +		return false;
> +
> +	return true;
> +}
> +
> +static __always_inline
> +struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
> +						 unsigned long addr)
> +{
> +	struct vm_area_struct *vma;
> +
> +	mmap_assert_locked(mm);
> +	vma = vma_lookup(mm, addr);
> +	if (!vma)
> +		vma = ERR_PTR(-ENOENT);
> +	else if (!(vma->vm_flags & VM_SHARED) &&
> +		 unlikely(anon_vma_prepare(vma)))
> +		vma = ERR_PTR(-ENOMEM);
> +
> +	return vma;
> +}
> +
> +#ifdef CONFIG_PER_VMA_LOCK
> +/*
> + * lock_vma() - Lookup and lock vma corresponding to @address.
> + * @mm: mm to search vma in.
> + * @address: address that the vma should contain.
> + *
> + * Should be called without holding mmap_lock. vma should be unlocked after
> + * use with vma_end_read().
> + *
> + * Return: A locked vma containing @address, -ENOENT if no vma is found, or
> + * -ENOMEM if anon_vma couldn't be allocated.
> + */
> +static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> +				       unsigned long address)
> +{
> +	struct vm_area_struct *vma;
> +
> +	vma = lock_vma_under_rcu(mm, address);
> +	if (vma) {
> +		/*
> +		 * lock_vma_under_rcu() only checks anon_vma for private
> +		 * anonymous mappings. But we need to ensure it is assigned in
> +		 * private file-backed vmas as well.
> +		 */
> +		if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> +			vma_end_read(vma);
> +		else
> +			return vma;
> +	}
> +
> +	mmap_read_lock(mm);
> +	vma = find_vma_and_prepare_anon(mm, address);
> +	if (!IS_ERR(vma)) {
> +		/*
> +		 * We cannot use vma_start_read() as it may fail due to a
> +		 * false locked result (see comment in vma_start_read()). We
> +		 * can avoid that by directly locking vm_lock under
> +		 * mmap_lock, which guarantees that nobody can lock the
> +		 * vma for write (vma_start_write()) under us.
> +		 */
> +		down_read(&vma->vm_lock->lock);
> +	}
> +
> +	mmap_read_unlock(mm);
> +	return vma;
> +}
> +
> +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
> +					      unsigned long dst_start,
> +					      unsigned long len)
> +{
> +	struct vm_area_struct *dst_vma;
>
> +	dst_vma = lock_vma(dst_mm, dst_start);
> +	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
> +		return dst_vma;
> +
> +	vma_end_read(dst_vma);
> +	return ERR_PTR(-ENOENT);
> +}
> +
> +static void uffd_mfill_unlock(struct vm_area_struct *vma)
> +{
> +	vma_end_read(vma);
> +}
> +
> +#else
> +
> +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
> +					      unsigned long dst_start,
> +					      unsigned long len)
> +{
> +	struct vm_area_struct *dst_vma;
> +
> +	mmap_read_lock(dst_mm);
> +	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
> +	if (IS_ERR(dst_vma))
> +		goto out_unlock;
> +
> +	if (validate_dst_vma(dst_vma, dst_start + len))
> +		return dst_vma;
> +
> +	dst_vma = ERR_PTR(-ENOENT);
> +out_unlock:
> +	mmap_read_unlock(dst_mm);
>  	return dst_vma;
>  }
>
> +static void uffd_mfill_unlock(struct vm_area_struct *vma)
> +{
> +	mmap_read_unlock(vma->vm_mm);
> +}
> +#endif
> +
>  /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
>  static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
>  				 unsigned long dst_addr)
> @@ -350,7 +453,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
>  #ifdef CONFIG_HUGETLB_PAGE
>  /*
>   * mfill_atomic processing for HUGETLB vmas. Note that this routine is
> - * called with mmap_lock held, it will release mmap_lock before returning.
> + * called with either vma-lock or mmap_lock held, it will release the lock
> + * before returning.
>   */
>  static __always_inline ssize_t mfill_atomic_hugetlb(
>  					      struct userfaultfd_ctx *ctx,
> @@ -361,7 +465,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>  					      uffd_flags_t flags)
>  {
>  	struct mm_struct *dst_mm = dst_vma->vm_mm;
> -	int vm_shared = dst_vma->vm_flags & VM_SHARED;
>  	ssize_t err;
>  	pte_t *dst_pte;
>  	unsigned long src_addr, dst_addr;
> @@ -380,7 +483,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>  	 */
>  	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
>  		up_read(&ctx->map_changing_lock);
> -		mmap_read_unlock(dst_mm);
> +		uffd_mfill_unlock(dst_vma);
>  		return -EINVAL;
>  	}
>
> @@ -403,24 +506,28 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>  	 * retry, dst_vma will be set to NULL and we must lookup again.
>  	 */
>  	if (!dst_vma) {
> +		dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
> +		if (IS_ERR(dst_vma)) {
> +			err = PTR_ERR(dst_vma);
> +			goto out;
> +		}
> +
>  		err = -ENOENT;
> -		dst_vma = find_dst_vma(dst_mm, dst_start, len);
> -		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
> -			goto out_unlock;
> +		if (!is_vm_hugetlb_page(dst_vma))
> +			goto out_unlock_vma;
>
>  		err = -EINVAL;
>  		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
> -			goto out_unlock;
> -
> -		vm_shared = dst_vma->vm_flags & VM_SHARED;
> -	}
> +			goto out_unlock_vma;
>
> -	/*
> -	 * If not shared, ensure the dst_vma has a anon_vma.
> -	 */
> -	err = -ENOMEM;
> -	if (!vm_shared) {
> -		if (unlikely(anon_vma_prepare(dst_vma)))
> +		/*
> +		 * If memory mappings are changing because of non-cooperative
> +		 * operation (e.g. mremap) running in parallel, bail out and
> +		 * request the user to retry later
> +		 */
> +		down_read(&ctx->map_changing_lock);
> +		err = -EAGAIN;
> +		if (atomic_read(&ctx->mmap_changing))
>  			goto out_unlock;
>  	}
>
> @@ -465,7 +572,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>
>  	if (unlikely(err == -ENOENT)) {
>  		up_read(&ctx->map_changing_lock);
> -		mmap_read_unlock(dst_mm);
> +		uffd_mfill_unlock(dst_vma);
>  		BUG_ON(!folio);
>
>  		err = copy_folio_from_user(folio,
> @@ -474,17 +581,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>  			err = -EFAULT;
>  			goto out;
>  		}
> -		mmap_read_lock(dst_mm);
> -		down_read(&ctx->map_changing_lock);
> -		/*
> -		 * If memory mappings are changing because of non-cooperative
> -		 * operation (e.g. mremap) running in parallel, bail out and
> -		 * request the user to retry later
> -		 */
> -		if (atomic_read(&ctx->mmap_changing)) {
> -			err = -EAGAIN;
> -			break;
> -		}
>
>  		dst_vma = NULL;
>  		goto retry;
> @@ -505,7 +601,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>
>  out_unlock:
>  	up_read(&ctx->map_changing_lock);
> -	mmap_read_unlock(dst_mm);
> +out_unlock_vma:
> +	uffd_mfill_unlock(dst_vma);
>  out:
>  	if (folio)
>  		folio_put(folio);
> @@ -597,7 +694,15 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>  	copied = 0;
>  	folio = NULL;
>  retry:
> -	mmap_read_lock(dst_mm);
> +	/*
> +	 * Make sure the vma is not shared, that the dst range is
> +	 * both valid and fully within a single existing vma.
> +	 */
> +	dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
> +	if (IS_ERR(dst_vma)) {
> +		err = PTR_ERR(dst_vma);
> +		goto out;
> +	}
>
>  	/*
>  	 * If memory mappings are changing because of non-cooperative
> @@ -609,15 +714,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>  	if (atomic_read(&ctx->mmap_changing))
>  		goto out_unlock;
>
> -	/*
> -	 * Make sure the vma is not shared, that the dst range is
> -	 * both valid and fully within a single existing vma.
> -	 */
> -	err = -ENOENT;
> -	dst_vma = find_dst_vma(dst_mm, dst_start, len);
> -	if (!dst_vma)
> -		goto out_unlock;
> -
>  	err = -EINVAL;
>  	/*
>  	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
> @@ -647,16 +743,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>  	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
>  		goto out_unlock;
>
> -	/*
> -	 * Ensure the dst_vma has a anon_vma or this page
> -	 * would get a NULL anon_vma when moved in the
> -	 * dst_vma.
> -	 */
> -	err = -ENOMEM;
> -	if (!(dst_vma->vm_flags & VM_SHARED) &&
> -	    unlikely(anon_vma_prepare(dst_vma)))
> -		goto out_unlock;
> -
>  	while (src_addr < src_start + len) {
>  		pmd_t dst_pmdval;
>
> @@ -699,7 +785,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>  			void *kaddr;
>
>  			up_read(&ctx->map_changing_lock);
> -			mmap_read_unlock(dst_mm);
> +			uffd_mfill_unlock(dst_vma);
>  			BUG_ON(!folio);
>
>  			kaddr = kmap_local_folio(folio, 0);
> @@ -730,7 +816,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>
>  out_unlock:
>  	up_read(&ctx->map_changing_lock);
> -	mmap_read_unlock(dst_mm);
> +	uffd_mfill_unlock(dst_vma);
>  out:
>  	if (folio)
>  		folio_put(folio);
> @@ -1267,27 +1353,136 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
>  	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
>  		return -EINVAL;
>
> +	return 0;
> +}
> +
> +static __always_inline
> +int find_vmas_mm_locked(struct mm_struct *mm,
> +			unsigned long dst_start,
> +			unsigned long src_start,
> +			struct vm_area_struct **dst_vmap,
> +			struct vm_area_struct **src_vmap)
> +{
> +	struct vm_area_struct *vma;
> +
> +	mmap_assert_locked(mm);
> +	vma = find_vma_and_prepare_anon(mm, dst_start);
> +	if (IS_ERR(vma))
> +		return PTR_ERR(vma);
> +
> +	*dst_vmap = vma;
> +	/* Skip finding src_vma if src_start is in dst_vma */
> +	if (src_start >= vma->vm_start && src_start < vma->vm_end)
> +		goto out_success;
> +
> +	vma = vma_lookup(mm, src_start);
> +	if (!vma)
> +		return -ENOENT;
> +out_success:
> +	*src_vmap = vma;
> +	return 0;
> +}
> +
> +#ifdef CONFIG_PER_VMA_LOCK
> +static int uffd_move_lock(struct mm_struct *mm,
> +			  unsigned long dst_start,
> +			  unsigned long src_start,
> +			  struct vm_area_struct **dst_vmap,
> +			  struct vm_area_struct **src_vmap)
> +{
> +	struct vm_area_struct *vma;
> +	int err;
> +
> +	vma = lock_vma(mm, dst_start);
> +	if (IS_ERR(vma))
> +		return PTR_ERR(vma);
> +
> +	*dst_vmap = vma;
>  	/*
> -	 * Ensure the dst_vma has a anon_vma or this page
> -	 * would get a NULL anon_vma when moved in the
> -	 * dst_vma.
> +	 * Skip finding src_vma if src_start is in dst_vma. This also ensures
> +	 * that we don't lock the same vma twice.
>  	 */
> -	if (unlikely(anon_vma_prepare(dst_vma)))
> -		return -ENOMEM;
> +	if (src_start >= vma->vm_start && src_start < vma->vm_end) {
> +		*src_vmap = vma;
> +		return 0;
> +	}
>
> -	return 0;
> +	/*
> +	 * Using lock_vma() to get src_vma can lead to following deadlock:
> +	 *
> +	 * Thread1				Thread2
> +	 * -------				-------
> +	 * vma_start_read(dst_vma)
> +	 *					mmap_write_lock(mm)
> +	 *					vma_start_write(src_vma)
> +	 * vma_start_read(src_vma)
> +	 * mmap_read_lock(mm)
> +	 *					vma_start_write(dst_vma)
> +	 */
> +	*src_vmap = lock_vma_under_rcu(mm, src_start);
> +	if (likely(*src_vmap))
> +		return 0;
> +
> +	/* Undo any locking and retry in mmap_lock critical section */
> +	vma_end_read(*dst_vmap);
> +
> +	mmap_read_lock(mm);
> +	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
> +	if (!err) {
> +		/*
> +		 * See comment in lock_vma() as to why not using
> +		 * vma_start_read() here.
> +		 */
> +		down_read(&(*dst_vmap)->vm_lock->lock);
> +		if (*dst_vmap != *src_vmap)
> +			down_read(&(*src_vmap)->vm_lock->lock);
> +	}
> +	mmap_read_unlock(mm);
> +	return err;
> +}
> +
> +static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> +			     struct vm_area_struct *src_vma)
> +{
> +	vma_end_read(src_vma);
> +	if (src_vma != dst_vma)
> +		vma_end_read(dst_vma);
>  }
>
> +#else
> +
> +static int uffd_move_lock(struct mm_struct *mm,
> +			  unsigned long dst_start,
> +			  unsigned long src_start,
> +			  struct vm_area_struct **dst_vmap,
> +			  struct vm_area_struct **src_vmap)
> +{
> +	int err;
> +
> +	mmap_read_lock(mm);
> +	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
> +	if (err)
> +		mmap_read_unlock(mm);
> +	return err;
> +}
> +
> +static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> +			     struct vm_area_struct *src_vma)
> +{
> +	mmap_assert_locked(src_vma->vm_mm);
> +	mmap_read_unlock(dst_vma->vm_mm);
> +}
> +#endif
> +
>  /**
>   * move_pages - move arbitrary anonymous pages of an existing vma
>   * @ctx: pointer to the userfaultfd context
> - * @mm: the address space to move pages
>   * @dst_start: start of the destination virtual memory range
>   * @src_start: start of the source virtual memory range
>   * @len: length of the virtual memory range
>   * @mode: flags from uffdio_move.mode
>   *
> - * Must be called with mmap_lock held for read.
> + * It will either use the mmap_lock in read mode or per-vma locks.
>   *
>   * move_pages() remaps arbitrary anonymous pages atomically in zero
>   * copy. It only works on non shared anonymous pages because those can
> @@ -1355,10 +1550,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
>   * could be obtained. This is the only additional complexity added to
>   * the rmap code to provide this anonymous page remapping functionality.
>   */
> -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> -		   unsigned long dst_start, unsigned long src_start,
> -		   unsigned long len, __u64 mode)
> +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
> +		   unsigned long src_start, unsigned long len, __u64 mode)
>  {
> +	struct mm_struct *mm = ctx->mm;
>  	struct vm_area_struct *src_vma, *dst_vma;
>  	unsigned long src_addr, dst_addr;
>  	pmd_t *src_pmd, *dst_pmd;
> @@ -1376,28 +1571,34 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
>  	    WARN_ON_ONCE(dst_start + len <= dst_start))
>  		goto out;
>
> +	err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma);
> +	if (err)
> +		goto out;
> +
> +	/* Re-check after taking map_changing_lock */
> +	err = -EAGAIN;
> +	down_read(&ctx->map_changing_lock);
> +	if (unlikely(atomic_read(&ctx->mmap_changing)))
> +		goto out_unlock;
>  	/*
>  	 * Make sure the vma is not shared, that the src and dst remap
>  	 * ranges are both valid and fully within a single existing
>  	 * vma.
>  	 */
> -	src_vma = find_vma(mm, src_start);
> -	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> -		goto out;
> -	if (src_start < src_vma->vm_start ||
> -	    src_start + len > src_vma->vm_end)
> -		goto out;
> +	err = -EINVAL;
> +	if (src_vma->vm_flags & VM_SHARED)
> +		goto out_unlock;
> +	if (src_start + len > src_vma->vm_end)
> +		goto out_unlock;
>
> -	dst_vma = find_vma(mm, dst_start);
> -	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> -		goto out;
> -	if (dst_start < dst_vma->vm_start ||
> -	    dst_start + len > dst_vma->vm_end)
> -		goto out;
> +	if (dst_vma->vm_flags & VM_SHARED)
> +		goto out_unlock;
> +	if (dst_start + len > dst_vma->vm_end)
> +		goto out_unlock;
>
>  	err = validate_move_areas(ctx, src_vma, dst_vma);
>  	if (err)
> -		goto out;
> +		goto out_unlock;
>
>  	for (src_addr = src_start, dst_addr = dst_start;
>  	     src_addr < src_start + len;) {
> @@ -1514,6 +1715,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
>  		moved += step_size;
>  	}
>
> +out_unlock:
> +	up_read(&ctx->map_changing_lock);
> +	uffd_move_unlock(dst_vma, src_vma);
>  out:
>  	VM_WARN_ON(moved < 0);
>  	VM_WARN_ON(err > 0);

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations
  2024-02-14 22:12   ` Ryan Roberts
@ 2024-02-14 22:20     ` Suren Baghdasaryan
  2024-02-14 22:27       ` Lokesh Gidra
  0 siblings, 1 reply; 9+ messages in thread
From: Suren Baghdasaryan @ 2024-02-14 22:20 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Lokesh Gidra, akpm, linux-fsdevel, linux-mm, linux-kernel,
	selinux, kernel-team, aarcange, peterx, david, axelrasmussen,
	bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt,
	Liam.Howlett

On Wed, Feb 14, 2024 at 2:12 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 13/02/2024 21:57, Lokesh Gidra wrote:
> > All userfaultfd operations, except write-protect, opportunistically use
> > per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
> > critical section.
> >
> > Write-protect operation requires mmap_lock as it iterates over multiple
> > vmas.
>
> Hi,
>
> I'm seeing the below OOPS when running on arm64 against mm-unstable. It
> can be reliably reproduced by running the `uffd-unit-tests` mm selftest.
> Bisecting mm-unstable leads to this patch:
>
> # bad: [649936c3db47f8f75b9b927e4edf5922a0f240a6] mm: add swappiness= arg to memory.reclaim
> git bisect bad 649936c3db47f8f75b9b927e4edf5922a0f240a6
> # good: [54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478] Linux 6.8-rc3
> git bisect good 54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478
> # good: [10794cd18bb46c91c75fac44e551201cfe006baf] mm: zswap: warn when referencing a dead entry
> git bisect good 10794cd18bb46c91c75fac44e551201cfe006baf
> # good: [cb769b427edc7f46c7c764daa3421725bffdf315] mm-memcg-use-larger-batches-for-proactive-reclaim-v4
> git bisect good cb769b427edc7f46c7c764daa3421725bffdf315
> # good: [cfc5c1be4010c9972bc3f3d991235e8ea6928672] mm/z3fold: remove unneeded spinlock
> git bisect good cfc5c1be4010c9972bc3f3d991235e8ea6928672
> # good: [31094bce101651acb4747ab25d614bc893d65c89] kasan/test: avoid gcc warning for intentional overflow
> git bisect good 31094bce101651acb4747ab25d614bc893d65c89
> # bad: [55be0b2cd1fbf00e036e2e48ee0999599135af66] zram: do not allocate physically contiguous strm buffers
> git bisect bad 55be0b2cd1fbf00e036e2e48ee0999599135af66
> # good: [b11ca4a0a13024c0175dde56f9bd848803eddcd2] mm/mglru: improve swappiness handling
> git bisect good b11ca4a0a13024c0175dde56f9bd848803eddcd2
> # good: [22e7ccd57a1220afcd7c4da1f3005fd04d70014e] userfaultfd: move userfaultfd_ctx struct to header file
> git bisect good 22e7ccd57a1220afcd7c4da1f3005fd04d70014e
> # bad: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations
> git bisect bad 0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d
> # good: [8459e1c7acbe4442c6c0eef59825da1339e0a3cf] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx
> git bisect good 8459e1c7acbe4442c6c0eef59825da1339e0a3cf
> # first bad commit: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations
>
> This is the oops:

That's a call to mmap_assert_locked() from inside move_pages_huge_pmd().
Since move_pages() now runs under the VMA lock, this call should be
replaced with vma_assert_locked().
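
Something along these lines should do it — an untested sketch, assuming
the only mmap_lock dependency in move_pages_huge_pmd() is that assertion
(the exact surrounding context may differ):

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ move_pages_huge_pmd()
-	mmap_assert_locked(mm);
+	/* The caller now holds either the per-vma locks or mmap_lock */
+	vma_assert_locked(src_vma);
+	vma_assert_locked(dst_vma);

With CONFIG_PER_VMA_LOCK, vma_assert_locked() is satisfied by the vma's
own read lock, which these paths always take; in !CONFIG_PER_VMA_LOCK
builds it is defined in terms of mmap_assert_locked(vma->vm_mm), so the
mmap_lock-only configuration stays covered.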
> [   21.280142] mm ffff049340ed0000 task_size 281474976710656
> [   21.280142] get_unmapped_area ffffb3dbdc725e48
> [   21.280142] mmap_base 281474842492928 mmap_legacy_base 0
> [   21.280142] pgd ffff04935829a000 mm_users 3 mm_count 4 pgtables_bytes 98304 map_count 21
> [   21.280142] hiwater_rss 2167 hiwater_vm 8234 total_vm 4a44 locked_vm 0
> [   21.280142] pinned_vm 0 data_vm 4835 exec_vm 1c6 stack_vm 21
> [   21.280142] start_code aaaaaaaa0000 end_code aaaaaaab1b28 start_data aaaaaaac2a60 end_data aaaaaaac3410
> [   21.280142] start_brk aaaaaaac5000 brk aaaaaaae6000 start_stack fffffffff6b0
> [   21.280142] arg_start fffffffff8c7 arg_end fffffffff8e2 env_start fffffffff8e2 env_end ffffffffffdd
> [   21.280142] binfmt ffffb3dbdf2f8cf8 flags 82008d
> [   21.280142] ioctx_table 0000000000000000
> [   21.280142] owner ffff049311f02280 exe_file ffff049355314c00
> [   21.280142] notifier_subscriptions 0000000000000000
> [   21.280142] numa_next_scan 4294897864 numa_scan_offset 0 numa_scan_seq 0
> [   21.280142] tlb_flush_pending 0
> [   21.280142] def_flags: 0x0()
> [   21.283302] kernel BUG at include/linux/mmap_lock.h:66!
> [   21.283481] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
> [   21.283720] Modules linked in:
> [   21.283867] CPU: 3 PID: 1226 Comm: uffd-unit-tests Not tainted 6.8.0-rc3-00297-g0a0d05338f13 #19
> [   21.284306] Hardware name: linux,dummy-virt (DT)
> [   21.284495] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
> [   21.284833] pc : move_pages_huge_pmd+0x4d0/0x8a8
> [   21.285072] lr : move_pages_huge_pmd+0x4d0/0x8a8
> [   21.285289] sp : ffff800088573b30
> [   21.285439] x29: ffff800088573b30 x28: ffff049358292d78 x27: fffffc0000000000
> [   21.285770] x26: ffff049358292d78 x25: 0000fffff5e00000 x24: ffff049359db3730
> [   21.286097] x23: 0000fffff6000000 x22: ffff049340ed0000 x21: ffff049359db3678
> [   21.286422] x20: 0000000000000002 x19: fffffc124d60a480 x18: 0000000000000006
> [   21.286747] x17: 626c740a30207165 x16: 735f6e6163735f61 x15: 6d756e2030207465
> [   21.287084] x14: 7366666f5f6e6163 x13: 2928307830203a73 x12: 67616c665f666564
> [   21.287417] x11: 0a3020676e69646e x10: ffffb3dbdf28bff8 x9 : ffffb3dbdc620f80
> [   21.287745] x8 : 00000000ffffefff x7 : ffffb3dbdf28bff8 x6 : 80000000fffff000
> [   21.288082] x5 : ffff04933ffdcd08 x4 : 0000000000000000 x3 : 0000000000000000
> [   21.288364] x2 : 0000000000000000 x1 : ffff049340106780 x0 : 0000000000000328
> [   21.288712] Call trace:
> [   21.288832]  move_pages_huge_pmd+0x4d0/0x8a8
> [   21.289058]  move_pages+0x8b8/0x13d8
> [   21.289205]  userfaultfd_ioctl+0x11e0/0x1e90
> [   21.289411]  __arm64_sys_ioctl+0xb4/0x100
> [   21.289579]  invoke_syscall+0x50/0x128
> [   21.289738]  el0_svc_common.constprop.0+0x48/0xf0
> [   21.289942]  do_el0_svc+0x24/0x38
> [   21.290078]  el0_svc+0x34/0xb8
> [   21.290219]  el0t_64_sync_handler+0x100/0x130
> [   21.290387]  el0t_64_sync+0x190/0x198
> [   21.290535] Code: 17ffff7e a90363f7 a9046bf9 97fde2a7 (d4210000)
> [   21.290776] ---[ end trace 0000000000000000 ]---
> [   21.290966] note: uffd-unit-tests[1226] exited with irqs disabled
> [   21.291254] note: uffd-unit-tests[1226] exited with preempt_count 2
> [   21.292864] ------------[ cut here ]------------
> [   21.293020] WARNING: CPU: 3 PID: 0 at kernel/context_tracking.c:128 ct_kernel_exit.constprop.0+0x108/0x120
> [   21.293711] Modules linked in:
> [   21.294070] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G      D            6.8.0-rc3-00297-g0a0d05338f13 #19
> [   21.295447] Hardware name: linux,dummy-virt (DT)
> [   21.295850] pstate: 214003c5 (nzCv DAIF +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
> [   21.296152] pc : ct_kernel_exit.constprop.0+0x108/0x120
> [   21.296685] lr : ct_idle_enter+0x10/0x20
> [   21.296848] sp : ffff8000801b3dc0
> [   21.296983] x29: ffff8000801b3dc0 x28: 0000000000000000 x27: 0000000000000000
> [   21.297255] x26: 0000000000000000 x25: ffff049301496780 x24: 0000000000000000
> [   21.297533] x23: 0000000000000000 x22: ffff049301496780 x21: ffffb3dbdf209ae0
> [   21.297844] x20: ffffb3dbdf209a20 x19: ffff04933ffeace0 x18: ffff800088573648
> [   21.298152] x17: ffffb3dbdf65def0 x16: 00000000d72145a7 x15: 00000000f1a17b35
> [   21.298439] x14: 0000000000000004 x13: ffffb3dbdf22c3a8 x12: 0000000000000000
> [   21.298738] x11: ffff0493541472a8 x10: 2b58349b7608bfc4 x9 : ffffb3dbdc57f43c
> [   21.299031] x8 : ffff049301497858 x7 : 00000000000001c1 x6 : 000000001eeeb49c
> [   21.299329] x5 : 4000000000000002 x4 : ffff50b7618d7000 x3 : ffff8000801b3dc0
> [   21.299615] x2 : ffffb3dbde713ce0 x1 : 4000000000000000 x0 : ffffb3dbde713ce0
> [   21.299900] Call trace:
> [   21.299998]  ct_kernel_exit.constprop.0+0x108/0x120
> [   21.300460]  ct_idle_enter+0x10/0x20
> [   21.300690]  default_idle_call+0x3c/0x170
> [   21.300861]  do_idle+0x218/0x278
> [   21.300992]  cpu_startup_entry+0x3c/0x50
> [   21.301148]  secondary_start_kernel+0x130/0x158
> [   21.301329]  __secondary_switched+0xb8/0xc0
> [   21.301641] ---[ end trace 0000000000000000 ]---
>
> Can this series be removed from mm-unstable until fixed, please?
>
> Thanks,
> Ryan
>
> > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
> > ---
> >  fs/userfaultfd.c              |  13 +-
> >  include/linux/userfaultfd_k.h |   5 +-
> >  mm/userfaultfd.c              | 380 ++++++++++++++++++++++++++--------
> >  3 files changed, 296 insertions(+), 102 deletions(-)
> >
> > [...]

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations
  2024-02-14 22:20     ` Suren Baghdasaryan
@ 2024-02-14 22:27       ` Lokesh Gidra
  2024-02-14 22:33         ` Ryan Roberts
  0 siblings, 1 reply; 9+ messages in thread
From: Lokesh Gidra @ 2024-02-14 22:27 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Ryan Roberts, akpm, linux-fsdevel, linux-mm, linux-kernel,
	selinux, kernel-team, aarcange, peterx, david, axelrasmussen,
	bgeffon, willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt,
	Liam.Howlett

On Wed, Feb 14, 2024 at 2:20 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Wed, Feb 14, 2024 at 2:12 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >
> > On 13/02/2024 21:57, Lokesh Gidra wrote:
> > > All userfaultfd operations, except write-protect, opportunistically use
> > > per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
> > > critical section.
> > >
> > > Write-protect operation requires mmap_lock as it iterates over multiple
> > > vmas.
> >
> > Hi,
> >
> > I'm seeing the below OOPS when running on arm64 against mm-unstable. It
> > can be reliably reproduced by running the `uffd-unit-tests` mm selftest.
> > Bisecting mm-unstable leads to this patch:
> >
> > [...]
> > # first bad commit: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations
> >
> > This is the oops:
>
> That's a call to mmap_assert_locked() from inside move_pages_huge_pmd().
> Since move_pages() now runs under the VMA lock, this call should be
> replaced with vma_assert_locked().

So sorry for missing this. I missed testing with CONFIG_DEBUG_VM turned
on. I'll fix and thoroughly test before sending the next version.
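
For the record, Ryan's reproducer boils down to roughly this (a command
sketch, not verified here; paths assume the in-tree mm selftests, and
CONFIG_DEBUG_VM=y is what compiles in the VM_BUG_ON_MM() check inside
mmap_assert_locked() that fired above):

  # enable the debug assertions and rebuild the kernel
  scripts/config -e CONFIG_DEBUG_VM
  make olddefconfig && make -j$(nproc)

  # build and run the userfaultfd unit tests on the new kernel
  make -C tools/testing/selftests/mm
  ./tools/testing/selftests/mm/uffd-unit-tests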
> > > > [ 21.280142] mm ffff049340ed0000 task_size 281474976710656 > > [ 21.280142] get_unmapped_area ffffb3dbdc725e48 > > [ 21.280142] mmap_base 281474842492928 mmap_legacy_base 0 > > [ 21.280142] pgd ffff04935829a000 mm_users 3 mm_count 4 pgtables_bytes 98304 map_count 21 > > [ 21.280142] hiwater_rss 2167 hiwater_vm 8234 total_vm 4a44 locked_vm 0 > > [ 21.280142] pinned_vm 0 data_vm 4835 exec_vm 1c6 stack_vm 21 > > [ 21.280142] start_code aaaaaaaa0000 end_code aaaaaaab1b28 start_data aaaaaaac2a60 end_data aaaaaaac3410 > > [ 21.280142] start_brk aaaaaaac5000 brk aaaaaaae6000 start_stack fffffffff6b0 > > [ 21.280142] arg_start fffffffff8c7 arg_end fffffffff8e2 env_start fffffffff8e2 env_end ffffffffffdd > > [ 21.280142] binfmt ffffb3dbdf2f8cf8 flags 82008d > > [ 21.280142] ioctx_table 0000000000000000 > > [ 21.280142] owner ffff049311f02280 exe_file ffff049355314c00 > > [ 21.280142] notifier_subscriptions 0000000000000000 > > [ 21.280142] numa_next_scan 4294897864 numa_scan_offset 0 numa_scan_seq 0 > > [ 21.280142] tlb_flush_pending 0 > > [ 21.280142] def_flags: 0x0() > > [ 21.283302] kernel BUG at include/linux/mmap_lock.h:66! > > [ 21.283481] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP > > [ 21.283720] Modules linked in: > > [ 21.283867] CPU: 3 PID: 1226 Comm: uffd-unit-tests Not tainted 6.8.0-rc3-00297-g0a0d05338f13 #19 > > [ 21.284306] Hardware name: linux,dummy-virt (DT) > > [ 21.284495] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) > > [ 21.284833] pc : move_pages_huge_pmd+0x4d0/0x8a8 > > [ 21.285072] lr : move_pages_huge_pmd+0x4d0/0x8a8 > > [ 21.285289] sp : ffff800088573b30 > > [ 21.285439] x29: ffff800088573b30 x28: ffff049358292d78 x27: fffffc0000000000 > > [ 21.285770] x26: ffff049358292d78 x25: 0000fffff5e00000 x24: ffff049359db3730 > > [ 21.286097] x23: 0000fffff6000000 x22: ffff049340ed0000 x21: ffff049359db3678 > > [ 21.286422] x20: 0000000000000002 x19: fffffc124d60a480 x18: 0000000000000006 > > [ 21.286747] x17: 626c740a30207165 x16: 735f6e6163735f61 x15: 6d756e2030207465 > > [ 21.287084] x14: 7366666f5f6e6163 x13: 2928307830203a73 x12: 67616c665f666564 > > [ 21.287417] x11: 0a3020676e69646e x10: ffffb3dbdf28bff8 x9 : ffffb3dbdc620f80 > > [ 21.287745] x8 : 00000000ffffefff x7 : ffffb3dbdf28bff8 x6 : 80000000fffff000 > > [ 21.288082] x5 : ffff04933ffdcd08 x4 : 0000000000000000 x3 : 0000000000000000 > > [ 21.288364] x2 : 0000000000000000 x1 : ffff049340106780 x0 : 0000000000000328 > > [ 21.288712] Call trace: > > [ 21.288832] move_pages_huge_pmd+0x4d0/0x8a8 > > [ 21.289058] move_pages+0x8b8/0x13d8 > > [ 21.289205] userfaultfd_ioctl+0x11e0/0x1e90 > > [ 21.289411] __arm64_sys_ioctl+0xb4/0x100 > > [ 21.289579] invoke_syscall+0x50/0x128 > > [ 21.289738] el0_svc_common.constprop.0+0x48/0xf0 > > [ 21.289942] do_el0_svc+0x24/0x38 > > [ 21.290078] el0_svc+0x34/0xb8 > > [ 21.290219] el0t_64_sync_handler+0x100/0x130 > > [ 21.290387] el0t_64_sync+0x190/0x198 > > [ 21.290535] Code: 17ffff7e a90363f7 a9046bf9 97fde2a7 (d4210000) > > [ 21.290776] ---[ end trace 0000000000000000 ]--- > > [ 21.290966] note: uffd-unit-tests[1226] exited with irqs disabled > > [ 21.291254] note: uffd-unit-tests[1226] exited with preempt_count 2 > > [ 21.292864] ------------[ cut here ]------------ > > [ 21.293020] WARNING: CPU: 3 PID: 0 at kernel/context_tracking.c:128 ct_kernel_exit.constprop.0+0x108/0x120 > > [ 21.293711] Modules linked in: > > [ 21.294070] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G D 6.8.0-rc3-00297-g0a0d05338f13 #19 > > [ 21.295447] Hardware name: 
linux,dummy-virt (DT) > > [ 21.295850] pstate: 214003c5 (nzCv DAIF +PAN -UAO -TCO +DIT -SSBS BTYPE=--) > > [ 21.296152] pc : ct_kernel_exit.constprop.0+0x108/0x120 > > [ 21.296685] lr : ct_idle_enter+0x10/0x20 > > [ 21.296848] sp : ffff8000801b3dc0 > > [ 21.296983] x29: ffff8000801b3dc0 x28: 0000000000000000 x27: 0000000000000000 > > [ 21.297255] x26: 0000000000000000 x25: ffff049301496780 x24: 0000000000000000 > > [ 21.297533] x23: 0000000000000000 x22: ffff049301496780 x21: ffffb3dbdf209ae0 > > [ 21.297844] x20: ffffb3dbdf209a20 x19: ffff04933ffeace0 x18: ffff800088573648 > > [ 21.298152] x17: ffffb3dbdf65def0 x16: 00000000d72145a7 x15: 00000000f1a17b35 > > [ 21.298439] x14: 0000000000000004 x13: ffffb3dbdf22c3a8 x12: 0000000000000000 > > [ 21.298738] x11: ffff0493541472a8 x10: 2b58349b7608bfc4 x9 : ffffb3dbdc57f43c > > [ 21.299031] x8 : ffff049301497858 x7 : 00000000000001c1 x6 : 000000001eeeb49c > > [ 21.299329] x5 : 4000000000000002 x4 : ffff50b7618d7000 x3 : ffff8000801b3dc0 > > [ 21.299615] x2 : ffffb3dbde713ce0 x1 : 4000000000000000 x0 : ffffb3dbde713ce0 > > [ 21.299900] Call trace: > > [ 21.299998] ct_kernel_exit.constprop.0+0x108/0x120 > > [ 21.300460] ct_idle_enter+0x10/0x20 > > [ 21.300690] default_idle_call+0x3c/0x170 > > [ 21.300861] do_idle+0x218/0x278 > > [ 21.300992] cpu_startup_entry+0x3c/0x50 > > [ 21.301148] secondary_start_kernel+0x130/0x158 > > [ 21.301329] __secondary_switched+0xb8/0xc0 > > [ 21.301641] ---[ end trace 0000000000000000 ]--- > > > > > > Can this series be removed from mm-unstable until fixed, please? > > > > Thanks, > > Ryan > > > > > > > > > > Signed-off-by: Lokesh Gidra <lokeshgidra@google.com> > > > --- > > > fs/userfaultfd.c | 13 +- > > > include/linux/userfaultfd_k.h | 5 +- > > > mm/userfaultfd.c | 380 ++++++++++++++++++++++++++-------- > > > 3 files changed, 296 insertions(+), 102 deletions(-) > > > > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > > > index c00a021bcce4..60dcfafdc11a 100644 > > > --- a/fs/userfaultfd.c > > > +++ b/fs/userfaultfd.c > > > @@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx, > > > return -EINVAL; > > > > > > if (mmget_not_zero(mm)) { > > > - mmap_read_lock(mm); > > > - > > > - /* Re-check after taking map_changing_lock */ > > > - down_read(&ctx->map_changing_lock); > > > - if (likely(!atomic_read(&ctx->mmap_changing))) > > > - ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src, > > > - uffdio_move.len, uffdio_move.mode); > > > - else > > > - ret = -EAGAIN; > > > - up_read(&ctx->map_changing_lock); > > > - mmap_read_unlock(mm); > > > + ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src, > > > + uffdio_move.len, uffdio_move.mode); > > > mmput(mm); > > > } else { > > > return -ESRCH; > > > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h > > > index 3210c3552976..05d59f74fc88 100644 > > > --- a/include/linux/userfaultfd_k.h > > > +++ b/include/linux/userfaultfd_k.h > > > @@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma, > > > /* move_pages */ > > > void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2); > > > void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2); > > > -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm, > > > - unsigned long dst_start, unsigned long src_start, > > > - unsigned long len, __u64 flags); > > > +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start, > > > + unsigned long src_start, unsigned long len, __u64 flags); > > > int move_pages_huge_pmd(struct 
mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval, > > > struct vm_area_struct *dst_vma, > > > struct vm_area_struct *src_vma, > > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c > > > index 74aad0831e40..4744d6a96f96 100644 > > > --- a/mm/userfaultfd.c > > > +++ b/mm/userfaultfd.c > > > @@ -20,19 +20,11 @@ > > > #include "internal.h" > > > > > > static __always_inline > > > -struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, > > > - unsigned long dst_start, > > > - unsigned long len) > > > +bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end) > > > { > > > - /* > > > - * Make sure that the dst range is both valid and fully within a > > > - * single existing vma. > > > - */ > > > - struct vm_area_struct *dst_vma; > > > - > > > - dst_vma = find_vma(dst_mm, dst_start); > > > - if (!range_in_vma(dst_vma, dst_start, dst_start + len)) > > > - return NULL; > > > + /* Make sure that the dst range is fully within dst_vma. */ > > > + if (dst_end > dst_vma->vm_end) > > > + return false; > > > > > > /* > > > * Check the vma is registered in uffd, this is required to > > > @@ -40,11 +32,122 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm, > > > * time. > > > */ > > > if (!dst_vma->vm_userfaultfd_ctx.ctx) > > > - return NULL; > > > + return false; > > > + > > > + return true; > > > +} > > > + > > > +static __always_inline > > > +struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm, > > > + unsigned long addr) > > > +{ > > > + struct vm_area_struct *vma; > > > + > > > + mmap_assert_locked(mm); > > > + vma = vma_lookup(mm, addr); > > > + if (!vma) > > > + vma = ERR_PTR(-ENOENT); > > > + else if (!(vma->vm_flags & VM_SHARED) && > > > + unlikely(anon_vma_prepare(vma))) > > > + vma = ERR_PTR(-ENOMEM); > > > + > > > + return vma; > > > +} > > > + > > > +#ifdef CONFIG_PER_VMA_LOCK > > > +/* > > > + * lock_vma() - Lookup and lock vma corresponding to @address. > > > + * @mm: mm to search vma in. > > > + * @address: address that the vma should contain. > > > + * > > > + * Should be called without holding mmap_lock. vma should be unlocked after use > > > + * with unlock_vma(). > > > + * > > > + * Return: A locked vma containing @address, -ENOENT if no vma is found, or > > > + * -ENOMEM if anon_vma couldn't be allocated. > > > + */ > > > +static struct vm_area_struct *lock_vma(struct mm_struct *mm, > > > + unsigned long address) > > > +{ > > > + struct vm_area_struct *vma; > > > + > > > + vma = lock_vma_under_rcu(mm, address); > > > + if (vma) { > > > + /* > > > + * lock_vma_under_rcu() only checks anon_vma for private > > > + * anonymous mappings. But we need to ensure it is assigned in > > > + * private file-backed vmas as well. > > > + */ > > > + if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma)) > > > + vma_end_read(vma); > > > + else > > > + return vma; > > > + } > > > + > > > + mmap_read_lock(mm); > > > + vma = find_vma_and_prepare_anon(mm, address); > > > + if (!IS_ERR(vma)) { > > > + /* > > > + * We cannot use vma_start_read() as it may fail due to > > > + * false locked (see comment in vma_start_read()). We > > > + * can avoid that by directly locking vm_lock under > > > + * mmap_lock, which guarantees that nobody can lock the > > > + * vma for write (vma_start_write()) under us. 
> > > + */ > > > + down_read(&vma->vm_lock->lock); > > > + } > > > + > > > + mmap_read_unlock(mm); > > > + return vma; > > > +} > > > + > > > +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm, > > > + unsigned long dst_start, > > > + unsigned long len) > > > +{ > > > + struct vm_area_struct *dst_vma; > > > > > > + dst_vma = lock_vma(dst_mm, dst_start); > > > + if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len)) > > > + return dst_vma; > > > + > > > + vma_end_read(dst_vma); > > > + return ERR_PTR(-ENOENT); > > > +} > > > + > > > +static void uffd_mfill_unlock(struct vm_area_struct *vma) > > > +{ > > > + vma_end_read(vma); > > > +} > > > + > > > +#else > > > + > > > +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm, > > > + unsigned long dst_start, > > > + unsigned long len) > > > +{ > > > + struct vm_area_struct *dst_vma; > > > + > > > + mmap_read_lock(dst_mm); > > > + dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start); > > > + if (IS_ERR(dst_vma)) > > > + goto out_unlock; > > > + > > > + if (validate_dst_vma(dst_vma, dst_start + len)) > > > + return dst_vma; > > > + > > > + dst_vma = ERR_PTR(-ENOENT); > > > +out_unlock: > > > + mmap_read_unlock(dst_mm); > > > return dst_vma; > > > } > > > > > > +static void uffd_mfill_unlock(struct vm_area_struct *vma) > > > +{ > > > + mmap_read_unlock(vma->vm_mm); > > > +} > > > +#endif > > > + > > > /* Check if dst_addr is outside of file's size. Must be called with ptl held. */ > > > static bool mfill_file_over_size(struct vm_area_struct *dst_vma, > > > unsigned long dst_addr) > > > @@ -350,7 +453,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address) > > > #ifdef CONFIG_HUGETLB_PAGE > > > /* > > > * mfill_atomic processing for HUGETLB vmas. Note that this routine is > > > - * called with mmap_lock held, it will release mmap_lock before returning. > > > + * called with either vma-lock or mmap_lock held, it will release the lock > > > + * before returning. > > > */ > > > static __always_inline ssize_t mfill_atomic_hugetlb( > > > struct userfaultfd_ctx *ctx, > > > @@ -361,7 +465,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > > uffd_flags_t flags) > > > { > > > struct mm_struct *dst_mm = dst_vma->vm_mm; > > > - int vm_shared = dst_vma->vm_flags & VM_SHARED; > > > ssize_t err; > > > pte_t *dst_pte; > > > unsigned long src_addr, dst_addr; > > > @@ -380,7 +483,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > > */ > > > if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) { > > > up_read(&ctx->map_changing_lock); > > > - mmap_read_unlock(dst_mm); > > > + uffd_mfill_unlock(dst_vma); > > > return -EINVAL; > > > } > > > > > > @@ -403,24 +506,28 @@ static __always_inline ssize_t mfill_atomic_hugetlb( > > > * retry, dst_vma will be set to NULL and we must lookup again. 
> > >  	 */
> > >  	if (!dst_vma) {
> > > +		dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
> > > +		if (IS_ERR(dst_vma)) {
> > > +			err = PTR_ERR(dst_vma);
> > > +			goto out;
> > > +		}
> > > +
> > >  		err = -ENOENT;
> > > -		dst_vma = find_dst_vma(dst_mm, dst_start, len);
> > > -		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
> > > -			goto out_unlock;
> > > +		if (!is_vm_hugetlb_page(dst_vma))
> > > +			goto out_unlock_vma;
> > >
> > >  		err = -EINVAL;
> > >  		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
> > > -			goto out_unlock;
> > > -
> > > -		vm_shared = dst_vma->vm_flags & VM_SHARED;
> > > -	}
> > > +			goto out_unlock_vma;
> > >
> > > -	/*
> > > -	 * If not shared, ensure the dst_vma has a anon_vma.
> > > -	 */
> > > -	err = -ENOMEM;
> > > -	if (!vm_shared) {
> > > -		if (unlikely(anon_vma_prepare(dst_vma)))
> > > +		/*
> > > +		 * If memory mappings are changing because of non-cooperative
> > > +		 * operation (e.g. mremap) running in parallel, bail out and
> > > +		 * request the user to retry later
> > > +		 */
> > > +		down_read(&ctx->map_changing_lock);
> > > +		err = -EAGAIN;
> > > +		if (atomic_read(&ctx->mmap_changing))
> > >  			goto out_unlock;
> > >  	}
> > >
> > > @@ -465,7 +572,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > >
> > >  		if (unlikely(err == -ENOENT)) {
> > >  			up_read(&ctx->map_changing_lock);
> > > -			mmap_read_unlock(dst_mm);
> > > +			uffd_mfill_unlock(dst_vma);
> > >  			BUG_ON(!folio);
> > >
> > >  			err = copy_folio_from_user(folio,
> > > @@ -474,17 +581,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > >  				err = -EFAULT;
> > >  				goto out;
> > >  			}
> > > -			mmap_read_lock(dst_mm);
> > > -			down_read(&ctx->map_changing_lock);
> > > -			/*
> > > -			 * If memory mappings are changing because of non-cooperative
> > > -			 * operation (e.g. mremap) running in parallel, bail out and
> > > -			 * request the user to retry later
> > > -			 */
> > > -			if (atomic_read(&ctx->mmap_changing)) {
> > > -				err = -EAGAIN;
> > > -				break;
> > > -			}
> > >
> > >  			dst_vma = NULL;
> > >  			goto retry;
> > > @@ -505,7 +601,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> > >
> > >  out_unlock:
> > >  	up_read(&ctx->map_changing_lock);
> > > -	mmap_read_unlock(dst_mm);
> > > +out_unlock_vma:
> > > +	uffd_mfill_unlock(dst_vma);
> > >  out:
> > >  	if (folio)
> > >  		folio_put(folio);
> > > @@ -597,7 +694,15 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  	copied = 0;
> > >  	folio = NULL;
> > >  retry:
> > > -	mmap_read_lock(dst_mm);
> > > +	/*
> > > +	 * Make sure the vma is not shared, that the dst range is
> > > +	 * both valid and fully within a single existing vma.
> > > +	 */
> > > +	dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
> > > +	if (IS_ERR(dst_vma)) {
> > > +		err = PTR_ERR(dst_vma);
> > > +		goto out;
> > > +	}
> > >
> > >  	/*
> > >  	 * If memory mappings are changing because of non-cooperative
> > > @@ -609,15 +714,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  	if (atomic_read(&ctx->mmap_changing))
> > >  		goto out_unlock;
> > >
> > > -	/*
> > > -	 * Make sure the vma is not shared, that the dst range is
> > > -	 * both valid and fully within a single existing vma.
> > > -	 */
> > > -	err = -ENOENT;
> > > -	dst_vma = find_dst_vma(dst_mm, dst_start, len);
> > > -	if (!dst_vma)
> > > -		goto out_unlock;
> > > -
> > >  	err = -EINVAL;
> > >  	/*
> > >  	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
> > > @@ -647,16 +743,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  		   uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> > >  		goto out_unlock;
> > >
> > > -	/*
> > > -	 * Ensure the dst_vma has a anon_vma or this page
> > > -	 * would get a NULL anon_vma when moved in the
> > > -	 * dst_vma.
> > > -	 */
> > > -	err = -ENOMEM;
> > > -	if (!(dst_vma->vm_flags & VM_SHARED) &&
> > > -	    unlikely(anon_vma_prepare(dst_vma)))
> > > -		goto out_unlock;
> > > -
> > >  	while (src_addr < src_start + len) {
> > >  		pmd_t dst_pmdval;
> > >
> > > @@ -699,7 +785,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  			void *kaddr;
> > >
> > >  			up_read(&ctx->map_changing_lock);
> > > -			mmap_read_unlock(dst_mm);
> > > +			uffd_mfill_unlock(dst_vma);
> > >  			BUG_ON(!folio);
> > >
> > >  			kaddr = kmap_local_folio(folio, 0);
> > > @@ -730,7 +816,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >
> > >  out_unlock:
> > >  	up_read(&ctx->map_changing_lock);
> > > -	mmap_read_unlock(dst_mm);
> > > +	uffd_mfill_unlock(dst_vma);
> > >  out:
> > >  	if (folio)
> > >  		folio_put(folio);
> > > @@ -1267,27 +1353,136 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
> > >  	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> > >  		return -EINVAL;
> > >
> > > +	return 0;
> > > +}
> > > +
> > > +static __always_inline
> > > +int find_vmas_mm_locked(struct mm_struct *mm,
> > > +			unsigned long dst_start,
> > > +			unsigned long src_start,
> > > +			struct vm_area_struct **dst_vmap,
> > > +			struct vm_area_struct **src_vmap)
> > > +{
> > > +	struct vm_area_struct *vma;
> > > +
> > > +	mmap_assert_locked(mm);
> > > +	vma = find_vma_and_prepare_anon(mm, dst_start);
> > > +	if (IS_ERR(vma))
> > > +		return PTR_ERR(vma);
> > > +
> > > +	*dst_vmap = vma;
> > > +	/* Skip finding src_vma if src_start is in dst_vma */
> > > +	if (src_start >= vma->vm_start && src_start < vma->vm_end)
> > > +		goto out_success;
> > > +
> > > +	vma = vma_lookup(mm, src_start);
> > > +	if (!vma)
> > > +		return -ENOENT;
> > > +out_success:
> > > +	*src_vmap = vma;
> > > +	return 0;
> > > +}
> > > +
> > > +#ifdef CONFIG_PER_VMA_LOCK
> > > +static int uffd_move_lock(struct mm_struct *mm,
> > > +			  unsigned long dst_start,
> > > +			  unsigned long src_start,
> > > +			  struct vm_area_struct **dst_vmap,
> > > +			  struct vm_area_struct **src_vmap)
> > > +{
> > > +	struct vm_area_struct *vma;
> > > +	int err;
> > > +
> > > +	vma = lock_vma(mm, dst_start);
> > > +	if (IS_ERR(vma))
> > > +		return PTR_ERR(vma);
> > > +
> > > +	*dst_vmap = vma;
> > >  	/*
> > > -	 * Ensure the dst_vma has a anon_vma or this page
> > > -	 * would get a NULL anon_vma when moved in the
> > > -	 * dst_vma.
> > > +	 * Skip finding src_vma if src_start is in dst_vma. This also ensures
> > > +	 * that we don't lock the same vma twice.
> > >  	 */
> > > -	if (unlikely(anon_vma_prepare(dst_vma)))
> > > -		return -ENOMEM;
> > > +	if (src_start >= vma->vm_start && src_start < vma->vm_end) {
> > > +		*src_vmap = vma;
> > > +		return 0;
> > > +	}
> > >
> > > -	return 0;
> > > +	/*
> > > +	 * Using lock_vma() to get src_vma can lead to following deadlock:
> > > +	 *
> > > +	 * Thread1				Thread2
> > > +	 * -------				-------
> > > +	 * vma_start_read(dst_vma)
> > > +	 *					mmap_write_lock(mm)
> > > +	 *					vma_start_write(src_vma)
> > > +	 * vma_start_read(src_vma)
> > > +	 * mmap_read_lock(mm)
> > > +	 *					vma_start_write(dst_vma)
> > > +	 */
> > > +	*src_vmap = lock_vma_under_rcu(mm, src_start);
> > > +	if (likely(*src_vmap))
> > > +		return 0;
> > > +
> > > +	/* Undo any locking and retry in mmap_lock critical section */
> > > +	vma_end_read(*dst_vmap);
> > > +
> > > +	mmap_read_lock(mm);
> > > +	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
> > > +	if (!err) {
> > > +		/*
> > > +		 * See comment in lock_vma() as to why not using
> > > +		 * vma_start_read() here.
> > > +		 */
> > > +		down_read(&(*dst_vmap)->vm_lock->lock);
> > > +		if (*dst_vmap != *src_vmap)
> > > +			down_read(&(*src_vmap)->vm_lock->lock);
> > > +	}
> > > +	mmap_read_unlock(mm);
> > > +	return err;
> > > +}
> > > +
> > > +static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> > > +			     struct vm_area_struct *src_vma)
> > > +{
> > > +	vma_end_read(src_vma);
> > > +	if (src_vma != dst_vma)
> > > +		vma_end_read(dst_vma);
> > >  }
> > >
> > > +#else
> > > +
> > > +static int uffd_move_lock(struct mm_struct *mm,
> > > +			  unsigned long dst_start,
> > > +			  unsigned long src_start,
> > > +			  struct vm_area_struct **dst_vmap,
> > > +			  struct vm_area_struct **src_vmap)
> > > +{
> > > +	int err;
> > > +
> > > +	mmap_read_lock(mm);
> > > +	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
> > > +	if (err)
> > > +		mmap_read_unlock(mm);
> > > +	return err;
> > > +}
> > > +
> > > +static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> > > +			     struct vm_area_struct *src_vma)
> > > +{
> > > +	mmap_assert_locked(src_vma->vm_mm);
> > > +	mmap_read_unlock(dst_vma->vm_mm);
> > > +}
> > > +#endif
> > > +
> > >  /**
> > >   * move_pages - move arbitrary anonymous pages of an existing vma
> > >   * @ctx: pointer to the userfaultfd context
> > > - * @mm: the address space to move pages
> > >   * @dst_start: start of the destination virtual memory range
> > >   * @src_start: start of the source virtual memory range
> > >   * @len: length of the virtual memory range
> > >   * @mode: flags from uffdio_move.mode
> > >   *
> > > - * Must be called with mmap_lock held for read.
> > > + * It will either use the mmap_lock in read mode or per-vma locks
> > >   *
> > >   * move_pages() remaps arbitrary anonymous pages atomically in zero
> > >   * copy. It only works on non shared anonymous pages because those can
> > > @@ -1355,10 +1550,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
> > >   * could be obtained. This is the only additional complexity added to
> > >   * the rmap code to provide this anonymous page remapping functionality.
> > >   */
> > > -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > > -		   unsigned long dst_start, unsigned long src_start,
> > > -		   unsigned long len, __u64 mode)
> > > +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
> > > +		   unsigned long src_start, unsigned long len, __u64 mode)
> > >  {
> > > +	struct mm_struct *mm = ctx->mm;
> > >  	struct vm_area_struct *src_vma, *dst_vma;
> > >  	unsigned long src_addr, dst_addr;
> > >  	pmd_t *src_pmd, *dst_pmd;
> > > @@ -1376,28 +1571,34 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > >  	    WARN_ON_ONCE(dst_start + len <= dst_start))
> > >  		goto out;
> > >
> > > +	err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma);
> > > +	if (err)
> > > +		goto out;
> > > +
> > > +	/* Re-check after taking map_changing_lock */
> > > +	err = -EAGAIN;
> > > +	down_read(&ctx->map_changing_lock);
> > > +	if (likely(atomic_read(&ctx->mmap_changing)))
> > > +		goto out_unlock;
> > >  	/*
> > >  	 * Make sure the vma is not shared, that the src and dst remap
> > >  	 * ranges are both valid and fully within a single existing
> > >  	 * vma.
> > >  	 */
> > > -	src_vma = find_vma(mm, src_start);
> > > -	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > > -		goto out;
> > > -	if (src_start < src_vma->vm_start ||
> > > -	    src_start + len > src_vma->vm_end)
> > > -		goto out;
> > > +	err = -EINVAL;
> > > +	if (src_vma->vm_flags & VM_SHARED)
> > > +		goto out_unlock;
> > > +	if (src_start + len > src_vma->vm_end)
> > > +		goto out_unlock;
> > >
> > > -	dst_vma = find_vma(mm, dst_start);
> > > -	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > > -		goto out;
> > > -	if (dst_start < dst_vma->vm_start ||
> > > -	    dst_start + len > dst_vma->vm_end)
> > > -		goto out;
> > > +	if (dst_vma->vm_flags & VM_SHARED)
> > > +		goto out_unlock;
> > > +	if (dst_start + len > dst_vma->vm_end)
> > > +		goto out_unlock;
> > >
> > >  	err = validate_move_areas(ctx, src_vma, dst_vma);
> > >  	if (err)
> > > -		goto out;
> > > +		goto out_unlock;
> > >
> > >  	for (src_addr = src_start, dst_addr = dst_start;
> > >  	     src_addr < src_start + len;) {
> > > @@ -1514,6 +1715,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
> > >  		moved += step_size;
> > >  	}
> > >
> > > +out_unlock:
> > > +	up_read(&ctx->map_changing_lock);
> > > +	uffd_move_unlock(dst_vma, src_vma);
> > >  out:
> > >  	VM_WARN_ON(moved < 0);
> > >  	VM_WARN_ON(err > 0);
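The -EAGAIN paths above define the contract userspace must honor: if a non-cooperative event (e.g. mremap) is in flight, the operation bails out and the caller retries once the event has been consumed. A minimal sketch of such a retry loop around UFFDIO_MOVE (illustrative only: 'uffd' is assumed to be a userfaultfd already registered over both ranges, and partial completion is assumed to be reported in the 'move' field, per the usual uffdio convention):

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	/* Keep issuing UFFDIO_MOVE until the whole range has been moved. */
	static int uffd_move_all(int uffd, __u64 dst, __u64 src, __u64 len)
	{
		while (len) {
			struct uffdio_move mv = {
				.dst = dst, .src = src, .len = len, .mode = 0,
			};

			if (ioctl(uffd, UFFDIO_MOVE, &mv) == 0)
				return 0;
			if (errno != EAGAIN)
				return -1;	/* hard error */
			if (mv.move > 0) {	/* partial progress before -EAGAIN */
				dst += mv.move;
				src += mv.move;
				len -= mv.move;
			}
			/* else: mappings were changing; drain events, then retry */
		}
		return 0;
	}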
* Re: [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations
  2024-02-14 22:27           ` Lokesh Gidra
@ 2024-02-14 22:33             ` Ryan Roberts
  0 siblings, 0 replies; 9+ messages in thread
From: Ryan Roberts @ 2024-02-14 22:33 UTC (permalink / raw)
  To: Lokesh Gidra, Suren Baghdasaryan
  Cc: akpm, linux-fsdevel, linux-mm, linux-kernel, selinux, kernel-team,
	aarcange, peterx, david, axelrasmussen, bgeffon, willy, jannh,
	kaleshsingh, ngeoffray, timmurray, rppt, Liam.Howlett

On 14/02/2024 22:27, Lokesh Gidra wrote:
> On Wed, Feb 14, 2024 at 2:20 PM Suren Baghdasaryan <surenb@google.com> wrote:
>>
>> On Wed, Feb 14, 2024 at 2:12 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>
>>> On 13/02/2024 21:57, Lokesh Gidra wrote:
>>>> All userfaultfd operations, except write-protect, opportunistically use
>>>> per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
>>>> critical section.
>>>>
>>>> Write-protect operation requires mmap_lock as it iterates over multiple
>>>> vmas.
>>>
>>> Hi,
>>>
>>> I'm seeing the below OOPS when running on arm64 against mm-unstable. It
>>> can be reliably reproduced by running the `uffd-unit-tests` mm selftest.
>>> Bisecting mm-unstable leads to this patch:
>>>
>>> # bad: [649936c3db47f8f75b9b927e4edf5922a0f240a6] mm: add swappiness= arg to memory.reclaim
>>> git bisect bad 649936c3db47f8f75b9b927e4edf5922a0f240a6
>>> # good: [54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478] Linux 6.8-rc3
>>> git bisect good 54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478
>>> # good: [10794cd18bb46c91c75fac44e551201cfe006baf] mm: zswap: warn when referencing a dead entry
>>> git bisect good 10794cd18bb46c91c75fac44e551201cfe006baf
>>> # good: [cb769b427edc7f46c7c764daa3421725bffdf315] mm-memcg-use-larger-batches-for-proactive-reclaim-v4
>>> git bisect good cb769b427edc7f46c7c764daa3421725bffdf315
>>> # good: [cfc5c1be4010c9972bc3f3d991235e8ea6928672] mm/z3fold: remove unneeded spinlock
>>> git bisect good cfc5c1be4010c9972bc3f3d991235e8ea6928672
>>> # good: [31094bce101651acb4747ab25d614bc893d65c89] kasan/test: avoid gcc warning for intentional overflow
>>> git bisect good 31094bce101651acb4747ab25d614bc893d65c89
>>> # bad: [55be0b2cd1fbf00e036e2e48ee0999599135af66] zram: do not allocate physically contiguous strm buffers
>>> git bisect bad 55be0b2cd1fbf00e036e2e48ee0999599135af66
>>> # good: [b11ca4a0a13024c0175dde56f9bd848803eddcd2] mm/mglru: improve swappiness handling
>>> git bisect good b11ca4a0a13024c0175dde56f9bd848803eddcd2
>>> # good: [22e7ccd57a1220afcd7c4da1f3005fd04d70014e] userfaultfd: move userfaultfd_ctx struct to header file
>>> git bisect good 22e7ccd57a1220afcd7c4da1f3005fd04d70014e
>>> # bad: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations
>>> git bisect bad 0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d
>>> # good: [8459e1c7acbe4442c6c0eef59825da1339e0a3cf] userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx
>>> git bisect good 8459e1c7acbe4442c6c0eef59825da1339e0a3cf
>>> # first bad commit: [0a0d05338f13e64c9fb7ccd8f8d1793aaf33ec7d] userfaultfd: use per-vma locks in userfaultfd operations
>>>
>>> This is the oops:
>>
>> That's a call to mmap_assert_locked() from inside
>> move_pages_huge_pmd(). Since move_pages() now runs under the VMA lock,
>> this call should be replaced with vma_assert_locked().
>>
> So sorry for missing this. I missed testing with CONFIG_DEBUG_VM
> turned on. I'll fix and thoroughly test before sending the next
> version.

No worries, I've reverted this patch in my branch and that's unblocked me,
so don't sweat it.

>
>>>
>>> [ 21.280142] mm ffff049340ed0000 task_size 281474976710656
>>> [ 21.280142] get_unmapped_area ffffb3dbdc725e48
>>> [ 21.280142] mmap_base 281474842492928 mmap_legacy_base 0
>>> [ 21.280142] pgd ffff04935829a000 mm_users 3 mm_count 4 pgtables_bytes 98304 map_count 21
>>> [ 21.280142] hiwater_rss 2167 hiwater_vm 8234 total_vm 4a44 locked_vm 0
>>> [ 21.280142] pinned_vm 0 data_vm 4835 exec_vm 1c6 stack_vm 21
>>> [ 21.280142] start_code aaaaaaaa0000 end_code aaaaaaab1b28 start_data aaaaaaac2a60 end_data aaaaaaac3410
>>> [ 21.280142] start_brk aaaaaaac5000 brk aaaaaaae6000 start_stack fffffffff6b0
>>> [ 21.280142] arg_start fffffffff8c7 arg_end fffffffff8e2 env_start fffffffff8e2 env_end ffffffffffdd
>>> [ 21.280142] binfmt ffffb3dbdf2f8cf8 flags 82008d
>>> [ 21.280142] ioctx_table 0000000000000000
>>> [ 21.280142] owner ffff049311f02280 exe_file ffff049355314c00
>>> [ 21.280142] notifier_subscriptions 0000000000000000
>>> [ 21.280142] numa_next_scan 4294897864 numa_scan_offset 0 numa_scan_seq 0
>>> [ 21.280142] tlb_flush_pending 0
>>> [ 21.280142] def_flags: 0x0()
>>> [ 21.283302] kernel BUG at include/linux/mmap_lock.h:66!
>>> [ 21.283481] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
>>> [ 21.283720] Modules linked in:
>>> [ 21.283867] CPU: 3 PID: 1226 Comm: uffd-unit-tests Not tainted 6.8.0-rc3-00297-g0a0d05338f13 #19
>>> [ 21.284306] Hardware name: linux,dummy-virt (DT)
>>> [ 21.284495] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
>>> [ 21.284833] pc : move_pages_huge_pmd+0x4d0/0x8a8
>>> [ 21.285072] lr : move_pages_huge_pmd+0x4d0/0x8a8
>>> [ 21.285289] sp : ffff800088573b30
>>> [ 21.285439] x29: ffff800088573b30 x28: ffff049358292d78 x27: fffffc0000000000
>>> [ 21.285770] x26: ffff049358292d78 x25: 0000fffff5e00000 x24: ffff049359db3730
>>> [ 21.286097] x23: 0000fffff6000000 x22: ffff049340ed0000 x21: ffff049359db3678
>>> [ 21.286422] x20: 0000000000000002 x19: fffffc124d60a480 x18: 0000000000000006
>>> [ 21.286747] x17: 626c740a30207165 x16: 735f6e6163735f61 x15: 6d756e2030207465
>>> [ 21.287084] x14: 7366666f5f6e6163 x13: 2928307830203a73 x12: 67616c665f666564
>>> [ 21.287417] x11: 0a3020676e69646e x10: ffffb3dbdf28bff8 x9 : ffffb3dbdc620f80
>>> [ 21.287745] x8 : 00000000ffffefff x7 : ffffb3dbdf28bff8 x6 : 80000000fffff000
>>> [ 21.288082] x5 : ffff04933ffdcd08 x4 : 0000000000000000 x3 : 0000000000000000
>>> [ 21.288364] x2 : 0000000000000000 x1 : ffff049340106780 x0 : 0000000000000328
>>> [ 21.288712] Call trace:
>>> [ 21.288832]  move_pages_huge_pmd+0x4d0/0x8a8
>>> [ 21.289058]  move_pages+0x8b8/0x13d8
>>> [ 21.289205]  userfaultfd_ioctl+0x11e0/0x1e90
>>> [ 21.289411]  __arm64_sys_ioctl+0xb4/0x100
>>> [ 21.289579]  invoke_syscall+0x50/0x128
>>> [ 21.289738]  el0_svc_common.constprop.0+0x48/0xf0
>>> [ 21.289942]  do_el0_svc+0x24/0x38
>>> [ 21.290078]  el0_svc+0x34/0xb8
>>> [ 21.290219]  el0t_64_sync_handler+0x100/0x130
>>> [ 21.290387]  el0t_64_sync+0x190/0x198
>>> [ 21.290535] Code: 17ffff7e a90363f7 a9046bf9 97fde2a7 (d4210000)
>>> [ 21.290776] ---[ end trace 0000000000000000 ]---
>>> [ 21.290966] note: uffd-unit-tests[1226] exited with irqs disabled
>>> [ 21.291254] note: uffd-unit-tests[1226] exited with preempt_count 2
>>> [ 21.292864] ------------[ cut here ]------------
>>> [ 21.293020] WARNING: CPU: 3 PID: 0 at kernel/context_tracking.c:128 ct_kernel_exit.constprop.0+0x108/0x120
>>> [ 21.293711] Modules linked in:
>>> [ 21.294070] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G D 6.8.0-rc3-00297-g0a0d05338f13 #19
>>> [ 21.295447] Hardware name: linux,dummy-virt (DT)
>>> [ 21.295850] pstate: 214003c5 (nzCv DAIF +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
>>> [ 21.296152] pc : ct_kernel_exit.constprop.0+0x108/0x120
>>> [ 21.296685] lr : ct_idle_enter+0x10/0x20
>>> [ 21.296848] sp : ffff8000801b3dc0
>>> [ 21.296983] x29: ffff8000801b3dc0 x28: 0000000000000000 x27: 0000000000000000
>>> [ 21.297255] x26: 0000000000000000 x25: ffff049301496780 x24: 0000000000000000
>>> [ 21.297533] x23: 0000000000000000 x22: ffff049301496780 x21: ffffb3dbdf209ae0
>>> [ 21.297844] x20: ffffb3dbdf209a20 x19: ffff04933ffeace0 x18: ffff800088573648
>>> [ 21.298152] x17: ffffb3dbdf65def0 x16: 00000000d72145a7 x15: 00000000f1a17b35
>>> [ 21.298439] x14: 0000000000000004 x13: ffffb3dbdf22c3a8 x12: 0000000000000000
>>> [ 21.298738] x11: ffff0493541472a8 x10: 2b58349b7608bfc4 x9 : ffffb3dbdc57f43c
>>> [ 21.299031] x8 : ffff049301497858 x7 : 00000000000001c1 x6 : 000000001eeeb49c
>>> [ 21.299329] x5 : 4000000000000002 x4 : ffff50b7618d7000 x3 : ffff8000801b3dc0
>>> [ 21.299615] x2 : ffffb3dbde713ce0 x1 : 4000000000000000 x0 : ffffb3dbde713ce0
>>> [ 21.299900] Call trace:
>>> [ 21.299998]  ct_kernel_exit.constprop.0+0x108/0x120
>>> [ 21.300460]  ct_idle_enter+0x10/0x20
>>> [ 21.300690]  default_idle_call+0x3c/0x170
>>> [ 21.300861]  do_idle+0x218/0x278
>>> [ 21.300992]  cpu_startup_entry+0x3c/0x50
>>> [ 21.301148]  secondary_start_kernel+0x130/0x158
>>> [ 21.301329]  __secondary_switched+0xb8/0xc0
>>> [ 21.301641] ---[ end trace 0000000000000000 ]---
>>>
>>>
>>> Can this series be removed from mm-unstable until fixed, please?
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>>>
>>>> Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
>>>> ---
>>>>  fs/userfaultfd.c              |  13 +-
>>>>  include/linux/userfaultfd_k.h |   5 +-
>>>>  mm/userfaultfd.c              | 380 ++++++++++++++++++++++++++--------
>>>>  3 files changed, 296 insertions(+), 102 deletions(-)
>>>>
>>>> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
>>>> index c00a021bcce4..60dcfafdc11a 100644
>>>> --- a/fs/userfaultfd.c
>>>> +++ b/fs/userfaultfd.c
>>>> @@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
>>>>  		return -EINVAL;
>>>>
>>>>  	if (mmget_not_zero(mm)) {
>>>> -		mmap_read_lock(mm);
>>>> -
>>>> -		/* Re-check after taking map_changing_lock */
>>>> -		down_read(&ctx->map_changing_lock);
>>>> -		if (likely(!atomic_read(&ctx->mmap_changing)))
>>>> -			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
>>>> -					 uffdio_move.len, uffdio_move.mode);
>>>> -		else
>>>> -			ret = -EAGAIN;
>>>> -		up_read(&ctx->map_changing_lock);
>>>> -		mmap_read_unlock(mm);
>>>> +		ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src,
>>>> +				 uffdio_move.len, uffdio_move.mode);
>>>>  		mmput(mm);
>>>>  	} else {
>>>>  		return -ESRCH;
>>>> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
>>>> index 3210c3552976..05d59f74fc88 100644
>>>> --- a/include/linux/userfaultfd_k.h
>>>> +++ b/include/linux/userfaultfd_k.h
>>>> @@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma,
>>>>  /* move_pages */
>>>>  void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
>>>>  void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
>>>> -ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
>>>> -		   unsigned long dst_start, unsigned long src_start,
>>>> -		   unsigned long len, __u64 flags);
>>>> +ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
>>>> +		   unsigned long src_start, unsigned long len, __u64 flags);
>>>>  int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
>>>>  			struct vm_area_struct *dst_vma,
>>>>  			struct vm_area_struct *src_vma,
>>>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>>>> index 74aad0831e40..4744d6a96f96 100644
>>>> --- a/mm/userfaultfd.c
>>>> +++ b/mm/userfaultfd.c
>>>> @@ -20,19 +20,11 @@
>>>>  #include "internal.h"
>>>>
>>>>  static __always_inline
>>>> -struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
>>>> -				    unsigned long dst_start,
>>>> -				    unsigned long len)
>>>> +bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
>>>>  {
>>>> -	/*
>>>> -	 * Make sure that the dst range is both valid and fully within a
>>>> -	 * single existing vma.
>>>> -	 */
>>>> -	struct vm_area_struct *dst_vma;
>>>> -
>>>> -	dst_vma = find_vma(dst_mm, dst_start);
>>>> -	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
>>>> -		return NULL;
>>>> +	/* Make sure that the dst range is fully within dst_vma. */
>>>> +	if (dst_end > dst_vma->vm_end)
>>>> +		return false;
>>>>
>>>>  	/*
>>>>  	 * Check the vma is registered in uffd, this is required to
>>>> @@ -40,11 +32,122 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
>>>>  	 * time.
>>>>  	 */
>>>>  	if (!dst_vma->vm_userfaultfd_ctx.ctx)
>>>> -		return NULL;
>>>> +		return false;
>>>> +
>>>> +	return true;
>>>> +}
>>>> +
>>>> +static __always_inline
>>>> +struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
>>>> +						 unsigned long addr)
>>>> +{
>>>> +	struct vm_area_struct *vma;
>>>> +
>>>> +	mmap_assert_locked(mm);
>>>> +	vma = vma_lookup(mm, addr);
>>>> +	if (!vma)
>>>> +		vma = ERR_PTR(-ENOENT);
>>>> +	else if (!(vma->vm_flags & VM_SHARED) &&
>>>> +		 unlikely(anon_vma_prepare(vma)))
>>>> +		vma = ERR_PTR(-ENOMEM);
>>>> +
>>>> +	return vma;
>>>> +}
>>>> +
>>>> +#ifdef CONFIG_PER_VMA_LOCK
>>>> +/*
>>>> + * lock_vma() - Lookup and lock vma corresponding to @address.
>>>> + * @mm: mm to search vma in.
>>>> + * @address: address that the vma should contain.
>>>> + *
>>>> + * Should be called without holding mmap_lock. vma should be unlocked after use
>>>> + * with unlock_vma().
>>>> + *
>>>> + * Return: A locked vma containing @address, -ENOENT if no vma is found, or
>>>> + * -ENOMEM if anon_vma couldn't be allocated.
>>>> + */
>>>> +static struct vm_area_struct *lock_vma(struct mm_struct *mm,
>>>> +				       unsigned long address)
>>>> +{
>>>> +	struct vm_area_struct *vma;
>>>> +
>>>> +	vma = lock_vma_under_rcu(mm, address);
>>>> +	if (vma) {
>>>> +		/*
>>>> +		 * lock_vma_under_rcu() only checks anon_vma for private
>>>> +		 * anonymous mappings. But we need to ensure it is assigned in
>>>> +		 * private file-backed vmas as well.
>>>> +		 */
>>>> +		if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
>>>> +			vma_end_read(vma);
>>>> +		else
>>>> +			return vma;
>>>> +	}
>>>> +
>>>> +	mmap_read_lock(mm);
>>>> +	vma = find_vma_and_prepare_anon(mm, address);
>>>> +	if (!IS_ERR(vma)) {
>>>> +		/*
>>>> +		 * We cannot use vma_start_read() as it may fail due to
>>>> +		 * false locked (see comment in vma_start_read()). We
>>>> +		 * can avoid that by directly locking vm_lock under
>>>> +		 * mmap_lock, which guarantees that nobody can lock the
>>>> +		 * vma for write (vma_start_write()) under us.
>>>> +		 */
>>>> +		down_read(&vma->vm_lock->lock);
>>>> +	}
>>>> +
>>>> +	mmap_read_unlock(mm);
>>>> +	return vma;
>>>> +}
>>>> +
>>>> +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
>>>> +					      unsigned long dst_start,
>>>> +					      unsigned long len)
>>>> +{
>>>> +	struct vm_area_struct *dst_vma;
>>>>
>>>> +	dst_vma = lock_vma(dst_mm, dst_start);
>>>> +	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
>>>> +		return dst_vma;
>>>> +
>>>> +	vma_end_read(dst_vma);
>>>> +	return ERR_PTR(-ENOENT);
>>>> +}
>>>> +
>>>> +static void uffd_mfill_unlock(struct vm_area_struct *vma)
>>>> +{
>>>> +	vma_end_read(vma);
>>>> +}
>>>> +
>>>> +#else
>>>> +
>>>> +static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
>>>> +					      unsigned long dst_start,
>>>> +					      unsigned long len)
>>>> +{
>>>> +	struct vm_area_struct *dst_vma;
>>>> +
>>>> +	mmap_read_lock(dst_mm);
>>>> +	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
>>>> +	if (IS_ERR(dst_vma))
>>>> +		goto out_unlock;
>>>> +
>>>> +	if (validate_dst_vma(dst_vma, dst_start + len))
>>>> +		return dst_vma;
>>>> +
>>>> +	dst_vma = ERR_PTR(-ENOENT);
>>>> +out_unlock:
>>>> +	mmap_read_unlock(dst_mm);
>>>>  	return dst_vma;
>>>>  }
>>>>
>>>> +static void uffd_mfill_unlock(struct vm_area_struct *vma)
>>>> +{
>>>> +	mmap_read_unlock(vma->vm_mm);
>>>> +}
>>>> +#endif
>>>> +
>>>>  /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
>>>>  static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
>>>>  				 unsigned long dst_addr)
>>>> @@ -350,7 +453,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
>>>>  #ifdef CONFIG_HUGETLB_PAGE
>>>>  /*
>>>>   * mfill_atomic processing for HUGETLB vmas.  Note that this routine is
>>>> - * called with mmap_lock held, it will release mmap_lock before returning.
>>>> + * called with either vma-lock or mmap_lock held, it will release the lock
>>>> + * before returning.
>>>>   */
>>>>  static __always_inline ssize_t mfill_atomic_hugetlb(
>>>>  	struct userfaultfd_ctx *ctx,
>>>> @@ -361,7 +465,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>>>>  	uffd_flags_t flags)
>>>>  {
>>>>  	struct mm_struct *dst_mm = dst_vma->vm_mm;
>>>> -	int vm_shared = dst_vma->vm_flags & VM_SHARED;
>>>>  	ssize_t err;
>>>>  	pte_t *dst_pte;
>>>>  	unsigned long src_addr, dst_addr;
>>>> @@ -380,7 +483,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>>>>  	 */
>>>>  	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
>>>>  		up_read(&ctx->map_changing_lock);
>>>> -		mmap_read_unlock(dst_mm);
>>>> +		uffd_mfill_unlock(dst_vma);
>>>>  		return -EINVAL;
>>>>  	}
>>>>
>>>> [...]
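The fix Suren identifies above is mechanical: move_pages_huge_pmd() can no longer assume mmap_lock is held, because move_pages() may now reach it holding only the per-VMA locks of the two vmas. A sketch of the implied change (a hypothetical hunk for illustration; the actual respin may place and spell it differently) — vma_assert_locked() is satisfied by the vma's own lock and, when per-VMA locks are disabled, falls back to the mmap_lock assertion:

	--- a/mm/huge_memory.c
	+++ b/mm/huge_memory.c
	@@ ... @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
	-	mmap_assert_locked(mm);
	+	/* Holds under both locking schemes used by move_pages(). */
	+	vma_assert_locked(src_vma);
	+	vma_assert_locked(dst_vma);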
* Re: [PATCH v6 0/3] per-vma locks in userfaultfd
  2024-02-13 21:57 [PATCH v6 0/3] per-vma locks in userfaultfd Lokesh Gidra
                   ` (2 preceding siblings ...)
  2024-02-13 21:57 ` [PATCH v6 3/3] userfaultfd: use per-vma locks in userfaultfd operations Lokesh Gidra
@ 2024-02-14 15:17 ` Liam R. Howlett
  3 siblings, 0 replies; 9+ messages in thread
From: Liam R. Howlett @ 2024-02-14 15:17 UTC (permalink / raw)
  To: Lokesh Gidra
  Cc: akpm, linux-fsdevel, linux-mm, linux-kernel, selinux, surenb,
	kernel-team, aarcange, peterx, david, axelrasmussen, bgeffon,
	willy, jannh, kaleshsingh, ngeoffray, timmurray, rppt

* Lokesh Gidra <lokeshgidra@google.com> [240213 16:57]:
> Performing userfaultfd operations (like copy/move etc.) in critical
> section of mmap_lock (read-mode) causes significant contention on the
> lock when operations requiring the lock in write-mode are taking place
> concurrently. We can use per-vma locks instead to significantly reduce
> the contention issue.
>
> Android runtime's Garbage Collector uses userfaultfd for concurrent
> compaction. mmap-lock contention during compaction potentially causes
> jittery experience for the user. During one such reproducible scenario,
> we observed the following improvements with this patch-set:
>
> - Wall clock time of compaction phase came down from ~3s to <500ms
> - Uninterruptible sleep time (across all threads in the process) was
>   ~10ms (none in mmap_lock) during compaction, instead of >20s

This series looks good, thanks!

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> [...]
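The rw_sem named in patch 2/3's title is what makes the "Re-check after taking map_changing_lock" step in patch 3/3 race-free: non-cooperative events flip mmap_changing under the write lock, while the copy/move paths hold the read lock across the whole operation, so an event can never slip in after the check. Condensed to its essentials (a fragment using the series' names; error paths and the actual work are elided):

	/* Non-cooperative event side (e.g. an mremap notification): */
	down_write(&ctx->map_changing_lock);
	atomic_inc(&ctx->mmap_changing);
	up_write(&ctx->map_changing_lock);

	/* Operation side (mfill/move), after the vma(s) are locked: */
	down_read(&ctx->map_changing_lock);
	if (atomic_read(&ctx->mmap_changing)) {
		up_read(&ctx->map_changing_lock);
		return -EAGAIN;		/* userspace retries later */
	}
	/* ... perform the userfaultfd operation under the read lock ... */
	up_read(&ctx->map_changing_lock);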