From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
    ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
    Liam.Howlett@oracle.com, ryan.roberts@arm.com
Subject: [PATCH v7 2/4] userfaultfd: protect mmap_changing with rw_sem in userfaultfd_ctx
Date: Thu, 15 Feb 2024 10:27:54 -0800
Message-ID: <20240215182756.3448972-3-lokeshgidra@google.com>
In-Reply-To: <20240215182756.3448972-1-lokeshgidra@google.com>
References: <20240215182756.3448972-1-lokeshgidra@google.com>
Increments and loads to mmap_changing are always done within the
mmap_lock critical section. This ensures that if userspace requests
event notification for non-cooperative operations (e.g. mremap),
userfaultfd operations don't occur concurrently.

The same serialization can be achieved with a separate read-write
semaphore in userfaultfd_ctx, such that increments are done in
write-mode and loads in read-mode, thereby eliminating the dependency
on mmap_lock for this purpose.

This is a preparatory step before we replace mmap_lock usage with
per-vma locks in fill/move ioctls.
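
To illustrate, the resulting protocol looks roughly like the sketch
below. uffd_event() and uffd_op() are hypothetical helpers standing in
for the real call sites in the diff; the decrement of mmap_changing
once an event has been fully handled is unchanged by this patch:

  /* Non-cooperative event side (fork, mremap, remove, unmap): */
  static void uffd_event(struct userfaultfd_ctx *ctx)
  {
  	down_write(&ctx->map_changing_lock);
  	atomic_inc(&ctx->mmap_changing);
  	up_write(&ctx->map_changing_lock);
  	/* ... notify userspace; mmap_changing is decremented once
  	 * the event has been fully handled ... */
  }

  /* Operation side (fill/move/wp ioctls): */
  static int uffd_op(struct userfaultfd_ctx *ctx)
  {
  	int err = -EAGAIN;

  	down_read(&ctx->map_changing_lock);
  	if (!atomic_read(&ctx->mmap_changing)) {
  		/* ... do the copy/zeropage/continue/poison/move/wp
  		 * work; holding the lock in read-mode keeps any new
  		 * non-cooperative event from starting ... */
  		err = 0;
  	}
  	up_read(&ctx->map_changing_lock);
  	return err;
  }

With this, readers of mmap_changing no longer rely on mmap_lock being
held.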

Signed-off-by: Lokesh Gidra
Reviewed-by: Mike Rapoport (IBM)
Reviewed-by: Liam R. Howlett
---
 fs/userfaultfd.c              | 40 ++++++++++++----------
 include/linux/userfaultfd_k.h | 31 ++++++++++--------
 mm/userfaultfd.c              | 62 ++++++++++++++++++++---------------
 3 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 58331b83d648..c00a021bcce4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 	ctx->flags = octx->flags;
 	ctx->features = octx->features;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = vma->vm_mm;
 	mmgrab(ctx->mm);

 	userfaultfd_ctx_get(octx);
+	down_write(&octx->map_changing_lock);
 	atomic_inc(&octx->mmap_changing);
+	up_write(&octx->map_changing_lock);
 	fctx->orig = octx;
 	fctx->new = ctx;
 	list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;

 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);

 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;

 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;

 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					   uffdio_zeropage.range.len,
-					   &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					    uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;

 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;

 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;

 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);

-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EAGAIN;
-
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(mm);
 		mmput(mm);
 	} else {
@@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write-mode, whereas userfaultfd operations, which include
+	 * reading mmap_changing, are done in read-mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one ore more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);

-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9cc93cc1330b..74aad0831e40 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					      struct userfaultfd_ctx *ctx,
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      atomic_t *mmap_changing,
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();

 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);

@@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 			/*
 			 * If memory mappings are changing because of non-cooperative
 			 * operation (e.g. mremap) running in parallel, bail out and
 			 * request the user to retry later
 			 */
-			if (mmap_changing && atomic_read(mmap_changing)) {
+			if (atomic_read(&ctx->mmap_changing)) {
 				err = -EAGAIN;
 				break;
 			}
@@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}

 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    atomic_t *mmap_changing,
 				    uffd_flags_t flags);
 #endif /* CONFIG_HUGETLB_PAGE */

@@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }

-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;

 	/*
@@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
-					    len, mmap_changing, flags);
+		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+					    src_start, len, flags);

 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;

+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);

@@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}

 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }

-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }

-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }

-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
+			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }

-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }

@@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }

-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;

 	err = -ENOENT;
@@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }
-- 
2.43.0.687.g38aa6559b0-goog