From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20250620190342.1780170-1-peterx@redhat.com>
	<20250620190342.1780170-5-peterx@redhat.com>
In-Reply-To: <20250620190342.1780170-5-peterx@redhat.com>
From: James Houghton <jthoughton@google.com>
Date: Wed, 25 Jun 2025 13:31:49 -0700
Subject: Re: [PATCH 4/4] mm: Apply vm_uffd_ops API to core mm
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Nikita Kalyazin,
	Hugh Dickins, Oscar Salvador, Michal Hocko, David Hildenbrand,
	Muchun Song, Andrea Arcangeli, Ujwal Kundur, Suren Baghdasaryan,
	Andrew Morton, Vlastimil Babka, "Liam R. Howlett", Mike Rapoport,
	Lorenzo Stoakes, Axel Rasmussen
Content-Type: text/plain; charset="UTF-8"

On Fri, Jun 20, 2025 at 12:04 PM Peter Xu <peterx@redhat.com> wrote:
>
> This patch completely moves the old userfaultfd core over to the new
> vm_uffd_ops API. After this change, existing file systems will start
> to use the new API for userfault operations.
>
> While at it, move vma_can_userfault() into mm/userfaultfd.c, because
> it is getting too big for an inline header function. It is only used
> in slow paths, so this shouldn't be an issue.
>
> This also removes quite a few hard-coded checks for shmem and
> hugetlbfs. All of the old checks should keep working, now expressed
> through vm_uffd_ops.
>
> Note that anonymous memory still needs to be handled separately,
> because it has no vm_ops at all.
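
(A note for anyone reviewing starting from this patch: the ops table
itself is introduced earlier in the series, so only its users appear
below. Reconstructing purely from the call sites in this patch, I would
expect vm_uffd_ops to look roughly like the sketch here; the four member
names are the ones this patch dereferences, but the exact types and
comments are my guesses, not a quote from patch 1.)

/*
 * Sketch reconstructed from the call sites in this patch; not copied
 * from the actual definition earlier in the series.
 */
typedef struct {
	/* VM_UFFD_MISSING/VM_UFFD_WP/VM_UFFD_MINOR modes supported */
	unsigned long uffd_features;
	/* Bitmask of BIT(_UFFDIO_*) ioctls implemented */
	unsigned long uffd_ioctls;
	/* Look up the folio backing pgoff; used by UFFDIO_CONTINUE */
	int (*uffd_get_folio)(struct inode *inode, pgoff_t pgoff,
			      struct folio **folio);
	/* Install a page at dst_addr; used by UFFDIO_COPY and friends */
	int (*uffd_copy)(pmd_t *dst_pmd, struct vm_area_struct *dst_vma,
			 unsigned long dst_addr, unsigned long src_addr,
			 uffd_flags_t flags, struct folio **foliop);
} vm_uffd_ops;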
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/shmem_fs.h      |  14 -----
>  include/linux/userfaultfd_k.h |  46 ++++----------
>  mm/shmem.c                    |   2 +-
>  mm/userfaultfd.c              | 115 +++++++++++++++++++++++++---------
>  4 files changed, 101 insertions(+), 76 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 6d0f9c599ff7..2f5b7b295cf6 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -195,20 +195,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
>  extern bool shmem_charge(struct inode *inode, long pages);
>  extern void shmem_uncharge(struct inode *inode, long pages);
>
> -#ifdef CONFIG_USERFAULTFD
> -#ifdef CONFIG_SHMEM
> -extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> -                                 struct vm_area_struct *dst_vma,
> -                                 unsigned long dst_addr,
> -                                 unsigned long src_addr,
> -                                 uffd_flags_t flags,
> -                                 struct folio **foliop);
> -#else /* !CONFIG_SHMEM */
> -#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
> -                              src_addr, flags, foliop) ({ BUG(); 0; })
> -#endif /* CONFIG_SHMEM */
> -#endif /* CONFIG_USERFAULTFD */
> -
>  /*
>   * Used space is stored as unsigned 64-bit value in bytes but
>   * quota core supports only signed 64-bit values so use that
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index e79c724b3b95..4e56ad423a4a 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -85,9 +85,14 @@ extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
>  #define MFILL_ATOMIC_FLAG(nr) ((__force uffd_flags_t) MFILL_ATOMIC_BIT(nr))
>  #define MFILL_ATOMIC_MODE_MASK ((__force uffd_flags_t) (MFILL_ATOMIC_BIT(0) - 1))
>
> +static inline enum mfill_atomic_mode uffd_flags_get_mode(uffd_flags_t flags)
> +{
> +       return (enum mfill_atomic_mode)(flags & MFILL_ATOMIC_MODE_MASK);
> +}
> +
>  static inline bool uffd_flags_mode_is(uffd_flags_t flags, enum mfill_atomic_mode expected)
>  {
> -       return (flags & MFILL_ATOMIC_MODE_MASK) == ((__force uffd_flags_t) expected);
> +       return uffd_flags_get_mode(flags) == expected;
>  }
>
>  static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_atomic_mode mode)
> @@ -196,41 +201,16 @@ static inline bool userfaultfd_armed(struct vm_area_struct *vma)
>         return vma->vm_flags & __VM_UFFD_FLAGS;
>  }
>
> -static inline bool vma_can_userfault(struct vm_area_struct *vma,
> -                                    unsigned long vm_flags,
> -                                    bool wp_async)
> +static inline const vm_uffd_ops *vma_get_uffd_ops(struct vm_area_struct *vma)
>  {
> -       vm_flags &= __VM_UFFD_FLAGS;
> -
> -       if (vma->vm_flags & VM_DROPPABLE)
> -               return false;
> -
> -       if ((vm_flags & VM_UFFD_MINOR) &&
> -           (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
> -               return false;
> -
> -       /*
> -        * If wp async enabled, and WP is the only mode enabled, allow any
> -        * memory type.
> -        */
> -       if (wp_async && (vm_flags == VM_UFFD_WP))
> -               return true;
> -
> -#ifndef CONFIG_PTE_MARKER_UFFD_WP
> -       /*
> -        * If user requested uffd-wp but not enabled pte markers for
> -        * uffd-wp, then shmem & hugetlbfs are not supported but only
> -        * anonymous.
> -        */
> -       if ((vm_flags & VM_UFFD_WP) && !vma_is_anonymous(vma))
> -               return false;
> -#endif

Hi Peter,

Thanks for this cleanup! It looks like the above two checks, the
wp-async one and the PTE marker check, have been reordered in this
patch. Does this result in a functional difference?
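
To make the question concrete, below is the case I have in mind. This
is only my reading of the two orderings, compressed into standalone
helpers and untested; "anon" stands in for vma_is_anonymous():

/* Before this patch: the wp-async special case was evaluated first. */
static bool can_uffd_old(bool anon, unsigned long vm_flags, bool wp_async)
{
	if (wp_async && vm_flags == VM_UFFD_WP)
		return true;			/* checked first */
#ifndef CONFIG_PTE_MARKER_UFFD_WP
	if ((vm_flags & VM_UFFD_WP) && !anon)
		return false;
#endif
	/* ... remaining checks ... */
	return true;
}

/* After this patch: the PTE marker check is evaluated first. */
static bool can_uffd_new(bool anon, unsigned long vm_flags, bool wp_async)
{
#ifndef CONFIG_PTE_MARKER_UFFD_WP
	if ((vm_flags & VM_UFFD_WP) && !anon)
		return false;			/* now checked first */
#endif
	if (wp_async && vm_flags == VM_UFFD_WP)
		return true;
	/* ... remaining checks ... */
	return true;
}

If I am reading that right, with CONFIG_PTE_MARKER_UFFD_WP=n, an
async-WP registration (vm_flags == VM_UFFD_WP, wp_async == true) on a
shmem or hugetlbfs VMA was allowed before this patch but is refused
after it.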
The rest of this series looks fine to me. :)

> -
> -       /* By default, allow any of anon|shmem|hugetlb */
> -       return vma_is_anonymous(vma) || is_vm_hugetlb_page(vma) ||
> -           vma_is_shmem(vma);
> +       if (vma->vm_ops && vma->vm_ops->userfaultfd_ops)
> +               return vma->vm_ops->userfaultfd_ops;
> +       return NULL;
>  }
>
> +bool vma_can_userfault(struct vm_area_struct *vma,
> +                      unsigned long vm_flags, bool wp_async);
> +
>  static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
>  {
>         struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index bd0a29000318..4d71fc7be358 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -3158,7 +3158,7 @@ static int shmem_uffd_get_folio(struct inode *inode, pgoff_t pgoff,
>         return shmem_get_folio(inode, pgoff, 0, folio, SGP_NOALLOC);
>  }
>
> -int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> +static int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
>                            struct vm_area_struct *dst_vma,
>                            unsigned long dst_addr,
>                            unsigned long src_addr,
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 879505c6996f..61783ff2d335 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -14,12 +14,48 @@
>  #include <linux/rmap.h>
>  #include <linux/swap.h>
>  #include <linux/swapops.h>
> -#include <linux/shmem_fs.h>
>  #include <asm/tlbflush.h>
>  #include <asm/tlb.h>
>  #include "internal.h"
>  #include "swap.h"
>
> +bool vma_can_userfault(struct vm_area_struct *vma,
> +                      unsigned long vm_flags, bool wp_async)
> +{
> +       unsigned long supported;
> +
> +       if (vma->vm_flags & VM_DROPPABLE)
> +               return false;
> +
> +       vm_flags &= __VM_UFFD_FLAGS;
> +
> +#ifndef CONFIG_PTE_MARKER_UFFD_WP
> +       /*
> +        * If the user requested uffd-wp but pte markers for uffd-wp are
> +        * not enabled, then no file system (like shmem or hugetlbfs) is
> +        * supported, only anonymous.
> +        */
> +       if ((vm_flags & VM_UFFD_WP) && !vma_is_anonymous(vma))
> +               return false;
> +#endif
> +       /*
> +        * If wp async enabled, and WP is the only mode enabled, allow any
> +        * memory type.
> +        */
> +       if (wp_async && (vm_flags == VM_UFFD_WP))
> +               return true;
> +
> +       if (vma_is_anonymous(vma))
> +               /* Anonymous has no page cache, MINOR not supported */
> +               supported = VM_UFFD_MISSING | VM_UFFD_WP;
> +       else if (vma_get_uffd_ops(vma))
> +               supported = vma_get_uffd_ops(vma)->uffd_features;
> +       else
> +               return false;
> +
> +       return !(vm_flags & (~supported));
> +}
> +
>  static __always_inline
>  bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
>  {
> @@ -384,11 +420,15 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
>  {
>         struct inode *inode = file_inode(dst_vma->vm_file);
>         pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
> +       const vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
>         struct folio *folio;
>         struct page *page;
>         int ret;
>
> -       ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
> +       if (WARN_ON_ONCE(!uffd_ops || !uffd_ops->uffd_get_folio))
> +               return -EINVAL;
> +
> +       ret = uffd_ops->uffd_get_folio(inode, pgoff, &folio);
>         /* Our caller expects us to return -EFAULT if we failed to find folio */
>         if (ret == -ENOENT)
>                 ret = -EFAULT;
> @@ -504,18 +544,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>         u32 hash;
>         struct address_space *mapping;
>
> -       /*
> -        * There is no default zero huge page for all huge page sizes as
> -        * supported by hugetlb.  A PMD_SIZE huge pages may exist as used
> -        * by THP.  Since we can not reliably insert a zero page, this
> -        * feature is not supported.
> -        */
> -       if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
> -               up_read(&ctx->map_changing_lock);
> -               uffd_mfill_unlock(dst_vma);
> -               return -EINVAL;
> -       }
> -
>         src_addr = src_start;
>         dst_addr = dst_start;
>         copied = 0;
> @@ -686,14 +714,55 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
>                 err = mfill_atomic_pte_zeropage(dst_pmd,
>                                                 dst_vma, dst_addr);
>         } else {
> -               err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
> -                                            dst_addr, src_addr,
> -                                            flags, foliop);
> +               const vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
> +
> +               if (WARN_ON_ONCE(!uffd_ops || !uffd_ops->uffd_copy)) {
> +                       err = -EINVAL;
> +               } else {
> +                       err = uffd_ops->uffd_copy(dst_pmd, dst_vma,
> +                                                 dst_addr, src_addr,
> +                                                 flags, foliop);
> +               }
>         }
>
>         return err;
>  }
>
> +static inline bool
> +vma_uffd_ops_supported(struct vm_area_struct *vma, uffd_flags_t flags)
> +{
> +       enum mfill_atomic_mode mode = uffd_flags_get_mode(flags);
> +       const vm_uffd_ops *uffd_ops;
> +       unsigned long uffd_ioctls;
> +
> +       if ((flags & MFILL_ATOMIC_WP) && !(vma->vm_flags & VM_UFFD_WP))
> +               return false;
> +
> +       /* Anonymous supports everything except CONTINUE */
> +       if (vma_is_anonymous(vma))
> +               return mode != MFILL_ATOMIC_CONTINUE;
> +
> +       uffd_ops = vma_get_uffd_ops(vma);
> +       if (!uffd_ops)
> +               return false;
> +
> +       uffd_ioctls = uffd_ops->uffd_ioctls;
> +       switch (mode) {
> +       case MFILL_ATOMIC_COPY:
> +               return uffd_ioctls & BIT(_UFFDIO_COPY);
> +       case MFILL_ATOMIC_ZEROPAGE:
> +               return uffd_ioctls & BIT(_UFFDIO_ZEROPAGE);
> +       case MFILL_ATOMIC_CONTINUE:
> +               if (!(vma->vm_flags & VM_SHARED))
> +                       return false;
> +               return uffd_ioctls & BIT(_UFFDIO_CONTINUE);
> +       case MFILL_ATOMIC_POISON:
> +               return uffd_ioctls & BIT(_UFFDIO_POISON);
> +       default:
> +               return false;
> +       }
> +}
> +
>  static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>                                             unsigned long dst_start,
>                                             unsigned long src_start,
> @@ -752,11 +821,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>                             dst_vma->vm_flags & VM_SHARED))
>                 goto out_unlock;
>
> -       /*
> -        * validate 'mode' now that we know the dst_vma: don't allow
> -        * a wrprotect copy if the userfaultfd didn't register as WP.
> -        */
> -       if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
> +       if (!vma_uffd_ops_supported(dst_vma, flags))
>                 goto out_unlock;
>
>         /*
> @@ -766,12 +831,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>                 return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
>                                             src_start, len, flags);
>
> -       if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
> -               goto out_unlock;
> -       if (!vma_is_shmem(dst_vma) &&
> -           uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> -               goto out_unlock;
> -
>         while (src_addr < src_start + len) {
>                 pmd_t dst_pmdval;
>
> --
> 2.49.0
>
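
P.S. To check my own understanding of the registration side: I would
expect the shmem wiring from the earlier patches to end up looking
roughly like the table below. This is my reconstruction for
illustration, not a quote from the series, so the exact feature and
ioctl bits may differ:

/* Reconstruction only; the real table lives in mm/shmem.c. */
static const vm_uffd_ops shmem_uffd_ops = {
	.uffd_features	= VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR,
	.uffd_ioctls	= BIT(_UFFDIO_COPY) |
			  BIT(_UFFDIO_ZEROPAGE) |
			  BIT(_UFFDIO_CONTINUE) |
			  BIT(_UFFDIO_POISON),
	.uffd_get_folio	= shmem_uffd_get_folio,
	.uffd_copy	= shmem_mfill_atomic_pte,
};

/* ... presumably hooked up via shmem's vm_operations_struct:	*/
/*	.userfaultfd_ops = &shmem_uffd_ops,			*/

With a table like that, vma_can_userfault() and
vma_uffd_ops_supported() above recover all of the old shmem-specific
behavior without core mm naming shmem anywhere, which is a nice
property of this series.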