From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>,
Peter Xu <peterx@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
David Hildenbrand <david@redhat.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Muchun Song <muchun.song@linux.dev>,
Nikita Kalyazin <kalyazin@amazon.com>,
Vlastimil Babka <vbabka@suse.cz>,
Axel Rasmussen <axelrasmussen@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
James Houghton <jthoughton@google.com>,
Hugh Dickins <hughd@google.com>, Michal Hocko <mhocko@suse.com>,
Ujwal Kundur <ujwal.kundur@gmail.com>,
Oscar Salvador <osalvador@suse.de>,
Suren Baghdasaryan <surenb@google.com>,
Andrea Arcangeli <aarcange@redhat.com>,
conduct@kernel.org
Subject: Re: [PATCH v4 0/4] mm/userfaultfd: modulize memory types
Date: Thu, 6 Nov 2025 11:32:46 -0500
Message-ID: <mnjrtg62qh2rd353mbudryvs3neukt26xtovyddm5uosxurmfi@lldnrp7a3666>
In-Reply-To: <aQmplrpNjvCVjWb_@kernel.org>
* Mike Rapoport <rppt@kernel.org> [251104 02:22]:
> On Mon, Nov 03, 2025 at 10:27:05PM +0100, David Hildenbrand (Red Hat) wrote:
> >
> > And maybe that's the main problem here: Liam talks about general uffd
> > cleanups while you are focused on supporting guest_memfd minor mode "as
> > simple as possible" (as you write below).
>
> Hijacking for the technical part for a moment ;-)
>
> It seems that "as simple as possible" can even avoid data members in struct
> vm_uffd_ops, e.g something along these lines:
I like this because it removes the flag.

If we don't want to return the folio, we could turn
mfill_atomic_pte_continue() into a __mfill_atomic_pte_continue() that
takes a function pointer, and have the callers pass a different
get_folio() per memory type.  Each memory type (anon, shmem, and
guest_memfd) would have a small stub that would be set in the vm_ops.
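
Very roughly, something like the sketch below.  The
__mfill_atomic_pte_continue() / shmem_mfill_atomic_pte_continue() names
and signatures are only illustrative; shmem_uffd_minor_get_folio() is
the helper from your hunk above:

typedef int (*uffd_get_folio_t)(struct inode *inode, pgoff_t pgoff,
				struct folio **folio);

/*
 * mm/userfaultfd.c (declared in userfaultfd_k.h): the common part,
 * taking the per-memory-type lookup as an argument.
 */
int __mfill_atomic_pte_continue(pmd_t *dst_pmd,
				struct vm_area_struct *dst_vma,
				unsigned long dst_addr, uffd_flags_t flags,
				uffd_get_folio_t get_folio)
{
	struct inode *inode = file_inode(dst_vma->vm_file);
	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
	struct folio *folio;
	int ret;

	ret = get_folio(inode, pgoff, &folio);
	/* Our caller expects us to return -EFAULT if we failed to find folio */
	if (ret == -ENOENT)
		ret = -EFAULT;
	if (ret)
		return ret;

	/* ... the rest of today's mfill_atomic_pte_continue() unchanged ... */
	return 0;
}

/* mm/shmem.c: the small shmem stub; guest_memfd would provide its own */
static int shmem_mfill_atomic_pte_continue(pmd_t *dst_pmd,
					   struct vm_area_struct *dst_vma,
					   unsigned long dst_addr,
					   uffd_flags_t flags)
{
	return __mfill_atomic_pte_continue(dst_pmd, dst_vma, dst_addr, flags,
					   shmem_uffd_minor_get_folio);
}

The stub is then what gets set in the vm_ops (or whatever per-type hook
we end up with) instead of a minor_get_folio() data member.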

It also looks similar to vma_get_uffd_ops() in 1fa9377e57eb1
("mm/userfaultfd: Introduce userfaultfd ops and use it for destination
validation") [1].  But there I always returned a uffd ops, which passes
all the uffd testing.  When would your NULL uffd ops be hit?  That is,
when would uffd_ops not be set and the vma not be anon?
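
To make that concrete: the variant I mean never returns NULL, something
along these lines (purely illustrative, not the exact code from [1]):

static inline const struct vm_uffd_ops *vma_get_uffd_ops(struct vm_area_struct *vma)
{
	if (vma->vm_ops && vma->vm_ops->uffd_ops)
		return vma->vm_ops->uffd_ops;

	/*
	 * Hypothetical fallback: anything that gets this far is treated
	 * as anon.  The question is whether the old
	 * !vma_is_anonymous() && !vma_is_shmem() bail-out is still
	 * reachable here.
	 */
	return &anon_uffd_ops;
}
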
[1]. https://git.infradead.org/?p=users/jedix/linux-maple.git;a=blobdiff;f=mm/userfaultfd.c;h=e2570e72242e5a350508f785119c5dee4d8176c1;hp=e8341a45e7e8d239c64f460afeb5b2b8b29ed853;hb=1fa9377e57eb16d7fa579ea7f8eb832164d209ac;hpb=2166e91882eb195677717ac2f8fbfc58171196ce
Thanks,
Liam
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index d16b33bacc32..840986780cb5 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -605,6 +605,8 @@ struct vm_fault {
> */
> };
>
> +struct vm_uffd_ops;
> +
> /*
> * These are the virtual MM functions - opening of an area, closing and
> * unmapping it (needed to keep files on disk up-to-date etc), pointer
> @@ -690,6 +692,9 @@ struct vm_operations_struct {
> struct page *(*find_normal_page)(struct vm_area_struct *vma,
> unsigned long addr);
> #endif /* CONFIG_FIND_NORMAL_PAGE */
> +#ifdef CONFIG_USERFAULTFD
> + const struct vm_uffd_ops *uffd_ops;
> +#endif
> };
>
> #ifdef CONFIG_NUMA_BALANCING
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index c0e716aec26a..aac7ac616636 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -111,6 +111,11 @@ static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_at
> /* Flags controlling behavior. These behavior changes are mode-independent. */
> #define MFILL_ATOMIC_WP MFILL_ATOMIC_FLAG(0)
>
> +struct vm_uffd_ops {
> + int (*minor_get_folio)(struct inode *inode, pgoff_t pgoff,
> + struct folio **folio);
> +};
> +
> extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
> struct vm_area_struct *dst_vma,
> unsigned long dst_addr, struct page *page,
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b9081b817d28..b4318ad3bdf9 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -3260,6 +3260,17 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> shmem_inode_unacct_blocks(inode, 1);
> return ret;
> }
> +
> +static int shmem_uffd_minor_get_folio(struct inode *inode, pgoff_t pgoff,
> + struct folio **folio)
> +{
> + return shmem_get_folio(inode, pgoff, 0, folio, SGP_NOALLOC);
> +}
> +
> +static const struct vm_uffd_ops shmem_uffd_ops = {
> + .minor_get_folio = shmem_uffd_minor_get_folio,
> +};
> +
> #endif /* CONFIG_USERFAULTFD */
>
> #ifdef CONFIG_TMPFS
> @@ -5292,6 +5303,9 @@ static const struct vm_operations_struct shmem_vm_ops = {
> .set_policy = shmem_set_policy,
> .get_policy = shmem_get_policy,
> #endif
> +#ifdef CONFIG_USERFAULTFD
> + .uffd_ops = &shmem_uffd_ops,
> +#endif
> };
>
> static const struct vm_operations_struct shmem_anon_vm_ops = {
> @@ -5301,6 +5315,9 @@ static const struct vm_operations_struct shmem_anon_vm_ops = {
> .set_policy = shmem_set_policy,
> .get_policy = shmem_get_policy,
> #endif
> +#ifdef CONFIG_USERFAULTFD
> + .uffd_ops = &shmem_uffd_ops,
> +#endif
> };
>
> int shmem_init_fs_context(struct fs_context *fc)
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index af61b95c89e4..6b30a8f39f4d 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -20,6 +20,20 @@
> #include "internal.h"
> #include "swap.h"
>
> +static const struct vm_uffd_ops anon_uffd_ops = {
> +};
> +
> +static inline const struct vm_uffd_ops *vma_get_uffd_ops(struct vm_area_struct *vma)
> +{
> + if (vma->vm_ops && vma->vm_ops->uffd_ops)
> + return vma->vm_ops->uffd_ops;
> +
> + if (vma_is_anonymous(vma))
> + return &anon_uffd_ops;
> +
> + return NULL;
> +}
> +
> static __always_inline
> bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
> {
> @@ -382,13 +396,14 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
> unsigned long dst_addr,
> uffd_flags_t flags)
> {
> + const struct vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
> struct inode *inode = file_inode(dst_vma->vm_file);
> pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
> struct folio *folio;
> struct page *page;
> int ret;
>
> - ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
> + ret = uffd_ops->minor_get_folio(inode, pgoff, &folio);
> /* Our caller expects us to return -EFAULT if we failed to find folio */
> if (ret == -ENOENT)
> ret = -EFAULT;
> @@ -707,6 +722,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> unsigned long src_addr, dst_addr;
> long copied;
> struct folio *folio;
> + const struct vm_uffd_ops *uffd_ops;
>
> /*
> * Sanitize the command parameters:
> @@ -766,10 +782,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
> src_start, len, flags);
>
> - if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
> + uffd_ops = vma_get_uffd_ops(dst_vma);
> + if (!uffd_ops)
> goto out_unlock;
> - if (!vma_is_shmem(dst_vma) &&
> - uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> + if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE) &&
> + !uffd_ops->minor_get_folio)
> goto out_unlock;
>
> while (src_addr < src_start + len) {
>
> --
> Sincerely yours,
> Mike.