linux-mm.kvack.org archive mirror
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	xen-devel@lists.xenproject.org, linux-fsdevel@vger.kernel.org,
	nvdimm@lists.linux.dev, Andrew Morton <akpm@linux-foundation.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Christian Brauner <brauner@kernel.org>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>, Zi Yan <ziy@nvidia.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Jann Horn <jannh@google.com>,
	Pedro Falcato <pfalcato@suse.de>, Hugh Dickins <hughd@google.com>,
	Oscar Salvador <osalvador@suse.de>,
	Lance Yang <lance.yang@linux.dev>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [PATCH v2 9/9] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page()
Date: Thu, 17 Jul 2025 21:07:01 +0100	[thread overview]
Message-ID: <4f7bf35f-ed83-4db0-8b93-5333eca7c6a5@lucifer.local> (raw)
In-Reply-To: <20250717115212.1825089-10-david@redhat.com>

On Thu, Jul 17, 2025 at 01:52:12PM +0200, David Hildenbrand wrote:
> ... and hide it behind a kconfig option. There is really no need for
> any !xen code to perform this check.

Lovely :)
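
That also means any future driver wanting the hook has to opt in
explicitly, the same way gntdev does below. Roughly (driver name made up,
purely to illustrate the opt-in; the hook and its call site compile away
entirely for configs that don't select it):

	config HYPOTHETICAL_DRIVER
		tristate "Some driver that installs special PTEs over normal pages"
		depends on MMU
		select FIND_NORMAL_PAGE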

>
> The naming is a bit off: we want to find the "normal" page when a PTE
> was marked "special". So it's really not "finding a special" page.
>
> Improve the documentation, and add a comment in the code where XEN ends
> up performing the pte_mkspecial() through a hypercall. More details can
> be found in commit 923b2919e2c3 ("xen/gntdev: mark userspace PTEs as
> special on x86 PV guests").
>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Yes, yes thank you thank you! This is long overdue. Glorious.

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  drivers/xen/Kconfig              |  1 +
>  drivers/xen/gntdev.c             |  5 +++--
>  include/linux/mm.h               | 18 +++++++++++++-----
>  mm/Kconfig                       |  2 ++
>  mm/memory.c                      | 12 ++++++++++--
>  tools/testing/vma/vma_internal.h | 18 +++++++++++++-----
>  6 files changed, 42 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 24f485827e039..f9a35ed266ecf 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -138,6 +138,7 @@ config XEN_GNTDEV
>  	depends on XEN
>  	default m
>  	select MMU_NOTIFIER
> +	select FIND_NORMAL_PAGE
>  	help
>  	  Allows userspace processes to use grants.
>
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 61faea1f06630..d1bc0dae2cdf9 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -309,6 +309,7 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
>  	BUG_ON(pgnr >= map->count);
>  	pte_maddr = arbitrary_virt_to_machine(pte).maddr;
>
> +	/* Note: this will perform a pte_mkspecial() through the hypercall. */
>  	gnttab_set_map_op(&map->map_ops[pgnr], pte_maddr, flags,
>  			  map->grants[pgnr].ref,
>  			  map->grants[pgnr].domid);
> @@ -516,7 +517,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
>  	gntdev_put_map(priv, map);
>  }
>
> -static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
> +static struct page *gntdev_vma_find_normal_page(struct vm_area_struct *vma,
>  						 unsigned long addr)
>  {
>  	struct gntdev_grant_map *map = vma->vm_private_data;
> @@ -527,7 +528,7 @@ static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
>  static const struct vm_operations_struct gntdev_vmops = {
>  	.open = gntdev_vma_open,
>  	.close = gntdev_vma_close,
> -	.find_special_page = gntdev_vma_find_special_page,
> +	.find_normal_page = gntdev_vma_find_normal_page,
>  };
>
>  /* ------------------------------------------------------------------ */
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0eb991262fbbf..036800514aa90 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -648,13 +648,21 @@ struct vm_operations_struct {
>  	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
>  					unsigned long addr, pgoff_t *ilx);
>  #endif
> +#ifdef CONFIG_FIND_NORMAL_PAGE
>  	/*
> -	 * Called by vm_normal_page() for special PTEs to find the
> -	 * page for @addr.  This is useful if the default behavior
> -	 * (using pte_page()) would not find the correct page.
> +	 * Called by vm_normal_page() for special PTEs in @vma at @addr. This
> +	 * allows for returning a "normal" page from vm_normal_page() even
> +	 * though the PTE indicates that the "struct page" either does not exist
> +	 * or should not be touched: "special".
> +	 *
> +	 * Do not add new users: this really only works when a "normal" page
> +	 * was mapped, but then the PTE got changed to something weird (+
> +	 * marked special) that would not make pte_pfn() identify the originally
> +	 * inserted page.

Yes great, glad to quarantine this.
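
For the archives, a rough sketch of the only shape an implementation of
this hook can sensibly take, modelled loosely on gntdev (struct and
function names are made up for illustration, and the driver is assumed to
select FIND_NORMAL_PAGE in its Kconfig entry):

	struct foo_map {
		struct page **pages;	/* pages installed at mmap time */
	};

	/* Hand back the page we originally installed for this address. */
	static struct page *foo_vma_find_normal_page(struct vm_area_struct *vma,
						     unsigned long addr)
	{
		struct foo_map *map = vma->vm_private_data;

		return map->pages[(addr - vma->vm_start) >> PAGE_SHIFT];
	}

	static const struct vm_operations_struct foo_vmops = {
		.find_normal_page = foo_vma_find_normal_page,
	};

i.e. the driver must already know exactly which page it mapped; the hook
only recovers it once the PTE itself can no longer tell us.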

>  	 */
> -	struct page *(*find_special_page)(struct vm_area_struct *vma,
> -					  unsigned long addr);
> +	struct page *(*find_normal_page)(struct vm_area_struct *vma,
> +					 unsigned long addr);
> +#endif /* CONFIG_FIND_NORMAL_PAGE */
>  };
>
>  #ifdef CONFIG_NUMA_BALANCING
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 0287e8d94aea7..82c281b4f6937 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -1397,6 +1397,8 @@ config PT_RECLAIM
>
>  	  Note: now only empty user PTE page table pages will be reclaimed.
>
> +config FIND_NORMAL_PAGE
> +	def_bool n
>
>  source "mm/damon/Kconfig"
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 00a0d7ae3ba4a..52804ca343261 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -613,6 +613,12 @@ static void print_bad_page_map(struct vm_area_struct *vma,
>   * trivial. Secondly, an architecture may not have a spare page table
>   * entry bit, which requires a more complicated scheme, described below.
>   *
> + * With CONFIG_FIND_NORMAL_PAGE, we might have the "special" bit set on
> + * page table entries that actually map "normal" pages: however, that page
> + * cannot be looked up through the PFN stored in the page table entry, but
> + * instead will be looked up through vm_ops->find_normal_page(). So far, this
> + * only applies to PTEs.
> + *
>   * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
>   * special mapping (even if there are underlying and valid "struct pages").
>   * COWed pages of a VM_PFNMAP are always normal.
> @@ -710,8 +716,10 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>  	unsigned long pfn = pte_pfn(pte);
>
>  	if (unlikely(pte_special(pte))) {
> -		if (vma->vm_ops && vma->vm_ops->find_special_page)
> -			return vma->vm_ops->find_special_page(vma, addr);
> +#ifdef CONFIG_FIND_NORMAL_PAGE
> +		if (vma->vm_ops && vma->vm_ops->find_normal_page)
> +			return vma->vm_ops->find_normal_page(vma, addr);
> +#endif /* CONFIG_FIND_NORMAL_PAGE */
>  		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
>  			return NULL;
>  		if (is_zero_pfn(pfn))
> diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> index 0fe52fd6782bf..8646af15a5fc0 100644
> --- a/tools/testing/vma/vma_internal.h
> +++ b/tools/testing/vma/vma_internal.h
> @@ -467,13 +467,21 @@ struct vm_operations_struct {
>  	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
>  					unsigned long addr, pgoff_t *ilx);
>  #endif
> +#ifdef CONFIG_FIND_NORMAL_PAGE
>  	/*
> -	 * Called by vm_normal_page() for special PTEs to find the
> -	 * page for @addr.  This is useful if the default behavior
> -	 * (using pte_page()) would not find the correct page.
> +	 * Called by vm_normal_page() for special PTEs in @vma at @addr. This
> +	 * allows for returning a "normal" page from vm_normal_page() even
> +	 * though the PTE indicates that the "struct page" either does not exist
> +	 * or should not be touched: "special".
> +	 *
> +	 * Do not add new users: this really only works when a "normal" page
> +	 * was mapped, but then the PTE got changed to something weird (+
> +	 * marked special) that would not make pte_pfn() identify the originally
> +	 * inserted page.

Also glorious.

>  	 */
> -	struct page *(*find_special_page)(struct vm_area_struct *vma,
> -					  unsigned long addr);
> +	struct page *(*find_normal_page)(struct vm_area_struct *vma,
> +					 unsigned long addr);
> +#endif /* CONFIG_FIND_NORMAL_PAGE */
>  };
>
>  struct vm_unmapped_area_info {
> --
> 2.50.1
>


Thread overview: 48+ messages
2025-07-17 11:52 [PATCH v2 0/9] mm: vm_normal_page*() improvements David Hildenbrand
2025-07-17 11:52 ` [PATCH v2 1/9] mm/huge_memory: move more common code into insert_pmd() David Hildenbrand
2025-07-17 15:34   ` Lorenzo Stoakes
2025-07-25  2:47   ` Wei Yang
2025-07-17 11:52 ` [PATCH v2 2/9] mm/huge_memory: move more common code into insert_pud() David Hildenbrand
2025-07-17 15:42   ` Lorenzo Stoakes
2025-07-25  2:56   ` Wei Yang
2025-07-17 11:52 ` [PATCH v2 3/9] mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd() David Hildenbrand
2025-07-17 15:47   ` Lorenzo Stoakes
2025-07-25  8:07   ` Wei Yang
2025-07-17 11:52 ` [PATCH v2 4/9] fs/dax: use vmf_insert_folio_pmd() to insert the huge zero folio David Hildenbrand
2025-07-17 18:09   ` Lorenzo Stoakes
2025-07-17 11:52 ` [PATCH v2 5/9] mm/huge_memory: mark PMD mappings of the huge zero folio special David Hildenbrand
2025-07-17 18:29   ` Lorenzo Stoakes
2025-07-17 20:31     ` David Hildenbrand
2025-07-18 10:41       ` Lorenzo Stoakes
2025-07-18 10:54         ` David Hildenbrand
2025-07-18 13:06           ` Lorenzo Stoakes
2025-07-28  8:49   ` Wei Yang
2025-07-17 11:52 ` [PATCH v2 6/9] mm/memory: convert print_bad_pte() to print_bad_page_map() David Hildenbrand
2025-07-17 19:17   ` Lorenzo Stoakes
2025-07-17 20:03     ` David Hildenbrand
2025-07-18 10:15       ` Lorenzo Stoakes
2025-07-18 11:04         ` David Hildenbrand
2025-07-18 12:55           ` Lorenzo Stoakes
2025-07-17 22:06   ` Demi Marie Obenour
2025-07-18  7:44     ` David Hildenbrand
2025-07-18  7:59       ` Demi Marie Obenour
2025-07-18  8:26         ` David Hildenbrand
2025-07-17 11:52 ` [PATCH v2 7/9] mm/memory: factor out common code from vm_normal_page_*() David Hildenbrand
2025-07-17 19:51   ` Lorenzo Stoakes
2025-07-17 19:55     ` Lorenzo Stoakes
2025-07-17 20:03       ` David Hildenbrand
2025-07-18 12:43         ` Lorenzo Stoakes
2025-07-30 12:54           ` David Hildenbrand
2025-07-30 13:24             ` Lorenzo Stoakes
2025-07-17 20:12     ` David Hildenbrand
2025-07-18 12:35       ` Lorenzo Stoakes
2025-07-17 11:52 ` [PATCH v2 8/9] mm: introduce and use vm_normal_page_pud() David Hildenbrand
2025-07-17 20:03   ` Lorenzo Stoakes
2025-07-17 20:14     ` David Hildenbrand
2025-07-18 10:47       ` Lorenzo Stoakes
2025-07-18 11:06         ` David Hildenbrand
2025-07-18 12:44           ` Lorenzo Stoakes
2025-07-29  7:52   ` Wei Yang
2025-07-17 11:52 ` [PATCH v2 9/9] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page() David Hildenbrand
2025-07-17 20:07   ` Lorenzo Stoakes [this message]
2025-07-29  7:53   ` Wei Yang
