From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
linux-trace-kernel@vger.kernel.org,
Dave Hansen <dave.hansen@linux.intel.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
"H. Peter Anvin" <hpa@zytor.com>,
Jani Nikula <jani.nikula@linux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@intel.com>,
Tvrtko Ursulin <tursulin@ursulin.net>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Andrew Morton <akpm@linux-foundation.org>,
Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Jann Horn <jannh@google.com>,
Pedro Falcato <pfalcato@suse.de>, Peter Xu <peterx@redhat.com>,
Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH v2 05/11] x86/mm/pat: remove old pfnmap tracking interface
Date: Tue, 13 May 2025 13:42:40 -0400
Message-ID: <3vfsriwsr675s2eqnytyntsdlmopczimdyxr3sm3nohebebdzi@wgrp3xtsr3o5>
In-Reply-To: <20250512123424.637989-6-david@redhat.com>
* David Hildenbrand <david@redhat.com> [250512 08:34]:
> We can now get rid of the old interface along with get_pat_info() and
> follow_phys().
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: Ingo Molnar <mingo@kernel.org> # x86 bits
> Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
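
Nice cleanup. For my own notes while reviewing: a minimal sketch of how I
read the replacement for the old track_pfn_remap()/untrack_pfn() pairing,
assuming pfnmap_track() both reserves the range and folds the cachemode
into *prot (per my reading of patch 3; the helper around it is made up
purely for illustration):

	static int example_map_range(unsigned long pfn, unsigned long size,
				     pgprot_t *prot)
	{
		int err;

		/* Reserve [pfn, pfn + size) and adjust the cachemode in *prot. */
		err = pfnmap_track(pfn, size, prot);
		if (err)
			return err;

		err = example_insert_ptes(pfn, size, *prot);	/* hypothetical */
		if (err)
			pfnmap_untrack(pfn, size);	/* drop the reservation */
		return err;
	}

Much simpler to reason about than the VMA-based variants removed below.
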
> ---
> arch/x86/mm/pat/memtype.c | 147 --------------------------------------
> include/linux/pgtable.h | 66 -----------------
> 2 files changed, 213 deletions(-)
>
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index 1ec8af6cad6bf..c88d1cbdc1de1 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -933,119 +933,6 @@ static void free_pfn_range(u64 paddr, unsigned long size)
> memtype_free(paddr, paddr + size);
> }
>
> -static int follow_phys(struct vm_area_struct *vma, unsigned long *prot,
> - resource_size_t *phys)
> -{
> - struct follow_pfnmap_args args = { .vma = vma, .address = vma->vm_start };
> -
> - if (follow_pfnmap_start(&args))
> - return -EINVAL;
> -
> - /* Never return PFNs of anon folios in COW mappings. */
> - if (!args.special) {
> - follow_pfnmap_end(&args);
> - return -EINVAL;
> - }
> -
> - *prot = pgprot_val(args.pgprot);
> - *phys = (resource_size_t)args.pfn << PAGE_SHIFT;
> - follow_pfnmap_end(&args);
> - return 0;
> -}
> -
> -static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
> - pgprot_t *pgprot)
> -{
> - unsigned long prot;
> -
> - VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PAT));
> -
> - /*
> - * We need the starting PFN and cachemode used for track_pfn_remap()
> - * that covered the whole VMA. For most mappings, we can obtain that
> - * information from the page tables. For COW mappings, we might now
> - * suddenly have anon folios mapped and follow_phys() will fail.
> - *
> - * Fallback to using vma->vm_pgoff, see remap_pfn_range_notrack(), to
> - * detect the PFN. If we need the cachemode as well, we're out of luck
> - * for now and have to fail fork().
> - */
> - if (!follow_phys(vma, &prot, paddr)) {
> - if (pgprot)
> - *pgprot = __pgprot(prot);
> - return 0;
> - }
> - if (is_cow_mapping(vma->vm_flags)) {
> - if (pgprot)
> - return -EINVAL;
> - *paddr = (resource_size_t)vma->vm_pgoff << PAGE_SHIFT;
> - return 0;
> - }
> - WARN_ON_ONCE(1);
> - return -EINVAL;
> -}
> -
> -int track_pfn_copy(struct vm_area_struct *dst_vma,
> - struct vm_area_struct *src_vma, unsigned long *pfn)
> -{
> - const unsigned long vma_size = src_vma->vm_end - src_vma->vm_start;
> - resource_size_t paddr;
> - pgprot_t pgprot;
> - int rc;
> -
> - if (!(src_vma->vm_flags & VM_PAT))
> - return 0;
> -
> - /*
> - * Duplicate the PAT information for the dst VMA based on the src
> - * VMA.
> - */
> - if (get_pat_info(src_vma, &paddr, &pgprot))
> - return -EINVAL;
> - rc = reserve_pfn_range(paddr, vma_size, &pgprot, 1);
> - if (rc)
> - return rc;
> -
> - /* Reservation for the destination VMA succeeded. */
> - vm_flags_set(dst_vma, VM_PAT);
> - *pfn = PHYS_PFN(paddr);
> - return 0;
> -}
> -
> -void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
> -{
> - untrack_pfn(dst_vma, pfn, dst_vma->vm_end - dst_vma->vm_start, true);
> - /*
> - * Reservation was freed, any copied page tables will get cleaned
> - * up later, but without getting PAT involved again.
> - */
> -}
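
Glad to see this pair go. For reference, my reading of the old fork-time
contract (taken from the stub comments removed from pgtable.h further down),
roughly what copy_page_range() had to do; example_copy_ptes() is a stand-in,
not a real function:

	unsigned long pfn = 0;	/* callers were expected to init this to 0 */
	int ret;

	ret = track_pfn_copy(dst_vma, src_vma, &pfn);
	if (ret)
		return ret;

	ret = example_copy_ptes(src_vma, dst_vma);	/* stand-in for the copy */
	if (ret)
		/* documented as safe even if nothing was actually tracked */
		untrack_pfn_copy(dst_vma, pfn);
	return ret;

One less special case threaded through fork().
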
> -
> -/*
> - * prot is passed in as a parameter for the new mapping. If the vma has
> - * a linear pfn mapping for the entire range, or no vma is provided,
> - * reserve the entire pfn + size range with single reserve_pfn_range
> - * call.
> - */
> -int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> - unsigned long pfn, unsigned long addr, unsigned long size)
> -{
> - resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
> -
> - /* reserve the whole chunk starting from paddr */
> - if (!vma || (addr == vma->vm_start
> - && size == (vma->vm_end - vma->vm_start))) {
> - int ret;
> -
> - ret = reserve_pfn_range(paddr, size, prot, 0);
> - if (ret == 0 && vma)
> - vm_flags_set(vma, VM_PAT);
> - return ret;
> - }
> -
> - return pfnmap_setup_cachemode(pfn, size, prot);
> -}
> -
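
The partial-range branch above was the subtle one: callers covering only
part of a VMA never reserved anything and only had the cachemode looked up.
If I read it right, that behaviour survives unchanged as the helper kept
just below, i.e. such callers now simply do:

	/* No reservation taken; only fold the existing memtype into prot. */
	ret = pfnmap_setup_cachemode(pfn, size, &prot);
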
> int pfnmap_setup_cachemode(unsigned long pfn, unsigned long size, pgprot_t *prot)
> {
> resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
> @@ -1082,40 +969,6 @@ void pfnmap_untrack(unsigned long pfn, unsigned long size)
> free_pfn_range(paddr, size);
> }
>
> -/*
> - * untrack_pfn is called while unmapping a pfnmap for a region.
> - * untrack can be called for a specific region indicated by pfn and size or
> - * can be for the entire vma (in which case pfn, size are zero).
> - */
> -void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> - unsigned long size, bool mm_wr_locked)
> -{
> - resource_size_t paddr;
> -
> - if (vma && !(vma->vm_flags & VM_PAT))
> - return;
> -
> - /* free the chunk starting from pfn or the whole chunk */
> - paddr = (resource_size_t)pfn << PAGE_SHIFT;
> - if (!paddr && !size) {
> - if (get_pat_info(vma, &paddr, NULL))
> - return;
> - size = vma->vm_end - vma->vm_start;
> - }
> - free_pfn_range(paddr, size);
> - if (vma) {
> - if (mm_wr_locked)
> - vm_flags_clear(vma, VM_PAT);
> - else
> - __vm_flags_mod(vma, 0, VM_PAT);
> - }
> -}
> -
> -void untrack_pfn_clear(struct vm_area_struct *vma)
> -{
> - vm_flags_clear(vma, VM_PAT);
> -}
> -
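
Nice that the unmap side no longer needs the VMA or the mmap-lock state.
Side by side, going purely by the signatures in this patch:

	/* old: needed the VMA plus whether the mmap lock was write-held */
	untrack_pfn(vma, pfn, size, mm_wr_locked);

	/* new: purely the physical range */
	pfnmap_untrack(pfn, size);
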
> pgprot_t pgprot_writecombine(pgprot_t prot)
> {
> pgprot_set_cachemode(&prot, _PAGE_CACHE_MODE_WC);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 90f72cd358390..0b6e1f781d86d 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1485,17 +1485,6 @@ static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
> * vmf_insert_pfn.
> */
>
> -/*
> - * track_pfn_remap is called when a _new_ pfn mapping is being established
> - * by remap_pfn_range() for physical range indicated by pfn and size.
> - */
> -static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> - unsigned long pfn, unsigned long addr,
> - unsigned long size)
> -{
> - return 0;
> -}
> -
> static inline int pfnmap_setup_cachemode(unsigned long pfn, unsigned long size,
> pgprot_t *prot)
> {
> @@ -1511,55 +1500,7 @@ static inline int pfnmap_track(unsigned long pfn, unsigned long size,
> static inline void pfnmap_untrack(unsigned long pfn, unsigned long size)
> {
> }
> -
> -/*
> - * track_pfn_copy is called when a VM_PFNMAP VMA is about to get the page
> - * tables copied during copy_page_range(). Will store the pfn to be
> - * passed to untrack_pfn_copy() only if there is something to be untracked.
> - * Callers should initialize the pfn to 0.
> - */
> -static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
> - struct vm_area_struct *src_vma, unsigned long *pfn)
> -{
> - return 0;
> -}
> -
> -/*
> - * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
> - * copy_page_range(), but after track_pfn_copy() was already called. Can
> - * be called even if track_pfn_copy() did not actually track anything:
> - * handled internally.
> - */
> -static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> - unsigned long pfn)
> -{
> -}
> -
> -/*
> - * untrack_pfn is called while unmapping a pfnmap for a region.
> - * untrack can be called for a specific region indicated by pfn and size or
> - * can be for the entire vma (in which case pfn, size are zero).
> - */
> -static inline void untrack_pfn(struct vm_area_struct *vma,
> - unsigned long pfn, unsigned long size,
> - bool mm_wr_locked)
> -{
> -}
> -
> -/*
> - * untrack_pfn_clear is called in the following cases on a VM_PFNMAP VMA:
> - *
> - * 1) During mremap() on the src VMA after the page tables were moved.
> - * 2) During fork() on the dst VMA, immediately after duplicating the src VMA.
> - */
> -static inline void untrack_pfn_clear(struct vm_area_struct *vma)
> -{
> -}
> #else
> -extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> - unsigned long pfn, unsigned long addr,
> - unsigned long size);
> -
> /**
> * pfnmap_setup_cachemode - setup the cachemode in the pgprot for a pfn range
> * @pfn: the start of the pfn range
> @@ -1614,13 +1555,6 @@ int pfnmap_track(unsigned long pfn, unsigned long size, pgprot_t *prot);
> * Untrack a pfn range previously tracked through pfnmap_track().
> */
> void pfnmap_untrack(unsigned long pfn, unsigned long size);
> -extern int track_pfn_copy(struct vm_area_struct *dst_vma,
> - struct vm_area_struct *src_vma, unsigned long *pfn);
> -extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> - unsigned long pfn);
> -extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> - unsigned long size, bool mm_wr_locked);
> -extern void untrack_pfn_clear(struct vm_area_struct *vma);
> #endif
>
> /**
> --
> 2.49.0
>
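
For completeness, the remaining declarations read well together. A hedged
usage sketch of the lookup-only helper in a driver mapping path; the
fault-handler shape and the error value are my assumptions, not something
this patch defines:

	pgprot_t prot = vma->vm_page_prot;

	/* Fold any existing memtype for the range into prot before mapping. */
	if (pfnmap_setup_cachemode(pfn, size, &prot))
		return VM_FAULT_SIGBUS;	/* hypothetical error handling */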