From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
linux-trace-kernel@vger.kernel.org,
Dave Hansen <dave.hansen@linux.intel.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
"H. Peter Anvin" <hpa@zytor.com>,
Jani Nikula <jani.nikula@linux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@intel.com>,
Tvrtko Ursulin <tursulin@ursulin.net>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Andrew Morton <akpm@linux-foundation.org>,
Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Jann Horn <jannh@google.com>,
Pedro Falcato <pfalcato@suse.de>
Subject: Re: [PATCH v1 05/11] mm: convert VM_PFNMAP tracking to pfnmap_track() + pfnmap_untrack()
Date: Mon, 28 Apr 2025 12:08:22 -0400 [thread overview]
Message-ID: <aA-n9hvSX9JLsRM-@x1.local> (raw)
In-Reply-To: <bbadf008-9ffc-4628-9809-2d8cf104a424@redhat.com>
On Fri, Apr 25, 2025 at 10:36:55PM +0200, David Hildenbrand wrote:
> On 25.04.25 22:23, Peter Xu wrote:
> > On Fri, Apr 25, 2025 at 10:17:09AM +0200, David Hildenbrand wrote:
> > > Let's use our new interface. In remap_pfn_range(), we'll now decide
> > > whether we have to track (full VMA covered) or only sanitize the pgprot
> > > (partial VMA covered).
> > >
> > > Remember what we have to untrack by linking it from the VMA. When
> > > duplicating VMAs (e.g., splitting, mremap, fork), we'll handle it similar
> > > to anon VMA names, and use a kref to share the tracking.
> > >
> > > Once the last VMA un-refs our tracking data, we'll do the untracking,
> > > which simplifies things a lot and should sort out various issues we saw
> > > recently, for example, when partially unmapping/zapping a tracked VMA.
> > >
> > > This change implies that we'll keep tracking the original PFN range even
> > > after splitting + partially unmapping it: not too bad, because it was
> > > not working reliably before. The only thing that kind-of worked before
> > > was shrinking such a mapping using mremap(): we managed to adjust the
> > > reservation in a hacky way, now we won't adjust the reservation but
> > > leave it around until all involved VMAs are gone.
> > >
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> > > include/linux/mm_inline.h | 2 +
> > > include/linux/mm_types.h | 11 ++++++
> > > kernel/fork.c | 54 ++++++++++++++++++++++++--
> > > mm/memory.c | 81 +++++++++++++++++++++++++++++++--------
> > > mm/mremap.c | 4 --
> > > 5 files changed, 128 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> > > index f9157a0c42a5c..89b518ff097e6 100644
> > > --- a/include/linux/mm_inline.h
> > > +++ b/include/linux/mm_inline.h
> > > @@ -447,6 +447,8 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
> > > #endif /* CONFIG_ANON_VMA_NAME */
> > > +void pfnmap_track_ctx_release(struct kref *ref);
> > > +
> > > static inline void init_tlb_flush_pending(struct mm_struct *mm)
> > > {
> > > atomic_set(&mm->tlb_flush_pending, 0);
> > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > index 56d07edd01f91..91124761cfda8 100644
> > > --- a/include/linux/mm_types.h
> > > +++ b/include/linux/mm_types.h
> > > @@ -764,6 +764,14 @@ struct vma_numab_state {
> > > int prev_scan_seq;
> > > };
> > > +#ifdef __HAVE_PFNMAP_TRACKING
> > > +struct pfnmap_track_ctx {
> > > + struct kref kref;
> > > + unsigned long pfn;
> > > + unsigned long size;
> > > +};
> > > +#endif
> > > +
> > > /*
> > > * This struct describes a virtual memory area. There is one of these
> > > * per VM-area/task. A VM area is any part of the process virtual memory
> > > @@ -877,6 +885,9 @@ struct vm_area_struct {
> > > struct anon_vma_name *anon_name;
> > > #endif
> > > struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> > > +#ifdef __HAVE_PFNMAP_TRACKING
> > > + struct pfnmap_track_ctx *pfnmap_track_ctx;
> > > +#endif
> >
> > So this was originally the small concern (or is it small?) that this will
> > grow every vma on x86, am I right?
>
> Yeah, and last time I looked into this, it would have grown it such that it would
> require a bigger slab. Right now:
Probably due to the config you have.  E.g., when I look at mine it's
much bigger, already consuming 256B, but that's because I enabled more
things (userfaultfd, lockdep, etc.).
>
> Before this change:
>
> struct vm_area_struct {
> union {
> struct {
> long unsigned int vm_start; /* 0 8 */
> long unsigned int vm_end; /* 8 8 */
> }; /* 0 16 */
> freeptr_t vm_freeptr; /* 0 8 */
> }; /* 0 16 */
> struct mm_struct * vm_mm; /* 16 8 */
> pgprot_t vm_page_prot; /* 24 8 */
> union {
> const vm_flags_t vm_flags; /* 32 8 */
> vm_flags_t __vm_flags; /* 32 8 */
> }; /* 32 8 */
> unsigned int vm_lock_seq; /* 40 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head anon_vma_chain; /* 48 16 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> struct anon_vma * anon_vma; /* 64 8 */
> const struct vm_operations_struct * vm_ops; /* 72 8 */
> long unsigned int vm_pgoff; /* 80 8 */
> struct file * vm_file; /* 88 8 */
> void * vm_private_data; /* 96 8 */
> atomic_long_t swap_readahead_info; /* 104 8 */
> struct mempolicy * vm_policy; /* 112 8 */
> struct vma_numab_state * numab_state; /* 120 8 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> refcount_t vm_refcnt __attribute__((__aligned__(64))); /* 128 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct {
> struct rb_node rb __attribute__((__aligned__(8))); /* 136 24 */
> long unsigned int rb_subtree_last; /* 160 8 */
> } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /* 136 32 */
> struct anon_vma_name * anon_name; /* 168 8 */
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 0 */
>
> /* size: 192, cachelines: 3, members: 18 */
> /* sum members: 168, holes: 2, sum holes: 8 */
> /* padding: 16 */
> /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> } __attribute__((__aligned__(64)));
>
> After this change:
>
> struct vm_area_struct {
> union {
> struct {
> long unsigned int vm_start; /* 0 8 */
> long unsigned int vm_end; /* 8 8 */
> }; /* 0 16 */
> freeptr_t vm_freeptr; /* 0 8 */
> }; /* 0 16 */
> struct mm_struct * vm_mm; /* 16 8 */
> pgprot_t vm_page_prot; /* 24 8 */
> union {
> const vm_flags_t vm_flags; /* 32 8 */
> vm_flags_t __vm_flags; /* 32 8 */
> }; /* 32 8 */
> unsigned int vm_lock_seq; /* 40 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head anon_vma_chain; /* 48 16 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> struct anon_vma * anon_vma; /* 64 8 */
> const struct vm_operations_struct * vm_ops; /* 72 8 */
> long unsigned int vm_pgoff; /* 80 8 */
> struct file * vm_file; /* 88 8 */
> void * vm_private_data; /* 96 8 */
> atomic_long_t swap_readahead_info; /* 104 8 */
> struct mempolicy * vm_policy; /* 112 8 */
> struct vma_numab_state * numab_state; /* 120 8 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> refcount_t vm_refcnt __attribute__((__aligned__(64))); /* 128 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct {
> struct rb_node rb __attribute__((__aligned__(8))); /* 136 24 */
> long unsigned int rb_subtree_last; /* 160 8 */
> } __attribute__((__aligned__(8))) shared __attribute__((__aligned__(8))); /* 136 32 */
> struct anon_vma_name * anon_name; /* 168 8 */
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx; /* 176 0 */
> struct pfnmap_track_ctx * pfnmap_track_ctx; /* 176 8 */
>
> /* size: 192, cachelines: 3, members: 19 */
> /* sum members: 176, holes: 2, sum holes: 8 */
> /* padding: 8 */
> /* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
> } __attribute__((__aligned__(64)));
>
> Observe that we allocate 192 bytes with or without pfnmap_track_ctx. (IIRC,
> slab sizes are ... 128, 192, 256, 512, ...)
True.  I just double-checked: vm_area_cachep has SLAB_HWCACHE_ALIGN set, so
I think it does indeed work like that on x86_64 at least.  So it looks like
the new field isn't an immediate concern.
Thanks,
--
Peter Xu