From: Shakeel Butt <shakeel.butt@linux.dev>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, willy@infradead.org,
liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
mjguzik@gmail.com, oliver.sang@intel.com,
mgorman@techsingularity.net, david@redhat.com,
peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net,
paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com,
hdanton@sina.com, hughd@google.com, minchan@google.com,
jannh@google.com, souravpanda@google.com,
pasha.tatashin@soleen.com, corbet@lwn.net,
linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v4 2/5] mm: move per-vma lock into vm_area_struct
Date: Wed, 20 Nov 2024 23:01:55 -0800 [thread overview]
Message-ID: <iw34akksaz6wjlygwuztlkvto3aiduekrhw6rjlqq4lr7vzmug@tprkddvgrj3e> (raw)
In-Reply-To: <CAJuCfpGx6LCd7qCOsLc6hm-qMGtyM3ceitYbRdx1yKPHFHT-jQ@mail.gmail.com>
On Wed, Nov 20, 2024 at 04:33:37PM -0800, Suren Baghdasaryan wrote:
[...]
> > > >
> > > > Do we just want 'struct vm_area_struct' to be cacheline aligned or do we
> > > > want 'struct vma_lock vm_lock' to be on a separate cacheline as well?
> > >
> > > We want both to minimize cacheline sharing.
> > >
> >
> > For later, you will need to add a pad after vm_lock as well, so any
> > future addition will not share the cacheline with vm_lock. I would do
> > something like below. This is a nit and can be done later.
> >
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 7654c766cbe2..5cc4fff163a0 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -751,10 +751,12 @@ struct vm_area_struct {
> > #endif
> > struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> > #ifdef CONFIG_PER_VMA_LOCK
> > + CACHELINE_PADDING(__pad1__);
> > /* Unstable RCU readers are allowed to read this. */
> > - struct vma_lock vm_lock ____cacheline_aligned_in_smp;
> > + struct vma_lock vm_lock;
> > + CACHELINE_PADDING(__pad2__);
> > #endif
> > -} __randomize_layout;
> > +} __randomize_layout ____cacheline_aligned_in_smp;
>
> I thought SLAB_HWCACHE_ALIGN for vm_area_cachep added in this patch
> would have the same result, no?
>
SLAB_HWCACHE_ALIGN only tells the slab allocator to hand out cacheline
aligned objects; it says nothing about the internal layout of the
struct for which the kmem_cache is created. The
____cacheline_aligned_in_smp tag in your patch ensures that vm_lock
starts on a new cacheline, possibly leaving a hole between vm_lock and
the preceding field if that field does not end at a cacheline boundary.
Note, however, that any new field added after vm_lock (without its own
alignment tag) will land on the same cacheline as vm_lock. So your code
achieves the goal of putting vm_lock on its own cacheline, but not the
goal of making vm_lock the only field on that cacheline.