From: Suren Baghdasaryan <surenb@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Harry Yoo <harry.yoo@oracle.com>,
Petr Tesarik <ptesarik@suse.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Hao Li <hao.li@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com
Subject: Re: [PATCH RFC v2 04/20] slab: add sheaves to most caches
Date: Fri, 16 Jan 2026 08:59:18 -0800
Message-ID: <CAJuCfpEqZwgB65y3zbm0Pwb_sVLjMbmHbTmJY6SdiVvvOPq+2A@mail.gmail.com>
In-Reply-To: <d310d788-b6df-47dc-9557-643813351838@suse.cz>
On Fri, Jan 16, 2026 at 3:24 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 1/16/26 06:45, Suren Baghdasaryan wrote:
> > On Mon, Jan 12, 2026 at 3:17 PM Vlastimil Babka <vbabka@suse.cz> wrote:
> >>
> >> In the first step to replace cpu (partial) slabs with sheaves, enable
> >> sheaves for almost all caches. Treat args->sheaf_capacity as a minimum,
> >> and calculate sheaf capacity with a formula that roughly follows the
> >> formula for number of objects in cpu partial slabs in set_cpu_partial().
> >>
> >> This should achieve roughly similar contention on the barn spin lock as
> >> there's currently for node list_lock without sheaves, to make
> >> benchmarking results comparable. It can be further tuned later.
> >>
> >> Don't enable sheaves for bootstrap caches as that wouldn't work. In
> >> order to recognize them by SLAB_NO_OBJ_EXT, make sure the flag exists
> >> even for !CONFIG_SLAB_OBJ_EXT.
> >>
> >> This limitation will be lifted for kmalloc caches after the necessary
> >> bootstrapping changes.
> >>
> >> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> >
> > One nit but otherwise LGTM.
> >
> > Reviewed-by: Suren Baghdasaryan <surenb@google.com>
>
> Thanks.
>
> >> ---
> >> include/linux/slab.h | 6 ------
> >> mm/slub.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++----
> >> 2 files changed, 47 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/include/linux/slab.h b/include/linux/slab.h
> >> index 2482992248dc..2682ee57ec90 100644
> >> --- a/include/linux/slab.h
> >> +++ b/include/linux/slab.h
> >> @@ -57,9 +57,7 @@ enum _slab_flag_bits {
> >> #endif
> >> _SLAB_OBJECT_POISON,
> >> _SLAB_CMPXCHG_DOUBLE,
> >> -#ifdef CONFIG_SLAB_OBJ_EXT
> >> _SLAB_NO_OBJ_EXT,
> >> -#endif
> >> _SLAB_FLAGS_LAST_BIT
> >> };
> >>
> >> @@ -238,11 +236,7 @@ enum _slab_flag_bits {
> >> #define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */
> >>
> >> /* Slab created using create_boot_cache */
> >> -#ifdef CONFIG_SLAB_OBJ_EXT
> >> #define SLAB_NO_OBJ_EXT __SLAB_FLAG_BIT(_SLAB_NO_OBJ_EXT)
> >> -#else
> >> -#define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED
> >> -#endif
> >>
> >> /*
> >> * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 8ffeb3ab3228..6e05e3cc5c49 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -7857,6 +7857,48 @@ static void set_cpu_partial(struct kmem_cache *s)
> >> #endif
> >> }
> >>
> >> +static unsigned int calculate_sheaf_capacity(struct kmem_cache *s,
> >> + struct kmem_cache_args *args)
> >> +
> >> +{
> >> + unsigned int capacity;
> >> + size_t size;
> >> +
> >> +
> >> + if (IS_ENABLED(CONFIG_SLUB_TINY) || s->flags & SLAB_DEBUG_FLAGS)
> >> + return 0;
> >> +
> >> + /* bootstrap caches can't have sheaves for now */
> >> + if (s->flags & SLAB_NO_OBJ_EXT)
> >> + return 0;
> >> +
> >> + /*
> >> + * For now we use roughly similar formula (divided by two as there are
> >> + * two percpu sheaves) as what was used for percpu partial slabs, which
> >> + * should result in similar lock contention (barn or list_lock)
> >> + */
> >> + if (s->size >= PAGE_SIZE)
> >> + capacity = 4;
> >> + else if (s->size >= 1024)
> >> + capacity = 12;
> >> + else if (s->size >= 256)
> >> + capacity = 26;
> >> + else
> >> + capacity = 60;
> >> +
> >> + /* Increment capacity to make sheaf exactly a kmalloc size bucket */
> >> + size = struct_size_t(struct slab_sheaf, objects, capacity);
> >> + size = kmalloc_size_roundup(size);
> >> + capacity = (size - struct_size_t(struct slab_sheaf, objects, 0)) / sizeof(void *);
> >> +
> >> + /*
> >> + * Respect an explicit request for capacity that's typically motivated by
> >> + * expected maximum size of kmem_cache_prefill_sheaf() to not end up
> >> + * using low-performance oversize sheaves
> >> + */
> >> + return max(capacity, args->sheaf_capacity);
> >> +}
> >> +
> >> /*
> >> * calculate_sizes() determines the order and the distribution of data within
> >> * a slab object.
> >> @@ -7991,6 +8033,10 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
> >> if (s->flags & SLAB_RECLAIM_ACCOUNT)
> >> s->allocflags |= __GFP_RECLAIMABLE;
> >>
> >> + /* kmalloc caches need extra care to support sheaves */
> >> + if (!is_kmalloc_cache(s))
> >
> > nit: All the checks for the cases when sheaves should not be used
> > (like SLAB_DEBUG_FLAGS and SLAB_NO_OBJ_EXT) are done inside
> > calculate_sheaf_capacity(). Only this is_kmalloc_cache() one is here.
> > It would be nice to have all of them in the same place but maybe you
> > have a reason for keeping it here?
>
> Yeah, in "slab: handle kmalloc sheaves bootstrap" we call
> calculate_sheaf_capacity() from another place for kmalloc normal caches so
> the check has to be outside.
Ok, I suspected the answer would be in the later patches. Thanks!
>
> >> + s->sheaf_capacity = calculate_sheaf_capacity(s, args);
> >> +
> >> /*
> >> * Determine the number of objects per slab
> >> */
> >> @@ -8595,15 +8641,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
> >>
> >> set_cpu_partial(s);
> >>
> >> - if (args->sheaf_capacity && !IS_ENABLED(CONFIG_SLUB_TINY)
> >> - && !(s->flags & SLAB_DEBUG_FLAGS)) {
> >> + if (s->sheaf_capacity) {
> >> s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves);
> >> if (!s->cpu_sheaves) {
> >> err = -ENOMEM;
> >> goto out;
> >> }
> >> - // TODO: increase capacity to grow slab_sheaf up to next kmalloc size?
> >> - s->sheaf_capacity = args->sheaf_capacity;
> >> }
> >>
> >> #ifdef CONFIG_NUMA
> >>
> >> --
> >> 2.52.0
> >>
>