linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: Suren Baghdasaryan <surenb@google.com>
Cc: Harry Yoo <harry.yoo@oracle.com>,
	Petr Tesarik <ptesarik@suse.com>,
	Christoph Lameter <cl@gentwo.org>,
	David Rientjes <rientjes@google.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Hao Li <hao.li@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	Uladzislau Rezki <urezki@gmail.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Alexei Starovoitov <ast@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
	kasan-dev@googlegroups.com
Subject: Re: [PATCH v3 06/21] slab: introduce percpu sheaves bootstrap
Date: Mon, 19 Jan 2026 10:34:07 +0100	[thread overview]
Message-ID: <4f60e230-c76e-4ab3-a0f0-7598dcb15d1a@suse.cz> (raw)
In-Reply-To: <CAJuCfpERcCzBysPVh63g7d0FpUBNQeq9nCL+ycem1iR08gDmaQ@mail.gmail.com>

On 1/17/26 03:11, Suren Baghdasaryan wrote:
> On Fri, Jan 16, 2026 at 2:40 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>> Thus sharing the single bootstrap sheaf like this for multiple caches
>> and cpus is safe.
>>
>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>> ---
>>  mm/slub.c | 119 ++++++++++++++++++++++++++++++++++++++++++--------------------
>>  1 file changed, 81 insertions(+), 38 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index edf341c87e20..706cb6398f05 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -501,6 +501,18 @@ struct kmem_cache_node {
>>         struct node_barn *barn;
>>  };
>>
>> +/*
>> + * Every cache has !NULL s->cpu_sheaves but they may point to the
>> + * bootstrap_sheaf temporarily during init, or permanently for the boot caches
>> + * and caches with debugging enabled, or all caches with CONFIG_SLUB_TINY. This
>> + * helper distinguishes whether cache has real non-bootstrap sheaves.
>> + */
>> +static inline bool cache_has_sheaves(struct kmem_cache *s)
>> +{
>> +       /* Test CONFIG_SLUB_TINY for code elimination purposes */
>> +       return !IS_ENABLED(CONFIG_SLUB_TINY) && s->sheaf_capacity;
>> +}
>> +
>>  static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>>  {
>>         return s->node[node];
>> @@ -2855,6 +2867,10 @@ static void pcs_destroy(struct kmem_cache *s)
>>                 if (!pcs->main)
>>                         continue;
>>
>> +               /* bootstrap or debug caches, it's the bootstrap_sheaf */
>> +               if (!pcs->main->cache)
>> +                       continue;
> 
> I wonder why we can't simply check cache_has_sheaves(s) at the
> beginning and skip the loop altogether.
> I realize that __kmem_cache_release()->pcs_destroy() is called in the
> failure path of do_kmem_cache_create() and s->cpu_sheaves might be
> partially initialized if alloc_empty_sheaf() fails somewhere in the
> middle of the loop inside init_percpu_sheaves(). But for that,
> s->sheaf_capacity should still be non-zero, so checking
> cache_has_sheaves() at the beginning of pcs_destroy() should still
> work, no?

I think it should, will do.

> BTW, I see one last check for s->cpu_sheaves that you didn't replace
> with cache_has_sheaves() inside __kmem_cache_release(). I think that's
> because it's also in the failure path of do_kmem_cache_create() and
> it's possible that s->sheaf_capacity > 0 while s->cpu_sheaves == NULL
> (if alloc_percpu(struct slub_percpu_sheaves) fails). It might be
> helpful to add a comment inside __kmem_cache_release() to explain why
> cache_has_sheaves() can't be used there.

The reason is rather what Harry said. I'll move the check to pcs_destroy()
and add a comment there.

diff --git a/mm/slub.c b/mm/slub.c
index 706cb6398f05..6b19aa518a1a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2858,19 +2858,26 @@ static void pcs_destroy(struct kmem_cache *s)
 {
 	int cpu;
 
+	/*
+	 * We may be unwinding a cache creation that failed before or during
+	 * the allocation of s->cpu_sheaves.
+	 */
+	if (!s->cpu_sheaves)
+		return;
+
+	/* pcs->main can only point to the bootstrap sheaf, nothing to free */
+	if (!cache_has_sheaves(s))
+		goto free_pcs;
+
 	for_each_possible_cpu(cpu) {
 		struct slub_percpu_sheaves *pcs;
 
 		pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
 
-		/* can happen when unwinding failed create */
+		/* This can happen when unwinding failed cache creation. */
 		if (!pcs->main)
 			continue;
 
-		/* bootstrap or debug caches, it's the bootstrap_sheaf */
-		if (!pcs->main->cache)
-			continue;
-
 		/*
 		 * We have already passed __kmem_cache_shutdown() so everything
 		 * was flushed and there should be no objects allocated from
@@ -2889,6 +2896,7 @@ static void pcs_destroy(struct kmem_cache *s)
 		}
 	}
 
+free_pcs:
 	free_percpu(s->cpu_sheaves);
 	s->cpu_sheaves = NULL;
 }
@@ -5379,6 +5387,9 @@ kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
 	struct slab_sheaf *sheaf = NULL;
 	struct node_barn *barn;
 
+	if (unlikely(!size))
+		return NULL;
+
 	if (unlikely(size > s->sheaf_capacity)) {
 
 		sheaf = kzalloc(struct_size(sheaf, objects, size), gfp);
@@ -7833,8 +7844,7 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
 void __kmem_cache_release(struct kmem_cache *s)
 {
 	cache_random_seq_destroy(s);
-	if (s->cpu_sheaves)
-		pcs_destroy(s);
+	pcs_destroy(s);
 #ifdef CONFIG_PREEMPT_RT
 	if (s->cpu_slab)
 		lockdep_unregister_key(&s->lock_key);


