From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton <akpm@linux-foundation.org>,
	 Christoph Lameter <cl@gentwo.org>,
	David Rientjes <rientjes@google.com>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	Harry Yoo <harry.yoo@oracle.com>
Cc: Uladzislau Rezki <urezki@gmail.com>,
	 "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	 Suren Baghdasaryan <surenb@google.com>,
	 Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	 Alexei Starovoitov <ast@kernel.org>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	linux-rt-devel@lists.linux.dev,  bpf@vger.kernel.org,
	kasan-dev@googlegroups.com,  Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH RFC 05/19] slab: add sheaves to most caches
Date: Thu, 23 Oct 2025 15:52:27 +0200
Message-ID: <20251023-sheaves-for-all-v1-5-6ffa2c9941c0@suse.cz>
In-Reply-To: <20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz>

As the first step towards replacing cpu (partial) slabs with sheaves,
enable sheaves for almost all caches. Treat args->sheaf_capacity as a
minimum, and calculate the sheaf capacity with a formula that roughly
follows the formula for the number of objects in cpu partial slabs in
set_cpu_partial().

This should result in roughly the same contention on the barn spin lock
as there currently is on the node list_lock without sheaves, to keep
benchmarking results comparable. It can be tuned further later.

Don't enable sheaves for kmalloc caches yet, as that needs further
changes to bootstrapping.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
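
A side note for reviewers, not part of the patch: a worked example of
the bucket rounding in calculate_sheaf_capacity() below. This is a
minimal userspace sketch; the 32-byte sheaf header and the kmalloc
bucket sizes are assumptions for illustration, not taken from the
kernel sources.

/* hypothetical userspace sketch of the capacity rounding, not kernel code */
#include <stdio.h>
#include <stddef.h>

#define SHEAF_HEADER	32	/* assumed size of struct slab_sheaf without objects[] */

/* simplified stand-in for kmalloc_size_roundup(): next assumed bucket */
static size_t kmalloc_bucket(size_t size)
{
	static const size_t buckets[] = { 8, 16, 32, 64, 96, 128, 192, 256,
					  512, 1024, 2048, 4096, 8192 };

	for (size_t i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++)
		if (size <= buckets[i])
			return buckets[i];
	return size;
}

int main(void)
{
	unsigned int capacity = 26;	/* base capacity for e.g. 300-byte objects */
	size_t size = SHEAF_HEADER + capacity * sizeof(void *);	/* 240 on 64-bit */

	size = kmalloc_bucket(size);				/* rounds up to 256 */
	capacity = (size - SHEAF_HEADER) / sizeof(void *);	/* grows to 28 */

	printf("sheaf allocation %zu bytes, capacity %u\n", size, capacity);
	return 0;
}
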
 include/linux/slab.h |  6 ------
 mm/slub.c            | 49 +++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 45 insertions(+), 10 deletions(-)
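
And a hedged sketch of the remaining explicit-capacity path: a cache
that expects large kmem_cache_prefill_sheaf() requests can still pass
a minimum via args->sheaf_capacity at creation time. The cache name
and object struct below are made up for illustration; struct
kmem_cache_args itself is the existing cache creation interface.

#include <linux/init.h>
#include <linux/slab.h>

struct track_node {			/* hypothetical cache user */
	unsigned long entries[64];
};

static struct kmem_cache *track_cache;

static int __init track_cache_init(void)
{
	/* explicit minimum matching the expected prefill sheaf size */
	struct kmem_cache_args args = {
		.sheaf_capacity = 32,
	};

	track_cache = kmem_cache_create("track_node", sizeof(struct track_node),
					&args, SLAB_HWCACHE_ALIGN);
	return track_cache ? 0 : -ENOMEM;
}
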

diff --git a/include/linux/slab.h b/include/linux/slab.h
index cf443f064a66..e42aa6a3d202 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -57,9 +57,7 @@ enum _slab_flag_bits {
 #endif
 	_SLAB_OBJECT_POISON,
 	_SLAB_CMPXCHG_DOUBLE,
-#ifdef CONFIG_SLAB_OBJ_EXT
 	_SLAB_NO_OBJ_EXT,
-#endif
 	_SLAB_FLAGS_LAST_BIT
 };
 
@@ -238,11 +236,7 @@ enum _slab_flag_bits {
 #define SLAB_TEMPORARY		SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
 
 /* Slab created using create_boot_cache */
-#ifdef CONFIG_SLAB_OBJ_EXT
 #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_BIT(_SLAB_NO_OBJ_EXT)
-#else
-#define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
-#endif
 
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
diff --git a/mm/slub.c b/mm/slub.c
index f2b2a6180759..a6e58d3708f4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -7810,6 +7810,46 @@ static void set_cpu_partial(struct kmem_cache *s)
 #endif
 }
 
+static unsigned int calculate_sheaf_capacity(struct kmem_cache *s,
+					     struct kmem_cache_args *args)
+{
+	unsigned int capacity;
+	size_t size;
+
+	if (IS_ENABLED(CONFIG_SLUB_TINY) || s->flags & SLAB_DEBUG_FLAGS)
+		return 0;
+
+	/* bootstrap caches can't have sheaves for now */
+	if (s->flags & SLAB_NO_OBJ_EXT)
+		return 0;
+
+	/*
+	 * For now, use a roughly similar formula (divided by two, as there
+	 * are two percpu sheaves) to the one used for percpu partial slabs,
+	 * which should result in similar lock contention (barn or list_lock).
+	 */
+	if (s->size >= PAGE_SIZE)
+		capacity = 4;
+	else if (s->size >= 1024)
+		capacity = 12;
+	else if (s->size >= 256)
+		capacity = 26;
+	else
+		capacity = 60;
+
+	/* Increase capacity so the sheaf exactly fills a kmalloc size bucket */
+	size = struct_size_t(struct slab_sheaf, objects, capacity);
+	size = kmalloc_size_roundup(size);
+	capacity = (size - struct_size_t(struct slab_sheaf, objects, 0)) / sizeof(void *);
+
+	/*
+	 * Respect an explicit capacity request, typically motivated by the
+	 * expected maximum size of kmem_cache_prefill_sheaf() requests, so
+	 * that they don't end up using low-performance oversize sheaves.
+	 */
+	return max(capacity, args->sheaf_capacity);
+}
+
 /*
  * calculate_sizes() determines the order and the distribution of data within
  * a slab object.
@@ -7944,6 +7986,10 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		s->allocflags |= __GFP_RECLAIMABLE;
 
+	/* kmalloc caches need extra care to support sheaves */
+	if (!is_kmalloc_cache(s))
+		s->sheaf_capacity = calculate_sheaf_capacity(s, args);
+
 	/*
 	 * Determine the number of objects per slab
 	 */
@@ -8562,15 +8608,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
 
 	set_cpu_partial(s);
 
-	if (args->sheaf_capacity && !IS_ENABLED(CONFIG_SLUB_TINY)
-					&& !(s->flags & SLAB_DEBUG_FLAGS)) {
+	if (s->sheaf_capacity) {
 		s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves);
 		if (!s->cpu_sheaves) {
 			err = -ENOMEM;
 			goto out;
 		}
-		// TODO: increase capacity to grow slab_sheaf up to next kmalloc size?
-		s->sheaf_capacity = args->sheaf_capacity;
 	}
 
 #ifdef CONFIG_NUMA

-- 
2.51.1

Thread overview: 61+ messages
2025-10-23 13:52 [PATCH RFC 00/19] slab: replace cpu (partial) slabs with sheaves Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 01/19] slab: move kfence_alloc() out of internal bulk alloc Vlastimil Babka
2025-10-23 15:20   ` Marco Elver
2025-10-29 14:38     ` Vlastimil Babka
2025-10-29 15:30       ` Marco Elver
2025-10-23 13:52 ` [PATCH RFC 02/19] slab: handle pfmemalloc slabs properly with sheaves Vlastimil Babka
2025-10-24 14:21   ` Chris Mason
2025-10-29 15:00     ` Vlastimil Babka
2025-10-29 16:06       ` Chris Mason
2025-10-23 13:52 ` [PATCH RFC 03/19] slub: remove CONFIG_SLUB_TINY specific code paths Vlastimil Babka
2025-10-24 22:34   ` Alexei Starovoitov
2025-10-29 15:37     ` Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 04/19] slab: prevent recursive kmalloc() in alloc_empty_sheaf() Vlastimil Babka
2025-10-23 13:52 ` Vlastimil Babka [this message]
2025-10-27  0:24   ` [PATCH RFC 05/19] slab: add sheaves to most caches Harry Yoo
2025-10-29 15:42     ` Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 06/19] slab: introduce percpu sheaves bootstrap Vlastimil Babka
2025-10-24 15:29   ` Chris Mason
2025-10-29 15:51     ` Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 07/19] slab: make percpu sheaves compatible with kmalloc_nolock()/kfree_nolock() Vlastimil Babka
2025-10-24 14:04   ` Chris Mason
2025-10-29 17:30     ` Vlastimil Babka
2025-10-24 19:43   ` Alexei Starovoitov
2025-10-29 17:46     ` Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 08/19] slab: handle kmalloc sheaves bootstrap Vlastimil Babka
2025-10-27  6:12   ` Harry Yoo
2025-10-29 20:06     ` Vlastimil Babka
2025-10-29 20:06       ` Vlastimil Babka
2025-10-30  0:11         ` Harry Yoo
2025-10-23 13:52 ` [PATCH RFC 09/19] slab: add optimized sheaf refill from partial list Vlastimil Babka
2025-10-27  7:20   ` Harry Yoo
2025-10-27  9:11     ` Harry Yoo
2025-10-29 20:48     ` Vlastimil Babka
2025-10-30  0:07       ` Harry Yoo
2025-10-30 13:18         ` Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 10/19] slab: remove cpu (partial) slabs usage from allocation paths Vlastimil Babka
2025-10-24 14:29   ` Chris Mason
2025-10-29 21:31     ` Vlastimil Babka
2025-10-30  4:32   ` Harry Yoo
2025-10-30 13:09     ` Vlastimil Babka
2025-10-30 15:27       ` Alexei Starovoitov
2025-10-30 15:35         ` Vlastimil Babka
2025-10-30 15:59           ` Alexei Starovoitov
2025-11-03  3:44           ` Harry Yoo
2025-10-23 13:52 ` [PATCH RFC 11/19] slab: remove SLUB_CPU_PARTIAL Vlastimil Babka
2025-10-24 20:43   ` Alexei Starovoitov
2025-10-29 22:31     ` Vlastimil Babka
2025-10-30  0:26       ` Alexei Starovoitov
2025-10-23 13:52 ` [PATCH RFC 12/19] slab: remove the do_slab_free() fastpath Vlastimil Babka
2025-10-24 22:32   ` Alexei Starovoitov
2025-10-29 22:44     ` Vlastimil Babka
2025-10-30  0:24       ` Alexei Starovoitov
2025-10-23 13:52 ` [PATCH RFC 13/19] slab: remove defer_deactivate_slab() Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 14/19] slab: simplify kmalloc_nolock() Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 15/19] slab: remove struct kmem_cache_cpu Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 16/19] slab: remove unused PREEMPT_RT specific macros Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 17/19] slab: refill sheaves from all nodes Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 18/19] slab: update overview comments Vlastimil Babka
2025-10-23 13:52 ` [PATCH RFC 19/19] slab: remove frozen slab checks from __slab_free() Vlastimil Babka
2025-10-24 23:57 ` [PATCH RFC 00/19] slab: replace cpu (partial) slabs with sheaves Alexei Starovoitov
2025-11-04 22:11 ` Christoph Lameter (Ampere)
