From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>,
Petr Tesarik <ptesarik@suse.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hao Li <hao.li@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Suren Baghdasaryan <surenb@google.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com, Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v3 14/21] slab: simplify kmalloc_nolock()
Date: Fri, 16 Jan 2026 15:40:34 +0100
Message-ID: <20260116-sheaves-for-all-v3-14-5595cb000772@suse.cz>
In-Reply-To: <20260116-sheaves-for-all-v3-0-5595cb000772@suse.cz>
The kmalloc_nolock() implementation has several complications and
restrictions due to SLUB's cpu slab locking, lockless fastpath and
PREEMPT_RT differences. With cpu slab usage removed, we can simplify
things:
- relax the PREEMPT_RT context checks back to what they were before commit
a4ae75d1b6a2 ("slab: fix kmalloc_nolock() context check for
PREEMPT_RT") and reference the explanatory comment in the page
allocator
- the local_lock_cpu_slab() macros became unused, remove them
- we no longer need to set up lockdep classes on PREEMPT_RT
- we no longer need to annotate ___slab_alloc as NOKPROBE_SYMBOL
since there's no lockless cpu freelist manipulation anymore
- __slab_alloc_node() can be called from kmalloc_nolock_noprof()
unconditionally. It also can no longer return -EBUSY. But trylock
failures can still happen, so retry with the next larger bucket if the
allocation fails for any reason.
Note that we still need __CMPXCHG_DOUBLE: while we no longer use
cmpxchg16b on the cpu freelist (that usage was removed), we still use it
on the slab freelist, and the alternative is slab_lock(), which can be
interrupted by an NMI. Clarify the comment to mention this specifically.
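
In condensed form, the allocation attempt in kmalloc_nolock_noprof() then
reads roughly as follows (an illustrative sketch pieced together from the
hunks below, with the cache lookup and other unrelated details elided):

	/*
	 * See the comment for the same check in
	 * alloc_frozen_pages_nolock_noprof()
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
		return NULL;
retry:
	/* look up the kmalloc cache for size, bail out without
	 * __CMPXCHG_DOUBLE on non-debug caches, etc. (elided) */

	/* trylock mode; may fail for any reason, including lock contention */
	ret = __slab_alloc_node(s, alloc_gfp, node, _RET_IP_, size);

	if (!ret && can_retry) {
		/* pick the next larger kmalloc bucket and try once more */
		size = s->object_size + 1;
		can_retry = false;
		goto retry;
	}
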
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slab.h | 1 -
mm/slub.c | 144 +++++++++++++-------------------------------------------------
2 files changed, 29 insertions(+), 116 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 4efec41b6445..e9a0738133ed 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -190,7 +190,6 @@ struct kmem_cache_order_objects {
*/
struct kmem_cache {
struct kmem_cache_cpu __percpu *cpu_slab;
- struct lock_class_key lock_key;
struct slub_percpu_sheaves __percpu *cpu_sheaves;
/* Used for retrieving partial slabs, etc. */
slab_flags_t flags;
diff --git a/mm/slub.c b/mm/slub.c
index 33f218c0e8d6..8746d9d3f3a3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3694,29 +3694,12 @@ static inline unsigned int init_tid(int cpu)
static void init_kmem_cache_cpus(struct kmem_cache *s)
{
-#ifdef CONFIG_PREEMPT_RT
- /*
- * Register lockdep key for non-boot kmem caches to avoid
- * WARN_ON_ONCE(static_obj(key))) in lockdep_register_key()
- */
- bool finegrain_lockdep = !init_section_contains(s, 1);
-#else
- /*
- * Don't bother with different lockdep classes for each
- * kmem_cache, since we only use local_trylock_irqsave().
- */
- bool finegrain_lockdep = false;
-#endif
int cpu;
struct kmem_cache_cpu *c;
- if (finegrain_lockdep)
- lockdep_register_key(&s->lock_key);
for_each_possible_cpu(cpu) {
c = per_cpu_ptr(s->cpu_slab, cpu);
local_trylock_init(&c->lock);
- if (finegrain_lockdep)
- lockdep_set_class(&c->lock, &s->lock_key);
c->tid = init_tid(cpu);
}
}
@@ -3803,47 +3786,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
}
}
-/*
- * ___slab_alloc()'s caller is supposed to check if kmem_cache::kmem_cache_cpu::lock
- * can be acquired without a deadlock before invoking the function.
- *
- * Without LOCKDEP we trust the code to be correct. kmalloc_nolock() is
- * using local_lock_is_locked() properly before calling local_lock_cpu_slab(),
- * and kmalloc() is not used in an unsupported context.
- *
- * With LOCKDEP, on PREEMPT_RT lockdep does its checking in local_lock_irqsave().
- * On !PREEMPT_RT we use trylock to avoid false positives in NMI, but
- * lockdep_assert() will catch a bug in case:
- * #1
- * kmalloc() -> ___slab_alloc() -> irqsave -> NMI -> bpf -> kmalloc_nolock()
- * or
- * #2
- * kmalloc() -> ___slab_alloc() -> irqsave -> tracepoint/kprobe -> bpf -> kmalloc_nolock()
- *
- * On PREEMPT_RT an invocation is not possible from IRQ-off or preempt
- * disabled context. The lock will always be acquired and if needed it
- * block and sleep until the lock is available.
- * #1 is possible in !PREEMPT_RT only.
- * #2 is possible in both with a twist that irqsave is replaced with rt_spinlock:
- * kmalloc() -> ___slab_alloc() -> rt_spin_lock(kmem_cache_A) ->
- * tracepoint/kprobe -> bpf -> kmalloc_nolock() -> rt_spin_lock(kmem_cache_B)
- *
- * local_lock_is_locked() prevents the case kmem_cache_A == kmem_cache_B
- */
-#if defined(CONFIG_PREEMPT_RT) || !defined(CONFIG_LOCKDEP)
-#define local_lock_cpu_slab(s, flags) \
- local_lock_irqsave(&(s)->cpu_slab->lock, flags)
-#else
-#define local_lock_cpu_slab(s, flags) \
- do { \
- bool __l = local_trylock_irqsave(&(s)->cpu_slab->lock, flags); \
- lockdep_assert(__l); \
- } while (0)
-#endif
-
-#define local_unlock_cpu_slab(s, flags) \
- local_unlock_irqrestore(&(s)->cpu_slab->lock, flags)
-
static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
{
unsigned long flags;
@@ -4402,20 +4344,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
return freelist;
}
-/*
- * We disallow kprobes in ___slab_alloc() to prevent reentrance
- *
- * kmalloc() -> ___slab_alloc() -> local_lock_cpu_slab() protected part of
- * ___slab_alloc() manipulating c->freelist -> kprobe -> bpf ->
- * kmalloc_nolock() or kfree_nolock() -> __update_cpu_freelist_fast()
- * manipulating c->freelist without lock.
- *
- * This does not prevent kprobe in functions called from ___slab_alloc() such as
- * local_lock_irqsave() itself, and that is fine, we only need to protect the
- * c->freelist manipulation in ___slab_alloc() itself.
- */
-NOKPROBE_SYMBOL(___slab_alloc);
-
static __always_inline void *__slab_alloc_node(struct kmem_cache *s,
gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
{
@@ -5253,13 +5181,13 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
if (unlikely(!size))
return ZERO_SIZE_PTR;
- if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
- /*
- * kmalloc_nolock() in PREEMPT_RT is not supported from
- * non-preemptible context because local_lock becomes a
- * sleeping lock on RT.
- */
+ /*
+ * See the comment for the same check in
+ * alloc_frozen_pages_nolock_noprof()
+ */
+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
return NULL;
+
retry:
if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
return NULL;
@@ -5268,10 +5196,11 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
/*
* kmalloc_nolock() is not supported on architectures that
- * don't implement cmpxchg16b, but debug caches don't use
- * per-cpu slab and per-cpu partial slabs. They rely on
- * kmem_cache_node->list_lock, so kmalloc_nolock() can
- * attempt to allocate from debug caches by
+ * don't implement cmpxchg16b and thus need slab_lock()
+ * which could be interrupted by an NMI.
+ * But debug caches don't use that and only rely on
+ * kmem_cache_node->list_lock, so kmalloc_nolock() can attempt
+ * to allocate from debug caches by
* spin_trylock_irqsave(&n->list_lock, ...)
*/
return NULL;
@@ -5280,42 +5209,31 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
if (ret)
goto success;
- ret = ERR_PTR(-EBUSY);
-
/*
* Do not call slab_alloc_node(), since trylock mode isn't
* compatible with slab_pre_alloc_hook/should_failslab and
* kfence_alloc. Hence call __slab_alloc_node() (at most twice)
* and slab_post_alloc_hook() directly.
- *
- * In !PREEMPT_RT ___slab_alloc() manipulates (freelist,tid) pair
- * in irq saved region. It assumes that the same cpu will not
- * __update_cpu_freelist_fast() into the same (freelist,tid) pair.
- * Therefore use in_nmi() to check whether particular bucket is in
- * irq protected section.
- *
- * If in_nmi() && local_lock_is_locked(s->cpu_slab) then it means that
- * this cpu was interrupted somewhere inside ___slab_alloc() after
- * it did local_lock_irqsave(&s->cpu_slab->lock, flags).
- * In this case fast path with __update_cpu_freelist_fast() is not safe.
*/
- if (!in_nmi() || !local_lock_is_locked(&s->cpu_slab->lock))
- ret = __slab_alloc_node(s, alloc_gfp, node, _RET_IP_, size);
+ ret = __slab_alloc_node(s, alloc_gfp, node, _RET_IP_, size);
- if (PTR_ERR(ret) == -EBUSY) {
- if (can_retry) {
- /* pick the next kmalloc bucket */
- size = s->object_size + 1;
- /*
- * Another alternative is to
- * if (memcg) alloc_gfp &= ~__GFP_ACCOUNT;
- * else if (!memcg) alloc_gfp |= __GFP_ACCOUNT;
- * to retry from bucket of the same size.
- */
- can_retry = false;
- goto retry;
- }
- ret = NULL;
+ /*
+ * It's possible we failed due to trylock as we preempted someone with
+ * the sheaves locked, and the list_lock is also held by another cpu.
+ * But it should be rare that multiple kmalloc buckets would have
+ * sheaves locked, so try a larger one.
+ */
+ if (!ret && can_retry) {
+ /* pick the next kmalloc bucket */
+ size = s->object_size + 1;
+ /*
+ * Another alternative is to
+ * if (memcg) alloc_gfp &= ~__GFP_ACCOUNT;
+ * else if (!memcg) alloc_gfp |= __GFP_ACCOUNT;
+ * to retry from bucket of the same size.
+ */
+ can_retry = false;
+ goto retry;
}
success:
@@ -7334,10 +7252,6 @@ void __kmem_cache_release(struct kmem_cache *s)
cache_random_seq_destroy(s);
if (s->cpu_sheaves)
pcs_destroy(s);
-#ifdef CONFIG_PREEMPT_RT
- if (s->cpu_slab)
- lockdep_unregister_key(&s->lock_key);
-#endif
free_percpu(s->cpu_slab);
free_kmem_cache_nodes(s);
}
--
2.52.0