* [PATCH 0/2] mm/slab: support kmalloc_nolock() -> kfree[_rcu]()
@ 2026-02-09 12:10 Harry Yoo
2026-02-09 12:10 ` [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]() Harry Yoo
2026-02-09 12:10 ` [PATCH 2/2] mm/slab: free a bit in enum objexts_flags Harry Yoo
0 siblings, 2 replies; 6+ messages in thread
From: Harry Yoo @ 2026-02-09 12:10 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Roman Gushchin, Hao Li,
Alexei Starovoitov, Catalin Marinas, Uladzislau Rezki,
Suren Baghdasaryan, linux-mm, Harry Yoo
This is split out from the RFC version of the "k[v]free_rcu() improvements"
series [1], as these changes are relatively small and beneficial for BPF:
they enable the BPF code to use kfree_rcu() instead of
call_rcu() + kfree_nolock().
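For illustration, a minimal before/after sketch from a caller's point of
view (struct foo and foo_free_rcu() are made up for the example and are
not taken from this series):

    struct foo {
            struct rcu_head rcu;
            long payload;
    };

    /* Old workaround: open-coded call_rcu() with a kfree_nolock() callback. */
    static void foo_free_rcu(struct rcu_head *head)
    {
            kfree_nolock(container_of(head, struct foo, rcu));
    }

    static void foo_release_old(struct foo *f)
    {
            /* f was allocated with kmalloc_nolock() */
            call_rcu(&f->rcu, foo_free_rcu);
    }

    /* After this series: hand the object to kfree_rcu() directly. */
    static void foo_release_new(struct foo *f)
    {
            kfree_rcu(f, rcu);
    }

This lets callers such as BPF drop the hand-rolled RCU callbacks entirely.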
Hopefully we can get acks from the kmemleak folks for the kmemleak part in
patch 1, if it looks good to them.
Patch 1 allows kfree() and kfree_rcu() to be used with objects that are
allocated from kmalloc_nolock().
Patch 2 is a cleanup that frees the bit used to record whether the obj_exts
vector was allocated using kmalloc_nolock() or kmalloc(), since both cases
can now be freed with kfree().
[1] https://lore.kernel.org/linux-mm/20260206093410.160622-1-harry.yoo@oracle.com
RFC -> v1:
- Added acked-bys from Alexei, thanks!
- Patch 1: While developing the RFC version, I mistakenly thought that
  removing the "Trying to color unknown object at ..." warning in
  paint_ptr() had become unnecessary after changing the kfree_rcu_nolock()
  implementation several times, but during testing I discovered that it is
  still needed to silence the warning in the kmalloc_nolock() ->
  kfree_rcu() (-> kmemleak_ignore()) path.
  So the warning in paint_ptr() is removed again.
Harry Yoo (2):
mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]()
mm/slab: free a bit in enum objexts_flags
include/linux/memcontrol.h | 3 +--
include/linux/rcupdate.h | 4 ++--
mm/kmemleak.c | 22 ++++++++++------------
mm/slub.c | 33 ++++++++++++++++++++++-----------
4 files changed, 35 insertions(+), 27 deletions(-)
base-commit: f6ed7e47c1fc78e78c9bfeb668b1ad9ba5c58120
--
2.43.0
* [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]()
2026-02-09 12:10 [PATCH 0/2] mm/slab: support kmalloc_nolock() -> kfree[_rcu]() Harry Yoo
@ 2026-02-09 12:10 ` Harry Yoo
2026-02-09 18:34 ` Catalin Marinas
2026-02-09 12:10 ` [PATCH 2/2] mm/slab: free a bit in enum objexts_flags Harry Yoo
1 sibling, 1 reply; 6+ messages in thread
From: Harry Yoo @ 2026-02-09 12:10 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Roman Gushchin, Hao Li,
Alexei Starovoitov, Catalin Marinas, Uladzislau Rezki,
Suren Baghdasaryan, linux-mm, Harry Yoo
Slab objects that are allocated with kmalloc_nolock() must be freed
using kfree_nolock() because only a subset of alloc hooks are called,
since kmalloc_nolock() can't spin on a lock during allocation.
This imposes a limitation: such objects cannot be freed with kfree_rcu(),
forcing users to work around this limitation by calling call_rcu()
with a callback that frees the object using kfree_nolock().
Remove this limitation by teaching kmemleak to gracefully ignore cases
when kmemleak_free() or kmemleak_ignore() (called by kvfree_call_rcu())
is called without a prior kmemleak_alloc().
Unlike kmemleak, kfence already handles this case, because,
due to its design, only a subset of allocations are served from kfence.
With this change, kfree() and kfree_rcu() can be used to free objects
that are allocated using kmalloc_nolock().
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/linux/rcupdate.h | 4 ++--
mm/kmemleak.c | 22 ++++++++++------------
mm/slub.c | 21 ++++++++++++++++++++-
3 files changed, 32 insertions(+), 15 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index c5b30054cd01..72ba681360ad 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1076,8 +1076,8 @@ static inline void rcu_read_unlock_migrate(void)
* either fall back to use of call_rcu() or rearrange the structure to
* position the rcu_head structure into the first 4096 bytes.
*
- * The object to be freed can be allocated either by kmalloc() or
- * kmem_cache_alloc().
+ * The object to be freed can be allocated either by kmalloc(),
+ * kmalloc_nolock(), or kmem_cache_alloc().
*
* Note that the allowable offset might decrease in the future.
*
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 1ac56ceb29b6..95ad827fcd69 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -837,13 +837,12 @@ static void delete_object_full(unsigned long ptr, unsigned int objflags)
struct kmemleak_object *object;
object = find_and_remove_object(ptr, 0, objflags);
- if (!object) {
-#ifdef DEBUG
- kmemleak_warn("Freeing unknown object at 0x%08lx\n",
- ptr);
-#endif
+ if (!object)
+ /*
+ * kmalloc_nolock() -> kfree() calls kmemleak_free()
+ * without kmemleak_alloc().
+ */
return;
- }
__delete_object(object);
}
@@ -926,13 +925,12 @@ static void paint_ptr(unsigned long ptr, int color, unsigned int objflags)
struct kmemleak_object *object;
object = __find_and_get_object(ptr, 0, objflags);
- if (!object) {
- kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
- ptr,
- (color == KMEMLEAK_GREY) ? "Grey" :
- (color == KMEMLEAK_BLACK) ? "Black" : "Unknown");
+ if (!object)
+ /*
+ * kmalloc_nolock() -> kfree_rcu() calls kmemleak_ignore()
+ * without kmemleak_alloc().
+ */
return;
- }
paint_it(object, color);
put_object(object);
}
diff --git a/mm/slub.c b/mm/slub.c
index 11a99bd06ac7..63b03fd62ca7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2584,6 +2584,24 @@ struct rcu_delayed_free {
* Returns true if freeing of the object can proceed, false if its reuse
* was delayed by CONFIG_SLUB_RCU_DEBUG or KASAN quarantine, or it was returned
* to KFENCE.
+ *
+ * For objects allocated via kmalloc_nolock(), only a subset of alloc hooks
+ * are invoked, so some free hooks must handle asymmetric hook calls.
+ *
+ * Alloc hooks called for kmalloc_nolock():
+ * - kmsan_slab_alloc()
+ * - kasan_slab_alloc()
+ * - memcg_slab_post_alloc_hook()
+ * - alloc_tagging_slab_alloc_hook()
+ *
+ * Free hooks that must handle missing corresponding alloc hooks:
+ * - kmemleak_free_recursive()
+ * - kfence_free()
+ *
+ * Free hooks that have no alloc hook counterpart and are thus safe to call:
+ * - debug_check_no_locks_freed()
+ * - debug_check_no_obj_freed()
+ * - __kcsan_check_access()
*/
static __always_inline
bool slab_free_hook(struct kmem_cache *s, void *x, bool init,
@@ -6368,7 +6386,7 @@ void kvfree_rcu_cb(struct rcu_head *head)
/**
* kfree - free previously allocated memory
- * @object: pointer returned by kmalloc() or kmem_cache_alloc()
+ * @object: pointer returned by kmalloc(), kmalloc_nolock(), or kmem_cache_alloc()
*
* If @object is NULL, no operation is performed.
*/
@@ -6387,6 +6405,7 @@ void kfree(const void *object)
page = virt_to_page(object);
slab = page_slab(page);
if (!slab) {
+ /* kmalloc_nolock() doesn't support large kmalloc */
free_large_kmalloc(page, (void *)object);
return;
}
--
2.43.0
* [PATCH 2/2] mm/slab: free a bit in enum objexts_flags
2026-02-09 12:10 [PATCH 0/2] mm/slab: support kmalloc_nolock() -> kfree[_rcu]() Harry Yoo
2026-02-09 12:10 ` [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]() Harry Yoo
@ 2026-02-09 12:10 ` Harry Yoo
2026-02-10 1:44 ` Harry Yoo
1 sibling, 1 reply; 6+ messages in thread
From: Harry Yoo @ 2026-02-09 12:10 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Roman Gushchin, Hao Li,
Alexei Starovoitov, Catalin Marinas, Uladzislau Rezki,
Suren Baghdasaryan, linux-mm, Harry Yoo
Since kfree() now supports freeing objects allocated with
kmalloc_nolock(), free one bit in enum objext_flags.
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/linux/memcontrol.h | 3 +--
mm/slub.c | 12 ++----------
2 files changed, 3 insertions(+), 12 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0651865a4564..bb789ec4a2a2 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -359,8 +359,7 @@ enum objext_flags {
* MEMCG_DATA_OBJEXTS.
*/
OBJEXTS_ALLOC_FAIL = __OBJEXTS_ALLOC_FAIL,
- /* slabobj_ext vector allocated with kmalloc_nolock() */
- OBJEXTS_NOSPIN_ALLOC = __FIRST_OBJEXT_FLAG,
+ __OBJEXTS_FLAG_UNUSED = __FIRST_OBJEXT_FLAG,
/* the next bit after the last actual flag */
__NR_OBJEXTS_FLAGS = (__FIRST_OBJEXT_FLAG << 1),
};
diff --git a/mm/slub.c b/mm/slub.c
index 63b03fd62ca7..33d2cae8f939 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2189,8 +2189,6 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
virt_to_slab(vec)->slab_cache == s);
new_exts = (unsigned long)vec;
- if (unlikely(!allow_spin))
- new_exts |= OBJEXTS_NOSPIN_ALLOC;
#ifdef CONFIG_MEMCG
new_exts |= MEMCG_DATA_OBJEXTS;
#endif
@@ -2213,10 +2211,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
* objcg vector should be reused.
*/
mark_objexts_empty(vec);
- if (unlikely(!allow_spin))
- kfree_nolock(vec);
- else
- kfree(vec);
+ kfree(vec);
return 0;
} else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
/* Retry if a racing thread changed slab->obj_exts from under us. */
@@ -2256,10 +2251,7 @@ static inline void free_slab_obj_exts(struct slab *slab)
* the extension for obj_exts is expected to be NULL.
*/
mark_objexts_empty(obj_exts);
- if (unlikely(READ_ONCE(slab->obj_exts) & OBJEXTS_NOSPIN_ALLOC))
- kfree_nolock(obj_exts);
- else
- kfree(obj_exts);
+ kfree(obj_exts);
slab->obj_exts = 0;
}
--
2.43.0
* Re: [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]()
2026-02-09 12:10 ` [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]() Harry Yoo
@ 2026-02-09 18:34 ` Catalin Marinas
2026-02-10 1:50 ` Harry Yoo
0 siblings, 1 reply; 6+ messages in thread
From: Catalin Marinas @ 2026-02-09 18:34 UTC (permalink / raw)
To: Harry Yoo
Cc: Andrew Morton, Vlastimil Babka, Christoph Lameter,
David Rientjes, Roman Gushchin, Hao Li, Alexei Starovoitov,
Uladzislau Rezki, Suren Baghdasaryan, linux-mm
On Mon, Feb 09, 2026 at 09:10:12PM +0900, Harry Yoo wrote:
> Slab objects that are allocated with kmalloc_nolock() must be freed
> using kfree_nolock() because only a subset of alloc hooks are called,
> since kmalloc_nolock() can't spin on a lock during allocation.
>
> This imposes a limitation: such objects cannot be freed with kfree_rcu(),
> forcing users to work around this limitation by calling call_rcu()
> with a callback that frees the object using kfree_nolock().
>
> Remove this limitation by teaching kmemleak to gracefully ignore cases
> when kmemleak_free() or kmemleak_ignore() (called by kvfree_call_rcu())
> is called without a prior kmemleak_alloc().
>
> Unlike kmemleak, kfence already handles this case, because,
> due to its design, only a subset of allocations are served from kfence.
>
> With this change, kfree() and kfree_rcu() can be used to free objects
> that are allocated using kmalloc_nolock().
>
> Suggested-by: Alexei Starovoitov <ast@kernel.org>
> Acked-by: Alexei Starovoitov <ast@kernel.org>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
It looks fine to me. The alternative would have been to track objects
allocated by kmalloc_nolock() but that's not (easily) possible without
taking more locks in kmemleak.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
* Re: [PATCH 2/2] mm/slab: free a bit in enum objexts_flags
2026-02-09 12:10 ` [PATCH 2/2] mm/slab: free a bit in enum objexts_flags Harry Yoo
@ 2026-02-10 1:44 ` Harry Yoo
0 siblings, 0 replies; 6+ messages in thread
From: Harry Yoo @ 2026-02-10 1:44 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Roman Gushchin, Hao Li,
Alexei Starovoitov, Catalin Marinas, Uladzislau Rezki,
Suren Baghdasaryan, linux-mm
On Mon, Feb 09, 2026 at 09:10:13PM +0900, Harry Yoo wrote:
> Since kfree() now supports freeing objects allocated with
> kmalloc_nolock(), free one bit in enum objext_flags.
>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Oops, looks like I forgot to add Alexei's ack, and...
> ---
> include/linux/memcontrol.h | 3 +--
> mm/slub.c | 12 ++----------
> 2 files changed, 3 insertions(+), 12 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 63b03fd62ca7..33d2cae8f939 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2213,10 +2211,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> * objcg vector should be reused.
> */
> mark_objexts_empty(vec);
> - if (unlikely(!allow_spin))
> - kfree_nolock(vec);
> - else
> - kfree(vec);
> + kfree(vec);
> return 0;
Oh Harry, no.
We still need to check allow_spin in this case.
Just because you can free objects allocated from kmalloc_nolock() with
kfree() doesn't mean you can call kfree() when allow_spin == false.
I'll respin v2.
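Something along these lines (untested, just to show the shape) is what I
have in mind for v2 -- keep the old branch and only update the comment:

	mark_objexts_empty(vec);
	/*
	 * kfree() can now free kmalloc_nolock()'d objects, but it still
	 * must not be called from a context that cannot spin.
	 */
	if (unlikely(!allow_spin))
		kfree_nolock(vec);
	else
		kfree(vec);
	return 0;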
> } else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
> /* Retry if a racing thread changed slab->obj_exts from under us. */
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH 1/2] mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]()
2026-02-09 18:34 ` Catalin Marinas
@ 2026-02-10 1:50 ` Harry Yoo
0 siblings, 0 replies; 6+ messages in thread
From: Harry Yoo @ 2026-02-10 1:50 UTC (permalink / raw)
To: Catalin Marinas
Cc: Andrew Morton, Vlastimil Babka, Christoph Lameter,
David Rientjes, Roman Gushchin, Hao Li, Alexei Starovoitov,
Uladzislau Rezki, Suren Baghdasaryan, linux-mm
On Mon, Feb 09, 2026 at 06:34:16PM +0000, Catalin Marinas wrote:
> On Mon, Feb 09, 2026 at 09:10:12PM +0900, Harry Yoo wrote:
> > Slab objects that are allocated with kmalloc_nolock() must be freed
> > using kfree_nolock() because only a subset of alloc hooks are called,
> > since kmalloc_nolock() can't spin on a lock during allocation.
> >
> > This imposes a limitation: such objects cannot be freed with kfree_rcu(),
> > forcing users to work around this limitation by calling call_rcu()
> > with a callback that frees the object using kfree_nolock().
> >
> > Remove this limitation by teaching kmemleak to gracefully ignore cases
> > when kmemleak_free() or kmemleak_ignore() (called by kvfree_call_rcu())
> > is called without a prior kmemleak_alloc().
> >
> > Unlike kmemleak, kfence already handles this case, because,
> > due to its design, only a subset of allocations are served from kfence.
> >
> > With this change, kfree() and kfree_rcu() can be used to free objects
> > that are allocated using kmalloc_nolock().
> >
> > Suggested-by: Alexei Starovoitov <ast@kernel.org>
> > Acked-by: Alexei Starovoitov <ast@kernel.org>
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
>
> It looks fine to me. The alternative would have been to track objects
> allocated by kmalloc_nolock() but that's not (easily) possible without
> taking more locks in kmemleak.
Haha, yeah... I wasn't brave enough to have fun with changing the locking
in kmemleak :)
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks a lot for quick review, Catalin!
--
Cheers,
Harry / Hyeonggon