* [PATCH v2] slab: replace cache_from_obj() with inline checks
From: Vlastimil Babka @ 2026-01-21  6:57 UTC
To: Harry Yoo
Cc: Eric Dumazet, Hao Li, Christoph Lameter, David Rientjes,
    Roman Gushchin, linux-mm, linux-kernel, llvm, Vlastimil Babka
Eric Dumazet noticed that cache_from_obj() is not inlined with clang and
suggested splitting it into two functions, where the smaller, inlined
one assumes !CONFIG_SLAB_FREELIST_HARDENED as the fastpath. However,
most distros enable that option these days, so the split would likely
add a function call to the object free fastpaths.
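
For illustration only (this is not what the patch does, and the function
names here are made up), that split would have looked roughly like the
sketch below, with the old checks moved out of line:

  static noinline struct kmem_cache *cache_from_obj_slow(struct kmem_cache *s,
                                                         void *x)
  {
          struct kmem_cache *cachep = virt_to_cache(x);

          if (WARN(cachep && cachep != s,
                   "%s: Wrong slab cache. %s but object is from %s\n",
                   __func__, s->name, cachep->name))
                  print_tracking(cachep, x);
          return cachep;
  }

  static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
  {
          if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
              !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
                  return s;       /* assumed fastpath: no extra call */

          /* with hardening enabled, every free pays for this call */
          return cache_from_obj_slow(s, x);
  }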
Instead, take a step back and consider that cache_from_obj() is a relic
from the times when memcgs created their separate kmem_cache copies, as
the outdated comment in build_detached_freelist() reminds us.
Meanwhile, the hardening/debugging code had reused cache_from_obj() to
validate that the freed object really belongs to a slab from the cache
we think we are freeing from.
In build_detached_freelist(), simply remove that validation, because it
neither handled the NULL result of a cache_from_obj() failure properly,
nor validated objects at all (not even against a NULL slab->slab_cache
pointer) when called via kfree_bulk(). If anyone is motivated to
implement it properly, it should be possible in a similar way to
kmem_cache_free(); see the sketch below.
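
Should anyone do that, a minimal sketch (not part of this patch, and the
helper name is made up) could reuse the same checks that
kmem_cache_free() performs after this change:

  /*
   * Hypothetical helper; build_detached_freelist() could call it and
   * skip (leak) the object when it returns false.
   */
  static inline bool free_obj_valid(struct kmem_cache *s, struct slab *slab,
                                    void *obj)
  {
          if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
              !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
                  return true;

          if (unlikely(!slab || slab->slab_cache != s)) {
                  warn_free_bad_obj(s, obj);
                  return false;
          }
          return true;
  }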
In kmem_cache_free(), do the hardening/debugging checks directly, so
they are inlined by definition and virt_to_slab(obj) is performed just
once. If they fail, call a newly introduced warn_free_bad_obj() that
performs the warnings outside of the fastpath, and leak the object.
As an intentional change, leak the object when slab->slab_cache differs
from the cache given to kmem_cache_free(). Previously we would only
leak when the object was not in a valid slab page or the
slab->slab_cache pointer was NULL, and would otherwise trust
slab->slab_cache over the kmem_cache_free() argument. But if the two
differ, something has gone wrong enough that it's best not to continue
freeing.
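
To make the new semantics concrete, consider this contrived misuse
(cache names made up). With hardening/debug enabled, the old code would
warn on the mismatched free but still free obj via cache_a; now it
warns and leaks:

  struct kmem_cache *cache_a = kmem_cache_create("cache_a", 64, 0, 0, NULL);
  struct kmem_cache *cache_b = kmem_cache_create("cache_b", 64, 0, 0, NULL);
  void *obj = kmem_cache_alloc(cache_a, GFP_KERNEL);

  kmem_cache_free(cache_b, obj);  /* warns: object belongs to different
                                     cache cache_a; obj is leaked */
  kmem_cache_free(cache_a, obj);  /* still valid: obj was never freed */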
As a result, the fastpath should be inlined in all configs, and the
warnings are moved out of line.
Reported-by: Eric Dumazet <edumazet@google.com>
Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Hao Li <hao.li@linux.dev>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
I'll put this to slab/for-next. Thanks for the reviews!
Changes in v2:
- Add a comment and mention in the commit log that we are leaking the
freed object also in cases where s != slab->slab_cache (Hao Li)
- Link to v1: https://patch.msgid.link/20260120-b4-remove_cache_from_obj-v1-1-ace30c41eecf@suse.cz
---
mm/slub.c | 56 +++++++++++++++++++++++++++++++++-----------------------
1 file changed, 33 insertions(+), 23 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 861592ac5425..fd915ea95121 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6738,30 +6738,26 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
-static inline struct kmem_cache *virt_to_cache(const void *obj)
+static noinline void warn_free_bad_obj(struct kmem_cache *s, void *obj)
 {
+	struct kmem_cache *cachep;
 	struct slab *slab;
 
 	slab = virt_to_slab(obj);
-	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
-		return NULL;
-	return slab->slab_cache;
-}
-
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	struct kmem_cache *cachep;
+	if (WARN_ONCE(!slab,
+		      "kmem_cache_free(%s, %p): object is not in a slab page\n",
+		      s->name, obj))
+		return;
 
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
+	cachep = slab->slab_cache;
 
-	cachep = virt_to_cache(x);
-	if (WARN(cachep && cachep != s,
-		 "%s: Wrong slab cache. %s but object is from %s\n",
-		 __func__, s->name, cachep->name))
-		print_tracking(cachep, x);
-	return cachep;
+	if (WARN_ONCE(cachep != s,
+		      "kmem_cache_free(%s, %p): object belongs to different cache %s\n",
+		      s->name, obj, cachep ? cachep->name : "(NULL)")) {
+		if (cachep)
+			print_tracking(cachep, obj);
+		return;
+	}
 }
 
 /**
@@ -6774,11 +6770,25 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
  */
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
-	s = cache_from_obj(s, x);
-	if (!s)
-		return;
+	struct slab *slab;
+
+	slab = virt_to_slab(x);
+
+	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
+	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
+
+		/*
+		 * Intentionally leak the object in these cases, because it
+		 * would be too dangerous to continue.
+		 */
+		if (unlikely(!slab || (slab->slab_cache != s))) {
+			warn_free_bad_obj(s, x);
+			return;
+		}
+	}
+
 	trace_kmem_cache_free(_RET_IP_, x, s);
-	slab_free(s, virt_to_slab(x), x, _RET_IP_);
+	slab_free(s, slab, x, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
@@ -7305,7 +7315,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 		df->s = slab->slab_cache;
 	} else {
 		df->slab = slab;
-		df->s = cache_from_obj(s, object); /* Support for memcg */
+		df->s = s;
 	}
 
 	/* Start new detached freelist */
---
base-commit: 0f61b1860cc3f52aef9036d7235ed1f017632193
change-id: 20260120-b4-remove_cache_from_obj-190fcaf16789
Best regards,
--
Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2] slab: replace cache_from_obj() with inline checks
From: Eric Dumazet @ 2026-01-21  8:33 UTC
To: Vlastimil Babka
Cc: Harry Yoo, Hao Li, Christoph Lameter, David Rientjes,
    Roman Gushchin, linux-mm, linux-kernel, llvm
On Wed, Jan 21, 2026 at 7:57 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> [...]
>
> Reported-by: Eric Dumazet <edumazet@google.com>
> Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> Reviewed-by: Hao Li <hao.li@linux.dev>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Eric Dumazet <edumazet@google.com>
Thanks !
* Re: [PATCH v2] slab: replace cache_from_obj() with inline checks
From: David Rientjes @ 2026-01-25  0:24 UTC
To: Eric Dumazet
Cc: Vlastimil Babka, Harry Yoo, Hao Li, Christoph Lameter,
    Roman Gushchin, linux-mm, linux-kernel, llvm
On Wed, 21 Jan 2026, Eric Dumazet wrote:
> > [...]
> >
> > Reported-by: Eric Dumazet <edumazet@google.com>
> > Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
> > Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> > Reviewed-by: Hao Li <hao.li@linux.dev>
> > Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> Acked-by: Eric Dumazet <edumazet@google.com>
>
Tested-by: David Rientjes <rientjes@google.com>