From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, cl@gentwo.org, rientjes@google.com,
surenb@google.com, hao.li@linux.dev,
kernel test robot <oliver.sang@intel.com>,
stable@vger.kernel.org
Subject: Re: [PATCH] mm/slab: avoid allocating slabobj_ext array from its own slab
Date: Mon, 26 Jan 2026 08:36:16 +0100
Message-ID: <2b116198-b27a-4b20-90b2-951343f9fff1@suse.cz>
In-Reply-To: <20260124104614.9739-1-harry.yoo@oracle.com>
On 1/24/26 11:46, Harry Yoo wrote:
> When allocating the slabobj_ext array in alloc_slab_obj_exts(), the
> array can be allocated from the same slab that it is being allocated
> for. This led to obj_exts_in_slab() incorrectly returning true [1],
> even though the array is not allocated from the wasted space of the
> slab.
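To make the collision concrete (the numbers are just an example and
depend on the config; assume a 16-byte struct slabobj_ext and an
order-0 kmalloc-256 slab, i.e. 16 objects per slab):

	sz = slab->objects * sizeof(struct slabobj_ext);
		/* = 16 * 16 = 256 */
	kmalloc_slab(sz, NULL, gfp, 0);
		/* -> kmalloc-256, i.e. s itself */

and the bump to s->object_size + 1 == 257 below then makes
kmalloc_slab() pick kmalloc-512 instead.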
>
> Vlastimil Babka observed that this problem should be fixed even
> setting aside its incompatibility with obj_exts_in_slab(), because it
> creates slabs that can never be freed: such a slab always contains at
> least one allocated object, namely its own obj_exts array.
>
> To avoid this, use the next kmalloc size (or a large kmalloc) when
> kmalloc_slab() returns the same cache that we're allocating the array for.
>
> In the case of random kmalloc caches, there are multiple kmalloc
> caches for the same size, and the cache is selected based on the
> caller address. Because it is fragile to ensure that the same caller
> address is passed to kmalloc_slab(), kmalloc_noprof(), and
> kmalloc_node_noprof(), fall back to (s->object_size + 1) whenever the
> sizes are equal.
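(For reference, with CONFIG_RANDOM_KMALLOC_CACHES the cache is picked
from the caller address roughly like this - paraphrasing kmalloc_type()
in include/linux/slab.h, details may differ:

	return KMALLOC_RANDOM_START +
	       hash_64(caller ^ random_kmalloc_seed,
		       ilog2(RANDOM_KMALLOC_CACHES_NR + 1));

so two callers asking for the same size can indeed get different
caches.)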
>
> Note that this doesn't happen when memory allocation profiling is
> disabled: when the allocation of the array is triggered by the memory
> cgroup instead (i.e. for a KMALLOC_CGROUP cache), the array itself is
> allocated from a KMALLOC_NORMAL cache.
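Right - and to spell it out, that's thanks to this in
alloc_slab_obj_exts():

	gfp &= ~OBJCGS_CLEAR_MASK;	/* clears __GFP_ACCOUNT */

so the array for a KMALLOC_CGROUP cache always comes from a
KMALLOC_NORMAL cache and can't end up back in s.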
>
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1]
> Cc: stable@vger.kernel.org
> Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Thanks! Just wondering if we could simplify a bit.
> ---
> mm/slub.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 55 insertions(+), 7 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3ff1c475b0f1..43ddb96c4081 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2104,6 +2104,52 @@ static inline void init_slab_obj_exts(struct slab *slab)
> slab->obj_exts = 0;
> }
>
> +/*
> + * Calculate the allocation size for slabobj_ext array.
> + *
> + * When memory allocation profiling is enabled, the obj_exts array
> + * could be allocated from the same slab cache it's being allocated for.
> + * This would prevent the slab from ever being freed because it would
> + * always contain at least one allocated object (its own obj_exts array).
> + *
> + * To avoid this, increase the allocation size when we detect the array
> + * would come from the same cache, forcing it to use a different cache.
> + */
> +static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
> + struct slab *slab, gfp_t gfp)
> +{
> + size_t sz = sizeof(struct slabobj_ext) * slab->objects;
> + struct kmem_cache *obj_exts_cache;
> +
> + /*
> + * slabobj_ext arrays for KMALLOC_CGROUP allocations
> + * are served from KMALLOC_NORMAL caches.
> + */
> + if (!mem_alloc_profiling_enabled())
> + return sz;
> +
> + if (sz > KMALLOC_MAX_CACHE_SIZE)
> + return sz;
Could we bail out here immediately if !is_kmalloc_normal(s)? For any other cache, the one returned by kmalloc_slab() below can never be the same.
> +
> + obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
Then do this.
> + if (s == obj_exts_cache)
> + return obj_exts_cache->object_size + 1;
But not this.
> + /*
> + * Random kmalloc caches have multiple caches per size, and the cache
> + * is selected by the caller address. Since caller address may differ
> + * between kmalloc_slab() and actual allocation, bump size when both
> + * are normal kmalloc caches of same size.
> + */
> + if (IS_ENABLED(CONFIG_RANDOM_KMALLOC_CACHES) &&
Instead just compare object_size unconditionally.
> + is_kmalloc_normal(s) &&
We already checked this above.
> + is_kmalloc_normal(obj_exts_cache) &&
I think this is guaranteed thanks to "gfp &= ~OBJCGS_CLEAR_MASK;" below,
so we don't need it.
> + (s->object_size == obj_exts_cache->object_size))
> + return obj_exts_cache->object_size + 1;
> +
> + return sz;
> +}
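IOW, the whole helper could become something like this (untested, just
to illustrate what I mean):

static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
					 struct slab *slab, gfp_t gfp)
{
	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
	struct kmem_cache *obj_exts_cache;

	/*
	 * slabobj_ext arrays for KMALLOC_CGROUP allocations are
	 * served from KMALLOC_NORMAL caches, so without profiling
	 * there is no way to collide.
	 */
	if (!mem_alloc_profiling_enabled())
		return sz;

	if (sz > KMALLOC_MAX_CACHE_SIZE)
		return sz;

	/* only KMALLOC_NORMAL caches can collide with the array's cache */
	if (!is_kmalloc_normal(s))
		return sz;

	obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);

	/*
	 * Comparing sizes covers both s == obj_exts_cache and the
	 * RANDOM_KMALLOC_CACHES case, where the actual allocation may
	 * pick a different cache of the same size.
	 */
	if (s->object_size == obj_exts_cache->object_size)
		return obj_exts_cache->object_size + 1;

	return sz;
}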
> +
> int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> gfp_t gfp, bool new_slab)
> {
> @@ -2112,26 +2158,26 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> unsigned long new_exts;
> unsigned long old_exts;
> struct slabobj_ext *vec;
> + size_t sz;
>
> gfp &= ~OBJCGS_CLEAR_MASK;
> /* Prevent recursive extension vector allocation */
> gfp |= __GFP_NO_OBJ_EXT;
>
> + sz = obj_exts_alloc_size(s, slab, gfp);
> +
> /*
> * Note that allow_spin may be false during early boot and its
> * restricted GFP_BOOT_MASK. Due to kmalloc_nolock() only supporting
> * architectures with cmpxchg16b, early obj_exts will be missing for
> * very early allocations on those.
> */
> - if (unlikely(!allow_spin)) {
> - size_t sz = objects * sizeof(struct slabobj_ext);
> -
> + if (unlikely(!allow_spin))
> vec = kmalloc_nolock(sz, __GFP_ZERO | __GFP_NO_OBJ_EXT,
> slab_nid(slab));
> - } else {
> - vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> - slab_nid(slab));
> - }
> + else
> + vec = kmalloc_node(sz, gfp | __GFP_ZERO, slab_nid(slab));
> +
> if (!vec) {
> /*
> * Try to mark vectors which failed to allocate.
> @@ -2145,6 +2191,8 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> return -ENOMEM;
> }
>
> + VM_WARN_ON_ONCE(virt_to_slab(vec)->slab_cache == s);
> +
> new_exts = (unsigned long)vec;
> if (unlikely(!allow_spin))
> new_exts |= OBJEXTS_NOSPIN_ALLOC;