From: Harry Yoo <harry.yoo@oracle.com>
To: akpm@linux-foundation.org, vbabka@suse.cz
Cc: linux-mm@kvack.org, cl@gentwo.org, rientjes@google.com,
surenb@google.com, hao.li@linux.dev,
kernel test robot <oliver.sang@intel.com>,
stable@vger.kernel.org
Subject: Re: [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
Date: Mon, 26 Jan 2026 22:03:51 +0900
Message-ID: <aXdmN1jUR5bZ6rK8@hyeyoo>
In-Reply-To: <20260126125714.88008-1-harry.yoo@oracle.com>
On Mon, Jan 26, 2026 at 09:57:14PM +0900, Harry Yoo wrote:
> When allocating the slabobj_ext array in alloc_slab_obj_exts(), the
> array can be allocated from the same slab that it is being allocated
> for. This leads to obj_exts_in_slab() incorrectly returning true [1],
> even though the array is not allocated from the slab's wasted space.
>
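To illustrate the false positive for readers who haven't followed the
leftover-space series: obj_exts_in_slab() boils down to an address-range
test, something like the following (hypothetical sketch, not the exact
helper; slab_address()/slab_size() are my recollection of the mm/slab.h
accessors):

  static bool obj_exts_in_slab(const struct slab *slab,
  			       struct slabobj_ext *obj_exts)
  {
  	void *start = slab_address(slab);

  	return (void *)obj_exts >= start &&
  	       (void *)obj_exts < start + slab_size(slab);
  }

When the array is allocated as an object of the very slab it describes,
this test returns true even though the array was not carved out of the
slab's unused space.
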
> Vlastimil Babka observed that this problem should be fixed even
> setting aside its incompatibility with obj_exts_in_slab(): it creates
> slabs that can never be freed, because such a slab always contains at
> least one allocated object (its own obj_exts array).
>
> To avoid this, use the next kmalloc size or a large kmalloc
> allocation when the array would otherwise be allocated from the same
> cache that it is being allocated for.
>
> In the case of random kmalloc caches, there are multiple kmalloc
> caches for the same size, and the cache is selected based on the
> caller address. Because it is fragile to ensure that the same caller
> address is passed to kmalloc_slab(), kmalloc_noprof(), and
> kmalloc_node_noprof(), bump the size to (s->object_size + 1) when the
> sizes are equal, instead of directly comparing the kmem_cache pointers.
>
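To make the bump concrete, a worked example (illustrative only:
sizeof(struct slabobj_ext) is 16 bytes when both profiling and memcg
are compiled in, and whether kmalloc-256 really uses order-0 slabs
depends on tuning):

  objects = 4096 / 256 = 16			/* order-0 kmalloc-256 slab */
  sz      = 16 * sizeof(struct slabobj_ext)
          = 16 * 16 = 256			/* kmalloc_slab(256) == s! */
  bumped  = 256 + 1 = 257			/* served from kmalloc-512 */

So the array lands in a different cache, and the slab can become empty
and be freed.
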
> Note that this doesn't happen when memory allocation profiling is
> disabled: when the allocation of the array is triggered by the memory
> cgroup code (KMALLOC_CGROUP), the array is allocated from a
> KMALLOC_NORMAL cache.
>
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1]
> Cc: stable@vger.kernel.org
> Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
>
> V1 -> V2:
> - Simplified implementation based on Vlastimil's comment
> - Added a virt_to_slab() != NULL check before dereferencing it,
>   because (in theory) the array may be allocated via large kmalloc.
>
> mm/slub.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 53 insertions(+), 7 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f21b2f0c6f5a..5b4a3b9b7826 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2095,6 +2095,49 @@ static inline void init_slab_obj_exts(struct slab *slab)
>  	slab->obj_exts = 0;
>  }
>
>
> +/*
> + * Calculate the allocation size for the slabobj_ext array.
> + *
> + * When memory allocation profiling is enabled, the obj_exts array
> + * could be allocated from the same slab cache it's being allocated for.
> + * This would prevent the slab from ever being freed, because it would
> + * always contain at least one allocated object (its own obj_exts array).
> + *
> + * To avoid this, increase the allocation size when we detect the array
> + * may come from the same cache, forcing it to use a different cache.
> + */
> +static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
> +					  struct slab *slab, gfp_t gfp)
> +{
> +	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
> +	struct kmem_cache *obj_exts_cache;
> +
> +	/*
> +	 * slabobj_ext arrays for KMALLOC_CGROUP allocations
> +	 * are served from KMALLOC_NORMAL caches.
> +	 */
> +	if (!mem_alloc_profiling_enabled())
> +		return sz;
Hmm, maybe we don't need this check since there's the
!is_kmalloc_normal(s) check below, but it allows the compiler to
optimize out the checks below when CONFIG_MEM_ALLOC_PROFILING is not
enabled. So it's probably worth keeping.
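(For context, mem_alloc_profiling_enabled() is roughly the following
static-key test -- quoting from memory, the exact definition may differ:

  #ifdef CONFIG_MEM_ALLOC_PROFILING
  static inline bool mem_alloc_profiling_enabled(void)
  {
  	return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
  				   &mem_alloc_profiling_key);
  }
  #else
  static inline bool mem_alloc_profiling_enabled(void)
  {
  	return false;	/* compile-time constant, checks below become dead code */
  }
  #endif

so with the config off the early return costs nothing, and with it on
it's a single patched branch.)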
> +
> +	if (sz > KMALLOC_MAX_CACHE_SIZE)
> +		return sz;
> +
> +	if (!is_kmalloc_normal(s))
> +		return sz;
> +
> +	obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
> +	/*
> +	 * We can't simply compare s with obj_exts_cache, because random
> +	 * kmalloc caches have multiple caches per size, selected by caller
> +	 * address. Since the caller address may differ between kmalloc_slab()
> +	 * and the actual allocation, bump the size when the sizes are equal.
> +	 */
> +	if (s->object_size == obj_exts_cache->object_size)
> +		return obj_exts_cache->object_size + 1;
> +
> +	return sz;
> +}
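To expand on the comment about random kmalloc caches: with
CONFIG_RANDOM_KMALLOC_CACHES each KMALLOC_NORMAL size has several
copies, and the copy is chosen by hashing the caller address, roughly
like this (sketch from memory of the cache-type selection, details may
differ):

  /* one of RANDOM_KMALLOC_CACHES_NR + 1 copies, chosen per call site */
  return KMALLOC_RANDOM_START +
	 hash_64(caller ^ random_kmalloc_seed,
		 ilog2(RANDOM_KMALLOC_CACHES_NR + 1));

kmalloc_slab() above is called with caller == 0, while the actual
kmalloc_node() below records its own _RET_IP_, so the two lookups may
legitimately land on different copies of the same size class. Comparing
object_size instead of cache pointers sidesteps that.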
> +
>  int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
>  			gfp_t gfp, bool new_slab)
> {
> @@ -2103,26 +2146,26 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
>  	unsigned long new_exts;
>  	unsigned long old_exts;
>  	struct slabobj_ext *vec;
> +	size_t sz;
>
>  	gfp &= ~OBJCGS_CLEAR_MASK;
>  	/* Prevent recursive extension vector allocation */
>  	gfp |= __GFP_NO_OBJ_EXT;
>
> +	sz = obj_exts_alloc_size(s, slab, gfp);
> +
>  	/*
>  	 * Note that allow_spin may be false during early boot and its
>  	 * restricted GFP_BOOT_MASK. Due to kmalloc_nolock() only supporting
>  	 * architectures with cmpxchg16b, early obj_exts will be missing for
>  	 * very early allocations on those.
>  	 */
> -	if (unlikely(!allow_spin)) {
> -		size_t sz = objects * sizeof(struct slabobj_ext);
> -
> +	if (unlikely(!allow_spin))
>  		vec = kmalloc_nolock(sz, __GFP_ZERO | __GFP_NO_OBJ_EXT,
>  				     slab_nid(slab));
> -	} else {
> -		vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> -				   slab_nid(slab));
> -	}
> +	else
> +		vec = kmalloc_node(sz, gfp | __GFP_ZERO, slab_nid(slab));
> +
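Side note on replacing kcalloc_node() with kmalloc_node(): the two are
equivalent apart from kcalloc's overflow check on the multiplication,
i.e. (illustrative, not part of the patch):

  /* equivalent whenever n * size cannot overflow */
  vec = kcalloc_node(n, size, gfp, nid);
  vec = kmalloc_node(n * size, gfp | __GFP_ZERO, nid);

The overflow check isn't needed here because sz is computed in
obj_exts_alloc_size() from slab->objects, which is bounded by the slab
size.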
>  	if (!vec) {
>  		/*
>  		 * Try to mark vectors which failed to allocate.
> @@ -2136,6 +2179,9 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
>  		return -ENOMEM;
>  	}
>
> +	VM_WARN_ON_ONCE(virt_to_slab(vec) != NULL &&
> +			virt_to_slab(vec)->slab_cache == s);
> +
>  	new_exts = (unsigned long)vec;
>  	if (unlikely(!allow_spin))
>  		new_exts |= OBJEXTS_NOSPIN_ALLOC;
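On the VM_WARN_ON_ONCE() above: virt_to_slab() returns NULL when the
address is not backed by a slab folio, which is exactly the large
kmalloc case, e.g. (illustrative):

  void *p = kmalloc(KMALLOC_MAX_CACHE_SIZE + 1, GFP_KERNEL);

  WARN_ON(virt_to_slab(p) != NULL);	/* plain compound pages, not a slab */
  kfree(p);

hence the NULL check before dereferencing ->slab_cache.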
> --
> 2.43.0
>
--
Cheers,
Harry / Hyeonggon