From: Hao Li <hao.li@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Harry Yoo <harry.yoo@oracle.com>,
akpm@linux-foundation.org, linux-mm@kvack.org, cl@gentwo.org,
rientjes@google.com, surenb@google.com,
kernel test robot <oliver.sang@intel.com>,
stable@vger.kernel.org
Subject: Re: [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
Date: Mon, 26 Jan 2026 22:37:27 +0800
Message-ID: <73gdb2vktj6dmog3z4kzpl2tefkuvt4ckcom24eo6xveznc7lx@v2b74xbwa36r>
In-Reply-To: <795a4294-001f-4462-8afc-7310e9059943@suse.cz>

On Mon, Jan 26, 2026 at 02:46:46PM +0100, Vlastimil Babka wrote:
> On 1/26/26 14:03, Harry Yoo wrote:
> > On Mon, Jan 26, 2026 at 09:57:14PM +0900, Harry Yoo wrote:
> >> When allocating slabobj_ext array in alloc_slab_obj_exts(), the array
> >> can be allocated from the same slab we're allocating the array for.
> >> This led to obj_exts_in_slab() incorrectly returning true [1],
> >> although the array is not allocated from wasted space of the slab.
> >>
> >> Vlastimil Babka observed that this problem should be fixed even when
> >> ignoring its incompatibility with obj_exts_in_slab(), because it creates
> >> slabs that are never freed as there is always at least one allocated
> >> object.
> >>
> >> To avoid this, use the next kmalloc size or large kmalloc when
> >> the array can be allocated from the same cache we're allocating
> >> the array for.
> >>
> >> In case of random kmalloc caches, there are multiple kmalloc caches
> >> for the same size and the cache is selected based on the caller address.
> >> Because it is fragile to ensure the same caller address is passed to
> >> kmalloc_slab(), kmalloc_noprof(), and kmalloc_node_noprof(), bump the
> >> size to (s->object_size + 1) when the sizes are equal, instead of
> >> directly comparing the kmem_cache pointers.
> >>
> >> Note that this doesn't happen when memory allocation profiling is
> >> disabled, as when the allocation of the array is triggered by memory
> >> cgroup (KMALLOC_CGROUP), the array is allocated from KMALLOC_NORMAL.
> >>
> >> Reported-by: kernel test robot <oliver.sang@intel.com>
> >> Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1]
> >> Cc: stable@vger.kernel.org
> >> Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
> >> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> >> ---
> >>
> >> V1 -> V2:
> >> - Simplified implementation based on Vlastimil's comment
> >> - added virt_to_slab() != NULL check before dereferencing it - because
> >> (in theory) it may be allocated via large kmalloc.
> >>
> >> mm/slub.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++-------
> >> 1 file changed, 53 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index f21b2f0c6f5a..5b4a3b9b7826 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2095,6 +2095,49 @@ static inline void init_slab_obj_exts(struct slab *slab)
> >>  	slab->obj_exts = 0;
> >>  }
> >>  
> >> +/*
> >> + * Calculate the allocation size for slabobj_ext array.
> >> + *
> >> + * When memory allocation profiling is enabled, the obj_exts array
> >> + * could be allocated from the same slab cache it's being allocated for.
> >> + * This would prevent the slab from ever being freed because it would
> >> + * always contain at least one allocated object (its own obj_exts array).
> >> + *
> >> + * To avoid this, increase the allocation size when we detect the array
> >> + * may come from the same cache, forcing it to use a different cache.
> >> + */
> >> +static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
> >> +					 struct slab *slab, gfp_t gfp)
> >> +{
> >> +	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
> >> +	struct kmem_cache *obj_exts_cache;
> >> +
> >> +	/*
> >> +	 * slabobj_ext array for KMALLOC_CGROUP allocations
> >> +	 * are served from KMALLOC_NORMAL caches.
> >> +	 */
> >> +	if (!mem_alloc_profiling_enabled())
> >> +		return sz;
> >
> > Hmm maybe we don't need this as there's !is_kmalloc_normal(s) check,
> > but this allows optimizing out the checks below when
> > CONFIG_MEM_ALLOC_PROFILING is not enabled.
> >
> > So probably worth keeping it.
>
> Right.
>
> Thanks, added to slab/for-next as the first commit of the obj_metadata branch.

Hi Vlastimil,

This v2 patch still looks good to me!
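
One note for anyone reading along: the quoted hunk above is cut off right
after the mem_alloc_profiling_enabled() check, so the rest of the helper
below is only my own reconstruction from the changelog, not the code from
the patch. It assumes the existing is_kmalloc_normal(), kmalloc_slab() and
KMALLOC_MAX_CACHE_SIZE helpers; the real implementation may differ in
detail:

	/* Arrays for non-kmalloc caches can never come from 's' itself. */
	if (!is_kmalloc_normal(s))
		return sz;

	/* Anything past the kmalloc caches is a large kmalloc anyway. */
	if (sz > KMALLOC_MAX_CACHE_SIZE)
		return sz;

	/* Which kmalloc cache would currently serve this size? */
	obj_exts_cache = kmalloc_slab(sz, NULL, gfp, _RET_IP_);

	/*
	 * Compare object sizes instead of cache pointers: with random
	 * kmalloc caches several caches share one size and the choice
	 * depends on the caller address. Bumping past s->object_size
	 * pushes the request into the next kmalloc size class (or a
	 * large kmalloc), so the slab can still become empty and be freed.
	 */
	if (obj_exts_cache->object_size == s->object_size)
		sz = s->object_size + 1;

	return sz;
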
Feel free to fold this R-b tag into the commit on slab/for-next.
Reviewed-by: Hao Li <hao.li@linux.dev>
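
P.S. A made-up example of why the +1 bump is enough: with, say, 4 objects
per slab and a 16-byte struct slabobj_ext, sz comes out to 64; if s itself
happens to be kmalloc-64, the array would otherwise be handed back from the
very cache it describes. Bumping to s->object_size + 1 = 65 makes kmalloc
pick the next size class instead (kmalloc-96 on a typical config), so the
slab no longer pins itself and can be freed once its real objects are gone.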

Thread overview: 4+ messages
2026-01-26 12:57 Harry Yoo
2026-01-26 13:03 ` Harry Yoo
2026-01-26 13:46 ` Vlastimil Babka
2026-01-26 14:37 ` Hao Li [this message]