linux-mm.kvack.org archive mirror
From: Harry Yoo <harry.yoo@oracle.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, cl@gentwo.org,
	rientjes@google.com, surenb@google.com, hao.li@linux.dev,
	kernel test robot <oliver.sang@intel.com>,
	stable@vger.kernel.org
Subject: Re: [PATCH] mm/slab: avoid allocating slabobj_ext array from its own slab
Date: Mon, 26 Jan 2026 17:30:16 +0900
Message-ID: <aXcmGMlH3sWO03rv@hyeyoo>
In-Reply-To: <2b116198-b27a-4b20-90b2-951343f9fff1@suse.cz>

On Mon, Jan 26, 2026 at 08:36:16AM +0100, Vlastimil Babka wrote:
> On 1/24/26 11:46, Harry Yoo wrote:
> > When allocating the slabobj_ext array in alloc_slab_obj_exts(), the
> > array can be allocated from the same slab we're allocating the array
> > for. This led to obj_exts_in_slab() incorrectly returning true [1],
> > even though the array is not allocated from the wasted space of the
> > slab.
> > 
> > Vlastimil Babka observed that this problem should be fixed even
> > setting aside its incompatibility with obj_exts_in_slab(): it creates
> > slabs that can never be freed, because such a slab always contains at
> > least one allocated object (its own obj_exts array).
> > 
> > To avoid this, use the next kmalloc size or large kmalloc when
> > kmalloc_slab() returns the same cache we're allocating the array for.
> > 
> > In the case of random kmalloc caches, there are multiple kmalloc
> > caches for the same size, and the cache is selected based on the
> > caller address. Because it is fragile to ensure that the same caller
> > address is passed to kmalloc_slab(), kmalloc_noprof(), and
> > kmalloc_node_noprof(), fall back to (s->object_size + 1) when the
> > sizes are equal.
> > 
> > Note that this doesn't happen when memory allocation profiling is
> > disabled: when the allocation of the array is triggered by the memory
> > cgroup code (KMALLOC_CGROUP), the array is allocated from a
> > KMALLOC_NORMAL cache.
> > 
> > Reported-by: kernel test robot <oliver.sang@intel.com>
> > Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com
> > Cc: stable@vger.kernel.org
> > Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> 
> Thanks! Just wondering if we could simplify a bit.
> 
> > ---
> >  mm/slub.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 55 insertions(+), 7 deletions(-)
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 3ff1c475b0f1..43ddb96c4081 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2104,6 +2104,52 @@ static inline void init_slab_obj_exts(struct slab *slab)
> >  	slab->obj_exts = 0;
> >  }
> >  
> > +/*
> > + * Calculate the allocation size for slabobj_ext array.
> > + *
> > + * When memory allocation profiling is enabled, the obj_exts array
> > + * could be allocated from the same slab cache it's being allocated for.
> > + * This would prevent the slab from ever being freed because it would
> > + * always contain at least one allocated object (its own obj_exts array).
> > + *
> > + * To avoid this, increase the allocation size when we detect the array
> > + * would come from the same cache, forcing it to use a different cache.
> > + */
> > +static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
> > +					 struct slab *slab, gfp_t gfp)
> > +{
> > +	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
> > +	struct kmem_cache *obj_exts_cache;
> > +
> > +	/*
> > +	 * slabobj_ext arrays for KMALLOC_CGROUP allocations
> > +	 * are served from KMALLOC_NORMAL caches.
> > +	 */
> > +	if (!mem_alloc_profiling_enabled())
> > +		return sz;
> > +
> > +	if (sz > KMALLOC_MAX_CACHE_SIZE)
> > +		return sz;
> 
> Could we bail out here immediately if !is_kmalloc_normal(s)?

Yes.

> > +
> > +	obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
> 
> Then do this.

Yes.

> > +	if (s == obj_exts_cache)
> > +		return obj_exts_cache->object_size + 1;
> 
> But not this.

Yes.

> > +	/*
> > +	 * Random kmalloc caches have multiple caches per size, and the cache
> > +	 * is selected by the caller address. Since caller address may differ
> > +	 * between kmalloc_slab() and actual allocation, bump size when both
> > +	 * are normal kmalloc caches of same size.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_RANDOM_KMALLOC_CACHES) &&
> 
> Instead just compare object_size unconditionally.

Yes. Will do.
But perhaps the comment is still worth it?
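
For a concrete illustration (the sizes here are made up): if slab->objects
is 32 and sizeof(struct slabobj_ext) is 16, sz is 512, and kmalloc_slab()
returns some kmalloc-512 cache. Whether that is s itself or, with
CONFIG_RANDOM_KMALLOC_CACHES, a sibling cache of the same size, the sizes
match and we return 513, which lands in kmalloc-1k, so the array can never
pin its own slab. That is also why the explicit s == obj_exts_cache check
can go.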

> > +			is_kmalloc_normal(s) &&
> 
> This we already checked.
> 
> > +			is_kmalloc_normal(obj_exts_cache) &&
> 
> I think this is guaranteed thanks to "gfp &= ~OBJCGS_CLEAR_MASK;" below so
> we don't need it.

Right.

Will respin soon after testing. Thanks!

The end result will be:

/*
 * Calculate the allocation size for slabobj_ext array.
 *
 * When memory allocation profiling is enabled, the obj_exts array
 * could be allocated from the same slab it's being allocated for.
 * This would prevent the slab from ever being freed because it would
 * always contain at least one allocated object (its own obj_exts array).
 *
 * To avoid this, increase the allocation size when we detect the array
 * would come from the same cache, forcing it to use a different cache.
 */
static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
                                         struct slab *slab, gfp_t gfp)
{
        size_t sz = sizeof(struct slabobj_ext) * slab->objects;
        struct kmem_cache *obj_exts_cache;

        /*
         * slabobj_ext arrays for KMALLOC_CGROUP allocations
         * are served from KMALLOC_NORMAL caches.
         */
        if (!mem_alloc_profiling_enabled())
                return sz;

        if (sz > KMALLOC_MAX_CACHE_SIZE)
                return sz;

        if (!is_kmalloc_normal(s))
                return sz;

        obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
        /*
         * Random kmalloc caches have multiple caches per size, and the
         * cache is selected by the caller address. Since the caller
         * address may differ between kmalloc_slab() and the actual
         * allocation, bump the size whenever the two caches have the
         * same size; this also covers the s == obj_exts_cache case.
         */
        if (s->size == obj_exts_cache->size)
                return s->object_size + 1;

        return sz;
}
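
For context, here is a minimal sketch of how the helper would be consumed
at the allocation site. This paraphrases and abbreviates the existing
alloc_slab_obj_exts() in mm/slub.c rather than quoting the exact hunk; the
gfp masking reflects the OBJCGS_CLEAR_MASK point above:

static int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
                               gfp_t gfp, bool new_slab)
{
        struct slabobj_ext *vec;
        size_t sz;

        /*
         * Mask out accounting flags so the array itself is always
         * served from a KMALLOC_NORMAL cache, never KMALLOC_CGROUP.
         */
        gfp &= ~OBJCGS_CLEAR_MASK;
        /* The array must not get its own obj_exts attached. */
        gfp |= __GFP_NO_OBJ_EXT;

        /* Possibly bumped so the array never lands in s itself. */
        sz = obj_exts_alloc_size(s, slab, gfp);
        vec = kmalloc_node(sz, gfp | __GFP_ZERO, slab_nid(slab));
        if (!vec)
                return -ENOMEM;

        /* ... publish vec via slab->obj_exts as the existing code does ... */
        return 0;
}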

-- 
Cheers,
Harry / Hyeonggon


Thread overview: 10+ messages
2026-01-24 10:46 Harry Yoo
2026-01-24 10:53 ` Harry Yoo
2026-01-26  0:51 ` Hao Li
2026-01-26 13:00   ` Harry Yoo
2026-01-26 14:31     ` Hao Li
2026-01-26  7:36 ` Vlastimil Babka
2026-01-26  8:30   ` Harry Yoo [this message]
2026-01-26  8:37     ` Vlastimil Babka
2026-01-26  8:57       ` Harry Yoo
2026-01-26  9:10         ` Vlastimil Babka
