* [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
From: Harry Yoo @ 2026-01-26 12:57 UTC
To: akpm, vbabka
Cc: linux-mm, cl, rientjes, surenb, harry.yoo, hao.li,
kernel test robot, stable
When allocating the slabobj_ext array in alloc_slab_obj_exts(), the array
can end up being allocated from the same slab it is being allocated for.
This led to obj_exts_in_slab() incorrectly returning true [1], even though
the array is not allocated from the wasted space of that slab.

Vlastimil Babka observed that this problem should be fixed even when
ignoring its incompatibility with obj_exts_in_slab(), because it creates
slabs that can never be freed: they always contain at least one allocated
object (their own obj_exts array).

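As a hypothetical illustration (the exact numbers depend on the kernel
config and the slab order): with MEMCG and MEM_ALLOC_PROFILING enabled,
struct slabobj_ext is 16 bytes (an objcg pointer plus a codetag ref), so a
kmalloc-256 slab backed by a single 4 KiB page holds 4096 / 256 = 16 objects
and needs a 16 * 16 = 256 byte array, which kmalloc() would serve from
kmalloc-256, i.e. the very cache (and possibly the very slab) the array
describes.
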
To avoid this, use the next kmalloc size (or a large kmalloc allocation)
when the array would otherwise be allocated from the same cache it is being
allocated for.

In the case of random kmalloc caches, there are multiple kmalloc caches for
the same size, and the cache is selected based on the caller address.
Because it is fragile to ensure that the same caller address is passed to
kmalloc_slab(), kmalloc_noprof(), and kmalloc_node_noprof(), bump the size
to (s->object_size + 1) when the sizes are equal, instead of directly
comparing the kmem_cache pointers.

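For example (again with hypothetical sizes): if sz rounds up to the 256-byte
size class and s is a 256-byte kmalloc cache, returning 257 makes the later
kmalloc_node() fall into the 512-byte size class instead, which is a
different cache even when CONFIG_RANDOM_KMALLOC_CACHES spreads allocations
across several kmalloc-rnd-*-512 caches.
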
Note that this doesn't happen when memory allocation profiling is disabled:
when the allocation of the array is triggered by the memory cgroup subsystem
(KMALLOC_CGROUP), the array itself is allocated from a KMALLOC_NORMAL cache.

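(For reference, and assuming OBJCGS_CLEAR_MASK still clears __GFP_ACCOUNT as
it does today: alloc_slab_obj_exts() masks the flags before allocating the
array,

	gfp &= ~OBJCGS_CLEAR_MASK;	/* drops __GFP_ACCOUNT, among others */
	gfp |= __GFP_NO_OBJ_EXT;

so kmalloc type selection never picks a KMALLOC_CGROUP cache for the array
itself.)
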
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202601231457.f7b31e09-lkp@intel.com [1]
Cc: stable@vger.kernel.org
Fixes: 4b8736964640 ("mm/slab: add allocation accounting into slab allocation and free paths")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
V1 -> V2:
- Simplified the implementation based on Vlastimil's comment
- Added a virt_to_slab() != NULL check before dereferencing it, because
  (in theory) the array may be allocated via large kmalloc

mm/slub.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 53 insertions(+), 7 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index f21b2f0c6f5a..5b4a3b9b7826 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2095,6 +2095,49 @@ static inline void init_slab_obj_exts(struct slab *slab)
 	slab->obj_exts = 0;
 }
 
+/*
+ * Calculate the allocation size for slabobj_ext array.
+ *
+ * When memory allocation profiling is enabled, the obj_exts array
+ * could be allocated from the same slab cache it's being allocated for.
+ * This would prevent the slab from ever being freed because it would
+ * always contain at least one allocated object (its own obj_exts array).
+ *
+ * To avoid this, increase the allocation size when we detect the array
+ * may come from the same cache, forcing it to use a different cache.
+ */
+static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
+					 struct slab *slab, gfp_t gfp)
+{
+	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
+	struct kmem_cache *obj_exts_cache;
+
+	/*
+	 * slabobj_ext array for KMALLOC_CGROUP allocations
+	 * are served from KMALLOC_NORMAL caches.
+	 */
+	if (!mem_alloc_profiling_enabled())
+		return sz;
+
+	if (sz > KMALLOC_MAX_CACHE_SIZE)
+		return sz;
+
+	if (!is_kmalloc_normal(s))
+		return sz;
+
+	obj_exts_cache = kmalloc_slab(sz, NULL, gfp, 0);
+	/*
+	 * We can't simply compare s with obj_exts_cache, because random kmalloc
+	 * caches have multiple caches per size, selected by caller address.
+	 * Since caller address may differ between kmalloc_slab() and actual
+	 * allocation, bump size when sizes are equal.
+	 */
+	if (s->object_size == obj_exts_cache->object_size)
+		return obj_exts_cache->object_size + 1;
+
+	return sz;
+}
+
 int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 			gfp_t gfp, bool new_slab)
 {
@@ -2103,26 +2146,26 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 	unsigned long new_exts;
 	unsigned long old_exts;
 	struct slabobj_ext *vec;
+	size_t sz;
 
 	gfp &= ~OBJCGS_CLEAR_MASK;
 	/* Prevent recursive extension vector allocation */
 	gfp |= __GFP_NO_OBJ_EXT;
 
+	sz = obj_exts_alloc_size(s, slab, gfp);
+
 	/*
 	 * Note that allow_spin may be false during early boot and its
 	 * restricted GFP_BOOT_MASK. Due to kmalloc_nolock() only supporting
 	 * architectures with cmpxchg16b, early obj_exts will be missing for
 	 * very early allocations on those.
	 */
-	if (unlikely(!allow_spin)) {
-		size_t sz = objects * sizeof(struct slabobj_ext);
-
+	if (unlikely(!allow_spin))
 		vec = kmalloc_nolock(sz, __GFP_ZERO | __GFP_NO_OBJ_EXT,
 				     slab_nid(slab));
-	} else {
-		vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
-				   slab_nid(slab));
-	}
+	else
+		vec = kmalloc_node(sz, gfp | __GFP_ZERO, slab_nid(slab));
+
 	if (!vec) {
 		/*
 		 * Try to mark vectors which failed to allocate.
@@ -2136,6 +2179,9 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 		return -ENOMEM;
 	}
 
+	VM_WARN_ON_ONCE(virt_to_slab(vec) != NULL &&
+			virt_to_slab(vec)->slab_cache == s);
+
 	new_exts = (unsigned long)vec;
 	if (unlikely(!allow_spin))
 		new_exts |= OBJEXTS_NOSPIN_ALLOC;
--
2.43.0
* Re: [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
From: Harry Yoo @ 2026-01-26 13:03 UTC
To: akpm, vbabka
Cc: linux-mm, cl, rientjes, surenb, hao.li, kernel test robot, stable
On Mon, Jan 26, 2026 at 09:57:14PM +0900, Harry Yoo wrote:
[...]
> +	/*
> +	 * slabobj_ext array for KMALLOC_CGROUP allocations
> +	 * are served from KMALLOC_NORMAL caches.
> +	 */
> +	if (!mem_alloc_profiling_enabled())
> +		return sz;
Hmm, maybe we don't need this since there's the !is_kmalloc_normal(s) check,
but it allows the compiler to optimize out the checks below when
CONFIG_MEM_ALLOC_PROFILING is not enabled.

So it's probably worth keeping.
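For reference, a minimal sketch of why that works (assuming the helper is
still defined roughly like this; not necessarily the exact mainline code):

	#ifdef CONFIG_MEM_ALLOC_PROFILING
	static inline bool mem_alloc_profiling_enabled(void)
	{
		return static_branch_maybe(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
					   &mem_alloc_profiling_key);
	}
	#else
	static inline bool mem_alloc_profiling_enabled(void)
	{
		return false;
	}
	#endif

With CONFIG_MEM_ALLOC_PROFILING=n the helper is compile-time false, so the
early return lets the compiler discard the kmalloc_slab() lookup and the
size comparison entirely; with the config enabled it's only a static branch.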
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
From: Vlastimil Babka @ 2026-01-26 13:46 UTC
To: Harry Yoo, akpm
Cc: linux-mm, cl, rientjes, surenb, hao.li, kernel test robot, stable
On 1/26/26 14:03, Harry Yoo wrote:
[...]
> Hmm, maybe we don't need this since there's the !is_kmalloc_normal(s) check,
> but it allows the compiler to optimize out the checks below when
> CONFIG_MEM_ALLOC_PROFILING is not enabled.
>
> So it's probably worth keeping.
Right.
Thanks, added to slab/for-next as the first commit of the obj_metadata branch.
* Re: [PATCH V2] mm/slab: avoid allocating slabobj_ext array from its own slab
From: Hao Li @ 2026-01-26 14:37 UTC
To: Vlastimil Babka
Cc: Harry Yoo, akpm, linux-mm, cl, rientjes, surenb,
kernel test robot, stable
On Mon, Jan 26, 2026 at 02:46:46PM +0100, Vlastimil Babka wrote:
[...]
> Right.
>
> Thanks, added to slab/for-next as the first commit of the obj_metadata branch.
Hi Vlastimil,
This v2 patch still looks good to me!
Feel free to fold this R-b tag into the commit on slab/for-next.
Reviewed-by: Hao Li <hao.li@linux.dev>