* [PATCH v2] mm/slab: simplify SLAB_* flag handling
@ 2025-01-24 16:48 Kevin Brodsky
2025-02-02 6:57 ` Hyeonggon Yoo
2025-02-05 14:57 ` Vlastimil Babka
0 siblings, 2 replies; 5+ messages in thread
From: Kevin Brodsky @ 2025-01-24 16:48 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Kevin Brodsky, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
Roman Gushchin, Hyeonggon Yoo
SLUB is the only remaining allocator. We can therefore get rid of
the logic for allocator-specific flags:
* Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.
* Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
!CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined
unconditionally (no impact on existing code, which ignores it if
!CONFIG_SLUB_DEBUG).
* Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
SLAB_DEBUG_FLAGS (no functional change).
While at it also remove misleading comments that suggest that
multiple allocators are available.
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
v1..v2:
* Keep the SLAB_FLAGS_PERMITTED check and remove CACHE_CREATE_MASK
instead. SLAB_DEBUG_FLAGS is now defined unconditionally and
explicitly masked out.
v1: https://lore.kernel.org/all/20250117113226.3484784-1-kevin.brodsky@arm.com/
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/slab.h | 32 +++++---------------------------
mm/slab_common.c | 11 ++---------
2 files changed, 7 insertions(+), 36 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index e9fd9bf0bfa6..1a081f50f947 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -457,39 +457,17 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
}
-/* Legal flag mask for kmem_cache_create(), for various configurations */
#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
SLAB_CACHE_DMA32 | SLAB_PANIC | \
- SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+ SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS | \
+ SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
+ SLAB_TEMPORARY | SLAB_ACCOUNT | \
+ SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
-#ifdef CONFIG_SLUB_DEBUG
#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
-#else
-#define SLAB_DEBUG_FLAGS (0)
-#endif
-#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
- SLAB_TEMPORARY | SLAB_ACCOUNT | \
- SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
-
-/* Common flags available with current configuration */
-#define CACHE_CREATE_MASK (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS | SLAB_CACHE_FLAGS)
-
-/* Common flags permitted for kmem_cache_create */
-#define SLAB_FLAGS_PERMITTED (SLAB_CORE_FLAGS | \
- SLAB_RED_ZONE | \
- SLAB_POISON | \
- SLAB_STORE_USER | \
- SLAB_TRACE | \
- SLAB_CONSISTENCY_CHECKS | \
- SLAB_NOLEAKTRACE | \
- SLAB_RECLAIM_ACCOUNT | \
- SLAB_TEMPORARY | \
- SLAB_ACCOUNT | \
- SLAB_KMALLOC | \
- SLAB_NO_MERGE | \
- SLAB_NO_USER_FLAGS)
+#define SLAB_FLAGS_PERMITTED (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS)
bool __kmem_cache_empty(struct kmem_cache *);
int __kmem_cache_shutdown(struct kmem_cache *);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 69f2d19010de..50b9c6497171 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -298,6 +298,8 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
static_branch_enable(&slub_debug_enabled);
if (flags & SLAB_STORE_USER)
stack_depot_init();
+#else
+ flags &= ~SLAB_DEBUG_FLAGS;
#endif
mutex_lock(&slab_mutex);
@@ -307,20 +309,11 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
goto out_unlock;
}
- /* Refuse requests with allocator specific flags */
if (flags & ~SLAB_FLAGS_PERMITTED) {
err = -EINVAL;
goto out_unlock;
}
- /*
- * Some allocators will constraint the set of valid flags to a subset
- * of all flags. We expect them to define CACHE_CREATE_MASK in this
- * case, and we'll just provide them with a sanitized version of the
- * passed flags.
- */
- flags &= CACHE_CREATE_MASK;
-
/* Fail closed on bad usersize of useroffset values. */
if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
WARN_ON(!args->usersize && args->useroffset) ||
base-commit: ab18b8fff124c9b76ea12692571ca822dcd92854
--
2.47.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v2] mm/slab: simplify SLAB_* flag handling
2025-01-24 16:48 [PATCH v2] mm/slab: simplify SLAB_* flag handling Kevin Brodsky
@ 2025-02-02 6:57 ` Hyeonggon Yoo
2025-02-05 10:38 ` Vlastimil Babka
2025-02-05 14:57 ` Vlastimil Babka
1 sibling, 1 reply; 5+ messages in thread
From: Hyeonggon Yoo @ 2025-02-02 6:57 UTC (permalink / raw)
To: Kevin Brodsky
Cc: linux-mm, linux-kernel, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
Roman Gushchin
On Sat, Jan 25, 2025 at 1:49 AM Kevin Brodsky <kevin.brodsky@arm.com> wrote:
>
> SLUB is the only remaining allocator. We can therefore get rid of
> the logic for allocator-specific flags:
>
> * Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.
>
> * Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
> !CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined
> unconditionally (no impact on existing code, which ignores it if
> !CONFIG_SLUB_DEBUG).
>
> * Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
> SLAB_DEBUG_FLAGS (no functional change).
>
> While at it also remove misleading comments that suggest that
> multiple allocators are available.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Hi Kevin,
This patch in general looks fine to me.
However, there are subtle changes that are not documented in the changelog.
SLUB currently does not _completely_ ignore debug flags even when
CONFIG_SLUB_DEBUG=n. For example, calculate_sizes() relocates the free pointer
regardless of CONFIG_SLUB_DEBUG.
I believe completely ignoring debug flags should be acceptable
when CONFIG_SLUB_DEBUG=n, but this change should at least be documented
in the changelog.
Additionally, beyond what this patch currently does, we can remove several
CONFIG_SLUB_DEBUG #ifdefs in some functions (e.g., in calculate_sizes())
on top of this patch.
How does that sound to you?
Best,
Hyeonggon
* Re: [PATCH v2] mm/slab: simplify SLAB_* flag handling
2025-02-02 6:57 ` Hyeonggon Yoo
@ 2025-02-05 10:38 ` Vlastimil Babka
2025-02-05 12:11 ` Harry (Hyeonggon) Yoo
0 siblings, 1 reply; 5+ messages in thread
From: Vlastimil Babka @ 2025-02-05 10:38 UTC (permalink / raw)
To: Hyeonggon Yoo, Kevin Brodsky
Cc: linux-mm, linux-kernel, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin
On 2/2/25 07:57, Hyeonggon Yoo wrote:
> On Sat, Jan 25, 2025 at 1:49 AM Kevin Brodsky <kevin.brodsky@arm.com> wrote:
>>
>> SLUB is the only remaining allocator. We can therefore get rid of
>> the logic for allocator-specific flags:
>>
>> * Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.
>>
>> * Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
>> !CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined
>> unconditionally (no impact on existing code, which ignores it if
>> !CONFIG_SLUB_DEBUG).
>>
>> * Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
>> SLAB_DEBUG_FLAGS (no functional change).
>>
>> While at it also remove misleading comments that suggest that
>> multiple allocators are available.
>>
>> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
>
> Hi Kevin,
> This patch in general looks fine to me.
>
> However, there are subtle changes that were not documented by the changelog.
> SLUB currently does not _completely_ ignore debug flags even when
> CONFIG_SLUB_DEBUG=n. For example, calculate_sizes() relocate the free pointer
> regardless of CONFIG_SLUB_DEBUG.
I think the patch makes no difference there effectively? Before the patch,
CACHE_CREATE_MASK would be used to filter flags and not include the debug
flags, so that free pointer relocation due to SLAB_POISON or SLAB_RED_ZONE
would not happen. After the patch, it's filtered effectively the same? Am I
missing something?
> I believe completely ignoring debug flags should be acceptable
> when CONFIG_SLUB_DEBUG=n, but this change should at least be documented
> in the changelog.
>
> Additionally, beyond what this patch currently does, we can remove several
> CONFIG_SLUB_DEBUG #ifdefs in some functions (e.g., in calculate_sizes())
> on top of this patch.
So AFAIK that should be possible both before and after this patch and you
can try (it's out of scope of this patch). But I suspect the reason for the
#ifdefs is that the code would reference structures or fields that don't
exist without CONFIG_SLUB_DEBUG. Maybe that's not (no longer?) true in some
cases and we could clean up. Another reason would be performance, but we
shouldn't care in code that's only executed during cache creation.
> How does that sound to you?
>
> Best,
> Hyeonggon
* Re: [PATCH v2] mm/slab: simplify SLAB_* flag handling
2025-02-05 10:38 ` Vlastimil Babka
@ 2025-02-05 12:11 ` Harry (Hyeonggon) Yoo
0 siblings, 0 replies; 5+ messages in thread
From: Harry (Hyeonggon) Yoo @ 2025-02-05 12:11 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Kevin Brodsky, linux-mm, linux-kernel, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Roman Gushchin
On Wed, Feb 05, 2025 at 11:38:51AM +0100, Vlastimil Babka wrote:
> On 2/2/25 07:57, Hyeonggon Yoo wrote:
> > On Sat, Jan 25, 2025 at 1:49 AM Kevin Brodsky <kevin.brodsky@arm.com> wrote:
> >>
> >> SLUB is the only remaining allocator. We can therefore get rid of
> >> the logic for allocator-specific flags:
> >>
> >> * Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.
> >>
> >> * Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
> >> !CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined
> >> unconditionally (no impact on existing code, which ignores it if
> >> !CONFIG_SLUB_DEBUG).
> >>
> >> * Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
> >> SLAB_DEBUG_FLAGS (no functional change).
> >>
> >> While at it also remove misleading comments that suggest that
> >> multiple allocators are available.
> >>
> >> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Apologies for the noise. My previous feedback seemed valid to me
at the time of writing, but it is incorrect and doesn't add much value
to improving the patch.
FWIW,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Hi Kevin,
> > This patch in general looks fine to me.
> >
> > However, there are subtle changes that were not documented by the changelog.
> > SLUB currently does not _completely_ ignore debug flags even when
> > CONFIG_SLUB_DEBUG=n. For example, calculate_sizes() relocate the free pointer
> > regardless of CONFIG_SLUB_DEBUG.
>
> I think the patch makes no difference there effectively? Before the patch,
> CACHE_CREATE_MASK would be used to filter flags and not include the debug
> flags, so that free pointer relocation due to SLAB_POISON or SLAB_RED_ZONE
> would not happen. After the patch, it's filtered effectively the same? Am I
> missing something?
You're not missing anything, it just slipped my mind :/
> > I believe completely ignoring debug flags should be acceptable
> > when CONFIG_SLUB_DEBUG=n, but this change should at least be documented
> > in the changelog.
> >
> > Additionally, beyond what this patch currently does, we can remove several
> > CONFIG_SLUB_DEBUG #ifdefs in some functions (e.g., in calculate_sizes())
> > on top of this patch.
>
> So AFAIK that should be possible both before and after this patch
Right.
> and you can try (it's out of scope of this patch).
Yes, it is out of scope.
> But I suspect the reason for the #ifdefs is that the code would reference
> structures or fields that don't exist without CONFIG_SLUB_DEBUG.
> Maybe that's not (no longer?) true in some cases and we could clean up.
Yes, in most cases it won't build without the #ifdefs.
There are only a few cases we could clean up.
> Another reason would be performance, but we shouldn't care in code that's
> only executed during cache creation.
Or perhaps due to a slight increase in code size, but I doubt the
difference in code size will be significant.
--
Harry
* Re: [PATCH v2] mm/slab: simplify SLAB_* flag handling
2025-01-24 16:48 [PATCH v2] mm/slab: simplify SLAB_* flag handling Kevin Brodsky
2025-02-02 6:57 ` Hyeonggon Yoo
@ 2025-02-05 14:57 ` Vlastimil Babka
1 sibling, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2025-02-05 14:57 UTC (permalink / raw)
To: Kevin Brodsky, linux-mm
Cc: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo
On 1/24/25 17:48, Kevin Brodsky wrote:
> SLUB is the only remaining allocator. We can therefore get rid of
> the logic for allocator-specific flags:
>
> * Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.
>
> * Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
> !CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined
> unconditionally (no impact on existing code, which ignores it if
> !CONFIG_SLUB_DEBUG).
>
> * Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
> SLAB_DEBUG_FLAGS (no functional change).
>
> While at it also remove misleading comments that suggest that
> multiple allocators are available.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Added to slab/for-next, thanks!