From: Chengming Zhou <chengming.zhou@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
Andrey Ryabinin <ryabinin.a.a@gmail.com>,
Alexander Potapenko <glider@google.com>,
Andrey Konovalov <andreyknvl@gmail.com>,
Dmitry Vyukov <dvyukov@google.com>,
Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Zheng Yejian <zhengyejian1@huawei.com>,
Xiongwei Song <xiongwei.song@windriver.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kasan-dev@googlegroups.com
Subject: Re: [PATCH 2/3] mm, slab: use an enum to define SLAB_ cache creation flags
Date: Wed, 21 Feb 2024 15:13:06 +0800
Message-ID: <aef16f2d-b20f-4999-b959-b4bf4209b4dc@linux.dev>
In-Reply-To: <20240220-slab-cleanup-flags-v1-2-e657e373944a@suse.cz>
On 2024/2/21 00:58, Vlastimil Babka wrote:
> The values of SLAB_ cache creation flags are defined by hand, which is
> tedious and error-prone. Use an enum to assign the bit number and a
> __SF_BIT() macro to #define the final flags.
>
> This renumbers the flag values, which is OK as they are only used
> internally.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Thanks!
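
As a side note for anyone skimming the new scheme, here is a minimal
userspace sketch of the same enum + bit-macro pattern; the demo_* names
and the plain unsigned int flag type are hypothetical and only stand in
for the kernel's slab_flags_t:

	#include <assert.h>
	#include <stdio.h>

	/* The enum hands out contiguous bit numbers automatically. */
	enum demo_flag_bits {
		_DEMO_RED_ZONE,
		_DEMO_POISON,
		_DEMO_STORE_USER,
		_DEMO_FLAGS_LAST_BIT	/* sentinel: number of bits in use */
	};

	/* Turn a bit number into a flag value, like __SF_BIT() does. */
	#define DEMO_BIT(nr)		(1U << (nr))

	#define DEMO_RED_ZONE		DEMO_BIT(_DEMO_RED_ZONE)
	#define DEMO_POISON		DEMO_BIT(_DEMO_POISON)
	#define DEMO_STORE_USER		DEMO_BIT(_DEMO_STORE_USER)

	int main(void)
	{
		unsigned int flags = DEMO_RED_ZONE | DEMO_STORE_USER;

		/* The sentinel keeps the count checkable against the type width. */
		static_assert(_DEMO_FLAGS_LAST_BIT <= 32, "too many flag bits");

		printf("flags = 0x%x, bits in use = %d\n",
		       flags, _DEMO_FLAGS_LAST_BIT);
		return 0;
	}

Nothing here needs hand-maintained masks, and removing or adding an
entry renumbers everything consistently, which I assume is also what the
_SLAB_FLAGS_LAST_BIT sentinel is meant to make easy to verify.
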
> ---
> include/linux/slab.h | 81 ++++++++++++++++++++++++++++++++++++++--------------
> mm/slub.c | 6 ++--
> 2 files changed, 63 insertions(+), 24 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 6252f44115c2..f893a132dd5a 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -21,29 +21,68 @@
> #include <linux/cleanup.h>
> #include <linux/hash.h>
>
> +enum _slab_flag_bits {
> + _SLAB_CONSISTENCY_CHECKS,
> + _SLAB_RED_ZONE,
> + _SLAB_POISON,
> + _SLAB_KMALLOC,
> + _SLAB_HWCACHE_ALIGN,
> + _SLAB_CACHE_DMA,
> + _SLAB_CACHE_DMA32,
> + _SLAB_STORE_USER,
> + _SLAB_PANIC,
> + _SLAB_TYPESAFE_BY_RCU,
> + _SLAB_TRACE,
> +#ifdef CONFIG_DEBUG_OBJECTS
> + _SLAB_DEBUG_OBJECTS,
> +#endif
> + _SLAB_NOLEAKTRACE,
> + _SLAB_NO_MERGE,
> +#ifdef CONFIG_FAILSLAB
> + _SLAB_FAILSLAB,
> +#endif
> +#ifdef CONFIG_MEMCG_KMEM
> + _SLAB_ACCOUNT,
> +#endif
> +#ifdef CONFIG_KASAN_GENERIC
> + _SLAB_KASAN,
> +#endif
> + _SLAB_NO_USER_FLAGS,
> +#ifdef CONFIG_KFENCE
> + _SLAB_SKIP_KFENCE,
> +#endif
> +#ifndef CONFIG_SLUB_TINY
> + _SLAB_RECLAIM_ACCOUNT,
> +#endif
> + _SLAB_OBJECT_POISON,
> + _SLAB_CMPXCHG_DOUBLE,
> + _SLAB_FLAGS_LAST_BIT
> +};
> +
> +#define __SF_BIT(nr) ((slab_flags_t __force)(1U << (nr)))
>
> /*
> * Flags to pass to kmem_cache_create().
> * The ones marked DEBUG need CONFIG_SLUB_DEBUG enabled, otherwise are no-op
> */
> /* DEBUG: Perform (expensive) checks on alloc/free */
> -#define SLAB_CONSISTENCY_CHECKS ((slab_flags_t __force)0x00000100U)
> +#define SLAB_CONSISTENCY_CHECKS __SF_BIT(_SLAB_CONSISTENCY_CHECKS)
> /* DEBUG: Red zone objs in a cache */
> -#define SLAB_RED_ZONE ((slab_flags_t __force)0x00000400U)
> +#define SLAB_RED_ZONE __SF_BIT(_SLAB_RED_ZONE)
> /* DEBUG: Poison objects */
> -#define SLAB_POISON ((slab_flags_t __force)0x00000800U)
> +#define SLAB_POISON __SF_BIT(_SLAB_POISON)
> /* Indicate a kmalloc slab */
> -#define SLAB_KMALLOC ((slab_flags_t __force)0x00001000U)
> +#define SLAB_KMALLOC __SF_BIT(_SLAB_KMALLOC)
> /* Align objs on cache lines */
> -#define SLAB_HWCACHE_ALIGN ((slab_flags_t __force)0x00002000U)
> +#define SLAB_HWCACHE_ALIGN __SF_BIT(_SLAB_HWCACHE_ALIGN)
> /* Use GFP_DMA memory */
> -#define SLAB_CACHE_DMA ((slab_flags_t __force)0x00004000U)
> +#define SLAB_CACHE_DMA __SF_BIT(_SLAB_CACHE_DMA)
> /* Use GFP_DMA32 memory */
> -#define SLAB_CACHE_DMA32 ((slab_flags_t __force)0x00008000U)
> +#define SLAB_CACHE_DMA32 __SF_BIT(_SLAB_CACHE_DMA32)
> /* DEBUG: Store the last owner for bug hunting */
> -#define SLAB_STORE_USER ((slab_flags_t __force)0x00010000U)
> +#define SLAB_STORE_USER __SF_BIT(_SLAB_STORE_USER)
> /* Panic if kmem_cache_create() fails */
> -#define SLAB_PANIC ((slab_flags_t __force)0x00040000U)
> +#define SLAB_PANIC __SF_BIT(_SLAB_PANIC)
> /*
> * SLAB_TYPESAFE_BY_RCU - **WARNING** READ THIS!
> *
> @@ -95,19 +134,19 @@
> * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
> */
> /* Defer freeing slabs to RCU */
> -#define SLAB_TYPESAFE_BY_RCU ((slab_flags_t __force)0x00080000U)
> +#define SLAB_TYPESAFE_BY_RCU __SF_BIT(_SLAB_TYPESAFE_BY_RCU)
> /* Trace allocations and frees */
> -#define SLAB_TRACE ((slab_flags_t __force)0x00200000U)
> +#define SLAB_TRACE __SF_BIT(_SLAB_TRACE)
>
> /* Flag to prevent checks on free */
> #ifdef CONFIG_DEBUG_OBJECTS
> -# define SLAB_DEBUG_OBJECTS ((slab_flags_t __force)0x00400000U)
> +# define SLAB_DEBUG_OBJECTS __SF_BIT(_SLAB_DEBUG_OBJECTS)
> #else
> # define SLAB_DEBUG_OBJECTS 0
> #endif
>
> /* Avoid kmemleak tracing */
> -#define SLAB_NOLEAKTRACE ((slab_flags_t __force)0x00800000U)
> +#define SLAB_NOLEAKTRACE __SF_BIT(_SLAB_NOLEAKTRACE)
>
> /*
> * Prevent merging with compatible kmem caches. This flag should be used
> @@ -119,23 +158,23 @@
> * - performance critical caches, should be very rare and consulted with slab
> * maintainers, and not used together with CONFIG_SLUB_TINY
> */
> -#define SLAB_NO_MERGE ((slab_flags_t __force)0x01000000U)
> +#define SLAB_NO_MERGE __SF_BIT(_SLAB_NO_MERGE)
>
> /* Fault injection mark */
> #ifdef CONFIG_FAILSLAB
> -# define SLAB_FAILSLAB ((slab_flags_t __force)0x02000000U)
> +# define SLAB_FAILSLAB __SF_BIT(_SLAB_FAILSLAB)
> #else
> # define SLAB_FAILSLAB 0
> #endif
> /* Account to memcg */
> #ifdef CONFIG_MEMCG_KMEM
> -# define SLAB_ACCOUNT ((slab_flags_t __force)0x04000000U)
> +# define SLAB_ACCOUNT __SF_BIT(_SLAB_ACCOUNT)
> #else
> # define SLAB_ACCOUNT 0
> #endif
>
> #ifdef CONFIG_KASAN_GENERIC
> -#define SLAB_KASAN ((slab_flags_t __force)0x08000000U)
> +#define SLAB_KASAN __SF_BIT(_SLAB_KASAN)
> #else
> #define SLAB_KASAN 0
> #endif
> @@ -145,10 +184,10 @@
> * Intended for caches created for self-tests so they have only flags
> * specified in the code and other flags are ignored.
> */
> -#define SLAB_NO_USER_FLAGS ((slab_flags_t __force)0x10000000U)
> +#define SLAB_NO_USER_FLAGS __SF_BIT(_SLAB_NO_USER_FLAGS)
>
> #ifdef CONFIG_KFENCE
> -#define SLAB_SKIP_KFENCE ((slab_flags_t __force)0x20000000U)
> +#define SLAB_SKIP_KFENCE __SF_BIT(_SLAB_SKIP_KFENCE)
> #else
> #define SLAB_SKIP_KFENCE 0
> #endif
> @@ -156,9 +195,9 @@
> /* The following flags affect the page allocator grouping pages by mobility */
> /* Objects are reclaimable */
> #ifndef CONFIG_SLUB_TINY
> -#define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0x00020000U)
> +#define SLAB_RECLAIM_ACCOUNT __SF_BIT(_SLAB_RECLAIM_ACCOUNT)
> #else
> -#define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0)
> +#define SLAB_RECLAIM_ACCOUNT 0
> #endif
> #define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..a93c5a17cbbb 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -306,13 +306,13 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
>
> /* Internal SLUB flags */
> /* Poison object */
> -#define __OBJECT_POISON ((slab_flags_t __force)0x80000000U)
> +#define __OBJECT_POISON __SF_BIT(_SLAB_OBJECT_POISON)
> /* Use cmpxchg_double */
>
> #ifdef system_has_freelist_aba
> -#define __CMPXCHG_DOUBLE ((slab_flags_t __force)0x40000000U)
> +#define __CMPXCHG_DOUBLE __SF_BIT(_SLAB_CMPXCHG_DOUBLE)
> #else
> -#define __CMPXCHG_DOUBLE ((slab_flags_t __force)0U)
> +#define __CMPXCHG_DOUBLE 0
> #endif
>
> /*
>