linux-mm.kvack.org archive mirror
* [PATCH v3 0/4] supplement of slab allocator removal
@ 2023-12-09 13:51 sxwjean
  2023-12-09 13:52 ` [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache sxwjean
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: sxwjean @ 2023-12-09 13:51 UTC (permalink / raw)
  To: vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

From: Xiongwei Song <xiongwei.song@windriver.com>

Hi,

Patch 1 removes an unused parameter. It has a longer history; please
see the change history inside the patch.

---
Patch 2 replaces slub_$params with slab_$params.
Vlastimil Babka pointed out that we should use "slab_$param" as the
primary prefix as a long-term plan. Please see [1] for more information.

This patch implements that.

I did basic tests with qemu, passing values via sl[au]b_max_order,
sl[au]b_min_order, sl[au]b_min_objects and sl[au]b_debug on the command
line. The values look correct when printed out before calculating orders.

---
Patch 3 replaces slub_$params in Documentation/mm/slub.rst, following
the changes of patch 2.

---
Patch 4 was patch 3 in the previous version. It is not related to the
slab allocator removal; it corrects the description of the default value
of slub_min_objects in Documentation/mm/slub.rst.

---
This series is based on [2].

---
CHANGES
V3:
- patch 1: Collect Reviewed-by tag.
           Refine the commit message.
- patch 2: Remove the changes for variables and functions.
           Resort slab_$params in doc.
           Refine the commit message.
           Remove RFC tag.
- patch 3: Use slab_$params in slub.rst.
- patch 4: It's original patch 3. Just reordered the patches, no other
           changes.

v2: https://lore.kernel.org/linux-mm/457899ac-baab-e976-44ec-dfdeb23be031@suse.cz/T/#t
- patch 1: Collect Reviewed-by tag.
- patch 3: Correct spelling mistakes in commit message.

v1: https://lore.kernel.org/linux-mm/20231201031505.286117-1-sxwjean@me.com/

---
Regards,
Xiongwei

[1] https://lore.kernel.org/linux-mm/7512b350-4317-21a0-fab3-4101bc4d8f7a@suse.cz/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/for-6.8/slab-removal

Xiongwei Song (4):
  Documentation: kernel-parameters: remove noaliencache
  mm/slub: unify all sl[au]b parameters with "slab_$param"
  mm/slub: replace slub_$params with slab_$params in slub.rst
  mm/slub: correct the default value of slub_min_objects in doc

 .../admin-guide/kernel-parameters.txt         | 75 ++++++++-----------
 Documentation/mm/slub.rst                     | 60 +++++++--------
 drivers/misc/lkdtm/heap.c                     |  2 +-
 mm/Kconfig.debug                              |  6 +-
 mm/slab.h                                     |  2 +-
 mm/slab_common.c                              |  4 +-
 mm/slub.c                                     | 39 +++++-----
 7 files changed, 91 insertions(+), 97 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache
  2023-12-09 13:51 [PATCH v3 0/4] supplement of slab allocator removal sxwjean
@ 2023-12-09 13:52 ` sxwjean
  2023-12-11 17:47   ` Jeff Johnson
  2023-12-09 13:52 ` [PATCH v3 2/4] mm/slub: unify all sl[au]b parameters with "slab_$param" sxwjean
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: sxwjean @ 2023-12-09 13:52 UTC (permalink / raw)
  To: vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

From: Xiongwei Song <xiongwei.song@windriver.com>

Since the SLAB allocator has already been removed, there are no users of
the noaliencache parameter, so let's remove it.

Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
v5: Collect Reviewed-by tag.
v4: Collect Reviewed-by tag.
v3: Remove the changes for slab_max_order.
v2: Add changes for removing "noaliencache".
    https://lore.kernel.org/linux-mm/20231122143603.85297-1-sxwjean@me.com/
v1: Remove slab_max_order.
    https://lore.kernel.org/linux-mm/20231120091214.150502-2-sxwjean@me.com/
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 65731b060e3f..9f94baeb2f82 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3740,10 +3740,6 @@
 	no5lvl		[X86-64,RISCV] Disable 5-level paging mode. Forces
 			kernel to use 4-level paging instead.
 
-	noaliencache	[MM, NUMA, SLAB] Disables the allocation of alien
-			caches in the slab allocator.  Saves per-node memory,
-			but will impact performance.
-
 	noalign		[KNL,ARM]
 
 	noaltinstr	[S390] Disables alternative instructions patching
-- 
2.34.1




* [PATCH v3 2/4] mm/slub: unify all sl[au]b parameters with "slab_$param"
  2023-12-09 13:51 [PATCH v3 0/4] supplement of slab allocator removal sxwjean
  2023-12-09 13:52 ` [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache sxwjean
@ 2023-12-09 13:52 ` sxwjean
  2023-12-09 13:52 ` [PATCH v3 3/4] mm/slub: replace slub_$params with slab_$params in slub.rst sxwjean
  2023-12-09 13:52 ` [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc sxwjean
  3 siblings, 0 replies; 10+ messages in thread
From: sxwjean @ 2023-12-09 13:52 UTC (permalink / raw)
  To: vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

From: Xiongwei Song <xiongwei.song@windriver.com>

Since the SLAB allocator has been removed, we can clean up the
sl[au]b_$params. With only one slab allocator left, it's better to use the
generic "slab" term instead of "slub", which is an implementation detail,
as pointed out by Vlastimil Babka. For more information please see [1].
Hence, we are going to use "slab_$param" as the primary prefix.

This patch renames the following slab parameters:
- slub_max_order
- slub_min_order
- slub_min_objects
- slub_debug
to
- slab_max_order
- slab_min_order
- slab_min_objects
- slab_debug
as the primary slab parameters in all references in docs and comments.
This patch does not change variables and functions inside SLUB, as a
wider slub/slab rename will follow.

Meanwhile, "slub_$params" can still be passed on the command line to keep
backward compatibility. All "slub_$params" are also marked as legacy.

Remove the separate descriptions of slub_[no]merge and append a legacy
note at the end of the slab_[no]merge descriptions.

[1] https://lore.kernel.org/linux-mm/7512b350-4317-21a0-fab3-4101bc4d8f7a@suse.cz/

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 .../admin-guide/kernel-parameters.txt         | 71 +++++++++----------
 drivers/misc/lkdtm/heap.c                     |  2 +-
 mm/Kconfig.debug                              |  6 +-
 mm/slab.h                                     |  2 +-
 mm/slab_common.c                              |  4 +-
 mm/slub.c                                     | 41 ++++++-----
 6 files changed, 62 insertions(+), 64 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9f94baeb2f82..abfc0838feb9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5864,65 +5864,58 @@
 	simeth=		[IA-64]
 	simscsi=
 
-	slram=		[HW,MTD]
-
-	slab_merge	[MM]
-			Enable merging of slabs with similar size when the
-			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
-
-	slab_nomerge	[MM]
-			Disable merging of slabs with similar size. May be
-			necessary if there is some reason to distinguish
-			allocs to different slabs, especially in hardened
-			environments where the risk of heap overflows and
-			layout control by attackers can usually be
-			frustrated by disabling merging. This will reduce
-			most of the exposure of a heap attack to a single
-			cache (risks via metadata attacks are mostly
-			unchanged). Debug options disable merging on their
-			own.
-			For more information see Documentation/mm/slub.rst.
-
-	slab_max_order=	[MM, SLAB]
-			Determines the maximum allowed order for slabs.
-			A high setting may cause OOMs due to memory
-			fragmentation.  Defaults to 1 for systems with
-			more than 32MB of RAM, 0 otherwise.
-
-	slub_debug[=options[,slabs][;[options[,slabs]]...]	[MM, SLUB]
-			Enabling slub_debug allows one to determine the
+	slab_debug[=options[,slabs][;[options[,slabs]]...]	[MM]
+			Enabling slab_debug allows one to determine the
 			culprit if slab objects become corrupted. Enabling
-			slub_debug can create guard zones around objects and
+			slab_debug can create guard zones around objects and
 			may poison objects when not in use. Also tracks the
 			last alloc / free. For more information see
 			Documentation/mm/slub.rst.
+			(slub_debug legacy name also accepted for now)
 
-	slub_max_order= [MM, SLUB]
+	slab_max_order= [MM]
 			Determines the maximum allowed order for slabs.
 			A high setting may cause OOMs due to memory
 			fragmentation. For more information see
 			Documentation/mm/slub.rst.
+			(slub_max_order legacy name also accepted for now)
+
+	slab_merge	[MM]
+			Enable merging of slabs with similar size when the
+			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
+			(slub_merge legacy name also accepted for now)
 
-	slub_min_objects=	[MM, SLUB]
+	slab_min_objects=	[MM]
 			The minimum number of objects per slab. SLUB will
-			increase the slab order up to slub_max_order to
+			increase the slab order up to slab_max_order to
 			generate a sufficiently large slab able to contain
 			the number of objects indicated. The higher the number
 			of objects the smaller the overhead of tracking slabs
 			and the less frequently locks need to be acquired.
 			For more information see Documentation/mm/slub.rst.
+			(slub_min_objects legacy name also accepted for now)
 
-	slub_min_order=	[MM, SLUB]
+	slab_min_order=	[MM]
 			Determines the minimum page order for slabs. Must be
-			lower than slub_max_order.
-			For more information see Documentation/mm/slub.rst.
+			lower or equal to slab_max_order. For more information see
+			Documentation/mm/slub.rst.
+			(slub_min_order legacy name also accepted for now)
 
-	slub_merge	[MM, SLUB]
-			Same with slab_merge.
+	slab_nomerge	[MM]
+			Disable merging of slabs with similar size. May be
+			necessary if there is some reason to distinguish
+			allocs to different slabs, especially in hardened
+			environments where the risk of heap overflows and
+			layout control by attackers can usually be
+			frustrated by disabling merging. This will reduce
+			most of the exposure of a heap attack to a single
+			cache (risks via metadata attacks are mostly
+			unchanged). Debug options disable merging on their
+			own.
+			For more information see Documentation/mm/slub.rst.
+			(slub_nomerge legacy name also accepted for now)
 
-	slub_nomerge	[MM, SLUB]
-			Same with slab_nomerge. This is supported for legacy.
-			See slab_nomerge for more information.
+	slram=		[HW,MTD]
 
 	smart2=		[HW]
 			Format: <io1>[,<io2>[,...,<io8>]]
diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
index 0ce4cbf6abda..076ca9b225de 100644
--- a/drivers/misc/lkdtm/heap.c
+++ b/drivers/misc/lkdtm/heap.c
@@ -47,7 +47,7 @@ static void lkdtm_VMALLOC_LINEAR_OVERFLOW(void)
  * correctly.
  *
  * This should get caught by either memory tagging, KASan, or by using
- * CONFIG_SLUB_DEBUG=y and slub_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
+ * CONFIG_SLUB_DEBUG=y and slab_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
  */
 static void lkdtm_SLAB_LINEAR_OVERFLOW(void)
 {
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 321ab379994f..afc72fde0f03 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -64,11 +64,11 @@ config SLUB_DEBUG_ON
 	help
 	  Boot with debugging on by default. SLUB boots by default with
 	  the runtime debug capabilities switched off. Enabling this is
-	  equivalent to specifying the "slub_debug" parameter on boot.
+	  equivalent to specifying the "slab_debug" parameter on boot.
 	  There is no support for more fine grained debug control like
-	  possible with slub_debug=xxx. SLUB debugging may be switched
+	  possible with slab_debug=xxx. SLUB debugging may be switched
 	  off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
-	  "slub_debug=-".
+	  "slab_debug=-".
 
 config PAGE_OWNER
 	bool "Track page owner"
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..f7df6d701c5b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -528,7 +528,7 @@ static inline bool __slub_debug_enabled(void)
 #endif
 
 /*
- * Returns true if any of the specified slub_debug flags is enabled for the
+ * Returns true if any of the specified slab_debug flags is enabled for the
  * cache. Use only for flags parsed by setup_slub_debug() as it also enables
  * the static key.
  */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..230ef7cc3467 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -282,7 +282,7 @@ kmem_cache_create_usercopy(const char *name,
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
-	 * If no slub_debug was enabled globally, the static key is not yet
+	 * If no slab_debug was enabled globally, the static key is not yet
 	 * enabled by setup_slub_debug(). Enable it if the cache is being
 	 * created with any of the debugging flags passed explicitly.
 	 * It's also possible that this is the first cache created with
@@ -766,7 +766,7 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 }
 
 /*
- * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
+ * kmalloc_info[] is to make slab_debug=,kmalloc-xx option work at boot time.
  * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
  * kmalloc-2M.
  */
diff --git a/mm/slub.c b/mm/slub.c
index 3f8b95757106..6efff3f47be2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -280,7 +280,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 
 /*
  * Debugging flags that require metadata to be stored in the slab.  These get
- * disabled when slub_debug=O is used and a cache's min order increases with
+ * disabled when slab_debug=O is used and a cache's min order increases with
  * metadata.
  */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
@@ -1582,7 +1582,7 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 }
 
 /*
- * Parse a block of slub_debug options. Blocks are delimited by ';'
+ * Parse a block of slab_debug options. Blocks are delimited by ';'
  *
  * @str:    start of block
  * @flags:  returns parsed flags, or DEBUG_DEFAULT_FLAGS if none specified
@@ -1643,7 +1643,7 @@ parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
 			break;
 		default:
 			if (init)
-				pr_err("slub_debug option '%c' unknown. skipped\n", *str);
+				pr_err("slab_debug option '%c' unknown. skipped\n", *str);
 		}
 	}
 check_slabs:
@@ -1702,7 +1702,7 @@ static int __init setup_slub_debug(char *str)
 	/*
 	 * For backwards compatibility, a single list of flags with list of
 	 * slabs means debugging is only changed for those slabs, so the global
-	 * slub_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending
+	 * slab_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending
 	 * on CONFIG_SLUB_DEBUG_ON). We can extended that to multiple lists as
 	 * long as there is no option specifying flags without a slab list.
 	 */
@@ -1726,7 +1726,8 @@ static int __init setup_slub_debug(char *str)
 	return 1;
 }
 
-__setup("slub_debug", setup_slub_debug);
+__setup("slab_debug", setup_slub_debug);
+__setup_param("slub_debug", slub_debug, setup_slub_debug, 0);
 
 /*
  * kmem_cache_flags - apply debugging options to the cache
@@ -1736,7 +1737,7 @@ __setup("slub_debug", setup_slub_debug);
  *
  * Debug option(s) are applied to @flags. In addition to the debug
  * option(s), if a slab name (or multiple) is specified i.e.
- * slub_debug=<Debug-Options>,<slab name1>,<slab name2> ...
+ * slab_debug=<Debug-Options>,<slab name1>,<slab name2> ...
  * then only the select slabs will receive the debug option(s).
  */
 slab_flags_t kmem_cache_flags(unsigned int object_size,
@@ -3285,7 +3286,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 		oo_order(s->min));
 
 	if (oo_order(s->min) > get_order(s->object_size))
-		pr_warn("  %s debugging increased min order, use slub_debug=O to disable.\n",
+		pr_warn("  %s debugging increased min order, use slab_debug=O to disable.\n",
 			s->name);
 
 	for_each_kmem_cache_node(s, node, n) {
@@ -3778,11 +3779,11 @@ void slab_post_alloc_hook(struct kmem_cache *s,	struct obj_cgroup *objcg,
 		zero_size = orig_size;
 
 	/*
-	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * When slab_debug is enabled, avoid memory initialization integrated
 	 * into KASAN and instead zero out the memory via the memset below with
 	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
 	 * cause false-positive reports. This does not lead to a performance
-	 * penalty on production builds, as slub_debug is not intended to be
+	 * penalty on production builds, as slab_debug is not intended to be
 	 * enabled there.
 	 */
 	if (__slub_debug_enabled())
@@ -4658,8 +4659,8 @@ static unsigned int slub_min_objects;
  * activity on the partial lists which requires taking the list_lock. This is
  * less a concern for large slabs though which are rarely used.
  *
- * slub_max_order specifies the order where we begin to stop considering the
- * number of objects in a slab as critical. If we reach slub_max_order then
+ * slab_max_order specifies the order where we begin to stop considering the
+ * number of objects in a slab as critical. If we reach slab_max_order then
  * we try to keep the page order as low as possible. So we accept more waste
  * of space in favor of a small page order.
  *
@@ -4726,14 +4727,14 @@ static inline int calculate_order(unsigned int size)
 	 * and backing off gradually.
 	 *
 	 * We start with accepting at most 1/16 waste and try to find the
-	 * smallest order from min_objects-derived/slub_min_order up to
-	 * slub_max_order that will satisfy the constraint. Note that increasing
+	 * smallest order from min_objects-derived/slab_min_order up to
+	 * slab_max_order that will satisfy the constraint. Note that increasing
 	 * the order can only result in same or less fractional waste, not more.
 	 *
 	 * If that fails, we increase the acceptable fraction of waste and try
 	 * again. The last iteration with fraction of 1/2 would effectively
 	 * accept any waste and give us the order determined by min_objects, as
-	 * long as at least single object fits within slub_max_order.
+	 * long as at least single object fits within slab_max_order.
 	 */
 	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
 		order = calc_slab_order(size, min_order, slub_max_order,
@@ -4743,7 +4744,7 @@ static inline int calculate_order(unsigned int size)
 	}
 
 	/*
-	 * Doh this slab cannot be placed using slub_max_order.
+	 * Doh this slab cannot be placed using slab_max_order.
 	 */
 	order = get_order(size);
 	if (order <= MAX_ORDER)
@@ -5269,7 +5270,9 @@ static int __init setup_slub_min_order(char *str)
 	return 1;
 }
 
-__setup("slub_min_order=", setup_slub_min_order);
+__setup("slab_min_order=", setup_slub_min_order);
+__setup_param("slub_min_order=", slub_min_order, setup_slub_min_order, 0);
+
 
 static int __init setup_slub_max_order(char *str)
 {
@@ -5282,7 +5285,8 @@ static int __init setup_slub_max_order(char *str)
 	return 1;
 }
 
-__setup("slub_max_order=", setup_slub_max_order);
+__setup("slab_max_order=", setup_slub_max_order);
+__setup_param("slub_max_order=", slub_max_order, setup_slub_max_order, 0);
 
 static int __init setup_slub_min_objects(char *str)
 {
@@ -5291,7 +5295,8 @@ static int __init setup_slub_min_objects(char *str)
 	return 1;
 }
 
-__setup("slub_min_objects=", setup_slub_min_objects);
+__setup("slab_min_objects=", setup_slub_min_objects);
+__setup_param("slub_min_objects=", slub_min_objects, setup_slub_min_objects, 0);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
-- 
2.34.1




* [PATCH v3 3/4] mm/slub: replace slub_$params with slab_$params in slub.rst
  2023-12-09 13:51 [PATCH v3 0/4] supplement of slab allocator removal sxwjean
  2023-12-09 13:52 ` [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache sxwjean
  2023-12-09 13:52 ` [PATCH v3 2/4] mm/slub: unify all sl[au]b parameters with "slab_$param" sxwjean
@ 2023-12-09 13:52 ` sxwjean
  2023-12-09 13:52 ` [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc sxwjean
  3 siblings, 0 replies; 10+ messages in thread
From: sxwjean @ 2023-12-09 13:52 UTC (permalink / raw)
  To: vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

From: Xiongwei Song <xiongwei.song@windriver.com>

We've unified the slab parameters under "slab_$params", so we can now use
slab_$params in Documentation/mm/slub.rst.

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 Documentation/mm/slub.rst | 60 +++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index be75971532f5..6579a55b7852 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -9,7 +9,7 @@ SLUB can enable debugging only for selected slabs in order to avoid
 an impact on overall system performance which may make a bug more
 difficult to find.
 
-In order to switch debugging on one can add an option ``slub_debug``
+In order to switch debugging on one can add an option ``slab_debug``
 to the kernel command line. That will enable full debugging for
 all slabs.
 
@@ -26,16 +26,16 @@ be enabled on the command line. F.e. no tracking information will be
 available without debugging on and validation can only partially
 be performed if debugging was not switched on.
 
-Some more sophisticated uses of slub_debug:
+Some more sophisticated uses of slab_debug:
 -------------------------------------------
 
-Parameters may be given to ``slub_debug``. If none is specified then full
+Parameters may be given to ``slab_debug``. If none is specified then full
 debugging is enabled. Format:
 
-slub_debug=<Debug-Options>
+slab_debug=<Debug-Options>
 	Enable options for all slabs
 
-slub_debug=<Debug-Options>,<slab name1>,<slab name2>,...
+slab_debug=<Debug-Options>,<slab name1>,<slab name2>,...
 	Enable options only for select slabs (no spaces
 	after a comma)
 
@@ -60,23 +60,23 @@ Possible debug options are::
 
 F.e. in order to boot just with sanity checks and red zoning one would specify::
 
-	slub_debug=FZ
+	slab_debug=FZ
 
 Trying to find an issue in the dentry cache? Try::
 
-	slub_debug=,dentry
+	slab_debug=,dentry
 
 to only enable debugging on the dentry cache.  You may use an asterisk at the
 end of the slab name, in order to cover all slabs with the same prefix.  For
 example, here's how you can poison the dentry cache as well as all kmalloc
 slabs::
 
-	slub_debug=P,kmalloc-*,dentry
+	slab_debug=P,kmalloc-*,dentry
 
 Red zoning and tracking may realign the slab.  We can just apply sanity checks
 to the dentry cache with::
 
-	slub_debug=F,dentry
+	slab_debug=F,dentry
 
 Debugging options may require the minimum possible slab order to increase as
 a result of storing the metadata (for example, caches with PAGE_SIZE object
@@ -84,20 +84,20 @@ sizes).  This has a higher liklihood of resulting in slab allocation errors
 in low memory situations or if there's high fragmentation of memory.  To
 switch off debugging for such caches by default, use::
 
-	slub_debug=O
+	slab_debug=O
 
 You can apply different options to different list of slab names, using blocks
 of options. This will enable red zoning for dentry and user tracking for
 kmalloc. All other slabs will not get any debugging enabled::
 
-	slub_debug=Z,dentry;U,kmalloc-*
+	slab_debug=Z,dentry;U,kmalloc-*
 
 You can also enable options (e.g. sanity checks and poisoning) for all caches
 except some that are deemed too performance critical and don't need to be
 debugged by specifying global debug options followed by a list of slab names
 with "-" as options::
 
-	slub_debug=FZ;-,zs_handle,zspage
+	slab_debug=FZ;-,zs_handle,zspage
 
 The state of each debug option for a slab can be found in the respective files
 under::
@@ -105,7 +105,7 @@ under::
 	/sys/kernel/slab/<slab name>/
 
 If the file contains 1, the option is enabled, 0 means disabled. The debug
-options from the ``slub_debug`` parameter translate to the following files::
+options from the ``slab_debug`` parameter translate to the following files::
 
 	F	sanity_checks
 	Z	red_zone
@@ -129,7 +129,7 @@ in order to reduce overhead and increase cache hotness of objects.
 Slab validation
 ===============
 
-SLUB can validate all object if the kernel was booted with slub_debug. In
+SLUB can validate all object if the kernel was booted with slab_debug. In
 order to do so you must have the ``slabinfo`` tool. Then you can do
 ::
 
@@ -150,29 +150,29 @@ list_lock once in a while to deal with partial slabs. That overhead is
 governed by the order of the allocation for each slab. The allocations
 can be influenced by kernel parameters:
 
-.. slub_min_objects=x		(default 4)
-.. slub_min_order=x		(default 0)
-.. slub_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
+.. slab_min_objects=x		(default 4)
+.. slab_min_order=x		(default 0)
+.. slab_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
 
-``slub_min_objects``
+``slab_min_objects``
 	allows to specify how many objects must at least fit into one
 	slab in order for the allocation order to be acceptable.  In
 	general slub will be able to perform this number of
 	allocations on a slab without consulting centralized resources
 	(list_lock) where contention may occur.
 
-``slub_min_order``
+``slab_min_order``
 	specifies a minimum order of slabs. A similar effect like
-	``slub_min_objects``.
+	``slab_min_objects``.
 
-``slub_max_order``
-	specified the order at which ``slub_min_objects`` should no
+``slab_max_order``
+	specified the order at which ``slab_min_objects`` should no
 	longer be checked. This is useful to avoid SLUB trying to
-	generate super large order pages to fit ``slub_min_objects``
+	generate super large order pages to fit ``slab_min_objects``
 	of a slab cache with large object sizes into one high order
 	page. Setting command line parameter
 	``debug_guardpage_minorder=N`` (N > 0), forces setting
-	``slub_max_order`` to 0, what cause minimum possible order of
+	``slab_max_order`` to 0, what cause minimum possible order of
 	slabs allocation.
 
 SLUB Debug output
@@ -219,7 +219,7 @@ Here is a sample of slub debug output::
  FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
 
 If SLUB encounters a corrupted object (full detection requires the kernel
-to be booted with slub_debug) then the following output will be dumped
+to be booted with slab_debug) then the following output will be dumped
 into the syslog:
 
 1. Description of the problem encountered
@@ -239,7 +239,7 @@ into the syslog:
 	pid=<pid of the process>
 
    (Object allocation / free information is only available if SLAB_STORE_USER is
-   set for the slab. slub_debug sets that option)
+   set for the slab. slab_debug sets that option)
 
 2. The object contents if an object was involved.
 
@@ -262,7 +262,7 @@ into the syslog:
 	the object boundary.
 
 	(Redzone information is only available if SLAB_RED_ZONE is set.
-	slub_debug sets that option)
+	slab_debug sets that option)
 
    Padding <address> : <bytes>
 	Unused data to fill up the space in order to get the next object
@@ -296,7 +296,7 @@ Emergency operations
 
 Minimal debugging (sanity checks alone) can be enabled by booting with::
 
-	slub_debug=F
+	slab_debug=F
 
 This will be generally be enough to enable the resiliency features of slub
 which will keep the system running even if a bad kernel component will
@@ -311,13 +311,13 @@ and enabling debugging only for that cache
 
 I.e.::
 
-	slub_debug=F,dentry
+	slab_debug=F,dentry
 
 If the corruption occurs by writing after the end of the object then it
 may be advisable to enable a Redzone to avoid corrupting the beginning
 of other objects::
 
-	slub_debug=FZ,dentry
+	slab_debug=FZ,dentry
 
 Extended slabinfo mode and plotting
 ===================================
-- 
2.34.1




* [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
  2023-12-09 13:51 [PATCH v3 0/4] supplement of slab allocator removal sxwjean
                   ` (2 preceding siblings ...)
  2023-12-09 13:52 ` [PATCH v3 3/4] mm/slub: replace slub_$params with slab_$params in slub.rst sxwjean
@ 2023-12-09 13:52 ` sxwjean
  2023-12-09 13:59   ` Song, Xiongwei
  3 siblings, 1 reply; 10+ messages in thread
From: sxwjean @ 2023-12-09 13:52 UTC (permalink / raw)
  To: vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

From: Xiongwei Song <xiongwei.song@windriver.com>

There is no value assigned to slub_min_objects by default; it is always 0,
as zero-initialized by the compiler, unless a value is passed on the
command line. min_objects is calculated based on the number of processors
in calculate_order(). For more details, see commit 9b2cd506e5f2 ("slub:
Calculate min_objects based on number of processors.")

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
---
 Documentation/mm/slub.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index 6579a55b7852..56b27f493ba7 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -150,7 +150,7 @@ list_lock once in a while to deal with partial slabs. That overhead is
 governed by the order of the allocation for each slab. The allocations
 can be influenced by kernel parameters:
 
-.. slab_min_objects=x		(default 4)
+.. slab_min_objects=x		(default 0)
 .. slab_min_order=x		(default 0)
 .. slab_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
 
-- 
2.34.1




* RE: [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
  2023-12-09 13:52 ` [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc sxwjean
@ 2023-12-09 13:59   ` Song, Xiongwei
  2023-12-13 11:23     ` Vlastimil Babka
  0 siblings, 1 reply; 10+ messages in thread
From: Song, Xiongwei @ 2023-12-09 13:59 UTC (permalink / raw)
  To: sxwjean, vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel

Hi Vlastimil,

I didn't change the description as you suggested because slab_min_objects
doesn't save the value calculated from the number of processors; only the
local variable min_objects does.

Regards,
Xiongwei 

> -----Original Message-----
> From: owner-linux-mm@kvack.org <owner-linux-mm@kvack.org> On Behalf Of
> sxwjean@me.com
> Sent: Saturday, December 9, 2023 9:52 PM
> To: vbabka@suse.cz; 42.hyeyoo@gmail.com; cl@linux.com; linux-mm@kvack.org
> Cc: penberg@kernel.org; rientjes@google.com; iamjoonsoo.kim@lge.com;
> roman.gushchin@linux.dev; corbet@lwn.net; keescook@chromium.org; arnd@arndb.de;
> akpm@linux-foundation.org; gregkh@linuxfoundation.org; linux-doc@vger.kernel.org; linux-
> kernel@vger.kernel.org; Song, Xiongwei <Xiongwei.Song@windriver.com>
> Subject: [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
> 
> From: Xiongwei Song <xiongwei.song@windriver.com>
> 
> No value is assigned to slub_min_objects by default; it is always 0, as
> initialized by the compiler, unless a value is passed on the command line.
> min_objects is calculated based on processor numbers in calculate_order().
> For more details, see commit 9b2cd506e5f2 ("slub: Calculate min_objects
> based on number of processors.")
> 
> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
>  Documentation/mm/slub.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
> index 6579a55b7852..56b27f493ba7 100644
> --- a/Documentation/mm/slub.rst
> +++ b/Documentation/mm/slub.rst
> @@ -150,7 +150,7 @@ list_lock once in a while to deal with partial slabs. That overhead is
>  governed by the order of the allocation for each slab. The allocations
>  can be influenced by kernel parameters:
> 
> -.. slab_min_objects=x          (default 4)
> +.. slab_min_objects=x          (default 0)
>  .. slab_min_order=x            (default 0)
>  .. slab_max_order=x            (default 3 (PAGE_ALLOC_COSTLY_ORDER))
> 
> --
> 2.34.1
> 




* Re: [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache
  2023-12-09 13:52 ` [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache sxwjean
@ 2023-12-11 17:47   ` Jeff Johnson
  2023-12-12 13:57     ` Song, Xiongwei
  0 siblings, 1 reply; 10+ messages in thread
From: Jeff Johnson @ 2023-12-11 17:47 UTC (permalink / raw)
  To: sxwjean, vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel,
	Xiongwei Song

On 12/9/2023 5:52 AM, sxwjean@me.com wrote:
> From: Xiongwei Song <xiongwei.song@windriver.com>
> 
> ince slab allocator has already been removed. There is no users of

s/ince/Since/

> the noaliencachei parameter, so let's remove it.

s/noaliencachei/noaliencache/

> 
> Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> ---
> v5: Collect Reviewed-by tag.
> v4: Collect Reviewed-by tag.
> v3: Remove the changes for slab_max_order.
> v2: Add changes for removing "noaliencache".
>     https://lore.kernel.org/linux-mm/20231122143603.85297-1-sxwjean@me.com/
> v1: Remove slab_max_order.
>     https://lore.kernel.org/linux-mm/20231120091214.150502-2-sxwjean@me.com/
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 65731b060e3f..9f94baeb2f82 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3740,10 +3740,6 @@
>  	no5lvl		[X86-64,RISCV] Disable 5-level paging mode. Forces
>  			kernel to use 4-level paging instead.
>  
> -	noaliencache	[MM, NUMA, SLAB] Disables the allocation of alien
> -			caches in the slab allocator.  Saves per-node memory,
> -			but will impact performance.
> -
>  	noalign		[KNL,ARM]
>  
>  	noaltinstr	[S390] Disables alternative instructions patching




* RE: [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache
  2023-12-11 17:47   ` Jeff Johnson
@ 2023-12-12 13:57     ` Song, Xiongwei
  0 siblings, 0 replies; 10+ messages in thread
From: Song, Xiongwei @ 2023-12-12 13:57 UTC (permalink / raw)
  To: Jeff Johnson, sxwjean, vbabka, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel



> -----Original Message-----
> From: owner-linux-mm@kvack.org <owner-linux-mm@kvack.org> On Behalf Of Jeff Johnson
> Sent: Tuesday, December 12, 2023 1:48 AM
> To: sxwjean@me.com; vbabka@suse.cz; 42.hyeyoo@gmail.com; cl@linux.com; linux-
> mm@kvack.org
> Cc: penberg@kernel.org; rientjes@google.com; iamjoonsoo.kim@lge.com;
> roman.gushchin@linux.dev; corbet@lwn.net; keescook@chromium.org; arnd@arndb.de;
> akpm@linux-foundation.org; gregkh@linuxfoundation.org; linux-doc@vger.kernel.org; linux-
> kernel@vger.kernel.org; Song, Xiongwei <Xiongwei.Song@windriver.com>
> Subject: Re: [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache
> 
> 
> On 12/9/2023 5:52 AM, sxwjean@me.com wrote:
> > From: Xiongwei Song <xiongwei.song@windriver.com>
> >
> > ince slab allocator has already been removed. There is no users of
> 
> s/ince/Since/
> 
> > the noaliencachei parameter, so let's remove it.
> 
> s/noaliencachei/noaliencache/

Oh, thanks. I got the flu last week, so I sometimes lost patience to check it carefully.

Regards,
Xiongwei

> 
> >
> > Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Reviewed-by: Kees Cook <keescook@chromium.org>
> > Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
> > ---
> > v5: Collect Reviewed-by tag.
> > v4: Collect Reviewed-by tag.
> > v3: Remove the changes for slab_max_order.
> > v2: Add changes for removing "noaliencache".
> >     https://lore.kernel.org/linux-mm/20231122143603.85297-1-sxwjean@me.com/
> > v1: Remove slab_max_order.
> >     https://lore.kernel.org/linux-mm/20231120091214.150502-2-sxwjean@me.com/
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt | 4 ----
> >  1 file changed, 4 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-
> guide/kernel-parameters.txt
> > index 65731b060e3f..9f94baeb2f82 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -3740,10 +3740,6 @@
> >       no5lvl          [X86-64,RISCV] Disable 5-level paging mode. Forces
> >                       kernel to use 4-level paging instead.
> >
> > -     noaliencache    [MM, NUMA, SLAB] Disables the allocation of alien
> > -                     caches in the slab allocator.  Saves per-node memory,
> > -                     but will impact performance.
> > -
> >       noalign         [KNL,ARM]
> >
> >       noaltinstr      [S390] Disables alternative instructions patching
> 



* Re: [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
  2023-12-09 13:59   ` Song, Xiongwei
@ 2023-12-13 11:23     ` Vlastimil Babka
  2023-12-14 14:11       ` Song, Xiongwei
  0 siblings, 1 reply; 10+ messages in thread
From: Vlastimil Babka @ 2023-12-13 11:23 UTC (permalink / raw)
  To: Song, Xiongwei, sxwjean, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel

On 12/9/23 14:59, Song, Xiongwei wrote:
> I didn't change the description as you suggested because slab_min_objects
> doesn't save the calculated value based on the number of processors; only
> the local variable min_objects does.

Hm I think that's less helpful. The user/admin who will read the doc doesn't
care about implementation details such as value of a variable, but what's
the actual observable default behavior, and that is still the automatic
scaling, right?

> Regards,
> Xiongwei 




* RE: [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
  2023-12-13 11:23     ` Vlastimil Babka
@ 2023-12-14 14:11       ` Song, Xiongwei
  0 siblings, 0 replies; 10+ messages in thread
From: Song, Xiongwei @ 2023-12-14 14:11 UTC (permalink / raw)
  To: Vlastimil Babka, sxwjean, 42.hyeyoo, cl, linux-mm
  Cc: penberg, rientjes, iamjoonsoo.kim, roman.gushchin, corbet,
	keescook, arnd, akpm, gregkh, linux-doc, linux-kernel



> -----Original Message-----
> From: Vlastimil Babka <vbabka@suse.cz>
> Sent: Wednesday, December 13, 2023 7:23 PM
> To: Song, Xiongwei <Xiongwei.Song@windriver.com>; sxwjean@me.com;
> 42.hyeyoo@gmail.com; cl@linux.com; linux-mm@kvack.org
> Cc: penberg@kernel.org; rientjes@google.com; iamjoonsoo.kim@lge.com;
> roman.gushchin@linux.dev; corbet@lwn.net; keescook@chromium.org; arnd@arndb.de;
> akpm@linux-foundation.org; gregkh@linuxfoundation.org; linux-doc@vger.kernel.org; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc
> 
> 
> On 12/9/23 14:59, Song, Xiongwei wrote:
> > I didn't change the description as you suggested because slab_min_objects
> > doesn't save the calculated value based on the number of processors; only
> > the local variable min_objects does.
> 
> Hm I think that's less helpful. The user/admin who will read the doc doesn't
> care about implementation details such as value of a variable, but what's
> the actual observable default behavior, and that is still the automatic
> scaling, right?

Ok, sure. Will update. 

Thanks.




end of thread, other threads:[~2023-12-14 14:11 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-09 13:51 [PATCH v3 0/4] supplement of slab allocator removal sxwjean
2023-12-09 13:52 ` [PATCH v3 1/4] Documentation: kernel-parameters: remove noaliencache sxwjean
2023-12-11 17:47   ` Jeff Johnson
2023-12-12 13:57     ` Song, Xiongwei
2023-12-09 13:52 ` [PATCH v3 2/4] mm/slub: unify all sl[au]b parameters with "slab_$param" sxwjean
2023-12-09 13:52 ` [PATCH v3 3/4] mm/slub: replace slub_$params with slab_$params in slub.rst sxwjean
2023-12-09 13:52 ` [PATCH v3 4/4] mm/slub: correct the default value of slub_min_objects in doc sxwjean
2023-12-09 13:59   ` Song, Xiongwei
2023-12-13 11:23     ` Vlastimil Babka
2023-12-14 14:11       ` Song, Xiongwei
