linux-mm.kvack.org archive mirror
* [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches
@ 2007-04-27 20:21 clameter
  2007-04-27 20:21 ` [patch 1/8] SLUB sysfs support: fix unique id generation clameter
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

This series fixes the sysfs unique id generation issue and some issues in
kmem_cache_shrink. It also improves the statistics available through slabinfo.

I have split up the printk cleanup patch and put it at the end. If any patch
after the object_err patch does not apply, just toss it.

--

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


* [patch 1/8] SLUB sysfs support: fix unique id generation
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 2/8] SLUB: Fixes to kmem_cache_shrink() clameter
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_unique_id --]
[-- Type: text/plain, Size: 3761 bytes --]

Generate a unique id for mergeable slabs by combining the
slab size with the flags that distinguish slabs of the same size.
That yields a unique id that is fairly short and descriptive. It no
longer includes the kmem_cache address.

Extract slab_unmergeable() from find_mergeable() and use it
in sysfs_slab_add() to make handling more consistent.
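
As an illustration, the resulting id format can be sketched in a small
userspace program. The flag bits and macro names below are stand-ins for
illustration, not the kernel definitions:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical flag bits standing in for SLAB_CACHE_DMA,
 * SLAB_RECLAIM_ACCOUNT and SLAB_DEBUG_FREE. */
#define F_DMA		0x1
#define F_RECLAIM	0x2
#define F_DEBUG_FREE	0x4

/* Build an id like ":da-0000192" from the merge-relevant flags
 * plus the slab size, zero padded to seven digits. */
static void unique_id(char *buf, unsigned long flags, int size)
{
	char *p = buf;

	*p++ = ':';
	if (flags & F_DMA)
		*p++ = 'd';
	if (flags & F_RECLAIM)
		*p++ = 'a';
	if (flags & F_DEBUG_FREE)
		*p++ = 'F';
	if (p != buf + 1)	/* separator only if any flag chars were emitted */
		*p++ = '-';
	sprintf(p, "%07d", size);
}
```

A cache with the DMA and reclaim-account bits set and a 192 byte slab size
would get the id ":da-0000192"; a cache with no such flags gets just ":0000192".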

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 mm/slub.c |   48 +++++++++++++++++++++++++++++-------------------
 1 file changed, 29 insertions(+), 19 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 13:03:55.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 13:05:17.000000000 -0700
@@ -2367,6 +2367,17 @@ void __init kmem_cache_init(void)
 /*
  * Find a mergeable slab cache
  */
+static int slab_unmergeable(struct kmem_cache *s)
+{
+	if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
+		return 1;
+
+	if (s->ctor || s->dtor)
+		return 1;
+
+	return 0;
+}
+
 static struct kmem_cache *find_mergeable(size_t size,
 		size_t align, unsigned long flags,
 		void (*ctor)(void *, struct kmem_cache *, unsigned long),
@@ -2388,13 +2399,10 @@ static struct kmem_cache *find_mergeable
 		struct kmem_cache *s =
 			container_of(h, struct kmem_cache, list);
 
-		if (size > s->size)
-			continue;
-
-		if (s->flags & SLUB_NEVER_MERGE)
+		if (slab_unmergeable(s))
 			continue;
 
-		if (s->dtor || s->ctor)
+		if (size > s->size)
 			continue;
 
 		if (((flags | slub_debug) & SLUB_MERGE_SAME) !=
@@ -3452,23 +3460,21 @@ static char *create_unique_id(struct kme
 
 	*p++ = ':';
 	/*
-	 * First flags affecting slabcache operations */
+	 * First flags affecting slabcache operations. We will only
+	 * get here for aliasable slabs so we do not need to support
+	 * too many flags. The flags here must cover all flags that
+	 * are matched during merging to guarantee that the id is
+	 * unique.
+	 */
 	if (s->flags & SLAB_CACHE_DMA)
 		*p++ = 'd';
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		*p++ = 'a';
-	if (s->flags & SLAB_DESTROY_BY_RCU)
-		*p++ = 'r';\
-	/* Debug flags */
-	if (s->flags & SLAB_RED_ZONE)
-		*p++ = 'Z';
-	if (s->flags & SLAB_POISON)
-		*p++ = 'P';
-	if (s->flags & SLAB_STORE_USER)
-		*p++ = 'U';
+	if (s->flags & SLAB_DEBUG_FREE)
+		*p++ = 'F';
 	if (p != name + 1)
 		*p++ = '-';
-	p += sprintf(p,"%07d:0x%p" ,s->size, s);
+	p += sprintf(p, "%07d", s->size);
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
 	return name;
 }
@@ -3477,12 +3483,14 @@ static int sysfs_slab_add(struct kmem_ca
 {
 	int err;
 	const char *name;
+	int unmergeable;
 
 	if (slab_state < SYSFS)
 		/* Defer until later */
 		return 0;
 
-	if (s->flags & SLUB_NEVER_MERGE) {
+	unmergeable = slab_unmergeable(s);
+	if (unmergeable) {
 		/*
 		 * Slabcache can never be merged so we can use the name proper.
 		 * This is typically the case for debug situations. In that
@@ -3490,12 +3498,13 @@ static int sysfs_slab_add(struct kmem_ca
 		 */
 		sysfs_remove_link(&slab_subsys.kset.kobj, s->name);
 		name = s->name;
-	} else
+	} else {
 		/*
 		 * Create a unique name for the slab as a target
 		 * for the symlinks.
 		 */
 		name = create_unique_id(s);
+	}
 
 	kobj_set_kset_s(s, slab_subsys);
 	kobject_set_name(&s->kobj, name);
@@ -3508,7 +3517,8 @@ static int sysfs_slab_add(struct kmem_ca
 	if (err)
 		return err;
 	kobject_uevent(&s->kobj, KOBJ_ADD);
-	if (!(s->flags & SLUB_NEVER_MERGE)) {
+	if (!unmergeable) {
+		/* Setup first alias */
 		sysfs_slab_alias(s, s->name);
 		kfree(name);
 	}

--



* [patch 2/8] SLUB: Fixes to kmem_cache_shrink()
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
  2007-04-27 20:21 ` [patch 1/8] SLUB sysfs support: fix unique id generation clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 3/8] SLUB slabinfo: Remove hackname() clameter
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_shrink_race_fix --]
[-- Type: text/plain, Size: 2533 bytes --]

1. Reclaim all empty slabs even if we are below MIN_PARTIAL partial slabs.
   The point here is to recover all possible memory.

2. Fix a race condition vs. slab_free. If we want to free a slab then
   we need to acquire the slab lock, since slab_free may have freed
   an object and be waiting to acquire the lock to remove the slab.
   We do a trylock; if it is unsuccessful then we are racing with
   slab_free. Simply keep the empty slab on the partial list;
   slab_free will remove the slab as soon as we drop the list_lock.

3. #2 may have the result that we end up with empty slabs in the
   slabs_by_inuse array. So make sure that we also splice in the
   zeroth element.
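
The trylock logic in point 2 can be modeled with a toy userspace sketch.
The struct fields and helpers here are simplified stand-ins, not the actual
SLUB code:

```c
#include <assert.h>

/* A toy slab; the lock bit stands in for the kernel's slab lock. */
struct slab {
	int locked;	/* nonzero while a concurrent slab_free holds the lock */
	int inuse;	/* objects still allocated from this slab */
};

/* Model of slab_trylock(): succeed only if nobody holds the lock. */
static int slab_trylock(struct slab *s)
{
	if (s->locked)
		return 0;
	s->locked = 1;
	return 1;
}

/* Discard the slab only if it is empty and we can take its lock;
 * otherwise leave it on the partial list for slab_free to handle. */
static int try_discard(struct slab *s)
{
	if (s->inuse || !slab_trylock(s))
		return 0;	/* racing with slab_free: keep it on the list */
	s->locked = 0;		/* slab_unlock() before discarding */
	return 1;
}
```

An empty, unlocked slab is discarded; a slab whose lock is held (slab_free
in flight) or that still has objects in use stays on the partial list.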

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 mm/slub.c |   19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 13:05:17.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 13:05:24.000000000 -0700
@@ -2220,7 +2220,7 @@ int kmem_cache_shrink(struct kmem_cache 
 	for_each_online_node(node) {
 		n = get_node(s, node);
 
-		if (n->nr_partial <= MIN_PARTIAL)
+		if (!n->nr_partial)
 			continue;
 
 		for (i = 0; i < s->objects; i++)
@@ -2237,14 +2237,21 @@ int kmem_cache_shrink(struct kmem_cache 
 		 * the upper limit.
 		 */
 		list_for_each_entry_safe(page, t, &n->partial, lru) {
-			if (!page->inuse) {
+			if (!page->inuse && slab_trylock(page)) {
+				/*
+				 * Must hold slab lock here because slab_free
+				 * may have freed the last object and be
+				 * waiting to release the slab.
+				 */
 				list_del(&page->lru);
 				n->nr_partial--;
+				slab_unlock(page);
 				discard_slab(s, page);
-			} else
-			if (n->nr_partial > MAX_PARTIAL)
-				list_move(&page->lru,
+			} else {
+				if (n->nr_partial > MAX_PARTIAL)
+					list_move(&page->lru,
 					slabs_by_inuse + page->inuse);
+			}
 		}
 
 		if (n->nr_partial <= MAX_PARTIAL)
@@ -2254,7 +2261,7 @@ int kmem_cache_shrink(struct kmem_cache 
 		 * Rebuild the partial list with the slabs filled up
 		 * most first and the least used slabs at the end.
 		 */
-		for (i = s->objects - 1; i > 0; i--)
+		for (i = s->objects - 1; i >= 0; i--)
 			list_splice(slabs_by_inuse + i, n->partial.prev);
 
 	out:

--



* [patch 3/8] SLUB slabinfo: Remove hackname()
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
  2007-04-27 20:21 ` [patch 1/8] SLUB sysfs support: fix unique id generation clameter
  2007-04-27 20:21 ` [patch 2/8] SLUB: Fixes to kmem_cache_shrink() clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 4/8] SLUB printk cleanup: object_err() clameter
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slabinfo_fix --]
[-- Type: text/plain, Size: 13731 bytes --]

hackname() is no longer needed since we changed the way the unique id
is generated.

Fix up the SLUB totals display. Add some comments to explain what all
these different statistics do, and try to arrange the code more
systematically.
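
One of the reworked statistics, the share of all objects that sit in partial
slabs, can be sketched as follows. The field names mirror those in the patch,
but the helper itself is a hypothetical extraction for illustration:

```c
#include <assert.h>

/* Estimate the percentage of objects living in partial slabs.
 * Full slabs hold objs_per_slab objects each; whatever is left over
 * must be in partial (or per-cpu) slabs. Clamp to [0, 100] because
 * the counters are sampled and may be momentarily inconsistent. */
static unsigned long percent_partial_objs(long objects, long slabs,
		long partial, long cpu_slabs, long objs_per_slab)
{
	long in_partial = objects -
		(slabs - partial - cpu_slabs) * objs_per_slab;
	unsigned long pct;

	if (in_partial < 0)
		in_partial = 0;
	pct = in_partial * 100 / objects;
	return pct > 100 ? 100 : pct;
}
```

With 100 objects across 10 slabs of 12 objects each, 2 partial and 1 per-cpu,
the 7 full slabs account for 84 objects, leaving 16% in partial slabs.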

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 Documentation/vm/slabinfo.c |  244 ++++++++++++++++++++++++++------------------
 1 file changed, 149 insertions(+), 95 deletions(-)

Index: slub/Documentation/vm/slabinfo.c
===================================================================
--- slub.orig/Documentation/vm/slabinfo.c	2007-04-27 12:49:44.000000000 -0700
+++ slub/Documentation/vm/slabinfo.c	2007-04-27 12:52:48.000000000 -0700
@@ -42,6 +42,7 @@ struct aliasinfo {
 
 int slabs = 0;
 int aliases = 0;
+int alias_targets = 0;
 int highest_node = 0;
 
 char buffer[4096];
@@ -211,24 +212,6 @@ void decode_numa_list(int *numa, char *t
 	}
 }
 
-char *hackname(struct slabinfo *s)
-{
-	char *n = s->name;
-
-	if (n[0] == ':') {
-		char *nn = malloc(20);
-		char *p;
-
-		strncpy(nn, n, 20);
-		n = nn;
-		p = n + 4;
-		while (*p && *p !=':')
-			p++;
-		*p = 0;
-	}
-	return n;
-}
-
 void slab_validate(struct slabinfo *s)
 {
 	set_obj(s, "validate", 1);
@@ -281,7 +264,6 @@ void slabcache(struct slabinfo *s)
 	char dist_str[40];
 	char flags[20];
 	char *p = flags;
-	char *n;
 
 	if (skip_zero && !s->slabs)
 		return;
@@ -312,19 +294,17 @@ void slabcache(struct slabinfo *s)
 		*p++ = 'T';
 
 	*p = 0;
-	n = hackname(s);
 	printf("%-21s %8ld %7d %8s %14s %4d %1d %3ld %3ld %s\n",
-			n, s->objects, s->object_size, size_str, dist_str,
-			s->objs_per_slab, s->order,
-			s->slabs ? (s->partial * 100) / s->slabs : 100,
-			s->slabs ? (s->objects * s->object_size * 100) /
-				(s->slabs * (page_size << s->order)) : 100,
-			flags);
+		s->name, s->objects, s->object_size, size_str, dist_str,
+		s->objs_per_slab, s->order,
+		s->slabs ? (s->partial * 100) / s->slabs : 100,
+		s->slabs ? (s->objects * s->object_size * 100) /
+			(s->slabs * (page_size << s->order)) : 100,
+		flags);
 }
 
 void slab_numa(struct slabinfo *s)
 {
-	char *n;
 	int node;
 
 	if (!highest_node)
@@ -332,7 +312,6 @@ void slab_numa(struct slabinfo *s)
 
 	if (skip_zero && !s->slabs)
 		return;
-	n = hackname(s);
 
 	if (!line) {
 		printf("\nSlab             Node ");
@@ -343,7 +322,7 @@ void slab_numa(struct slabinfo *s)
 			printf("-----");
 		printf("\n");
 	}
-	printf("%-21s ", n);
+	printf("%-21s ", s->name);
 	for(node = 0; node <= highest_node; node++) {
 		char b[20];
 
@@ -374,27 +353,61 @@ void totals(void)
 
 	int used_slabs = 0;
 	char b1[20], b2[20], b3[20], b4[20];
-	unsigned long long min_objsize = 0, max_objsize = 0, avg_objsize;
-	unsigned long long min_partial = 0, max_partial = 0, avg_partial, total_partial = 0;
-	unsigned long long min_slabs = 0, max_slabs = 0, avg_slabs, total_slabs = 0;
-	unsigned long long min_size = 0, max_size = 0, avg_size, total_size = 0;
-	unsigned long long min_waste = 0, max_waste = 0, avg_waste, total_waste = 0;
-	unsigned long long min_objects = 0, max_objects = 0, avg_objects, total_objects = 0;
-	unsigned long long min_objwaste = 0, max_objwaste = 0, avg_objwaste;
-	unsigned long long min_used = 0, max_used = 0, avg_used, total_used = 0;
-	unsigned long min_ppart = 0, max_ppart = 0, avg_ppart, total_ppart = 0;
-	unsigned long min_partobj = 0, max_partobj = 0, avg_partobj;
-	unsigned long total_objects_in_partial = 0;
+	unsigned long long max = 1ULL << 63;
+
+	/* Object size */
+	unsigned long long min_objsize = max, max_objsize = 0, avg_objsize;
+
+	/* Number of partial slabs in a slabcache */
+	unsigned long long min_partial = max, max_partial = 0,
+				avg_partial, total_partial = 0;
+
+	/* Number of slabs in a slab cache */
+	unsigned long long min_slabs = max, max_slabs = 0,
+				avg_slabs, total_slabs = 0;
+
+	/* Size of the whole slab */
+	unsigned long long min_size = max, max_size = 0,
+				avg_size, total_size = 0;
+
+	/* Bytes used for object storage in a slab */
+	unsigned long long min_used = max, max_used = 0, avg_used, total_used = 0;
+
+	/* Waste: Bytes used for aligned and padding */
+	unsigned long long min_waste = max, max_waste = 0,
+				avg_waste, total_waste = 0;
+	/* Number of objects in a slab */
+	unsigned long long min_objects = max, max_objects = 0,
+				avg_objects, total_objects = 0;
+	/* Waste per object */
+	unsigned long long min_objwaste = max,
+				max_objwaste = 0, avg_objwaste;
+
+	/* Memory per object */
+	unsigned long long min_memobj = max,
+				max_memobj = 0, avg_memobj;
+
+	/* Percentage of partial slabs per slab */
+	unsigned long min_ppart = 100, max_ppart = 0,
+				avg_ppart, total_ppart = 0;
+
+	/* Number of objects in partial slabs */
+	unsigned long min_partobj = max, max_partobj = 0,
+				avg_partobj, total_partobj = 0;
+
+	/* Percentage of partial objects of all objects in a slab */
+	unsigned long min_ppartobj = 100, max_ppartobj = 0,
+				avg_ppartobj, total_ppartobj = 0;
+
 
 	for (s = slabinfo; s < slabinfo + slabs; s++) {
 		unsigned long long size;
-		unsigned long partial;
-		unsigned long slabs;
 		unsigned long used;
 		unsigned long long wasted;
 		unsigned long long objwaste;
-		long long objects_in_partial;
-		unsigned long percentage_partial;
+		long long objects_in_partial_slabs;
+		unsigned long percentage_partial_slabs;
+		unsigned long percentage_partial_objs;
 
 		if (!s->slabs || !s->objects)
 			continue;
@@ -402,49 +415,58 @@ void totals(void)
 		used_slabs++;
 
 		size = slab_size(s);
-		partial = s->partial << s->order;
-		slabs = s->slabs << s->order;
 		used = s->objects * s->object_size;
 		wasted = size - used;
-		objwaste = wasted / s->objects;
+		objwaste = s->slab_size - s->object_size;
+
+		objects_in_partial_slabs = s->objects -
+			(s->slabs - s->partial - s ->cpu_slabs) *
+			s->objs_per_slab;
 
-		objects_in_partial = s->objects - (s->slabs - s->partial - s ->cpu_slabs)
-					* s->objs_per_slab;
+		if (objects_in_partial_slabs < 0)
+			objects_in_partial_slabs = 0;
 
-		if (objects_in_partial < 0)
-			objects_in_partial = 0;
+		percentage_partial_slabs = s->partial * 100 / s->slabs;
+		if (percentage_partial_slabs > 100)
+			percentage_partial_slabs = 100;
 
-		percentage_partial = objects_in_partial * 100 / s->objects;
-		if (percentage_partial > 100)
-			percentage_partial = 100;
+		percentage_partial_objs = objects_in_partial_slabs * 100
+							/ s->objects;
 
-		if (s->object_size < min_objsize || !min_objsize)
+		if (percentage_partial_objs > 100)
+			percentage_partial_objs = 100;
+
+		if (s->object_size < min_objsize)
 			min_objsize = s->object_size;
-		if (partial && (partial < min_partial || !min_partial))
-			min_partial = partial;
-		if (slabs < min_slabs || !min_partial)
-			min_slabs = slabs;
+		if (s->partial < min_partial)
+			min_partial = s->partial;
+		if (s->slabs < min_slabs)
+			min_slabs = s->slabs;
 		if (size < min_size)
 			min_size = size;
-		if (wasted < min_waste && !min_waste)
+		if (wasted < min_waste)
 			min_waste = wasted;
-		if (objwaste < min_objwaste || !min_objwaste)
+		if (objwaste < min_objwaste)
 			min_objwaste = objwaste;
-		if (s->objects < min_objects || !min_objects)
+		if (s->objects < min_objects)
 			min_objects = s->objects;
-		if (used < min_used || !min_used)
+		if (used < min_used)
 			min_used = used;
-		if (objects_in_partial < min_partobj || !min_partobj)
-			min_partobj = objects_in_partial;
-		if (percentage_partial < min_ppart || !min_ppart)
-			min_ppart = percentage_partial;
+		if (objects_in_partial_slabs < min_partobj)
+			min_partobj = objects_in_partial_slabs;
+		if (percentage_partial_slabs < min_ppart)
+			min_ppart = percentage_partial_slabs;
+		if (percentage_partial_objs < min_ppartobj)
+			min_ppartobj = percentage_partial_objs;
+		if (s->slab_size < min_memobj)
+			min_memobj = s->slab_size;
 
 		if (s->object_size > max_objsize)
 			max_objsize = s->object_size;
-		if (partial > max_partial)
-			max_partial = partial;
-		if (slabs > max_slabs)
-			max_slabs = slabs;
+		if (s->partial > max_partial)
+			max_partial = s->partial;
+		if (s->slabs > max_slabs)
+			max_slabs = s->slabs;
 		if (size > max_size)
 			max_size = size;
 		if (wasted > max_waste)
@@ -455,19 +477,25 @@ void totals(void)
 			max_objects = s->objects;
 		if (used > max_used)
 			max_used = used;
-		if (objects_in_partial > max_partobj)
-			max_partobj = objects_in_partial;
-		if (percentage_partial > max_ppart)
-			max_ppart = percentage_partial;
+		if (objects_in_partial_slabs > max_partobj)
+			max_partobj = objects_in_partial_slabs;
+		if (percentage_partial_slabs > max_ppart)
+			max_ppart = percentage_partial_slabs;
+		if (percentage_partial_objs > max_ppartobj)
+			max_ppartobj = percentage_partial_objs;
+		if (s->slab_size > max_memobj)
+			max_memobj = s->slab_size;
+
+		total_partial += s->partial;
+		total_slabs += s->slabs;
+		total_size += size;
+		total_waste += wasted;
 
 		total_objects += s->objects;
-		total_partial += partial;
-		total_slabs += slabs;
 		total_used += used;
-		total_waste += wasted;
-		total_size += size;
-		total_ppart += percentage_partial;
-		total_objects_in_partial += objects_in_partial;
+		total_partobj += objects_in_partial_slabs;
+		total_ppart += percentage_partial_slabs;
+		total_ppartobj += percentage_partial_objs;
 	}
 
 	if (!total_objects) {
@@ -478,29 +506,36 @@ void totals(void)
 		printf("No slabs\n");
 		return;
 	}
+
+	/* Per slab averages */
 	avg_partial = total_partial / used_slabs;
 	avg_slabs = total_slabs / used_slabs;
+	avg_size = total_size / used_slabs;
 	avg_waste = total_waste / used_slabs;
-	avg_size = total_waste / used_slabs;
+
 	avg_objects = total_objects / used_slabs;
 	avg_used = total_used / used_slabs;
+	avg_partobj = total_partobj / used_slabs;
 	avg_ppart = total_ppart / used_slabs;
-	avg_partobj = total_objects_in_partial / used_slabs;
+	avg_ppartobj = total_ppartobj / used_slabs;
 
+	/* Per object object sizes */
 	avg_objsize = total_used / total_objects;
 	avg_objwaste = total_waste / total_objects;
+	avg_partobj = total_partobj * 100 / total_objects;
+	avg_memobj = total_size / total_objects;
 
 	printf("Slabcache Totals\n");
 	printf("----------------\n");
-	printf("Slabcaches : %3d      Aliases  : %3d      Active: %3d\n",
-			slabs, aliases, used_slabs);
+	printf("Slabcaches : %3d      Aliases  : %3d->%-3d Active: %3d\n",
+			slabs, aliases, alias_targets, used_slabs);
 
-	store_size(b1, total_used);store_size(b2, total_waste);
+	store_size(b1, total_size);store_size(b2, total_waste);
 	store_size(b3, total_waste * 100 / total_used);
 	printf("Memory used: %6s   # Loss   : %6s   MRatio: %6s%%\n", b1, b2, b3);
 
-	store_size(b1, total_objects);store_size(b2, total_objects_in_partial);
-	store_size(b3, total_objects_in_partial * 100 / total_objects);
+	store_size(b1, total_objects);store_size(b2, total_partobj);
+	store_size(b3, total_partobj * 100 / total_objects);
 	printf("# Objects  : %6s   # PartObj: %6s   ORatio: %6s%%\n", b1, b2, b3);
 
 	printf("\n");
@@ -509,22 +544,35 @@ void totals(void)
 
 	store_size(b1, avg_objects);store_size(b2, min_objects);
 	store_size(b3, max_objects);store_size(b4, total_objects);
-	printf("# Objects %10s  %10s  %10s  %10s\n",
+	printf("#Objects  %10s  %10s  %10s  %10s\n",
 			b1,	b2,	b3,	b4);
 
 	store_size(b1, avg_slabs);store_size(b2, min_slabs);
 	store_size(b3, max_slabs);store_size(b4, total_slabs);
-	printf("# Slabs   %10s  %10s  %10s  %10s\n",
+	printf("#Slabs    %10s  %10s  %10s  %10s\n",
 			b1,	b2,	b3,	b4);
 
 	store_size(b1, avg_partial);store_size(b2, min_partial);
 	store_size(b3, max_partial);store_size(b4, total_partial);
-	printf("# Partial %10s  %10s  %10s  %10s\n",
+	printf("#PartSlab %10s  %10s  %10s  %10s\n",
 			b1,	b2,	b3,	b4);
 	store_size(b1, avg_ppart);store_size(b2, min_ppart);
 	store_size(b3, max_ppart);
-	printf("%% Partial %10s%% %10s%% %10s%%\n",
-			b1,	b2,	b3);
+	store_size(b4, total_partial * 100  / total_slabs);
+	printf("%%PartSlab %10s%% %10s%% %10s%% %10s%%\n",
+			b1,	b2,	b3,	b4);
+
+	store_size(b1, avg_partobj);store_size(b2, min_partobj);
+	store_size(b3, max_partobj);
+	store_size(b4, total_partobj);
+	printf("PartObjs  %10s  %10s  %10s  %10s\n",
+			b1,	b2,	b3,	b4);
+
+	store_size(b1, avg_ppartobj);store_size(b2, min_ppartobj);
+	store_size(b3, max_ppartobj);
+	store_size(b4, total_partobj * 100 / total_objects);
+	printf("%% PartObj %10s%% %10s%% %10s%% %10s%%\n",
+			b1,	b2,	b3,	b4);
 
 	store_size(b1, avg_size);store_size(b2, min_size);
 	store_size(b3, max_size);store_size(b4, total_size);
@@ -545,14 +593,18 @@ void totals(void)
 	printf("Per Object   Average         Min         Max\n");
 	printf("---------------------------------------------\n");
 
+	store_size(b1, avg_memobj);store_size(b2, min_memobj);
+	store_size(b3, max_memobj);
+	printf("Memory    %10s  %10s  %10s\n",
+			b1,	b2,	b3);
 	store_size(b1, avg_objsize);store_size(b2, min_objsize);
 	store_size(b3, max_objsize);
-	printf("Size      %10s  %10s  %10s\n",
+	printf("User      %10s  %10s  %10s\n",
 			b1,	b2,	b3);
 
 	store_size(b1, avg_objwaste);store_size(b2, min_objwaste);
 	store_size(b3, max_objwaste);
-	printf("Loss      %10s  %10s  %10s\n",
+	printf("Waste     %10s  %10s  %10s\n",
 			b1,	b2,	b3);
 }
 
@@ -739,6 +791,8 @@ void read_slab_dir(void)
 			slab->store_user = get_obj("store_user");
 			slab->trace = get_obj("trace");
 			chdir("..");
+			if (slab->name[0] == ':')
+				alias_targets++;
 			slab++;
 			break;
 		   default :

--



* [patch 4/8] SLUB printk cleanup: object_err()
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
                   ` (2 preceding siblings ...)
  2007-04-27 20:21 ` [patch 3/8] SLUB slabinfo: Remove hackname() clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 5/8] SLUB printk cleanup: add slab_err clameter
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_printk_object_err --]
[-- Type: text/plain, Size: 956 bytes --]

Reorder the printk arguments in object_err() so that the slab cache name
comes first, making the message format consistent with the other SLUB
diagnostics.

---
 mm/slub.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 10:31:55.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 12:46:48.000000000 -0700
@@ -356,8 +356,8 @@ static void object_err(struct kmem_cache
 {
 	u8 *addr = page_address(page);
 
-	printk(KERN_ERR "*** SLUB: %s in %s@0x%p slab 0x%p\n",
-			reason, s->name, object, page);
+	printk(KERN_ERR "*** SLUB %s: %s@0x%p slab 0x%p\n",
+			s->name, reason, object, page);
 	printk(KERN_ERR "    offset=%tu flags=0x%04lx inuse=%u freelist=0x%p\n",
 		object - addr, page->flags, page->inuse, page->freelist);
 	if (object > addr + 16)

--



* [patch 5/8] SLUB printk cleanup: add slab_err
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
                   ` (3 preceding siblings ...)
  2007-04-27 20:21 ` [patch 4/8] SLUB printk cleanup: object_err() clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 6/8] SLUB printk cleanup: Diagnostic functions clameter
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_printk_add_slab_err --]
[-- Type: text/plain, Size: 1180 bytes --]

Add a function to report an error condition in a slab. This is similar
to object_err(), which reports an error condition in an object.
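
The varargs formatting pattern that slab_err() uses can be reduced to a small
userspace sketch. format_reason() is a hypothetical stand-in that fills a
buffer the way slab_err() does before handing the result to printk:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Expand a printf-style reason string into a fixed-size buffer,
 * mirroring the va_start/vsnprintf/va_end sequence in slab_err(). */
static void format_reason(char *buf, size_t len, const char *reason, ...)
{
	va_list args;

	va_start(args, reason);
	vsnprintf(buf, len, reason, args);
	va_end(args);
}
```

This lets callers pass a formatted reason ("Freepointer 0x%p corrupt", fp)
while the surrounding "*** SLUB %s: ..." framing stays in one place.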

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 mm/slub.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 10:33:06.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 10:39:01.000000000 -0700
@@ -367,6 +367,19 @@ static void object_err(struct kmem_cache
 	dump_stack();
 }
 
+static void slab_err(struct kmem_cache *s, struct page *page, char *reason, ...)
+{
+	va_list args;
+	char buf[100];
+
+	va_start(args, reason);
+	vsnprintf(buf, sizeof(buf), reason, args);
+	va_end(args);
+	printk(KERN_ERR "*** SLUB %s: %s in slab @0x%p\n", s->name, buf,
+		page);
+	dump_stack();
+}
+
 static void init_object(struct kmem_cache *s, void *object, int active)
 {
 	u8 *p = object;

--



* [patch 6/8] SLUB printk cleanup: Diagnostic functions
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterday's accepted patches clameter
                   ` (4 preceding siblings ...)
  2007-04-27 20:21 ` [patch 5/8] SLUB printk cleanup: add slab_err clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 7/8] SLUB printk cleanup: Fix up printks in the resiliency check clameter
  2007-04-27 20:21 ` [patch 8/8] SLUB printk cleanup: Slab validation printks clameter
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_printk_diag --]
[-- Type: text/plain, Size: 5735 bytes --]

Make the printk output of the diagnostic functions consistent and use the
new slab_err() function wherever possible to consolidate code.

Signed-off-by: Christoph Lameter <clameter@sgi.com>

---
 mm/slub.c |   73 +++++++++++++++++++++++++-------------------------------------
 1 file changed, 30 insertions(+), 43 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 10:34:12.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 10:36:34.000000000 -0700
@@ -457,7 +457,7 @@ static int check_valid_pointer(struct km
 static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
 						void *from, void *to)
 {
-	printk(KERN_ERR "@@@ SLUB: %s Restoring %s (0x%x) from 0x%p-0x%p\n",
+	printk(KERN_ERR "@@@ SLUB %s: Restoring %s (0x%x) from 0x%p-0x%p\n",
 		s->name, message, data, from, to - 1);
 	memset(from, data, to - from);
 }
@@ -504,9 +504,7 @@ static int slab_pad_check(struct kmem_ca
 		return 1;
 
 	if (!check_bytes(p + length, POISON_INUSE, remainder)) {
-		printk(KERN_ERR "SLUB: %s slab 0x%p: Padding fails check\n",
-			s->name, p);
-		dump_stack();
+		slab_err(s, page, "Padding check failed");
 		restore_bytes(s, "slab padding", POISON_INUSE, p + length,
 			p + length + remainder);
 		return 0;
@@ -592,30 +590,25 @@ static int check_slab(struct kmem_cache 
 	VM_BUG_ON(!irqs_disabled());
 
 	if (!PageSlab(page)) {
-		printk(KERN_ERR "SLUB: %s Not a valid slab page @0x%p "
-			"flags=%lx mapping=0x%p count=%d \n",
-			s->name, page, page->flags, page->mapping,
+		slab_err(s, page, "Not a valid slab page flags=%lx "
+			"mapping=0x%p count=%d", page->flags, page->mapping,
 			page_count(page));
 		return 0;
 	}
 	if (page->offset * sizeof(void *) != s->offset) {
-		printk(KERN_ERR "SLUB: %s Corrupted offset %lu in slab @0x%p"
-			" flags=0x%lx mapping=0x%p count=%d\n",
-			s->name,
+		slab_err(s, page, "Corrupted offset %lu flags=0x%lx "
+			"mapping=0x%p count=%d",
 			(unsigned long)(page->offset * sizeof(void *)),
-			page,
 			page->flags,
 			page->mapping,
 			page_count(page));
-		dump_stack();
 		return 0;
 	}
 	if (page->inuse > s->objects) {
-		printk(KERN_ERR "SLUB: %s inuse %u > max %u in slab "
-			"page @0x%p flags=%lx mapping=0x%p count=%d\n",
-			s->name, page->inuse, s->objects, page, page->flags,
+		slab_err(s, page, "inuse %u > max %u @0x%p flags=%lx "
+			"mapping=0x%p count=%d",
+			s->name, page->inuse, s->objects, page->flags,
 			page->mapping, page_count(page));
-		dump_stack();
 		return 0;
 	}
 	/* Slab_pad_check fixes things up after itself */
@@ -644,12 +637,13 @@ static int on_freelist(struct kmem_cache
 				set_freepointer(s, object, NULL);
 				break;
 			} else {
-				printk(KERN_ERR "SLUB: %s slab 0x%p "
-					"freepointer 0x%p corrupted.\n",
-					s->name, page, fp);
-				dump_stack();
+				slab_err(s, page, "Freepointer 0x%p corrupt",
+									fp);
 				page->freelist = NULL;
 				page->inuse = s->objects;
+				printk(KERN_ERR "@@@ SLUB %s: Freelist "
+					"cleared. Slab 0x%p\n",
+					s->name, page);
 				return 0;
 			}
 			break;
@@ -660,11 +654,12 @@ static int on_freelist(struct kmem_cache
 	}
 
 	if (page->inuse != s->objects - nr) {
-		printk(KERN_ERR "slab %s: page 0x%p wrong object count."
-			" counter is %d but counted were %d\n",
-			s->name, page, page->inuse,
-			s->objects - nr);
+		slab_err(s, page, "Wrong object count. Counter is %d but "
+			"counted were %d", s, page, page->inuse,
+							s->objects - nr);
 		page->inuse = s->objects - nr;
+		printk(KERN_ERR "@@@ SLUB %s: Object count adjusted. "
+			"Slab @0x%p\n", s->name, page);
 	}
 	return search == NULL;
 }
@@ -700,10 +695,7 @@ static int alloc_object_checks(struct km
 		goto bad;
 
 	if (object && !on_freelist(s, page, object)) {
-		printk(KERN_ERR "SLUB: %s Object 0x%p@0x%p "
-			"already allocated.\n",
-			s->name, object, page);
-		dump_stack();
+		slab_err(s, page, "Object 0x%p already allocated", object);
 		goto bad;
 	}
 
@@ -743,15 +735,12 @@ static int free_object_checks(struct kme
 		goto fail;
 
 	if (!check_valid_pointer(s, page, object)) {
-		printk(KERN_ERR "SLUB: %s slab 0x%p invalid "
-			"object pointer 0x%p\n",
-			s->name, page, object);
+		slab_err(s, page, "Invalid object pointer 0x%p", object);
 		goto fail;
 	}
 
 	if (on_freelist(s, page, object)) {
-		printk(KERN_ERR "SLUB: %s slab 0x%p object "
-			"0x%p already free.\n", s->name, page, object);
+		slab_err(s, page, "Object 0x%p already free", object);
 		goto fail;
 	}
 
@@ -760,24 +749,22 @@ static int free_object_checks(struct kme
 
 	if (unlikely(s != page->slab)) {
 		if (!PageSlab(page))
-			printk(KERN_ERR "slab_free %s size %d: attempt to"
-				"free object(0x%p) outside of slab.\n",
-				s->name, s->size, object);
+			slab_err(s, page, "Attempt to free object(0x%p) "
+				"outside of slab", object);
 		else
-		if (!page->slab)
+		if (!page->slab) {
 			printk(KERN_ERR
-				"slab_free : no slab(NULL) for object 0x%p.\n",
+				"SLUB <none>: no slab for object 0x%p.\n",
 						object);
+			dump_stack();
+		}
 		else
-			printk(KERN_ERR "slab_free %s(%d): object at 0x%p"
-				" belongs to slab %s(%d)\n",
-				s->name, s->size, object,
-				page->slab->name, page->slab->size);
+			slab_err(s, page, "object at 0x%p belongs "
+				"to slab %s", object, page->slab->name);
 		goto fail;
 	}
 	return 1;
 fail:
-	dump_stack();
 	printk(KERN_ERR "@@@ SLUB: %s slab 0x%p object at 0x%p not freed.\n",
 		s->name, page, object);
 	return 0;

--

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* [patch 7/8] SLUB printk cleanup: Fix up printks in the resiliency check
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterdays accepted patches clameter
                   ` (5 preceding siblings ...)
  2007-04-27 20:21 ` [patch 6/8] SLUB printk cleanup: Diagnostic functions clameter
@ 2007-04-27 20:21 ` clameter
  2007-04-27 20:21 ` [patch 8/8] SLUB printk cleanup: Slab validation printks clameter
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_printk_resilience --]
[-- Type: text/plain, Size: 2576 bytes --]

---
 mm/slub.c |   14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 10:36:34.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 10:37:42.000000000 -0700
@@ -2613,6 +2613,8 @@ __initcall(cpucache_init);
 #endif
 
 #ifdef SLUB_RESILIENCY_TEST
+static unsigned long validate_slab_cache(struct kmem_cache *s);
+
 static void resiliency_test(void)
 {
 	u8 *p;
@@ -2624,7 +2626,7 @@ static void resiliency_test(void)
 	p = kzalloc(16, GFP_KERNEL);
 	p[16] = 0x12;
 	printk(KERN_ERR "\n1. kmalloc-16: Clobber Redzone/next pointer"
-			" 0x12->%p\n\n", p + 16);
+			" 0x12->0x%p\n\n", p + 16);
 
 	validate_slab_cache(kmalloc_caches + 4);
 
@@ -2632,14 +2634,14 @@ static void resiliency_test(void)
 	p = kzalloc(32, GFP_KERNEL);
 	p[32 + sizeof(void *)] = 0x34;
 	printk(KERN_ERR "\n2. kmalloc-32: Clobber next pointer/next slab"
-		 	" 0x34 -> %p\n", p);
+		 	" 0x34 -> 0x%p\n", p);
 	printk(KERN_ERR "If allocated object is overwritten then not detectable\n\n");
 
 	validate_slab_cache(kmalloc_caches + 5);
 	p = kzalloc(64, GFP_KERNEL);
 	p += 64 + (get_cycles() & 0xff) * sizeof(void *);
 	*p = 0x56;
-	printk(KERN_ERR "\n3. kmalloc-64: corrupting random byte 0x56->%p\n",
+	printk(KERN_ERR "\n3. kmalloc-64: corrupting random byte 0x56->0x%p\n",
 									p);
 	printk(KERN_ERR "If allocated object is overwritten then not detectable\n\n");
 	validate_slab_cache(kmalloc_caches + 6);
@@ -2648,19 +2650,19 @@ static void resiliency_test(void)
 	p = kzalloc(128, GFP_KERNEL);
 	kfree(p);
 	*p = 0x78;
-	printk(KERN_ERR "1. kmalloc-128: Clobber first word 0x78->%p\n\n", p);
+	printk(KERN_ERR "1. kmalloc-128: Clobber first word 0x78->0x%p\n\n", p);
 	validate_slab_cache(kmalloc_caches + 7);
 
 	p = kzalloc(256, GFP_KERNEL);
 	kfree(p);
 	p[50] = 0x9a;
-	printk(KERN_ERR "\n2. kmalloc-256: Clobber 50th byte 0x9a->%p\n\n", p);
+	printk(KERN_ERR "\n2. kmalloc-256: Clobber 50th byte 0x9a->0x%p\n\n", p);
 	validate_slab_cache(kmalloc_caches + 8);
 
 	p = kzalloc(512, GFP_KERNEL);
 	kfree(p);
 	p[512] = 0xab;
-	printk(KERN_ERR "\n3. kmalloc-512: Clobber redzone 0xab->%p\n\n", p);
+	printk(KERN_ERR "\n3. kmalloc-512: Clobber redzone 0xab->0x%p\n\n", p);
 	validate_slab_cache(kmalloc_caches + 9);
 }
 #else

--



* [patch 8/8] SLUB printk cleanup: Slab validation printks
  2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterdays accepted patches clameter
                   ` (6 preceding siblings ...)
  2007-04-27 20:21 ` [patch 7/8] SLUB printk cleanup: Fix up printks in the resiliency check clameter
@ 2007-04-27 20:21 ` clameter
  7 siblings, 0 replies; 9+ messages in thread
From: clameter @ 2007-04-27 20:21 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

[-- Attachment #1: slub_printk_validate_slab --]
[-- Type: text/plain, Size: 2034 bytes --]

---
 mm/slub.c |   19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

Index: slub/mm/slub.c
===================================================================
--- slub.orig/mm/slub.c	2007-04-27 10:37:42.000000000 -0700
+++ slub/mm/slub.c	2007-04-27 10:38:47.000000000 -0700
@@ -2729,17 +2729,17 @@ static void validate_slab_slab(struct km
 		validate_slab(s, page);
 		slab_unlock(page);
 	} else
-		printk(KERN_INFO "SLUB: %s Skipped busy slab %p\n",
+		printk(KERN_INFO "SLUB %s: Skipped busy slab 0x%p\n",
 			s->name, page);
 
 	if (s->flags & DEBUG_DEFAULT_FLAGS) {
 		if (!PageError(page))
-			printk(KERN_ERR "SLUB: %s PageError not set "
-				"on slab %p\n", s->name, page);
+			printk(KERN_ERR "SLUB %s: PageError not set "
+				"on slab 0x%p\n", s->name, page);
 	} else {
 		if (PageError(page))
-			printk(KERN_ERR "SLUB: %s PageError set on "
-				"slab %p\n", s->name, page);
+			printk(KERN_ERR "SLUB %s: PageError set on "
+				"slab 0x%p\n", s->name, page);
 	}
 }
 
@@ -2756,8 +2756,8 @@ static int validate_slab_node(struct kme
 		count++;
 	}
 	if (count != n->nr_partial)
-		printk("SLUB: %s %ld partial slabs counted but counter=%ld\n",
-			s->name, count, n->nr_partial);
+		printk(KERN_ERR "SLUB %s: %ld partial slabs counted but "
+			"counter=%ld\n", s->name, count, n->nr_partial);
 
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
@@ -2767,8 +2767,9 @@ static int validate_slab_node(struct kme
 		count++;
 	}
 	if (count != atomic_long_read(&n->nr_slabs))
-		printk("SLUB: %s %ld slabs counted but counter=%ld\n",
-		s->name, count, atomic_long_read(&n->nr_slabs));
+		printk(KERN_ERR "SLUB %s: %ld slabs counted but "
+			"counter=%ld\n", s->name, count,
+			atomic_long_read(&n->nr_slabs));
 
 out:
 	spin_unlock_irqrestore(&n->list_lock, flags);

--




Thread overview: 9+ messages
2007-04-27 20:21 [patch 0/8] SLUB patches vs. 2.6.21-rc7-mm2 + yesterdays accepted patches clameter
2007-04-27 20:21 ` [patch 1/8] SLUB sysfs support: fix unique id generation clameter
2007-04-27 20:21 ` [patch 2/8] SLUB: Fixes to kmem_cache_shrink() clameter
2007-04-27 20:21 ` [patch 3/8] SLUB slabinfo: Remove hackname() clameter
2007-04-27 20:21 ` [patch 4/8] SLUB printk cleanup: object_err() clameter
2007-04-27 20:21 ` [patch 5/8] SLUB printk cleanup: add slab_err clameter
2007-04-27 20:21 ` [patch 6/8] SLUB printk cleanup: Diagnostic functions clameter
2007-04-27 20:21 ` [patch 7/8] SLUB printk cleanup: Fix up printks in the resiliency check clameter
2007-04-27 20:21 ` [patch 8/8] SLUB printk cleanup: Slab validation printks clameter
