* [SLUB 1/5] Fix validation
@ 2007-04-13 1:36 Christoph Lameter
From: Christoph Lameter @ 2007-04-13 1:36 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Christoph Lameter
Some parts of object validation never occur because on_freelist
returns the wrong exit code for a NULL object.
Fix that.
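The intent of the fall-through return value can be sketched standalone (a simplified userspace sketch with hypothetical types, not the kernel code): a NULL search means we are only validating the freelist, so walking off the end is a success, not a failure.

```c
#include <stddef.h>

struct object {
	struct object *next;
};

/*
 * Walk a freelist. If "search" is NULL we are only validating the
 * chain, so reaching the end is a success. If "search" is non-NULL
 * the return value says whether the object was found on the list.
 */
static int on_freelist(struct object *freelist, struct object *search)
{
	struct object *p;

	for (p = freelist; p; p = p->next)
		if (p == search)
			return 1;	/* object is on the freelist */
	/*
	 * The buggy version returned 0 unconditionally here, so the
	 * validation mode (search == NULL) looked like a failure.
	 */
	return search == NULL;
}
```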
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 15:06:54.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 15:07:23.000000000 -0700
@@ -588,7 +588,7 @@ static int on_freelist(struct kmem_cache
s->objects - nr);
page->inuse = s->objects - nr;
}
- return 0;
+ return search == NULL;
}
/*
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [SLUB 2/5] Add after object padding
From: Christoph Lameter @ 2007-04-13 1:36 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Christoph Lameter
Without padding there is the danger that we do not notice writes
before the allocated object. So increase the slab size by another
word in the debug case. That will force the creation of some fill
space which SLUB will continue to check.
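The idea can be sketched in userspace with hypothetical helpers: reserve one extra word of fill bytes in front of the object and verify it later; a damaged fill byte indicates a write before the object. (In the kernel the padding word is added by calculate_sizes() and checked by the existing debug checks.)

```c
#include <stdlib.h>
#include <string.h>

#define POISON_INUSE 0x5a	/* fill byte, as in the kernel */
#define PAD 8			/* one word on 64-bit */

/* Allocate "size" bytes preceded by a padding word of fill bytes. */
static void *debug_alloc(size_t size)
{
	unsigned char *p = malloc(PAD + size);

	if (!p)
		return NULL;
	memset(p, POISON_INUSE, PAD);
	return p + PAD;
}

/* Return 1 if the padding in front of the object is still intact. */
static int debug_check(void *object)
{
	unsigned char *pad = (unsigned char *)object - PAD;
	size_t i;

	for (i = 0; i < PAD; i++)
		if (pad[i] != POISON_INUSE)
			return 0;	/* write before the object */
	return 1;
}
```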
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 16:44:13.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 16:45:18.000000000 -0700
@@ -484,7 +484,7 @@ static int check_object(struct kmem_cach
if (s->flags & SLAB_POISON) {
if (!active && (s->flags & __OBJECT_POISON) &&
(!check_bytes(p, POISON_FREE, s->objsize - 1) ||
- p[s->objsize -1] != POISON_END)) {
+ p[s->objsize - 1] != POISON_END)) {
object_err(s, page, p, "Poison check failed");
return 0;
}
@@ -1623,6 +1623,15 @@ static int calculate_sizes(struct kmem_c
*/
size += 2 * sizeof(struct track);
+ if (flags & DEBUG_DEFAULT_FLAGS)
+ /*
+ * Add some empty padding so that we can catch
+ * overwrites from earlier objects rather than let
+ * tracking information or the free pointer be
+ * corrupted if an user writes before the start
+ * of the object.
+ */
+ size += sizeof(void *);
/*
* Determine the alignment based on various parameters that the
* user specified (this is unecessarily complex due to the attempt
--
* [SLUB 3/5] Remove object activities out of checking functions
From: Christoph Lameter @ 2007-04-13 1:36 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Christoph Lameter
Make sure that the check functions really only check things and do
not perform other activities. Extract the tracing and object
initialization out of the two check functions and place them in
slab_alloc and slab_free.
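The separation can be sketched as follows (a simplified userspace sketch with hypothetical names, not the kernel code): the check function returns a verdict only, while the caller performs the activities — tracing and seeding the object.

```c
#include <stdio.h>
#include <string.h>

#define POISON_FREE  0x6b	/* fill byte of a free object */
#define POISON_INUSE 0x5a	/* fill byte of an allocated object */

/* A check function only reports a verdict and has no side effects. */
static int object_ok(const unsigned char *object, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++)
		if (object[i] != POISON_FREE)
			return 0;
	return 1;
}

/*
 * The caller performs the activities: tracing and re-initializing
 * the object, as slab_alloc does after the checks in the patch.
 */
static int alloc_object(unsigned char *object, size_t size, int trace)
{
	if (!object_ok(object, size))
		return 0;
	if (trace)
		printf("TRACE alloc %p\n", (void *)object);
	memset(object, POISON_INUSE, size);
	return 1;
}
```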
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 12:29:22.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 13:09:01.000000000 -0700
@@ -536,7 +536,7 @@ static int check_slab(struct kmem_cache
return 0;
}
if (page->inuse > s->objects) {
- printk(KERN_ERR "SLUB: %s Inuse %u > max %u in slab "
+ printk(KERN_ERR "SLUB: %s inuse %u > max %u in slab "
"page @0x%p flags=%lx mapping=0x%p count=%d\n",
s->name, page->inuse, s->objects, page, page->flags,
page->mapping, page_count(page));
@@ -635,12 +635,13 @@ static int alloc_object_checks(struct km
printk(KERN_ERR "SLUB: %s Object 0x%p@0x%p "
"already allocated.\n",
s->name, object, page);
- goto dump;
+ dump_stack();
+ goto bad;
}
if (!check_valid_pointer(s, page, object)) {
object_err(s, page, object, "Freelist Pointer check fails");
- goto dump;
+ goto bad;
}
if (!object)
@@ -648,17 +649,8 @@ static int alloc_object_checks(struct km
if (!check_object(s, page, object, 0))
goto bad;
- init_object(s, object, 1);
- if (s->flags & SLAB_TRACE) {
- printk(KERN_INFO "TRACE %s alloc 0x%p inuse=%d fp=0x%p\n",
- s->name, object, page->inuse,
- page->freelist);
- dump_stack();
- }
return 1;
-dump:
- dump_stack();
bad:
/* Mark slab full */
page->inuse = s->objects;
@@ -699,20 +691,12 @@ static int free_object_checks(struct kme
"slab_free : no slab(NULL) for object 0x%p.\n",
object);
else
- printk(KERN_ERR "slab_free %s(%d): object at 0x%p"
+ printk(KERN_ERR "slab_free %s(%d): object at 0x%p"
" belongs to slab %s(%d)\n",
s->name, s->size, object,
page->slab->name, page->slab->size);
goto fail;
}
- if (s->flags & SLAB_TRACE) {
- printk(KERN_INFO "TRACE %s free 0x%p inuse=%d fp=0x%p\n",
- s->name, object, page->inuse,
- page->freelist);
- print_section("Object", object, s->objsize);
- dump_stack();
- }
- init_object(s, object, 0);
return 1;
fail:
dump_stack();
@@ -1241,6 +1225,13 @@ debug:
goto another_slab;
if (s->flags & SLAB_STORE_USER)
set_track(s, object, TRACK_ALLOC, addr);
+ if (s->flags & SLAB_TRACE) {
+ printk(KERN_INFO "TRACE %s alloc 0x%p inuse=%d fp=0x%p\n",
+ s->name, object, page->inuse,
+ page->freelist);
+ dump_stack();
+ }
+ init_object(s, object, 1);
goto have_object;
}
@@ -1323,6 +1314,14 @@ debug:
remove_full(s, page);
if (s->flags & SLAB_STORE_USER)
set_track(s, x, TRACK_FREE, addr);
+ if (s->flags & SLAB_TRACE) {
+ printk(KERN_INFO "TRACE %s free 0x%p inuse=%d fp=0x%p\n",
+ s->name, object, page->inuse,
+ page->freelist);
+ print_section("Object", (void *)object, s->objsize);
+ dump_stack();
+ }
+ init_object(s, object, 0);
goto checks_ok;
}
--
* [SLUB 4/5] Resiliency fixups
From: Christoph Lameter @ 2007-04-13 1:36 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Christoph Lameter
Do more fixups when we detect problems in order to potentially heal
them so that the system can continue. This also avoids multiple
reports about the same corruption.
Add messages describing what SLUB does to fix things up. These all
begin with @@@.
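The healing pattern can be exercised standalone: report the corrupted range once, then rewrite the expected byte pattern so the next check does not trip over the same corruption again. A simplified userspace version of the patch's restore_bytes() (the kmem_cache argument dropped, printk replaced with fprintf):

```c
#include <stdio.h>
#include <string.h>

/*
 * Report a corrupted range once and restore the expected byte
 * pattern so the same corruption is not reported again on the
 * next check.
 */
static void restore_bytes(const char *message, unsigned char data,
			  unsigned char *from, unsigned char *to)
{
	fprintf(stderr, "@@@ Restoring %s (0x%x) from %p-%p\n",
		message, data, (void *)from, (void *)(to - 1));
	memset(from, data, (size_t)(to - from));
}
```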
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 16:47:18.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 18:17:23.000000000 -0700
@@ -190,8 +190,6 @@ static void print_section(char *text, u8
int newline = 1;
char ascii[17];
- if (length > 128)
- length = 128;
ascii[16] = 0;
for (i = 0; i < length; i++) {
@@ -331,13 +329,13 @@ static void object_err(struct kmem_cache
{
u8 *addr = page_address(page);
- printk(KERN_ERR "*** SLUB: %s in %s@0x%p Slab 0x%p\n",
+ printk(KERN_ERR "*** SLUB: %s in %s@0x%p slab 0x%p\n",
reason, s->name, object, page);
printk(KERN_ERR " offset=%tu flags=0x%04lx inuse=%u freelist=0x%p\n",
object - addr, page->flags, page->inuse, page->freelist);
if (object > addr + 16)
print_section("Bytes b4", object - 16, 16);
- print_section("Object", object, s->objsize);
+ print_section("Object", object, min(s->objsize, 128));
print_trailer(s, object);
dump_stack();
}
@@ -416,6 +414,14 @@ static int check_valid_pointer(struct km
* may be used with merged slabcaches.
*/
+static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
+ void *from, void *to)
+{
+ printk(KERN_ERR "@@@ SLUB: %s Restoring %s (0x%x) from 0x%p-0x%p\n",
+ s->name, message, data, from, to - 1);
+ memset(from, data, to - from);
+}
+
static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p)
{
unsigned long off = s->inuse; /* The end of info */
@@ -435,6 +441,11 @@ static int check_pad_bytes(struct kmem_c
return 1;
object_err(s, page, p, "Object padding check fails");
+
+ /*
+ * Restore padding
+ */
+ restore_bytes(s, "object padding", POISON_INUSE, p + off, p + s->size);
return 0;
}
@@ -455,7 +466,9 @@ static int slab_pad_check(struct kmem_ca
if (!check_bytes(p + length, POISON_INUSE, remainder)) {
printk(KERN_ERR "SLUB: %s slab 0x%p: Padding fails check\n",
s->name, p);
- print_section("Slab Pad", p + length, remainder);
+ dump_stack();
+ restore_bytes(s, "slab padding", POISON_INUSE, p + length,
+ p + length + remainder);
return 0;
}
return 1;
@@ -468,28 +481,48 @@ static int check_object(struct kmem_cach
u8 *endobject = object + s->objsize;
if (s->flags & SLAB_RED_ZONE) {
- if (!check_bytes(endobject,
- active ? SLUB_RED_ACTIVE : SLUB_RED_INACTIVE,
- s->inuse - s->objsize)) {
- object_err(s, page, object,
- active ? "Redzone Active check fails" :
- "Redzone Inactive check fails");
- return 0;
+ unsigned int red =
+ active ? SLUB_RED_ACTIVE : SLUB_RED_INACTIVE;
+
+ if (!check_bytes(endobject, red, s->inuse - s->objsize)) {
+ object_err(s, page, object,
+ active ? "Redzone Active" : "Redzone Inactive");
+ restore_bytes(s, "redzone", red,
+ endobject, object + s->inuse);
+ return 0;
}
- } else if ((s->flags & SLAB_POISON) && s->objsize < s->inuse &&
+ } else {
+ if ((s->flags & SLAB_POISON) && s->objsize < s->inuse &&
!check_bytes(endobject, POISON_INUSE,
- s->inuse - s->objsize))
+ s->inuse - s->objsize)) {
object_err(s, page, p, "Alignment padding check fails");
+ /*
+ * Fix it so that there will not be another report.
+ *
+ * Hmmm... We may be corrupting an object that now expects
+ * to be longer than allowed.
+ */
+ restore_bytes(s, "alignment padding", POISON_INUSE,
+ endobject, object + s->inuse);
+ }
+ }
if (s->flags & SLAB_POISON) {
if (!active && (s->flags & __OBJECT_POISON) &&
(!check_bytes(p, POISON_FREE, s->objsize - 1) ||
p[s->objsize - 1] != POISON_END)) {
+
object_err(s, page, p, "Poison check failed");
+ restore_bytes(s, "Poison", POISON_FREE,
+ p, p + s->objsize -1);
+ restore_bytes(s, "Poison", POISON_END,
+ p + s->objsize - 1, p + s->objsize);
return 0;
}
- if (!check_pad_bytes(s, page, p))
- return 0;
+ /*
+ * check_pad_bytes cleans up on its own.
+ */
+ check_pad_bytes(s, page, p);
}
if (!s->offset && active)
@@ -503,9 +536,10 @@ static int check_object(struct kmem_cach
if (!check_valid_pointer(s, page, get_freepointer(s, p))) {
object_err(s, page, p, "Freepointer corrupt");
/*
- * No choice but to zap it. This may cause
- * another error because the object count
- * is now wrong.
+ * No choice but to zap it and thus loose the remainder
+ * of the free objects in this slab. May cause
+ * another error because the object count maybe
+ * wrong now.
*/
set_freepointer(s, p, NULL);
return 0;
@@ -532,7 +566,8 @@ static int check_slab(struct kmem_cache
page,
page->flags,
page->mapping,
- page_count(page));
+ page_count(page));\
+ dump_stack();
return 0;
}
if (page->inuse > s->objects) {
@@ -540,9 +575,12 @@ static int check_slab(struct kmem_cache
"page @0x%p flags=%lx mapping=0x%p count=%d\n",
s->name, page->inuse, s->objects, page, page->flags,
page->mapping, page_count(page));
+ dump_stack();
return 0;
}
- return slab_pad_check(s, page);
+ /* Slab_pad_check fixes things up after itself */
+ slab_pad_check(s, page);
+ return 1;
}
/*
@@ -652,9 +690,19 @@ static int alloc_object_checks(struct km
return 1;
bad:
- /* Mark slab full */
- page->inuse = s->objects;
- page->freelist = NULL;
+ if (PageSlab(page)) {
+ /*
+ * If this is a slab page then lets do the best we can
+ * to avoid issues in the future. Marking all objects
+ * as used avoids touching the remainder.
+ */
+ printk(KERN_ERR "@@@ SLUB: %s slab 0x%p. Marking all objects used.\n",
+ s->name, page);
+ page->inuse = s->objects;
+ page->freelist = NULL;
+ /* Fix up fields that may be corrupted */
+ page->offset = s->offset / sizeof(void *);
+ }
return 0;
}
@@ -700,6 +748,8 @@ static int free_object_checks(struct kme
return 1;
fail:
dump_stack();
+ printk(KERN_ERR "@@@ SLUB: %s slab 0x%p object at 0x%p not freed.\n",
+ s->name, page, object);
return 0;
}
@@ -1574,9 +1624,9 @@ static int calculate_sizes(struct kmem_c
*/
if ((flags & SLAB_POISON) && !(flags & SLAB_DESTROY_BY_RCU) &&
!s->ctor && !s->dtor)
- flags |= __OBJECT_POISON;
+ s->flags |= __OBJECT_POISON;
else
- flags &= ~__OBJECT_POISON;
+ s->flags &= ~__OBJECT_POISON;
/*
* Round up object size to the next word boundary. We can only
--
* [SLUB 5/5] Resiliency test
From: Christoph Lameter @ 2007-04-13 1:36 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Christoph Lameter
Add a test that can be performed on bootup to check recoverability
from slab corruption.
Note that two of the tests are potentially dangerous, so they are
off by default.
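Test A.1 above can be mimicked in userspace with hypothetical helpers: place a marker byte behind the allocation and verify it after clobbering the byte just past the object. (In the kernel the equivalent verification runs through validate_slab_cache().)

```c
#include <stdlib.h>

#define REDZONE 0xbb	/* marker byte behind the object */

/* Allocate "size" zeroed bytes followed by one redzone byte. */
static unsigned char *red_alloc(size_t size)
{
	unsigned char *p = calloc(1, size + 1);

	if (p)
		p[size] = REDZONE;
	return p;
}

/* Return 1 if the redzone behind the object is intact. */
static int red_check(unsigned char *p, size_t size)
{
	return p[size] == REDZONE;
}
```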
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 18:24:44.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 18:29:16.000000000 -0700
@@ -109,6 +109,9 @@
* - Variable sizing of the per node arrays
*/
+/* Enable to test recovery from slab corruption on boot */
+#undef SLUB_RESILIENCY_TEST
+
/*
* Flags from the regular SLAB that SLUB does not support:
*/
@@ -2591,6 +2594,61 @@ static unsigned long validate_slab_cache
return count;
}
+#ifdef SLUB_RESILIENCY_TEST
+static void resiliency_test(void)
+{
+ u8 *p;
+
+ printk(KERN_ERR "SLUB resiliency testing\n");
+ printk(KERN_ERR "-----------------------\n");
+ printk(KERN_ERR "A. Corruption after allocation\n");
+
+ p = kzalloc(16, GFP_KERNEL);
+ p[16] = 0x12;
+ printk(KERN_ERR "\n1. kmalloc-16: Clobber Redzone/next pointer"
+ " 0x12->%p\n\n", p + 16);
+
+ validate_slab_cache(kmalloc_caches + 4);
+
+ /* Hmmm... The next two are dangerous */
+ p = kzalloc(32, GFP_KERNEL);
+ p[32 + sizeof(void *)] = 0x34;
+ printk(KERN_ERR "\n2. kmalloc-32: Clobber next pointer/next slab"
+ " 0x34 -> %p\n", p);
+ printk(KERN_ERR "If allocated object is overwritten then not detectable\n\n");
+
+ validate_slab_cache(kmalloc_caches + 5);
+ p = kzalloc(64, GFP_KERNEL);
+ p += 64 + (get_cycles() & 0xff) * sizeof(void *);
+ *p = 0x56;
+ printk(KERN_ERR "\n3. kmalloc-64: corrupting random byte 0x56->%p\n",
+ p);
+ printk(KERN_ERR "If allocated object is overwritten then not detectable\n\n");
+ validate_slab_cache(kmalloc_caches + 6);
+
+ printk(KERN_ERR "\nB. Corruption after free\n");
+ p = kzalloc(128, GFP_KERNEL);
+ kfree(p);
+ *p = 0x78;
+ printk(KERN_ERR "1. kmalloc-128: Clobber first word 0x78->%p\n\n", p);
+ validate_slab_cache(kmalloc_caches + 7);
+
+ p = kzalloc(256, GFP_KERNEL);
+ kfree(p);
+ p[50] = 0x9a;
+ printk(KERN_ERR "\n2. kmalloc-256: Clobber 50th byte 0x9a->%p\n\n", p);
+ validate_slab_cache(kmalloc_caches + 8);
+
+ p = kzalloc(512, GFP_KERNEL);
+ kfree(p);
+ p[512] = 0xab;
+ printk(KERN_ERR "\n3. kmalloc-512: Clobber redzone 0xab->%p\n\n", p);
+ validate_slab_cache(kmalloc_caches + 9);
+}
+#else
+static void resiliency_test(void) {};
+#endif
+
/*
* Generate lists of locations where slabcache objects are allocated
* and freed.
@@ -3317,6 +3375,7 @@ int __init slab_sysfs_init(void)
kfree(al);
}
+ resiliency_test();
return 0;
}
--
* Re: [SLUB 4/5] Resiliency fixups
From: Christoph Lameter @ 2007-04-13 1:58 UTC (permalink / raw)
To: akpm; +Cc: linux-mm
On Thu, 12 Apr 2007, Christoph Lameter wrote:
> @@ -532,7 +566,8 @@ static int check_slab(struct kmem_cache
> page,
> page->flags,
> page->mapping,
> - page_count(page));
> + page_count(page));\
> + dump_stack();
Eek.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Index: linux-2.6.21-rc6/mm/slub.c
===================================================================
--- linux-2.6.21-rc6.orig/mm/slub.c 2007-04-12 18:29:31.000000000 -0700
+++ linux-2.6.21-rc6/mm/slub.c 2007-04-12 18:57:41.000000000 -0700
@@ -569,7 +569,7 @@ static int check_slab(struct kmem_cache
page,
page->flags,
page->mapping,
- page_count(page));\
+ page_count(page));
dump_stack();
return 0;
}
--