* [PATCH] SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256
@ 2009-08-27 15:38 Aaro Koskinen
From: Aaro Koskinen @ 2009-08-27 15:38 UTC (permalink / raw)
  To: mpm, penberg, cl, linux-mm; +Cc: Artem.Bityutskiy

If ARCH_KMALLOC_MINALIGN is 64 bytes, the 96 byte cache should not be
created, because it would conflict with the 128 byte cache (with 64 byte
alignment a 96 byte object is padded to 128 bytes anyway).

If ARCH_KMALLOC_MINALIGN is 256 bytes, patching the size_index table in
kmem_cache_init() should not result in a buffer overrun.

Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
---

The patch is against v2.6.31-rc7.
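
To illustrate the first case: in this SLUB version, kmalloc() sizes up
to 192 bytes are mapped to a cache index through the size_index[] table
(index 1 is the 96 byte cache, index 2 the 192 byte cache, index n the
2^n byte cache otherwise), and larger sizes use fls(size - 1). Below is
a simplified user-space sketch of that lookup - not the kernel code
itself - showing the redirect that the new KMALLOC_MIN_SIZE == 64
branch applies:

/*
 * Simplified user-space model of SLUB's small-size lookup (a sketch,
 * not the kernel code); the table values mirror size_index[] in
 * mm/slub.c.
 */
#include <stdio.h>

static signed char size_index[24] = {
	3, 4, 5, 5, 6, 6, 6, 6,		/*   8 ..  64 */
	1, 1, 1, 1, 7, 7, 7, 7,		/*  72 .. 128 */
	2, 2, 2, 2, 2, 2, 2, 2,		/* 136 .. 192 */
};

static int cache_index(size_t size)
{
	int index = 0;

	if (size <= 192)
		return size_index[(size - 1) / 8];

	/* larger sizes use the power-of-two caches, like fls(size - 1) */
	while ((1UL << index) < size)
		index++;
	return index;
}

int main(void)
{
	int i;

	/*
	 * With KMALLOC_MIN_SIZE == 64 the 96 byte cache is not created,
	 * so the 72..96 byte entries are redirected to index 7 (the
	 * 128 byte cache), as the patched kmem_cache_init() does.
	 */
	for (i = 64 + 8; i <= 96; i += 8)
		size_index[(i - 1) / 8] = 7;

	printf("80 bytes  -> index %d\n", cache_index(80));	/* 7, was 1 */
	printf("160 bytes -> index %d\n", cache_index(160));	/* 2 */
	return 0;
}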

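And for the second case: size_index[] has only 24 entries (covering
sizes 8..192), so with KMALLOC_MIN_SIZE == 256 the old loop
"for (i = 8; i < KMALLOC_MIN_SIZE; i += 8)" writes entries (i - 1) / 8
up to index 30, seven past the end of the table; clamping the bound to
min(KMALLOC_MIN_SIZE, 192 + 8) stops at index 23. A small worked check
of that arithmetic (again just a user-space sketch):

#include <assert.h>
#include <stdio.h>

#define SIZE_INDEX_ENTRIES	24	/* size_index[] covers sizes 8..192 */

int main(void)
{
	int kmalloc_min_size = 256;	/* ARCH_KMALLOC_MINALIGN == 256 */
	int clamp, i, last_old = 0, last_new = 0;

	/* old bound: i runs 8, 16, ..., 248 */
	for (i = 8; i < kmalloc_min_size; i += 8)
		last_old = (i - 1) / 8;

	/* patched bound: min(KMALLOC_MIN_SIZE, 192 + 8) */
	clamp = kmalloc_min_size < 192 + 8 ? kmalloc_min_size : 192 + 8;
	for (i = 8; i < clamp; i += 8)
		last_new = (i - 1) / 8;

	printf("old last index %d, new last index %d, table has %d entries\n",
	       last_old, last_new, SIZE_INDEX_ENTRIES);	/* 30, 23, 24 */
	assert(last_new < SIZE_INDEX_ENTRIES);
	return 0;
}
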
 include/linux/slub_def.h |    2 ++
 mm/slub.c                |   15 ++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c1c862b..ed291c8 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -154,8 +154,10 @@ static __always_inline int kmalloc_index(size_t size)
 		return KMALLOC_SHIFT_LOW;
 
 #if KMALLOC_MIN_SIZE <= 64
+#if KMALLOC_MIN_SIZE <= 32
 	if (size > 64 && size <= 96)
 		return 1;
+#endif
 	if (size > 128 && size <= 192)
 		return 2;
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index b9f1491..3d32ebf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3156,10 +3156,12 @@ void __init kmem_cache_init(void)
 	slab_state = PARTIAL;
 
 	/* Caches that are not of the two-to-the-power-of size */
-	if (KMALLOC_MIN_SIZE <= 64) {
+	if (KMALLOC_MIN_SIZE <= 32) {
 		create_kmalloc_cache(&kmalloc_caches[1],
 				"kmalloc-96", 96, GFP_NOWAIT);
 		caches++;
+	}
+	if (KMALLOC_MIN_SIZE <= 64) {
 		create_kmalloc_cache(&kmalloc_caches[2],
 				"kmalloc-192", 192, GFP_NOWAIT);
 		caches++;
@@ -3186,10 +3188,17 @@ void __init kmem_cache_init(void)
 	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
 		(KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));
 
-	for (i = 8; i < KMALLOC_MIN_SIZE; i += 8)
+	for (i = 8; i < min(KMALLOC_MIN_SIZE, 192 + 8); i += 8)
 		size_index[(i - 1) / 8] = KMALLOC_SHIFT_LOW;
 
-	if (KMALLOC_MIN_SIZE == 128) {
+	if (KMALLOC_MIN_SIZE == 64) {
+		/*
+		 * The 96 byte size cache is not used if the alignment
+		 * is 64 byte.
+		 */
+		for (i = 64 + 8; i <= 96; i += 8)
+			size_index[(i - 1) / 8] = 7;
+	} else if (KMALLOC_MIN_SIZE == 128) {
 		/*
 		 * The 192 byte sized cache is not used if the alignment
 		 * is 128 byte. Redirect kmalloc to use the 256 byte cache
-- 
1.5.4.3

