* [PATCH] SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256
From: Aaro Koskinen @ 2009-08-27 15:38 UTC
To: mpm, penberg, cl, linux-mm; +Cc: Artem.Bityutskiy
If the minalign is 64 bytes, then the 96 byte cache should not be created
because it would conflict with the 128 byte cache.
If the minalign is 256 bytes, patching the size_index table should not
result in a buffer overrun.
Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
---
The patch is against v2.6.31-rc7.
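For illustration (not part of the patch): size_index[] in mm/slub.c is a
24-element table that maps allocation sizes 8..192 to kmalloc cache indices,
so a 256-byte minimum object size makes the old setup loop run off its end.
A minimal sketch of the overrun, assuming the 2.6.31 declarations:

    static s8 size_index[24] = { ... };  /* maps (size - 1) / 8 to a cache index */

    /*
     * Old loop: with KMALLOC_MIN_SIZE == 256 this iterates i = 8, 16, ..., 248
     * and writes size_index[24] .. size_index[30], past the 24-entry array.
     */
    for (i = 8; i < KMALLOC_MIN_SIZE; i += 8)
            size_index[(i - 1) / 8] = KMALLOC_SHIFT_LOW;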
include/linux/slub_def.h | 2 ++
mm/slub.c | 15 ++++++++++++---
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index c1c862b..ed291c8 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -154,8 +154,10 @@ static __always_inline int kmalloc_index(size_t size)
return KMALLOC_SHIFT_LOW;
#if KMALLOC_MIN_SIZE <= 64
+#if KMALLOC_MIN_SIZE <= 32
if (size > 64 && size <= 96)
return 1;
+#endif
if (size > 128 && size <= 192)
return 2;
#endif
diff --git a/mm/slub.c b/mm/slub.c
index b9f1491..3d32ebf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3156,10 +3156,12 @@ void __init kmem_cache_init(void)
slab_state = PARTIAL;
/* Caches that are not of the two-to-the-power-of size */
- if (KMALLOC_MIN_SIZE <= 64) {
+ if (KMALLOC_MIN_SIZE <= 32) {
create_kmalloc_cache(&kmalloc_caches[1],
"kmalloc-96", 96, GFP_NOWAIT);
caches++;
+ }
+ if (KMALLOC_MIN_SIZE <= 64) {
create_kmalloc_cache(&kmalloc_caches[2],
"kmalloc-192", 192, GFP_NOWAIT);
caches++;
@@ -3186,10 +3188,17 @@ void __init kmem_cache_init(void)
BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
(KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));
- for (i = 8; i < KMALLOC_MIN_SIZE; i += 8)
+ for (i = 8; i < min(KMALLOC_MIN_SIZE, 192 + 8); i += 8)
size_index[(i - 1) / 8] = KMALLOC_SHIFT_LOW;
- if (KMALLOC_MIN_SIZE == 128) {
+ if (KMALLOC_MIN_SIZE == 64) {
+ /*
+ * The 96 byte size cache is not used if the alignment
+ * is 64 byte.
+ */
+ for (i = 64 + 8; i <= 96; i += 8)
+ size_index[(i - 1) / 8] = 7;
+ } else if (KMALLOC_MIN_SIZE == 128) {
/*
* The 192 byte sized cache is not used if the alignment
* is 128 byte. Redirect kmalloc to use the 256 byte cache
--
1.5.4.3
* Re: [PATCH] SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256
From: Christoph Lameter @ 2009-08-27 15:56 UTC
To: Aaro Koskinen; +Cc: mpm, penberg, linux-mm, Artem.Bityutskiy
On Thu, 27 Aug 2009, Aaro Koskinen wrote:
> +++ b/include/linux/slub_def.h
> @@ -154,8 +154,10 @@ static __always_inline int kmalloc_index(size_t size)
> return KMALLOC_SHIFT_LOW;
>
> #if KMALLOC_MIN_SIZE <= 64
> +#if KMALLOC_MIN_SIZE <= 32
> if (size > 64 && size <= 96)
> return 1;
> +#endif
Use elif here to move the condition together with the action?
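One possible reading of that suggestion, sketched here purely for
illustration (it keeps the same behaviour for every KMALLOC_MIN_SIZE by
repeating the 192-byte check in the first branch):

    #if KMALLOC_MIN_SIZE <= 32
            if (size > 64 && size <= 96)
                    return 1;
            if (size > 128 && size <= 192)
                    return 2;
    #elif KMALLOC_MIN_SIZE <= 64
            if (size > 128 && size <= 192)
                    return 2;
    #endif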
> if (size > 128 && size <= 192)
> return 2;
> #endif
> diff --git a/mm/slub.c b/mm/slub.c
> index b9f1491..3d32ebf 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3156,10 +3156,12 @@ void __init kmem_cache_init(void)
> slab_state = PARTIAL;
>
> /* Caches that are not of the two-to-the-power-of size */
> - if (KMALLOC_MIN_SIZE <= 64) {
> + if (KMALLOC_MIN_SIZE <= 32) {
> create_kmalloc_cache(&kmalloc_caches[1],
> "kmalloc-96", 96, GFP_NOWAIT);
> caches++;
> + }
> + if (KMALLOC_MIN_SIZE <= 64) {
> create_kmalloc_cache(&kmalloc_caches[2],
> "kmalloc-192", 192, GFP_NOWAIT);
> caches++;
> @@ -3186,10 +3188,17 @@ void __init kmem_cache_init(void)
> BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
> (KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));
>
> - for (i = 8; i < KMALLOC_MIN_SIZE; i += 8)
> + for (i = 8; i < min(KMALLOC_MIN_SIZE, 192 + 8); i += 8)
> size_index[(i - 1) / 8] = KMALLOC_SHIFT_LOW;
192 + 8 is related to the # of elements in size_index.
Define a constant for that and express 192 + 8 as ((NR_SIZE_INDEX + 1) * 8)?
size_index[(i - 1) /8] appears frequently now. Can we put this into an
inline function or macro to make it more understandable?
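A minimal sketch of that idea; the names NR_SIZE_INDEX and size_index_elem()
are placeholders chosen here only for illustration:

    #define NR_SIZE_INDEX 24        /* number of elements in size_index[] */

    static inline int size_index_elem(size_t bytes)
    {
            return (bytes - 1) / 8;
    }

    ...
            for (i = 8; i < min(KMALLOC_MIN_SIZE, (NR_SIZE_INDEX + 1) * 8); i += 8)
                    size_index[size_index_elem(i)] = KMALLOC_SHIFT_LOW;

With NR_SIZE_INDEX == 24, (NR_SIZE_INDEX + 1) * 8 is 200, i.e. the same bound
as the 192 + 8 used in the patch.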
> - if (KMALLOC_MIN_SIZE == 128) {
> + if (KMALLOC_MIN_SIZE == 64) {
> + /*
> + * The 96 byte size cache is not used if the alignment
> + * is 64 byte.
> + */
> + for (i = 64 + 8; i <= 96; i += 8)
> + size_index[(i - 1) / 8] = 7;
> + } else if (KMALLOC_MIN_SIZE == 128) {
> /*
> * The 192 byte sized cache is not used if the alignment
> * is 128 byte. Redirect kmalloc to use the 256 byte cache
* Re: [PATCH] SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256
From: Artem Bityutskiy @ 2009-08-27 16:03 UTC
To: Christoph Lameter
Cc: Koskinen Aaro (Nokia-D/Helsinki), mpm, penberg, linux-mm
On 08/27/2009 06:56 PM, ext Christoph Lameter wrote:
> On Thu, 27 Aug 2009, Aaro Koskinen wrote:
>
>> +++ b/include/linux/slub_def.h
>> @@ -154,8 +154,10 @@ static __always_inline int kmalloc_index(size_t size)
>> return KMALLOC_SHIFT_LOW;
>>
>> #if KMALLOC_MIN_SIZE <= 64
>> +#if KMALLOC_MIN_SIZE <= 32
>> if (size > 64 && size <= 96)
>> return 1;
>> +#endif
>
> Use elif here to move the condition together with the action?
Just a related question. KMALLOC_MIN_SIZE sounds confusing. If this is
about alignment, why not call it KMALLOC_MIN_ALIGN instead?
--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
* Re: [PATCH] SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256
From: Christoph Lameter @ 2009-08-27 16:23 UTC
To: Artem Bityutskiy; +Cc: Koskinen Aaro (Nokia-D/Helsinki), mpm, penberg, linux-mm
On Thu, 27 Aug 2009, Artem Bityutskiy wrote:
> Just a related question. KMALLOC_MIN_SIZE sounds confusing. If this is
> about alignment, why not to call it KMALLOC_MIN_ALIGN instead?
KMALLOC_MIN_SIZE is the size of the smallest kmalloc slab.
ARCH_KMALLOC_MINALIGN is the minimum alignment required by the arch code.
KMALLOC_MIN_SIZE is set to ARCH_KMALLOC_MINALIGN if the alignment is
greater than 8 (see slub_def.h)
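For reference, the relevant definitions in include/linux/slub_def.h look
roughly like this in 2.6.31 (paraphrased; see the header for the exact form):

    #if defined(ARCH_KMALLOC_MINALIGN) && ARCH_KMALLOC_MINALIGN > 8
    /* The smallest kmalloc object must be at least as large as the arch alignment. */
    #define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
    #else
    #define KMALLOC_MIN_SIZE 8
    #endif

    #define KMALLOC_SHIFT_LOW ilog2(KMALLOC_MIN_SIZE)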