From: Heiko Carstens
To: Christoph Lameter
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org, Martin Schwidefsky
Date: Wed, 26 Jul 2006 10:50:28 +0200
Subject: [patch 1/2] slab: always consider caller mandated alignment
Message-ID: <20060726085028.GC9592@osiris.boeblingen.de.ibm.com>
In-Reply-To: <20060723162427.GA10553@osiris.ibm.com>
References: <20060722110601.GA9572@osiris.boeblingen.de.ibm.com>
 <20060722162607.GA10550@osiris.ibm.com>
 <20060723073500.GA10556@osiris.ibm.com>
 <20060723162427.GA10553@osiris.ibm.com>

In case of CONFIG_DEBUG_SLAB kmem_cache_create() creates caches with an
alignment smaller than ARCH_KMALLOC_MINALIGN. This breaks s390 (32 bit),
which needs an eight byte alignment. It also doesn't behave as described
in mm/slab.c:

 * Enforce a minimum alignment for the kmalloc caches.
 * Usually, the kmalloc caches are cache_line_size() aligned, except when
 * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
 * Some archs want to perform DMA into kmalloc caches and need a guaranteed
 * alignment larger than BYTES_PER_WORD. ARCH_KMALLOC_MINALIGN allows that.
 * Note that this flag disables some debug features.

For example the following might happen if kmem_cache_create() gets called
with size: 64, align: 8, and flags with SLAB_HWCACHE_ALIGN, SLAB_RED_ZONE
and SLAB_STORE_USER set. These are the steps as numbered in
kmem_cache_create(), where 5) is after the "if (flags & SLAB_RED_ZONE)"
statement:

1) align: 8   ralign: 64
2) align: 8   ralign: 64
3) align: 8   ralign: 64
4) align: 64  ralign: 64
5) align: 4   ralign: 64

Note that in this case the flags SLAB_RED_ZONE and SLAB_STORE_USER don't
get masked out in step 3), which causes a BYTES_PER_WORD alignment in
step 5) and thus breaks s390.

Cc: Christoph Lameter
Cc: Pekka Enberg
Signed-off-by: Heiko Carstens
---
 mm/slab.c | 3 +++
 1 file changed, 3 insertions(+)

Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2006-07-24 09:41:36.000000000 +0200
+++ linux-2.6/mm/slab.c	2006-07-26 09:55:54.000000000 +0200
@@ -2109,6 +2109,9 @@
 		if (ralign > BYTES_PER_WORD)
 			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 	}
+	if (align > BYTES_PER_WORD)
+		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
+
 	/*
 	 * 4) Store it. Note that the debug code below can reduce
 	 *    the alignment to BYTES_PER_WORD.
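
For illustration, here is a minimal user-space sketch of the align/ralign
steps walked through above. The constants, the cache line size of 64 and
the condensed control flow are assumptions made only for this example; it
is not the actual mm/slab.c code. It prints 64 with the added check in
place; commenting that check out reproduces the BYTES_PER_WORD fallback
described in step 5).

/*
 * Sketch of the align/ralign steps; all constants are assumptions,
 * not the real kernel values.
 */
#include <stdio.h>

#define BYTES_PER_WORD		4UL	/* assumed 32 bit pointer size */
#define ARCH_SLAB_MINALIGN	8UL	/* s390 needs 8 byte alignment */
#define SLAB_HWCACHE_ALIGN	0x1UL
#define SLAB_RED_ZONE		0x2UL
#define SLAB_STORE_USER		0x4UL

int main(void)
{
	unsigned long size = 64, align = 8;
	unsigned long flags = SLAB_HWCACHE_ALIGN | SLAB_RED_ZONE | SLAB_STORE_USER;
	unsigned long ralign = BYTES_PER_WORD;

	/* 1) arch recommendation: HWCACHE_ALIGN shrinks to fit the object */
	if (flags & SLAB_HWCACHE_ALIGN) {
		ralign = 64;			/* assumed cache_line_size() */
		while (size <= ralign / 2)
			ralign /= 2;
	}

	/* 2) arch mandated alignment: not taken here, ralign is already 64 */
	if (ralign < ARCH_SLAB_MINALIGN) {
		ralign = ARCH_SLAB_MINALIGN;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}

	/* 3) caller mandated alignment: not taken either, so the debug
	 *    flags stay set although align is larger than BYTES_PER_WORD */
	if (ralign < align) {
		ralign = align;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}

	/* the hunk above adds this unconditional check */
	if (align > BYTES_PER_WORD)
		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);

	/* 4) store it */
	align = ralign;

	/* 5) debug code after "if (flags & SLAB_RED_ZONE)": red zoning and
	 *    user storing only work with BYTES_PER_WORD aligned caches */
	if (flags & (SLAB_RED_ZONE | SLAB_STORE_USER))
		align = BYTES_PER_WORD;

	/* prints 64 with the check, 4 (too small for s390) without it */
	printf("effective align: %lu\n", align);
	return 0;
}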