Date: Wed, 10 Oct 2018 09:58:44 +0200
From: Michal Hocko
Subject: Re: [PATCH] memblock: stop using implicit alignment to SMP_CACHE_BYTES
Message-ID: <20181010075844.GA5873@dhcp22.suse.cz>
References: <1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com>
In-Reply-To: <1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com>
To: Mike Rapoport
Cc: linux-mm@kvack.org, Andrew Morton, Catalin Marinas, Chris Zankel,
    Geert Uytterhoeven, Guan Xuetao, Ingo Molnar, Matt Turner,
    Michael Ellerman, Michal Simek, Paul Burton, Richard Weinberger,
    Russell King, Thomas Gleixner, Tony Luck, linux-alpha@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@linux-mips.org, linuxppc-dev@lists.ozlabs.org,
    linux-um@lists.infradead.org

On Fri 05-10-18 00:07:04, Mike Rapoport wrote:
> When memblock allocation APIs are called with align = 0, the alignment
> is implicitly set to SMP_CACHE_BYTES.

I would add something like
"
Implicit alignment is done deep in the memblock allocator and it can
come as a surprise. Not that such an alignment would be wrong even when
used incorrectly, but it is better to be explicit for the sake of
clarity and the principle of least surprise.
"

> Replace all such uses of memblock APIs with the 'align' parameter
> explicitly set to SMP_CACHE_BYTES and stop the implicit alignment
> assignment in the memblock internal allocation functions.
>
> For the case when memblock APIs are used via helper functions, e.g.
> iommu_arena_new_node() on Alpha, the helper functions were detected
> with Coccinelle's help and then manually examined and updated where
> appropriate.
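To make the implicit behaviour concrete for anybody skimming the thread,
here is a standalone model of what align = 0 used to mean. This is a
sketch in plain userspace C, not the kernel code: effective_align() is a
made-up name, and SMP_CACHE_BYTES is hardcoded to 64 only for the demo
(in the kernel it is per-architecture):

#include <stdio.h>

#define SMP_CACHE_BYTES 64      /* per-arch in the kernel; fixed here */

/* model of the fallback buried in memblock's internal allocation path */
static unsigned long effective_align(unsigned long align)
{
        if (!align)
                align = SMP_CACHE_BYTES;        /* the silent promotion */
        return align;
}

int main(void)
{
        /* a caller passing 0 gets cache-line alignment without asking */
        printf("align=0  -> %lu\n", effective_align(0));
        /* after the patch, callers spell the alignment out explicitly */
        printf("align=%d -> %lu\n", SMP_CACHE_BYTES,
               effective_align(SMP_CACHE_BYTES));
        return 0;
}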
>
> The direct memblock APIs users were updated using the semantic patch
> below:
>
> @@
> expression size, min_addr, max_addr, nid;
> @@
> (
> |
> - memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
> |
> - memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
> |
> - memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
> |
> - memblock_alloc(size, 0)
> + memblock_alloc(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_raw(size, 0)
> + memblock_alloc_raw(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_from(size, 0, min_addr)
> + memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
> |
> - memblock_alloc_nopanic(size, 0)
> + memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_low(size, 0)
> + memblock_alloc_low(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_low_nopanic(size, 0)
> + memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_from_nopanic(size, 0, min_addr)
> + memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
> |
> - memblock_alloc_node(size, 0, nid)
> + memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
> )
>
> Suggested-by: Michal Hocko
> Signed-off-by: Mike Rapoport

I do agree that this is an improvement. I would also add a WARN_ON_ONCE
on zero alignment to catch any leftovers. If we ever grow a user which
explicitly requires zero alignment (I would be surprised), we can
remove the warning.

Acked-by: Michal Hocko

--
Michal Hocko
SUSE Labs
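To sketch the WARN_ON_ONCE suggestion above: where exactly the check
would sit inside memblock is not decided here, so this is again a
standalone userspace model, with a crude stand-in for the kernel's
WARN_ON_ONCE() written as a GNU C statement expression (the kernel
macro is more elaborate, but it likewise returns the condition value,
which is what lets it sit directly inside the if):

#include <stdio.h>

#define SMP_CACHE_BYTES 64      /* fixed for the demo */

/* crude userspace stand-in for the kernel's WARN_ON_ONCE() */
#define WARN_ON_ONCE(cond) ({                                   \
        static int warned;                                      \
        int ret = !!(cond);                                     \
        if (ret && !warned) {                                   \
                warned = 1;                                     \
                fprintf(stderr, "WARNING: %s\n", #cond);        \
        }                                                       \
        ret;                                                    \
})

static unsigned long effective_align(unsigned long align)
{
        /* flag leftover align == 0 callers once, then fix them up */
        if (WARN_ON_ONCE(!align))
                align = SMP_CACHE_BYTES;
        return align;
}

int main(void)
{
        effective_align(0);     /* warns once */
        effective_align(0);     /* stays quiet */
        return 0;
}

The once-only behaviour matters at boot: a stray caller may allocate in
a loop, and a single warning with the offending condition is enough to
find and convert it.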