From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 5 Nov 2021 21:49:01 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Qian Cai <quic_qiancai@quicinc.com>
Cc: Catalin Marinas, Will Deacon, Andrew Morton, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Russell King,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] arm64: Track no early_pgtable_alloc() for kmemleak
References: <20211105150509.7826-1-quic_qiancai@quicinc.com>
In-Reply-To: <20211105150509.7826-1-quic_qiancai@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Nov 05, 2021 at 11:05:09AM -0400, Qian Cai wrote:
> After switching the page size from 64KB to 4KB on several arm64 servers
> here, kmemleak starts to run out of its early memory pool due to the
> huge number of early_pgtable_alloc() calls:
> 
>   kmemleak_alloc_phys()
>   memblock_alloc_range_nid()
>   memblock_phys_alloc_range()
>   early_pgtable_alloc()
>   init_pmd()
>   alloc_init_pud()
>   __create_pgd_mapping()
>   __map_memblock()
>   paging_init()
>   setup_arch()
>   start_kernel()
> 
> Increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4 times
> is still not enough for a server with 200GB+ of memory. There is little
> point in checking those early page tables for memory leaks, and those
> early memory mappings should not reference other memory. Hence there
> are no kmemleak false positives, and we can safely skip tracking those
> early allocations in kmemleak, as commit fed84c785270 ("mm/memblock.c:
> skip kmemleak for kasan_init()") did, without needing to introduce
> complications such as automatically scaling the value depending on the
> runtime memory size. After this patch, the default value of
> DEBUG_KMEMLEAK_MEM_POOL_SIZE is sufficient again.
> 
> Signed-off-by: Qian Cai <quic_qiancai@quicinc.com>

Reviewed-by: Mike Rapoport <rppt@kernel.org>

> ---
> v2:
> Rename MEMBLOCK_ALLOC_KASAN to MEMBLOCK_ALLOC_NOLEAKTRACE to handle
> such situations in general.
> 
>  arch/arm/mm/kasan_init.c   | 2 +-
>  arch/arm64/mm/kasan_init.c | 5 +++--
>  arch/arm64/mm/mmu.c        | 3 ++-
>  include/linux/memblock.h   | 2 +-
>  mm/memblock.c              | 9 ++++++---
>  5 files changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> index 4b1619584b23..5ad0d6c56d56 100644
> --- a/arch/arm/mm/kasan_init.c
> +++ b/arch/arm/mm/kasan_init.c
> @@ -32,7 +32,7 @@ pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
>  static __init void *kasan_alloc_block(size_t size)
>  {
>  	return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> -				      MEMBLOCK_ALLOC_KASAN, NUMA_NO_NODE);
> +				      MEMBLOCK_ALLOC_NOLEAKTRACE, NUMA_NO_NODE);
>  }
>  
>  static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 6f5a6fe8edd7..c12cd700598f 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -36,7 +36,7 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
>  {
>  	void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
>  					 __pa(MAX_DMA_ADDRESS),
> -					 MEMBLOCK_ALLOC_KASAN, node);
> +					 MEMBLOCK_ALLOC_NOLEAKTRACE, node);
>  	if (!p)
>  		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
>  		      __func__, PAGE_SIZE, PAGE_SIZE, node,
> @@ -49,7 +49,8 @@ static phys_addr_t __init kasan_alloc_raw_page(int node)
>  {
>  	void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
>  					     __pa(MAX_DMA_ADDRESS),
> -					     MEMBLOCK_ALLOC_KASAN, node);
> +					     MEMBLOCK_ALLOC_NOLEAKTRACE,
> +					     node);
>  	if (!p)
>  		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
>  		      __func__, PAGE_SIZE, PAGE_SIZE, node,
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d77bf06d6a6d..acfae9b41cc8 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -96,7 +96,8 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
>  	phys_addr_t phys;
>  	void *ptr;
>  
> -	phys = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
> +	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
> +					 MEMBLOCK_ALLOC_NOLEAKTRACE);
>  	if (!phys)
>  		panic("Failed to allocate page table page\n");
>  
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 7df557b16c1e..8adcf1fa8096 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -389,7 +389,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
>  /* Flags for memblock allocation APIs */
>  #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
>  #define MEMBLOCK_ALLOC_ACCESSIBLE	0
> -#define MEMBLOCK_ALLOC_KASAN		1
> +#define MEMBLOCK_ALLOC_NOLEAKTRACE	1
>  
>  /* We are using top down, so it is safe to use 0 here */
>  #define MEMBLOCK_LOW_LIMIT 0
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 659bf0ffb086..1018e50566f3 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -287,7 +287,7 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
>  {
>  	/* pump up @end */
>  	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
> -	    end == MEMBLOCK_ALLOC_KASAN)
> +	    end == MEMBLOCK_ALLOC_NOLEAKTRACE)
>  		end = memblock.current_limit;
>  
>  	/* avoid allocating the first page */
> @@ -1387,8 +1387,11 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>  		return 0;
>  
>  done:
> -	/* Skip kmemleak for kasan_init() due to high volume. */
> -	if (end != MEMBLOCK_ALLOC_KASAN)
> +	/*
> +	 * Skip kmemleak for those places like kasan_init() and
> +	 * early_pgtable_alloc() due to high volume.
> +	 */
> +	if (end != MEMBLOCK_ALLOC_NOLEAKTRACE)
>  		/*
>  		 * The min_count is set to 0 so that memblock allocated
>  		 * blocks are never reported as leaks. This is because many
> -- 
> 2.30.2
> 

-- 
Sincerely yours,
Mike.
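
[ Illustration appended to the archived thread, not part of the original
  message: a minimal sketch of how a boot-time allocator opts out of
  kmemleak tracking with the renamed flag, assuming the post-patch
  memblock API. The function name early_table_alloc_sketch() is
  hypothetical; memblock_phys_alloc_range() and
  MEMBLOCK_ALLOC_NOLEAKTRACE are from the patch above. ]

	#include <linux/memblock.h>

	/*
	 * Passing MEMBLOCK_ALLOC_NOLEAKTRACE as the @end limit makes
	 * memblock_alloc_range_nid() clamp the search range to
	 * memblock.current_limit and skip the kmemleak_alloc_phys()
	 * registration, so high-volume early allocations like this one
	 * do not exhaust the kmemleak early memory pool.
	 */
	static phys_addr_t __init early_table_alloc_sketch(void)
	{
		return memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
						 MEMBLOCK_ALLOC_NOLEAKTRACE);
	}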