Subject: Re: [PATCH] arm64: mm: hugetlb: add support for free vmemmap pages of HugeTLB
To: Muchun Song <songmuchun@bytedance.com>, will@kernel.org, akpm@linux-foundation.org, david@redhat.com, bodeddub@amazon.com, osalvador@suse.de, mike.kravetz@oracle.com, rientjes@google.com
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, zhengqi.arch@bytedance.com
References: <20210518091826.36937-1-songmuchun@bytedance.com>
From: Anshuman Khandual <anshuman.khandual@arm.com>
Message-ID: <1b9d008a-7544-cc85-5c2f-532b984eb5b5@arm.com>
Date: Wed, 19 May 2021 17:15:03 +0530
In-Reply-To: <20210518091826.36937-1-songmuchun@bytedance.com>

On 5/18/21 2:48 PM, Muchun Song wrote:
> The preparation of supporting freeing vmemmap associated with each
> HugeTLB page is ready, so we can support this feature for arm64.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/arm64/mm/mmu.c | 5 +++++
>  fs/Kconfig          | 2 +-
>  2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 5d37e461c41f..967b01ce468d 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -23,6 +23,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -1134,6 +1135,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  	pmd_t *pmdp;
>
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> +
> +	if (is_hugetlb_free_vmemmap_enabled() && !altmap)
> +		return vmemmap_populate_basepages(start, end, node, altmap);

Setting aside the fact that this will force the kernel to use only base
page size mappings for vmemmap (unless an altmap is also requested), which
might reduce performance, it also enables the vmemmap mapping to be torn
down or built up at runtime, which could potentially collide with other
kernel page table walkers like ptdump or a memory hot-remove operation!
How are those possible collisions protected against right now? Does this
vmemmap operation not increase latency for HugeTLB usage? Should this
runtime enablement not also take into account some other qualifying
information apart from the potential memory savings from struct page
areas? Just wondering.

> +
>  	do {
>  		next = pmd_addr_end(addr, end);
>
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 6ce6fdac00a3..02c2d3bf1cb8 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -242,7 +242,7 @@ config HUGETLB_PAGE
>
>  config HUGETLB_PAGE_FREE_VMEMMAP
>  	def_bool HUGETLB_PAGE
> -	depends on X86_64
> +	depends on X86_64 || ARM64
>  	depends on SPARSEMEM_VMEMMAP
>
>  config MEMFD_CREATE
>