From: Muchun Song <songmuchun@bytedance.com>
Date: Wed, 19 May 2021 20:49:02 +0800
Subject: Re: [External] Re: [PATCH] arm64: mm: hugetlb: add support for free vmemmap pages of HugeTLB
To: Anshuman Khandual
Cc: Will Deacon, Andrew Morton, David Hildenbrand, "Bodeddula, Balasubramaniam", Oscar Salvador, Mike Kravetz, David Rientjes, linux-arm-kernel@lists.infradead.org, LKML, Linux Memory Management List, Xiongchun duan, fam.zheng@bytedance.com, zhengqi.arch@bytedance.com
In-Reply-To: <1b9d008a-7544-cc85-5c2f-532b984eb5b5@arm.com>
References: <20210518091826.36937-1-songmuchun@bytedance.com> <1b9d008a-7544-cc85-5c2f-532b984eb5b5@arm.com>

On Wed, May 19, 2021 at 7:44 PM Anshuman Khandual wrote:
>
> On 5/18/21 2:48 PM, Muchun Song wrote:
> > The preparation of supporting freeing vmemmap associated with each
> > HugeTLB page is ready, so we can support this feature for arm64.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/arm64/mm/mmu.c | 5 +++++
> >  fs/Kconfig          | 2 +-
> >  2 files changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 5d37e461c41f..967b01ce468d 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -23,6 +23,7 @@
> >  #include
> >  #include
> >  #include
> > +#include <linux/hugetlb.h>
> >
> >  #include
> >  #include
> > @@ -1134,6 +1135,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> >  	pmd_t *pmdp;
> >
> >  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> > +
> > +	if (is_hugetlb_free_vmemmap_enabled() && !altmap)
> > +		return vmemmap_populate_basepages(start, end, node, altmap);
>
> Not considering the fact that this will force the kernel to have only
> base page size mappings for the vmemmap (unless an altmap is also
> requested), which might reduce performance, it also enables the vmemmap
> mapping to be torn down or built up at runtime, which could potentially
> collide with other kernel page table walkers like ptdump or the memory
> hotremove operation! How are those possible collisions protected
> against right now?

For ptdump, there seems to be no problem. The change to the PTEs does
not seem to affect ptdump, unless I am missing something.

>
> Does not this vmemmap operation increase latency for HugeTLB usage?
> Should not this runtime enablement also take into account some other
> qualifying information apart from the potential memory saved from the
> struct page areas? Just wondering.

The disadvantage is that we add a PTE-level mapping for the vmemmap
pages; from this point of view, the latency will increase. But there is
an additional benefit: page (un)pinners will see an improvement, because
there are fewer vmemmap pages and thus the tail/head struct pages stay
in cache more often. From that point of view, the latency will decrease.
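To put rough numbers on that trade-off (my own back-of-the-envelope
figures, not something measured in this series; they assume a 4K base
page and a 64-byte struct page, which is the common arm64
configuration), the per-2MB-HugeTLB-page footprint works out like this:

/*
 * Illustrative userspace snippet, not kernel code: how much vmemmap
 * backs a single 2MB HugeTLB page under the assumptions above.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long base_page_sz   = 4096;      /* 4K base page */
	const unsigned long struct_page_sz = 64;        /* assumed sizeof(struct page) */
	const unsigned long hugepage_sz    = 2UL << 20; /* 2MB HugeTLB page */

	unsigned long nr_subpages   = hugepage_sz / base_page_sz;   /* 512 */
	unsigned long vmemmap_bytes = nr_subpages * struct_page_sz; /* 32K */
	unsigned long vmemmap_pages = vmemmap_bytes / base_page_sz; /* 8   */

	/*
	 * The series keeps the leading vmemmap page(s) holding the head and
	 * first tail struct pages and remaps the rest to one shared page,
	 * so most of those 8 pages go back to the buddy allocator.
	 */
	printf("%lu vmemmap pages back each 2MB HugeTLB page\n", vmemmap_pages);
	return 0;
}

Having fewer distinct vmemmap pages to touch is also why the head/tail
struct pages become more cache-friendly for the pinning paths.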
So if the user cares about the memory usage of struct pages, they can
enable this feature via the kernel cmdline at boot (a rough sketch of
that switch is at the end of this mail). As David said, "That's one of
the reasons why it explicitly has to be enabled by an admin".

>
> > +
> >  	do {
> >  		next = pmd_addr_end(addr, end);
> >
> > diff --git a/fs/Kconfig b/fs/Kconfig
> > index 6ce6fdac00a3..02c2d3bf1cb8 100644
> > --- a/fs/Kconfig
> > +++ b/fs/Kconfig
> > @@ -242,7 +242,7 @@ config HUGETLB_PAGE
> >
> >  config HUGETLB_PAGE_FREE_VMEMMAP
> >  	def_bool HUGETLB_PAGE
> > -	depends on X86_64
> > +	depends on X86_64 || ARM64
> >  	depends on SPARSEMEM_VMEMMAP
> >
> >  config MEMFD_CREATE
> >
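P.S. On "enabled by an admin": the switch I mean above is the
"hugetlb_free_vmemmap=" boot parameter from this series, so an admin
opts in with something like "hugetlb_free_vmemmap=on" on the kernel
cmdline. As a rough sketch only (not a verbatim copy of what
mm/hugetlb_vmemmap.c does, and the identifier names below are just for
illustration), such an early parameter is wired up roughly like this:

#include <linux/init.h>    /* early_param(), __init */
#include <linux/cache.h>   /* __ro_after_init */
#include <linux/errno.h>
#include <linux/string.h>

static bool hugetlb_free_vmemmap_enabled __ro_after_init;

/* Parse "hugetlb_free_vmemmap=on|off" early, before HugeTLB pools exist. */
static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	if (!buf)
		return -EINVAL;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (!strcmp(buf, "off"))
		hugetlb_free_vmemmap_enabled = false;
	else
		return -EINVAL;

	return 0;
}
early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);

The default stays off, so nothing changes unless the admin asks for it.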