From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org,
    oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
    almasrymina@google.com, rientjes@google.com, willy@infradead.org,
    osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
    david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
    zhengqi.arch@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin,
    Chen Huang, Bodeddula Balasubramaniam
Subject: [PATCH v23 7/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Mon, 10 May 2021 11:00:25 +0800
Message-Id: <20210510030027.56044-8-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210510030027.56044-1-songmuchun@bytedance.com>
References: <20210510030027.56044-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each HugeTLB page at boot
time. When this feature is enabled, we disable PMD mapping of vmemmap
pages on the x86-64 architecture, because vmemmap_remap_free() depends
on the vmemmap being base page mapped.
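As a sanity check, here is where the "6 * PAGE_SIZE for each 2MB
hugetlb page" figure quoted in the documentation below comes from.
This is a minimal userspace sketch, assuming 4KB base pages and a
64-byte struct page (true on default x86-64 configs, but not
guaranteed everywhere); RESERVE_VMEMMAP_NR = 2 is taken from
mm/hugetlb_vmemmap.c in this patch:

    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size = 4096;      /* assumed base page size */
            unsigned long sz_page   = 64;        /* assumed sizeof(struct page) */
            unsigned long hpage     = 2UL << 20; /* one 2MB HugeTLB page */
            unsigned long reserved  = 2;         /* RESERVE_VMEMMAP_NR */

            unsigned long nr_base     = hpage / page_size;             /* 512 */
            unsigned long vmemmap_pgs = nr_base * sz_page / page_size; /* 8 */

            /* 8 vmemmap pages back the 512 struct pages; 2 remain
             * mapped, so the other 6 can be freed back to the system. */
            printf("vmemmap pages freed per 2MB HugeTLB page: %lu\n",
                   vmemmap_pgs - reserved);
            return 0;
    }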
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 17 +++++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++
 5 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 1d56ad77189b..3cc19cb78b85 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1621,6 +1621,23 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+			enabled.
+			Allows heavy hugetlb users to free up some more
+			memory (6 * PAGE_SIZE for each 2MB hugetlb page).
+			This feature is not free though. Large page
+			tables are not used to back vmemmap pages which
+			can lead to a performance degradation for some
+			workloads. Also there will be memory allocation
+			required when hugetlb pages are freed from the
+			pool which can lead to corner cases under heavy
+			memory pressure.
+			Format: { on | off (default) }
+
+			on: enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 6988895d09a8..8abaeb144e44 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -153,6 +153,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
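As a usage sketch (the hugepagesz/hugepages values below are
illustrative, not part of this patch), the new parameter combines with
the existing HugeTLB boot parameters documented in the same file:

    hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024

With 2MB pages and the 6-pages-per-hugepage saving above, 1024
pre-allocated pages would return roughly 24MB of vmemmap to the system.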
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 65ea58527176..9d9d18d0c2a1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1609,7 +1610,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
 	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if ((is_hugetlb_free_vmemmap_enabled() && !altmap) ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1637,6 +1639,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1662,7 +1666,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c5cc16af897c..3258177f6494 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -895,6 +895,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -1054,6 +1068,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a897c7778246..3070e1465b1b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -168,6 +168,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -180,6 +182,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if ((!is_power_of_2(sizeof(struct page)))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-- 
2.11.0
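A closing note on the is_power_of_2(sizeof(struct page)) guard in
early_hugetlb_free_vmemmap_param(): a power-of-2 size no larger than
the base page size always divides it evenly, so no struct page ever
straddles two vmemmap pages, which the remapping relies on. A minimal
userspace sketch (is_power_of_2() is reimplemented here, and the sizes
are hypothetical, not taken from any real config):

    #include <stdbool.h>
    #include <stdio.h>

    /* Same bit test as the kernel's is_power_of_2(). */
    static bool is_power_of_2(unsigned long n)
    {
            return n != 0 && (n & (n - 1)) == 0;
    }

    int main(void)
    {
            unsigned long page_size = 4096;
            unsigned long sizes[] = { 56, 64, 80 }; /* hypothetical sizeof(struct page) */

            for (int i = 0; i < 3; i++) {
                    /* A power-of-2 size <= page_size divides it evenly,
                     * so no struct page crosses a page boundary. */
                    bool ok = is_power_of_2(sizes[i]) &&
                              page_size % sizes[i] == 0;
                    printf("sizeof(struct page) = %2lu -> freeing %s\n",
                           sizes[i], ok ? "possible" : "disabled");
            }
            return 0;
    }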