From: Muchun Song
Date: Wed, 28 Oct 2020 15:26:44 +0800
Subject: Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
To: Mike Kravetz
Cc: Jonathan Corbet, Thomas Gleixner, mingo@redhat.com, bp@alien8.de,
 x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org,
 Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org,
 mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap,
 oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry,
 David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc@vger.kernel.org,
 LKML, Linux Memory Management List, linux-fsdevel
In-Reply-To: <81a7a7f0-fe0e-42e4-8de0-9092b033addc@oracle.com>
References: <20201026145114.59424-1-songmuchun@bytedance.com>
 <20201026145114.59424-6-songmuchun@bytedance.com>
 <81a7a7f0-fe0e-42e4-8de0-9092b033addc@oracle.com>

On Wed, Oct 28, 2020 at 8:33 AM Mike Kravetz wrote:
>
> On 10/26/20 7:51 AM, Muchun Song wrote:
> > On some architectures, the vmemmap areas use huge page mappings.
> > If we want to free the unused vmemmap pages, we have to split
> > the huge pmd first. So we should pre-allocate pgtables for
> > splitting the huge pmd.
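
To make the motivation concrete: freeing part of a PMD-mapped vmemmap
range means remapping it with base pages, and that remapping needs a
fresh PTE page to populate. Roughly, the consumer of a pre-allocated
page table looks like the sketch below (not code from this series; the
helper name, protection flags and locking are simplified):

    /* Sketch only: split one huge PMD using a pre-allocated PTE page. */
    static void split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
                                       pte_t *pgtable)
    {
        int i;
        unsigned long pfn = pmd_pfn(*pmd);

        /* Re-create the base-page mappings covered by the huge PMD. */
        for (i = 0; i < PTRS_PER_PTE; i++, pfn++)
            set_pte_at(&init_mm, start + i * PAGE_SIZE,
                       &pgtable[i], pfn_pte(pfn, PAGE_KERNEL));

        /* Publish the new page table in place of the huge mapping. */
        smp_wmb();
        pmd_populate_kernel(&init_mm, pmd, pgtable);
        flush_tlb_kernel_range(start, start + PMD_SIZE);
    }

Depositing the PTE pages at huge page allocation time keeps the later
vmemmap-freeing path itself allocation-free.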
> >
> > Signed-off-by: Muchun Song
> > ---
> >  arch/x86/include/asm/hugetlb.h |   5 ++
> >  include/linux/hugetlb.h        |  17 +++++
> >  mm/hugetlb.c                   | 117 +++++++++++++++++++++++++++++++++
> >  3 files changed, 139 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> > index 1721b1aadeb1..f5e882f999cd 100644
> > --- a/arch/x86/include/asm/hugetlb.h
> > +++ b/arch/x86/include/asm/hugetlb.h
> > @@ -5,6 +5,11 @@
> >  #include <asm/page.h>
> >  #include <asm-generic/hugetlb.h>
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
> > +#define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)
> > +#endif
> > +
> >  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> >
> >  #endif /* _ASM_X86_HUGETLB_H */
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index eed3dd3bd626..ace304a6196c 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -593,6 +593,23 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
> >
> >  #include <asm/hugetlb.h>
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#ifndef arch_vmemmap_support_huge_mapping
> > +static inline bool arch_vmemmap_support_huge_mapping(void)
> > +{
> > +	return false;
> > +}
> > +#endif
> > +
> > +#ifndef VMEMMAP_HPAGE_SHIFT
> > +#define VMEMMAP_HPAGE_SHIFT	PMD_SHIFT
> > +#endif
> > +#define VMEMMAP_HPAGE_ORDER	(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
> > +#define VMEMMAP_HPAGE_NR	(1 << VMEMMAP_HPAGE_ORDER)
> > +#define VMEMMAP_HPAGE_SIZE	((1UL) << VMEMMAP_HPAGE_SHIFT)
> > +#define VMEMMAP_HPAGE_MASK	(~(VMEMMAP_HPAGE_SIZE - 1))
> > +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> > +
> >  #ifndef is_hugepage_only_range
> >  static inline int is_hugepage_only_range(struct mm_struct *mm,
> >  					unsigned long addr, unsigned long len)
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f1b2b733b49b..d6ae9b6876be 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1295,11 +1295,108 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >  #define RESERVE_VMEMMAP_NR	2U
> >
> > +#define page_huge_pte(page)	((page)->pmd_huge_pte)
> > +
>
> I am not good at function names. The following suggestions may be too
> verbose. However, they helped me understand the purpose of the routines.
>
> >  static inline unsigned int nr_free_vmemmap(struct hstate *h)
>
> perhaps? free_vmemmap_pages_per_hpage()
>
> >  {
> >  	return h->nr_free_vmemmap_pages;
> >  }
> >
> > +static inline unsigned int nr_vmemmap(struct hstate *h)
>
> perhaps? vmemmap_pages_per_hpage()
>
> > +{
> > +	return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
> > +}
> > +
> > +static inline unsigned long nr_vmemmap_size(struct hstate *h)
>
> perhaps? vmemmap_pages_size_per_hpage()
>
> > +{
> > +	return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
> > +}
> > +
> > +static inline unsigned int nr_pgtable(struct hstate *h)
>
> perhaps? pgtable_pages_to_prealloc_per_hpage()

Good suggestions, thanks. I will apply them.

> > +{
> > +	unsigned long vmemmap_size = nr_vmemmap_size(h);
> > +
> > +	if (!arch_vmemmap_support_huge_mapping())
> > +		return 0;
> > +
> > +	/*
> > +	 * No need to pre-allocate page tables when there are no
> > +	 * vmemmap pages to free.
> > +	 */
> > +	if (!nr_free_vmemmap(h))
> > +		return 0;
> > +
> > +	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
> > +}
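
As a concrete example of the arithmetic (assuming 4 KB base pages and a
64-byte struct page): a 2 MB HugeTLB page is described by 512 struct
pages, i.e. 32 KB = 8 vmemmap pages, so nr_pgtable() returns
ALIGN(32 KB, 2 MB) >> VMEMMAP_HPAGE_SHIFT = 1 and we pre-allocate a
single page table per huge page. For a 1 GB HugeTLB page the vmemmap is
16 MB, so 8 page tables are pre-allocated.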
> > +
> > +static inline void vmemmap_pgtable_init(struct page *page)
> > +{
> > +	page_huge_pte(page) = NULL;
> > +}
> > +
>
> I see the following routines follow the pattern for vmemmap manipulation
> in dax.

Did you mean we should move those functions to mm/sparse-vmemmap.c?

> > +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
> > +{
> > +	pgtable_t pgtable = virt_to_page(pte_p);
> > +
> > +	/* FIFO */
> > +	if (!page_huge_pte(page))
> > +		INIT_LIST_HEAD(&pgtable->lru);
> > +	else
> > +		list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> > +	page_huge_pte(page) = pgtable;
> > +}
> > +
> > +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
> > +{
> > +	pgtable_t pgtable;
> > +
> > +	/* FIFO */
> > +	pgtable = page_huge_pte(page);
> > +	if (unlikely(!pgtable))
> > +		return NULL;
> > +	page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> > +						       struct page, lru);
> > +	if (page_huge_pte(page))
> > +		list_del(&pgtable->lru);
> > +	return page_to_virt(pgtable);
> > +}
> > +
> > +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +	int i;
> > +	pte_t *pte_p;
> > +	unsigned int nr = nr_pgtable(h);
> > +
> > +	if (!nr)
> > +		return 0;
> > +
> > +	vmemmap_pgtable_init(page);
> > +
> > +	for (i = 0; i < nr; i++) {
> > +		pte_p = pte_alloc_one_kernel(&init_mm);
> > +		if (!pte_p)
> > +			goto out;
> > +		vmemmap_pgtable_deposit(page, pte_p);
> > +	}
> > +
> > +	return 0;
> > +out:
> > +	while (i-- && (pte_p = vmemmap_pgtable_withdraw(page)))
> > +		pte_free_kernel(&init_mm, pte_p);
> > +	return -ENOMEM;
> > +}
> > +
> > +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +	pte_t *pte_p;
> > +
> > +	if (!nr_pgtable(h))
> > +		return;
> > +
> > +	while ((pte_p = vmemmap_pgtable_withdraw(page)))
> > +		pte_free_kernel(&init_mm, pte_p);
> > +}
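
Putting the helpers together, the intended lifecycle is roughly as
follows (an illustrative sketch only, with error handling elided;
split_one_vmemmap_pmd() is a made-up placeholder for the eventual
consumer):

    static int vmemmap_pgtable_lifecycle(struct hstate *h, struct page *page)
    {
        pte_t *pte_p;

        /* 1) At huge page allocation: deposit nr_pgtable(h) PTE pages. */
        if (vmemmap_pgtable_prealloc(h, page))
            return -ENOMEM;

        /*
         * 2) When freeing unused vmemmap: withdraw one PTE page for each
         *    huge PMD that has to be split.
         */
        pte_p = vmemmap_pgtable_withdraw(page);
        if (pte_p) {
            /* split_one_vmemmap_pmd(..., pte_p); */
        }

        /* 3) prep_new_huge_page() then frees any page tables left over. */
        vmemmap_pgtable_free(h, page);
        return 0;
    }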
> >  static void __init hugetlb_vmemmap_init(struct hstate *h)
> >  {
> >  	unsigned int order = huge_page_order(h);
> > @@ -1323,6 +1420,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
> >  static inline void hugetlb_vmemmap_init(struct hstate *h)
> >  {
> >  }
> > +
> > +static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +}
> >  #endif
> >
> >  static void update_and_free_page(struct hstate *h, struct page *page)
> > @@ -1531,6 +1637,9 @@ void free_huge_page(struct page *page)
> >
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +	/* Must be called before the initialization of @page->lru */
> > +	vmemmap_pgtable_free(h, page);
> > +
> >  	INIT_LIST_HEAD(&page->lru);
> >  	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >  	set_hugetlb_cgroup(page, NULL);
> > @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> >  	if (!page)
> >  		return NULL;
> >
> > +	if (vmemmap_pgtable_prealloc(h, page)) {
> > +		if (hstate_is_gigantic(h))
> > +			free_gigantic_page(page, huge_page_order(h));
> > +		else
> > +			put_page(page);
> > +		return NULL;
> > +	}
> > +
>
> It seems a bit strange that we will fail a huge page allocation if
> vmemmap_pgtable_prealloc fails. Not sure, but it almost seems like we
> should allow the allocation and log a warning? It is somewhat unfortunate
> that we need to allocate a page to free pages.

Yeah, it is unfortunate. But if the pre-allocation succeeds, we can free
some vmemmap pages later, so it is a compromise :). If we can
successfully allocate a huge page, I would expect that we can also
successfully allocate one more page. And if we allowed the huge page
allocation to proceed when vmemmap_pgtable_prealloc() fails, we would
also have to mark the page to record that its vmemmap has not been
freed, which seems to add complexity.

Thanks.

> >  	if (hstate_is_gigantic(h))
> >  		prep_compound_gigantic_page(page, huge_page_order(h));
> >  	prep_new_huge_page(h, page, page_to_nid(page));
> >
> --
> Mike Kravetz

--
Yours,
Muchun