From: Muchun Song <songmuchun@bytedance.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	viro@zeniv.linux.org.uk,
	Andrew Morton <akpm@linux-foundation.org>,
	paulmck@kernel.org, mchehab+huawei@kernel.org,
	pawan.kumar.gupta@linux.intel.com,
	Randy Dunlap <rdunlap@infradead.org>,
	oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
	Mina Almasry <almasrymina@google.com>,
	David Rientjes <rientjes@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Michal Hocko <mhocko@suse.com>,
	Xiongchun duan <duanxiongchun@bytedance.com>,
	linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH v3 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
Date: Wed, 11 Nov 2020 11:41:37 +0800	[thread overview]
Message-ID: <CAMZfGtV+_vP66N1WagwNfxs4r3QGwnrYoR60yimwutTs=awXag@mail.gmail.com> (raw)
In-Reply-To: <9dc62874-379f-b126-94a7-5bd477529407@oracle.com>

On Wed, Nov 11, 2020 at 8:47 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 11/8/20 6:10 AM, Muchun Song wrote:
> > On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> > support and we have a 2MB contiguous, PMD-aligned range. If we want to
> > free the unused vmemmap pages, we have to split the huge PMD first, so
> > we should pre-allocate page tables for splitting the PMD into PTEs
> > (each split consumes one PTE page; see the sketch after this diff).
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a0007902fafb..5c7be2ee7e15 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1303,6 +1303,108 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> ...
> > +static inline void vmemmap_pgtable_init(struct page *page)
> > +{
> > +     page_huge_pte(page) = NULL;
> > +}
> > +
> > +static void vmemmap_pgtable_deposit(struct page *page, pgtable_t pgtable)
> > +{
> > +     /* FIFO */
> > +     if (!page_huge_pte(page))
> > +             INIT_LIST_HEAD(&pgtable->lru);
> > +     else
> > +             list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> > +     page_huge_pte(page) = pgtable;
> > +}
> > +
> > +static pgtable_t vmemmap_pgtable_withdraw(struct page *page)
> > +{
> > +     pgtable_t pgtable;
> > +
> > +     /* FIFO */
> > +     pgtable = page_huge_pte(page);
> > +     page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> > +                                                    struct page, lru);
> > +     if (page_huge_pte(page))
> > +             list_del(&pgtable->lru);
> > +     return pgtable;
> > +}
> > +
> > +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     int i;
> > +     pgtable_t pgtable;
> > +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> > +
> > +     if (!nr)
> > +             return 0;
> > +
> > +     vmemmap_pgtable_init(page);
> > +
> > +     for (i = 0; i < nr; i++) {
> > +             pte_t *pte_p;
> > +
> > +             pte_p = pte_alloc_one_kernel(&init_mm);
> > +             if (!pte_p)
> > +                     goto out;
> > +             vmemmap_pgtable_deposit(page, virt_to_page(pte_p));
> > +     }
> > +
> > +     return 0;
> > +out:
> > +     while (i-- && (pgtable = vmemmap_pgtable_withdraw(page)))
> > +             pte_free_kernel(&init_mm, page_to_virt(pgtable));
> > +     return -ENOMEM;
> > +}
> > +
> > +static void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +     pgtable_t pgtable;
> > +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> > +
> > +     if (!nr)
> > +             return;
> > +
> > +     pgtable = page_huge_pte(page);
> > +     if (!pgtable)
> > +             return;
> > +
> > +     while (nr-- && (pgtable = vmemmap_pgtable_withdraw(page)))
> > +             pte_free_kernel(&init_mm, page_to_virt(pgtable));
> > +}
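
To connect these helpers with the commit message above: each huge PMD
split consumes exactly one of the pre-allocated PTE pages. Roughly, the
split looks like the following sketch (a hypothetical helper for
illustration only, not the code from this series; the real split is
introduced in a later patch, and locking/barriers are elided):

static void sketch_split_vmemmap_pmd(pmd_t *pmd, pgtable_t pgtable,
				     unsigned long start)
{
	int i;
	pte_t *pte = page_address(pgtable);	/* pre-allocated PTE page */
	struct page *head = pmd_page(*pmd);	/* 2MB page the PMD maps */

	/* Mirror the old 2MB mapping with PTRS_PER_PTE base-page entries. */
	for (i = 0; i < PTRS_PER_PTE; i++)
		set_pte(pte + i, mk_pte(head + i, PAGE_KERNEL));

	/* Install the new page table in place of the huge mapping. */
	pmd_populate_kernel(&init_mm, pmd, pte);
	flush_tlb_kernel_range(start, start + PMD_SIZE);
}

For a 2MB HugeTLB page a single split (one PTE page) suffices, while a
1GB page spans several vmemmap PMDs, which is why the helpers above keep
a whole list of pre-allocated pages.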
>
> I may be confused.
>
> In patch 9 of this series, the first call to vmemmap_pgtable_free() is made:
>
> > @@ -1645,6 +1799,10 @@ void free_huge_page(struct page *page)
> >
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +     free_huge_page_vmemmap(h, page);
> > +     /* Must be called before the initialization of @page->lru */
> > +     vmemmap_pgtable_free(h, page);
> > +
> >       INIT_LIST_HEAD(&page->lru);
> >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >       set_hugetlb_cgroup(page, NULL);
>
> When I saw that comment in the previous patch series, I assumed page->lru
> was being used to store preallocated pages and pages to free.  However, unless

Yeah, you are right.

> I am reading the code incorrectly, it does not appear that page->lru (of the
> huge page) is being used for this purpose.  Is that correct?
>
> If it is correct, would using page->lru of the huge page make this code
> simpler?  I am just missing the reason why you are using
> page_huge_pte(page)->lru.

For 1GB HugeTLB pages, we need to pre-allocate more than one page
table, so I use a linked list with page_huge_pte(page) as the list head.
I cannot use page->lru directly, because page->lru shares storage with
page->pmd_huge_pte in struct page.

+     /* Must be called before the initialization of @page->lru */
+     vmemmap_pgtable_free(h, page);
+
       INIT_LIST_HEAD(&page->lru);

Here we initialize page->lru, which overwrites page_huge_pte(page), so
vmemmap_pgtable_free() must be called before this point. It seems I
should point out this sharing in the comment.
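
To make the sharing concrete, here is a trimmed sketch of the relevant
union in struct page (based on include/linux/mm_types.h of this era;
most fields and union members are elided):

struct page {
	unsigned long flags;
	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			struct address_space *mapping;
			pgoff_t index;
			unsigned long private;
		};
		struct {	/* Page table pages */
			unsigned long _pt_pad_1;
			pgtable_t pmd_huge_pte;	/* aliases lru.prev */
			unsigned long _pt_pad_2;
			/* ... */
		};
		/* ... */
	};
	/* ... */
};

So writing page->lru (e.g. the INIT_LIST_HEAD above) clobbers
page_huge_pte(page), and vice versa.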

Thanks.

>
> --
> Mike Kravetz



-- 
Yours,
Muchun


Thread overview: 54+ messages
2020-11-08 14:10 [PATCH v3 00/21] Free some vmemmap pages of hugetlb page Muchun Song
2020-11-08 14:10 ` [PATCH v3 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
2020-11-08 14:10 ` [PATCH v3 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
2020-11-08 14:10 ` [PATCH v3 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-11-09 13:52   ` Oscar Salvador
2020-11-09 14:20     ` [External] " Muchun Song
2020-11-10 19:31     ` Mike Kravetz
2020-11-10 19:50       ` Matthew Wilcox
2020-11-10 20:30         ` Mike Kravetz
2020-11-17 15:35         ` [External] " Muchun Song
2020-11-11  3:28       ` Muchun Song
2020-11-08 14:10 ` [PATCH v3 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-11-09 16:48   ` Oscar Salvador
2020-11-10  2:42     ` [External] " Muchun Song
2020-11-10 19:38       ` Mike Kravetz
2020-11-11  3:22         ` Muchun Song
2020-11-08 14:10 ` [PATCH v3 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
2020-11-09 17:21   ` Oscar Salvador
2020-11-10  3:49     ` [External] " Muchun Song
2020-11-10  5:42       ` Oscar Salvador
2020-11-10  6:08         ` Muchun Song
2020-11-10  6:33           ` Oscar Salvador
2020-11-10  7:10             ` Muchun Song
2020-11-11  0:47   ` Mike Kravetz
2020-11-11  3:41     ` Muchun Song [this message]
2020-11-13  0:35       ` [External] " Mike Kravetz
2020-11-13  1:02         ` Mike Kravetz
2020-11-13  4:18         ` Muchun Song
2020-11-08 14:10 ` [PATCH v3 06/21] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
2020-11-08 14:10 ` [PATCH v3 07/21] mm/bootmem_info: Combine bootmem info and type into page->freelist Muchun Song
2020-11-08 14:11 ` [PATCH v3 08/21] mm/vmemmap: Initialize page table lock for vmemmap Muchun Song
2020-11-09 18:11   ` Oscar Salvador
2020-11-10  5:17     ` [External] " Muchun Song
2020-11-08 14:11 ` [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-09 18:51   ` Oscar Salvador
2020-11-10  6:40     ` [External] " Muchun Song
2020-11-10  9:48       ` Oscar Salvador
2020-11-10 10:47         ` Muchun Song
2020-11-10 13:52           ` Oscar Salvador
2020-11-10 14:00             ` Muchun Song
2020-11-08 14:11 ` [PATCH v3 10/21] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
2020-11-08 14:11 ` [PATCH v3 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-08 14:11 ` [PATCH v3 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
2020-11-08 14:11 ` [PATCH v3 13/21] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
2020-11-08 14:11 ` [PATCH v3 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
2020-11-08 14:11 ` [PATCH v3 15/21] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two Muchun Song
2020-11-08 14:11 ` [PATCH v3 16/21] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
2020-11-08 14:11 ` [PATCH v3 17/21] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-11-08 14:11 ` [PATCH v3 18/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-11-08 14:11 ` [PATCH v3 19/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
2020-11-08 14:11 ` [PATCH v3 20/21] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-11-08 14:11 ` [PATCH v3 21/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
2020-11-10 19:23 ` [PATCH v3 00/21] Free some vmemmap pages of hugetlb page Mike Kravetz
2020-11-11  3:21   ` [External] " Muchun Song
