From: Muchun Song <songmuchun@bytedance.com>
To: David Hildenbrand <david@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	 Thomas Gleixner <tglx@linutronix.de>,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org,  hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	 Peter Zijlstra <peterz@infradead.org>,
	viro@zeniv.linux.org.uk,
	 Andrew Morton <akpm@linux-foundation.org>,
	paulmck@kernel.org, mchehab+huawei@kernel.org,
	 pawan.kumar.gupta@linux.intel.com,
	Randy Dunlap <rdunlap@infradead.org>,
	oneukum@suse.com,  anshuman.khandual@arm.com, jroedel@suse.de,
	 Mina Almasry <almasrymina@google.com>,
	David Rientjes <rientjes@google.com>,
	 Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Michal Hocko <mhocko@suse.com>,
	 "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>,
	Xiongchun duan <duanxiongchun@bytedance.com>,
	 linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	 Linux Memory Management List <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Wed, 9 Dec 2020 17:27:54 +0800
Message-ID: <CAMZfGtV8kG8MjfOZoVNGfLYgiRziG_YgeW+C6tKcUALj-xsJfg@mail.gmail.com>
In-Reply-To: <03a8b8b6-5d0c-b48e-562b-61f866722a31@redhat.com>

On Wed, Dec 9, 2020 at 4:54 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.12.20 14:11, Muchun Song wrote:
> > On Mon, Dec 7, 2020 at 8:36 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> Every HugeTLB page has more than one struct page structure. A 2M HugeTLB
> >>> page has 512 struct page structures and a 1G HugeTLB page has 262144 struct
> >>> page structures. We __know__ that we only use the first 4
> >>> (HUGETLB_CGROUP_MIN_ORDER) struct page structures to store metadata
> >>> associated with each HugeTLB page.
> >>>
> >>> There are a lot of struct page structures (8 page frames for a 2MB HugeTLB
> >>> page and 4096 page frames for a 1GB HugeTLB page) associated with each
> >>> HugeTLB page. For tail pages, the value of compound_head is the same, so we
> >>> can reuse the first page of the tail page structures. We map the virtual
> >>> addresses of the remaining pages of tail page structures to the first tail
> >>> page struct and then free those page frames. Therefore, we need to reserve
> >>> two pages as vmemmap areas.
> >>>
> >>> So we introduce a new nr_free_vmemmap_pages field in the hstate to indicate
> >>> how many vmemmap pages associated with a HugeTLB page can be freed to the
> >>> buddy system.
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> >>> ---
> >>>  include/linux/hugetlb.h |   3 ++
> >>>  mm/Makefile             |   1 +
> >>>  mm/hugetlb.c            |   3 ++
> >>>  mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
> >>>  mm/hugetlb_vmemmap.h    |  20 ++++++++
> >>>  5 files changed, 156 insertions(+)
> >>>  create mode 100644 mm/hugetlb_vmemmap.c
> >>>  create mode 100644 mm/hugetlb_vmemmap.h
> >>>
> >>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> >>> index ebca2ef02212..4efeccb7192c 100644
> >>> --- a/include/linux/hugetlb.h
> >>> +++ b/include/linux/hugetlb.h
> >>> @@ -492,6 +492,9 @@ struct hstate {
> >>>       unsigned int nr_huge_pages_node[MAX_NUMNODES];
> >>>       unsigned int free_huge_pages_node[MAX_NUMNODES];
> >>>       unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> >>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >>> +     unsigned int nr_free_vmemmap_pages;
> >>> +#endif
> >>>  #ifdef CONFIG_CGROUP_HUGETLB
> >>>       /* cgroup control files */
> >>>       struct cftype cgroup_files_dfl[7];
> >>> diff --git a/mm/Makefile b/mm/Makefile
> >>> index ed4b88fa0f5e..056801d8daae 100644
> >>> --- a/mm/Makefile
> >>> +++ b/mm/Makefile
> >>> @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)     += frontswap.o
> >>>  obj-$(CONFIG_ZSWAP)  += zswap.o
> >>>  obj-$(CONFIG_HAS_DMA)        += dmapool.o
> >>>  obj-$(CONFIG_HUGETLBFS)      += hugetlb.o
> >>> +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)      += hugetlb_vmemmap.o
> >>>  obj-$(CONFIG_NUMA)   += mempolicy.o
> >>>  obj-$(CONFIG_SPARSEMEM)      += sparse.o
> >>>  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> >>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >>> index 1f3bf1710b66..25f9e8e9fc4a 100644
> >>> --- a/mm/hugetlb.c
> >>> +++ b/mm/hugetlb.c
> >>> @@ -42,6 +42,7 @@
> >>>  #include <linux/userfaultfd_k.h>
> >>>  #include <linux/page_owner.h>
> >>>  #include "internal.h"
> >>> +#include "hugetlb_vmemmap.h"
> >>>
> >>>  int hugetlb_max_hstate __read_mostly;
> >>>  unsigned int default_hstate_idx;
> >>> @@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
> >>>       snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
> >>>                                       huge_page_size(h)/1024);
> >>>
> >>> +     hugetlb_vmemmap_init(h);
> >>> +
> >>>       parsed_hstate = h;
> >>>  }
> >>>
> >>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>> new file mode 100644
> >>> index 000000000000..51152e258f39
> >>> --- /dev/null
> >>> +++ b/mm/hugetlb_vmemmap.c
> >>> @@ -0,0 +1,129 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/*
> >>> + * Free some vmemmap pages of HugeTLB
> >>> + *
> >>> + * Copyright (c) 2020, Bytedance. All rights reserved.
> >>> + *
> >>> + *     Author: Muchun Song <songmuchun@bytedance.com>
> >>> + *
> >>> + * The struct page structures (page structs) are used to describe a physical
> >>> + * page frame. By default, there is a one-to-one mapping from a page frame to
> >>> + * its corresponding page struct.
> >>> + *
> >>> + * HugeTLB pages consist of multiple base page size pages and are supported
> >>> + * by many architectures. See hugetlbpage.rst in the Documentation directory
> >>> + * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> >>> + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> >>> + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> >>> + * 262144 base pages. For each base page, there is a corresponding page struct.
> >>> + *
> >>> + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> >>> + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> >>> + * provides this upper limit. The only 'useful' information in the remaining
> >>> + * page structs is the compound_head field, and this field is the same for all
> >>> + * tail pages.
> >>> + *
> >>> + * By removing redundant page structs for HugeTLB pages, memory can be
> >>> + * returned to the buddy allocator for other uses.
> >>> + *
> >>> + * When the system boots up, every 2M HugeTLB page has 512 struct page structs,
> >>> + * which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
> >>
> >>
> >> You should try to generalize all descriptions regarding differing base
> >> page sizes. E.g., arm64 supports 4k, 16k, and 64k base pages.
> >
> > Will do. Thanks.
> >
> >>
> >> [...]
> >>
> >>> @@ -0,0 +1,20 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/*
> >>> + * Free some vmemmap pages of HugeTLB
> >>> + *
> >>> + * Copyright (c) 2020, Bytedance. All rights reserved.
> >>> + *
> >>> + *     Author: Muchun Song <songmuchun@bytedance.com>
> >>> + */
> >>> +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> >>> +#define _LINUX_HUGETLB_VMEMMAP_H
> >>> +#include <linux/hugetlb.h>
> >>> +
> >>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >>> +void __init hugetlb_vmemmap_init(struct hstate *h);
> >>> +#else
> >>> +static inline void hugetlb_vmemmap_init(struct hstate *h)
> >>> +{
> >>> +}
> >>> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> >>> +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> >>>
> >>
> >> This patch as it stands is rather sub-optimal. I mean, all it does is
> >> add documentation and print what could be done.
> >>
> >> Can we instead introduce the basic infrastructure and enable it via this
> >> patch on top, where we glue all the pieces together? Or is there
> >> something I am missing?
> >
> > Maybe we can make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to n in
> > Kconfig for now, and make it default to y when everything is ready.
> > Right?
>
> I think it can make sense to introduce the
> CONFIG_HUGETLB_PAGE_FREE_VMEMMAP option first if necessary for other
> patches. But I think the documentation and the dummy call should
> rather be moved to the end of the series where you glue everything you
> introduced together and officially unlock the feature. Others might
> disagree :)

I see. Thanks for your suggestions.
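
To make the intent here a bit more concrete, the computation behind
nr_free_vmemmap_pages boils down to roughly the sketch below. This is a
simplified illustration rather than the exact code in this series; the
two reserved vmemmap pages follow the description in the commit message
above, and pages_per_huge_page() is the existing hstate helper.

/*
 * Rough sketch: how many vmemmap pages backing one HugeTLB page could be
 * returned to the buddy allocator. Assumes sizeof(struct page) divides
 * PAGE_SIZE evenly (e.g. 64-byte struct page with 4KB base pages on x86).
 */
static void __init hugetlb_vmemmap_init(struct hstate *h)
{
        /* vmemmap pages needed for the struct pages of one HugeTLB page */
        unsigned int vmemmap_pages =
                pages_per_huge_page(h) * sizeof(struct page) / PAGE_SIZE;

        /*
         * Two vmemmap pages are kept: the one containing the head page
         * struct (and the first tail structs), and the first page of tail
         * page structs, which the remaining tail-struct pages are remapped
         * to. The rest can be freed.
         *
         * e.g. 2MB HugeTLB: 512 * 64 / 4096 = 8 vmemmap pages, of which
         * 8 - 2 = 6 are freeable; 1GB HugeTLB: 4096 vmemmap pages, of
         * which 4094 are freeable.
         */
        if (vmemmap_pages > 2)
                h->nr_free_vmemmap_pages = vmemmap_pages - 2;
        else
                h->nr_free_vmemmap_pages = 0;
}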

>
> BTW, I'm planning on reviewing the other parts of this series, I'm just
> fairly busy, so it might take a while (I think we're targeting 5.12
> either way as the 5.11 merge window will start fairly soon).
>

Thank you very much.

> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun

