linux-mm.kvack.org archive mirror
From: Muchun Song <songmuchun@bytedance.com>
To: Oscar Salvador <osalvador@suse.de>,
	Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Xiongchun duan" <duanxiongchun@bytedance.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	paulmck@kernel.org, dave.hansen@linux.intel.com,
	anshuman.khandual@arm.com, oneukum@suse.com, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org,
	"Randy Dunlap" <rdunlap@infradead.org>,
	mingo@redhat.com, mchehab+huawei@kernel.org, luto@kernel.org,
	"Andrew Morton" <akpm@linux-foundation.org>,
	viro@zeniv.linux.org.uk, "Peter Zijlstra" <peterz@infradead.org>,
	"David Rientjes" <rientjes@google.com>,
	"Michal Hocko" <mhocko@suse.com>,
	jroedel@suse.de, "Mina Almasry" <almasrymina@google.com>,
	pawan.kumar.gupta@linux.intel.com,
	"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
	"David Hildenbrand" <david@redhat.com>,
	"Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>,
	linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	"Linux Memory Management List" <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	"Matthew Wilcox" <willy@infradead.org>
Subject: Re: [PATCH v13 00/12] Free some vmemmap pages of HugeTLB page
Date: Wed, 20 Jan 2021 20:52:50 +0800
Message-ID: <CAMZfGtVkjS4TXpRWsmCDxXKxP7W+-D1EgTZt30h3b1Si1+u9pA@mail.gmail.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

On Sun, Jan 17, 2021 at 11:12 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> Hi all,
>
> This patch series frees some vmemmap pages (struct page structures)
> associated with each HugeTLB page when such pages are preallocated, in
> order to save memory.
>
> To reduce the difficulty of the first round of code review, from this
> version on we disable the PMD/huge page mapping of the vmemmap when this
> feature is enabled. This actually eliminates a bunch of complex page
> table manipulation code. Once this patch series is solid, we can add the
> vmemmap page table manipulation code back in the future.
>
> The struct page structures (page structs) are used to describe a
> physical page frame. By default, there is a one-to-one mapping from a
> page frame to its corresponding page struct.
>
> HugeTLB pages consist of multiple base-page-size pages and are supported
> by many architectures. See hugetlbpage.rst in the Documentation directory
> for more details. On the x86 architecture, HugeTLB pages of size 2MB and
> 1GB are currently supported. Since the base page size on x86 is 4KB, a
> 2MB HugeTLB page consists of 512 base pages and a 1GB HugeTLB page
> consists of 262144 base pages. For each base page, there is a
> corresponding page struct.
>
> Within the HugeTLB subsystem, only the first 4 page structs are used to
> contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> provides this upper limit. The only 'useful' information in the remaining
> page structs is the compound_head field, and this field is the same for all
> tail pages.
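>
> For reference, this is (roughly) how a tail page records its head page
> in current mainline: the head pointer is stored in page->compound_head
> with bit 0 set to mark the page as a tail. A simplified copy, quoted
> here only for illustration:
>
> 	static __always_inline void set_compound_head(struct page *page,
> 						      struct page *head)
> 	{
> 		WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
> 	}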
>
> By removing the redundant page structs for HugeTLB pages, memory can be
> returned to the buddy allocator for other uses.
>
> When the system boots up, every 2MB HugeTLB page has 512 struct page
> structs, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
>
>     HugeTLB                  struct pages(8 pages)         page frame(8 pages)
>  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>  |           |                     |     0     | -------------> |     0     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     1     | -------------> |     1     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     2     | -------------> |     2     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     3     | -------------> |     3     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     4     | -------------> |     4     |
>  |    2MB    |                     +-----------+                +-----------+
>  |           |                     |     5     | -------------> |     5     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     6     | -------------> |     6     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     7     | -------------> |     7     |
>  |           |                     +-----------+                +-----------+
>  |           |
>  |           |
>  |           |
>  +-----------+
>
> The value of page->compound_head is the same for all tail pages. The
> first page of the page structs (page 0) associated with the HugeTLB page
> contains the 4 page structs necessary to describe the HugeTLB page. The
> only use of the remaining pages of page structs (page 1 to page 7) is to
> point to page->compound_head. Therefore, we can remap pages 2 to 7 onto
> page 1. Only 2 pages of page structs will then be used for each HugeTLB
> page, which allows us to free the remaining 6 pages to the buddy
> allocator.
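>
> As a quick back-of-the-envelope check of those numbers, here is a small
> standalone userspace sketch (not kernel code; it simply assumes a 4KB
> base page and sizeof(struct page) == 64, as on x86_64):
>
> 	#include <stdio.h>
>
> 	int main(void)
> 	{
> 		const unsigned long base_page   = 4096;  /* 4KB base page */
> 		const unsigned long struct_page = 64;    /* sizeof(struct page) on x86_64 */
> 		const unsigned long nr_base     = (2UL << 20) / base_page;  /* 512 */
>
> 		/* vmemmap footprint of one 2MB HugeTLB page: 512 * 64 / 4096 = 8 pages */
> 		unsigned long vmemmap_pages = nr_base * struct_page / base_page;
>
> 		/* after remapping, only 2 vmemmap pages are kept, so 6 can be freed */
> 		printf("vmemmap pages: %lu, freed: %lu\n",
> 		       vmemmap_pages, vmemmap_pages - 2);
> 		return 0;
> 	}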
>
> Here is how things look after remapping.
>
>     HugeTLB                  struct pages(8 pages)         page frame(8 pages)
>  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>  |           |                     |     0     | -------------> |     0     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     1     | -------------> |     1     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
>  |           |                     +-----------+                   | | | | |
>  |           |                     |     3     | ------------------+ | | | |
>  |           |                     +-----------+                     | | | |
>  |           |                     |     4     | --------------------+ | | |
>  |    2MB    |                     +-----------+                       | | |
>  |           |                     |     5     | ----------------------+ | |
>  |           |                     +-----------+                         | |
>  |           |                     |     6     | ------------------------+ |
>  |           |                     +-----------+                           |
>  |           |                     |     7     | --------------------------+
>  |           |                     +-----------+
>  |           |
>  |           |
>  |           |
>  +-----------+
>
> When a HugeTLB page is freed to the buddy system, we should allocate 6
> pages for the vmemmap pages and restore the previous mapping relationship.
>
> Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It
> is similar to the 2MB HugeTLB page, and we can use the same approach to
> free its vmemmap pages.
>
> In this case, for a 1GB HugeTLB page, we can save 4094 vmemmap pages,
> which is a very substantial gain. On our servers we run some SPDK/QEMU
> applications that use 1024GB of HugeTLB pages. With this feature enabled,
> we can save ~16GB of memory with 1GB HugeTLB pages, or ~12GB with 2MB
> HugeTLB pages (a worked example follows).
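>
> Where those numbers come from (a rough worked example, assuming a 4KB
> base page and the per-page figures above: 6 vmemmap pages freed per 2MB
> HugeTLB page, 4094 freed per 1GB HugeTLB page):
>
>     2MB HugeTLB: 1024GB / 2MB = 524288 pages; 524288 * 6 freed * 4KB  = 12GB
>     1GB HugeTLB: 1024GB / 1GB = 1024 pages;   1024 * 4094 freed * 4KB ~= 16GB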
>
> Because the vmemmap page tables are reconstructed on the freeing and
> allocating paths, this adds some overhead. Here is an analysis of that
> overhead.
>
> 1) Allocating 10240 2MB hugetlb pages.
>
>    a) With this patch series applied:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.166s
>    user     0m0.000s
>    sys      0m0.166s
>
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [16K, 32K)          1868 |@@@@@@@@@@@                                         |
>    [32K, 64K)            10 |                                                    |
>    [64K, 128K)            2 |                                                    |
>
>    b) Without this patch series:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.066s
>    user     0m0.000s
>    sys      0m0.066s
>
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)             62 |                                                    |
>    [16K, 32K)             2 |                                                    |
>
>    Summary: with this feature, allocation is about ~2x slower than before.
>
> 2) Freeing 10240 2MB hugetlb pages.
>
>    a) With this patch series applied:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.004s
>    user     0m0.000s
>    sys      0m0.002s
>
>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>
>    b) Without this patch series:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.077s
>    user     0m0.001s
>    sys      0m0.075s
>
>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)            287 |@                                                   |
>    [16K, 32K)             3 |                                                    |
>
>    Summary: __free_hugepage() is about ~2-4x slower than before. But
>             according to the allocation test above, I think the slowdown
>             here is also about ~2x.
>
>             Why is the 'real' time with the patches applied smaller than
>             before? Because in this patch series the freeing of HugeTLB
>             pages is asynchronous (done through a kworker).
>
> Although the overhead has increased, it is not significant. As Mike
> said, "However, remember that the majority of use cases create hugetlb
> pages at or shortly after boot time and add them to the pool. So,
> additional overhead is at pool creation time. There is no change to
> 'normal run time' operations of getting a page from or returning a page
> to the pool (think page fault/unmap)".
>
> Todo:
>   - Free all of the tail vmemmap pages
>     Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages,
>     although we could really free 7. In that case, 8 of the 512 struct
>     page structures would appear to have the PG_head flag set. This
>     requires adjusting compound_head() slightly so that it returns the
>     real head struct page even when its parameter is a tail struct page
>     that has the PG_head flag set (see the sketch after this list).
>
>     To keep the code evolution route clearer, this can be a separate
>     patch after this patchset is solid.
>
>   - Support for other architectures (e.g. aarch64).
>   - Enable PMD/huge page mapping of the vmemmap even when this feature is
>     enabled.
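>
>     For reference, below is a simplified copy of the current mainline
>     compound_head(), with a comment marking where the adjustment from the
>     first todo item would go. This is only an illustration of the idea,
>     not proposed code:
>
> 	static inline struct page *compound_head(struct page *page)
> 	{
> 		unsigned long head = READ_ONCE(page->compound_head);
>
> 		if (unlikely(head & 1))
> 			return (struct page *)(head - 1);
> 		/*
> 		 * Proposed tweak: if PG_head is set but this struct page is
> 		 * really a remapped alias of the head page's struct page,
> 		 * compute and return the real head here instead of 'page'.
> 		 */
> 		return page;
> 	}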
>
> Changelog in v12 -> v13:
>   - Remove VM_WARN_ON_PAGE macro.
>   - Add more comments in vmemmap_pte_range() and vmemmap_remap_free().
>
>   Thanks to Oscar and Mike's suggestions and review.

Hi Oscar and Mike,

Any suggestions about this version? Looking forward to your
review. Thanks a lot.

>
> Changelog in v11 -> v12:
>   - Move VM_WARN_ON_PAGE to a separate patch.
>   - Call __free_hugepage() with hugetlb_lock (See patch #5.) to serialize
>     with dissolve_free_huge_page(). It is to prepare for patch #9.
>   - Introduce PageHugeInflight. See patch #9.
>
> Changelog in v10 -> v11:
>   - Fix compiler error when !CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
>   - Rework some comments and commit changes.
>   - Rework vmemmap_remap_free() to 3 parameters.
>
>   Thanks to Oscar and Mike's suggestions and review.
>
> Changelog in v9 -> v10:
>   - Fix a bug in patch #11. Thanks to Oscar for pointing that out.
>   - Rework some commit logs and comments. Thanks Mike and Oscar for the suggestions.
>   - Drop VMEMMAP_TAIL_PAGE_REUSE in the patch #3.
>
>   Thank you very much Mike and Oscar for reviewing the code.
>
> Changelog in v8 -> v9:
>   - Rework some code. Many thanks to Oscar.
>   - Put all the non-hugetlb vmemmap functions under sparsemem-vmemmap.c.
>
> Changelog in v7 -> v8:
>   - Adjust the order of patches.
>
>   Many thanks to David and Oscar. Your suggestions are very valuable.
>
> Changelog in v6 -> v7:
>   - Rebase to linux-next 20201130
>   - Do not use basepage mapping for vmemmap when this feature is disabled.
>   - Rework some patches:
>     [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
>     [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
>
>   Thanks to Oscar and Barry.
>
> Changelog in v5 -> v6:
>   - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
>   - Simplify the first version code.
>
> Changelog in v4 -> v5:
>   - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].
>
>   Thanks to Mike and Oscar's suggestions.
>
> Changelog in v3 -> v4:
>   - Move all the vmemmap functions to hugetlb_vmemmap.c.
>   - Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
>     disable this feature, we should do so via a kernel boot command line option.
>   - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
>   - Initialize page table lock for vmemmap through core_initcall mechanism.
>
>   Thanks to Mike and Oscar for their suggestions.
>
> Changelog in v2 -> v3:
>   - Rename some helper functions. Thanks Mike.
>   - Rework some code. Thanks Mike and Oscar.
>   - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
>     Thanks Matthew.
>   - Add some overhead analysis in the cover letter.
>   - Use the vmemmap PMD table lock instead of a hugetlb-specific global lock.
>
> Changelog in v1 -> v2:
>   - Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
>   - Fix some typo and code style problems.
>   - Remove unused handle_vmemmap_fault().
>   - Merge some commits to one commit suggested by Mike.
>
> Muchun Song (12):
>   mm: memory_hotplug: factor out bootmem core functions to
>     bootmem_info.c
>   mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
>   mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
>   mm: hugetlb: defer freeing of HugeTLB pages
>   mm: hugetlb: allocate the vmemmap pages associated with each HugeTLB
>     page
>   mm: hugetlb: set the PageHWPoison to the raw error page
>   mm: hugetlb: flush work when dissolving a HugeTLB page
>   mm: hugetlb: introduce PageHugeInflight
>   mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
>   mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
>   mm: hugetlb: gather discrete indexes of tail page
>   mm: hugetlb: optimize the code with the help of the compiler
>
>  Documentation/admin-guide/kernel-parameters.txt |  14 ++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
>  arch/x86/mm/init_64.c                           |  13 +-
>  fs/Kconfig                                      |  18 ++
>  include/linux/bootmem_info.h                    |  65 ++++++
>  include/linux/hugetlb.h                         |  37 ++++
>  include/linux/hugetlb_cgroup.h                  |  15 +-
>  include/linux/memory_hotplug.h                  |  27 ---
>  include/linux/mm.h                              |   5 +
>  mm/Makefile                                     |   2 +
>  mm/bootmem_info.c                               | 124 +++++++++++
>  mm/hugetlb.c                                    | 218 +++++++++++++++++--
>  mm/hugetlb_vmemmap.c                            | 278 ++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h                            |  45 ++++
>  mm/memory_hotplug.c                             | 116 ----------
>  mm/sparse-vmemmap.c                             | 273 +++++++++++++++++++++++
>  mm/sparse.c                                     |   1 +
>  17 files changed, 1082 insertions(+), 172 deletions(-)
>  create mode 100644 include/linux/bootmem_info.h
>  create mode 100644 mm/bootmem_info.c
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h
>
> --
> 2.11.0
>


Thread overview: 62+ messages
2021-01-17 15:10 Muchun Song
2021-01-17 15:10 ` [PATCH v13 01/12] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
2021-01-25  2:48   ` Miaohe Lin
2021-01-17 15:10 ` [PATCH v13 02/12] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2021-01-24 23:58   ` David Rientjes
2021-01-25  3:16     ` Randy Dunlap
2021-01-25  4:06     ` [External] " Muchun Song
2021-01-25  4:08       ` Randy Dunlap
2021-01-25  5:06         ` Muchun Song
2021-01-25 18:47           ` David Rientjes
2021-01-26  2:45             ` Muchun Song
2021-01-26 20:13               ` David Rientjes
2021-01-26  2:07   ` Miaohe Lin
2021-01-17 15:10 ` [PATCH v13 03/12] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
2021-01-17 15:29   ` Muchun Song
2021-01-23  0:59   ` Mike Kravetz
2021-01-23  3:22     ` [External] " Muchun Song
2021-01-23 17:52   ` Oscar Salvador
2021-01-24  6:48     ` [External] " Muchun Song
2021-01-17 15:10 ` [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages Muchun Song
2021-01-24 23:55   ` David Rientjes
2021-01-25  3:58     ` [External] " Muchun Song
2021-01-17 15:10 ` [PATCH v13 05/12] mm: hugetlb: allocate the vmemmap pages associated with each HugeTLB page Muchun Song
2021-01-25  0:05   ` David Rientjes
2021-01-25  6:40     ` [External] " Muchun Song
2021-01-25  7:41       ` Muchun Song
2021-01-25  9:15         ` David Hildenbrand
2021-01-25  9:34           ` Muchun Song
2021-01-25 23:25             ` Mike Kravetz
2021-01-26  7:48               ` Oscar Salvador
2021-01-26  9:29   ` Oscar Salvador
2021-01-26  9:36     ` David Hildenbrand
2021-01-26 14:58       ` Oscar Salvador
2021-01-26 15:10         ` David Hildenbrand
2021-01-26 15:34           ` Oscar Salvador
2021-01-26 15:56             ` David Hildenbrand
2021-01-27 10:36               ` David Hildenbrand
2021-01-28 12:37                 ` [External] " Muchun Song
2021-01-28 13:08                   ` Muchun Song
2021-01-29  1:04                   ` Mike Kravetz
2021-01-29  6:56                     ` Muchun Song
2021-02-01 16:10                     ` David Hildenbrand
2021-02-02  0:05                       ` Mike Kravetz
2021-01-28 22:29                 ` Oscar Salvador
2021-01-29  6:16                   ` [External] " Muchun Song
2021-02-01 15:50                   ` David Hildenbrand
2021-01-17 15:10 ` [PATCH v13 06/12] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
2021-01-25  0:06   ` David Rientjes
2021-01-25  5:06     ` [External] " Muchun Song
2021-01-17 15:10 ` [PATCH v13 07/12] mm: hugetlb: flush work when dissolving a HugeTLB page Muchun Song
2021-01-17 15:10 ` [PATCH v13 08/12] mm: hugetlb: introduce PageHugeInflight Muchun Song
2021-01-17 15:10 ` [PATCH v13 09/12] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
2021-01-25 11:43   ` David Hildenbrand
2021-01-25 12:08     ` Oscar Salvador
2021-01-25 12:31       ` [External] " Muchun Song
2021-01-25 12:30     ` Muchun Song
2021-01-17 15:10 ` [PATCH v13 10/12] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2021-01-17 15:10 ` [PATCH v13 11/12] mm: hugetlb: gather discrete indexes of tail page Muchun Song
2021-01-17 15:10 ` [PATCH v13 12/12] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
2021-01-20 12:52 ` Muchun Song [this message]
2021-01-20 13:10   ` [PATCH v13 00/12] Free some vmemmap pages of HugeTLB page Oscar Salvador
2021-01-20 14:22     ` [External] " Muchun Song
