From: Usama Arif <usama.arif@bytedance.com>
To: linux-mm@kvack.org, muchun.song@linux.dev,
mike.kravetz@oracle.com, rppt@kernel.org
Cc: linux-kernel@vger.kernel.org, fam.zheng@bytedance.com,
liangma@liangbit.com, simon.evans@bytedance.com,
punit.agrawal@bytedance.com,
Usama Arif <usama.arif@bytedance.com>
Subject: [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO
Date: Mon, 24 Jul 2023 14:46:44 +0100
Message-ID: <20230724134644.1299963-5-usama.arif@bytedance.com>
In-Reply-To: <20230724134644.1299963-1-usama.arif@bytedance.com>
If a reserved region is for hugepages and HVO is enabled, the struct
pages that HVO will free later do not need to be initialized. This can
save significant time when a large number of hugepages are allocated at
boot. As memmap_init_reserved_pages() is only called at boot time, we
don't need to worry about memory hotplug.

Hugepage regions are kept separate from non-hugepage regions in
memblock_merge_regions() so that the initialization of unused struct
pages can be skipped for the entire region.
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
mm/hugetlb_vmemmap.c | 2 +-
mm/hugetlb_vmemmap.h | 3 +++
mm/memblock.c | 27 ++++++++++++++++++++++-----
3 files changed, 26 insertions(+), 6 deletions(-)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
/**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3525c514c061..8b9a1563f7b9 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -58,4 +58,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
return hugetlb_vmemmap_optimizable_size(h) != 0;
}
bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
+extern bool vmemmap_optimize_enabled;
+
#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index e92d437bcb51..62072a0226de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -21,6 +21,7 @@
#include <linux/io.h>
#include "internal.h"
+#include "hugetlb_vmemmap.h"
#define INIT_MEMBLOCK_REGIONS 128
#define INIT_PHYSMEM_REGIONS 4
@@ -519,7 +520,8 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type,
if (this->base + this->size != next->base ||
memblock_get_region_node(this) !=
memblock_get_region_node(next) ||
- this->flags != next->flags) {
+ this->flags != next->flags ||
+ this->hugepage_size != next->hugepage_size) {
BUG_ON(this->base + this->size > next->base);
i++;
continue;
@@ -2125,10 +2127,25 @@ static void __init memmap_init_reserved_pages(void)
/* initialize struct pages for the reserved regions */
for_each_reserved_mem_region(region) {
nid = memblock_get_region_node(region);
- start = region->base;
- end = start + region->size;
-
- reserve_bootmem_region(start, end, nid);
+ /*
+ * If the region is for hugepages and if HVO is enabled, then those
+ * struct pages which will be freed later don't need to be initialized.
+ * This can save significant time when a large number of hugepages are
+ * allocated at boot time. As this is at boot time, we don't need to
+ * worry about memory hotplug.
+ */
+ if (region->hugepage_size && vmemmap_optimize_enabled) {
+ for (start = region->base;
+ start < region->base + region->size;
+ start += region->hugepage_size) {
+ end = start + HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+ reserve_bootmem_region(start, end, nid);
+ }
+ } else {
+ start = region->base;
+ end = start + region->size;
+ reserve_bootmem_region(start, end, nid);
+ }
}
}
--
2.25.1