* [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
@ 2023-09-05 10:35 Kefeng Wang
  2023-09-06  2:41 ` Muchun Song
  2023-09-06  2:47 ` Matthew Wilcox
  0 siblings, 2 replies; 13+ messages in thread
From: Kefeng Wang @ 2023-09-05 10:35 UTC (permalink / raw)
  To: Andrew Morton, Mike Kravetz, Muchun Song, linux-mm; +Cc: Kefeng Wang, Yuan Can

4095 pages (for a 1G HugeTLB page) or 7 pages (for a 2M one) need to be
allocated at once in alloc_vmemmap_page_list(), so add a bulk allocator
variant, alloc_pages_bulk_list_node(), and switch alloc_vmemmap_page_list()
over to it to speed up page allocation.

A simple test on arm64 under QEMU with 1G HugeTLB shows 870,842ns vs
3,845,252ns; even with some run-to-run fluctuation, it is still a nice
improvement.
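
For reference, a minimal sketch of how such a timing could be taken. The
wrapper name, the pr_info() reporting, and the ktime-based measurement are
illustrative assumptions and are not part of this patch; it also assumes the
(start, end, list) signature visible in the hunk below:

#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/printk.h>

/*
 * Hypothetical helper: time one call to alloc_vmemmap_page_list() and
 * report the elapsed nanoseconds.
 */
static int timed_alloc_vmemmap_page_list(unsigned long start, unsigned long end,
					 struct list_head *list)
{
	u64 t0 = ktime_get_ns();
	int ret = alloc_vmemmap_page_list(start, end, list);

	pr_info("alloc_vmemmap_page_list: %llu ns\n", ktime_get_ns() - t0);
	return ret;
}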

Tested-by: Yuan Can <yuancan@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
resend: fix 'allocated' spelling and decrease nr_pages in the fallback logic

 include/linux/gfp.h  | 9 +++++++++
 mm/hugetlb_vmemmap.c | 6 ++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 665f06675c83..d6e82f15b61f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -195,6 +195,15 @@ alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list)
 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL);
 }
 
+static inline unsigned long
+alloc_pages_bulk_list_node(gfp_t gfp, int nid, unsigned long nr_pages, struct list_head *list)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, list, NULL);
+}
+
 static inline unsigned long
 alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
 {
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4b9734777f69..786e581703c7 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -384,7 +384,13 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
 	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
 	int nid = page_to_nid((struct page *)start);
 	struct page *page, *next;
+	unsigned long nr_allocated;
 
+	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
+	if (!nr_allocated)
+		return -ENOMEM;
+
+	nr_pages -= nr_allocated;
 	while (nr_pages--) {
 		page = alloc_pages_node(nid, gfp_mask, 0);
 		if (!page)
-- 
2.27.0
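
For context, a sketch of how alloc_vmemmap_page_list() reads once this hunk
is applied. The gfp flags, the list handling, and the error path are not
visible in the hunk above; they are reconstructed from the surrounding
upstream code as an assumption, so details may differ from the exact tree
this patch was written against:

static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
				   struct list_head *list)
{
	/* gfp flags assumed from the upstream function, not shown in the hunk. */
	gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	int nid = page_to_nid((struct page *)start);
	struct page *page, *next;
	unsigned long nr_allocated;

	/* Try to satisfy the whole request with one bulk call. */
	nr_allocated = alloc_pages_bulk_list_node(gfp_mask, nid, nr_pages, list);
	if (!nr_allocated)
		return -ENOMEM;

	/* Fall back to single-page allocations for whatever is left over. */
	nr_pages -= nr_allocated;
	while (nr_pages--) {
		page = alloc_pages_node(nid, gfp_mask, 0);
		if (!page)
			goto out;
		list_add(&page->lru, list);
	}

	return 0;
out:
	/* Error path assumed: release everything already gathered on the list. */
	list_for_each_entry_safe(page, next, list, lru)
		__free_page(page);
	return -ENOMEM;
}

The key point of the fallback is that a partial bulk allocation is not an
error: nr_pages is reduced by nr_allocated and the existing loop only tops
up the remainder.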



