linux-mm.kvack.org archive mirror
* [PATCH v4 1/2] mm/sparse.c: Use kvmalloc/kvfree to alloc/free memmap for the classic sparse
@ 2020-03-16 10:21 Baoquan He
  2020-03-16 10:21 ` [PATCH v4 2/2] mm/sparse.c: allocate memmap preferring the given node Baoquan He
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Baoquan He @ 2020-03-16 10:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, mhocko, akpm, david, willy, richard.weiyang, vbabka, bhe

Replace the open-coded alloc_pages()/vmalloc() fallback with kvmalloc()/kvfree().
This makes populate_section_memmap()/depopulate_section_memmap() much simpler.

Suggested-by: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
v3->v4:
  Split the old v3 into two patches, carving out the use of 'nid' as
  the preferred node for the memmap allocation into a separate patch
  (patch 2), as suggested by Michal.

v2->v3:
  Remove __GFP_NOWARN and use array_size when calling kvmalloc_node()
  per Matthew's comments.
  http://lkml.kernel.org/r/20200312141749.GL27711@MiWiFi-R3L-srv

 mm/sparse.c | 27 +++------------------------
 1 file changed, 3 insertions(+), 24 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index e747a238a860..d01d09cc7d99 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -719,35 +719,14 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 struct page * __meminit populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-	struct page *page, *ret;
-	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
-
-	page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
-	if (page)
-		goto got_map_page;
-
-	ret = vmalloc(memmap_size);
-	if (ret)
-		goto got_map_ptr;
-
-	return NULL;
-got_map_page:
-	ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
-got_map_ptr:
-
-	return ret;
+	return kvmalloc(array_size(sizeof(struct page),
+			PAGES_PER_SECTION), GFP_KERNEL);
 }
 
 static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
 		struct vmem_altmap *altmap)
 {
-	struct page *memmap = pfn_to_page(pfn);
-
-	if (is_vmalloc_addr(memmap))
-		vfree(memmap);
-	else
-		free_pages((unsigned long)memmap,
-			   get_order(sizeof(struct page) * PAGES_PER_SECTION));
+	kvfree(pfn_to_page(pfn));
 }
 
 static void free_map_bootmem(struct page *memmap)
-- 
2.17.2





Thread overview: 13+ messages
2020-03-16 10:21 [PATCH v4 1/2] mm/sparse.c: Use kvmalloc/kvfree to alloc/free memmap for the classic sparse Baoquan He
2020-03-16 10:21 ` [PATCH v4 2/2] mm/sparse.c: allocate memmap preferring the given node Baoquan He
2020-03-16 12:56   ` [PATCH v5 " Baoquan He
2020-03-16 16:28     ` Pankaj Gupta
2020-03-16 16:29     ` David Hildenbrand
2020-03-16 22:16     ` Wei Yang
2020-03-24  1:07     ` Baoquan He
2020-03-16 11:00 ` [PATCH v4 1/2] mm/sparse.c: Use kvmalloc/kvfree to alloc/free memmap for the classic sparse David Hildenbrand
2020-03-16 12:40   ` Baoquan He
2020-03-16 11:17 ` Pankaj Gupta
2020-03-16 12:18   ` Matthew Wilcox
2020-03-16 12:54 ` [PATCH v5 " Baoquan He
2020-03-16 22:16   ` Wei Yang
