linux-mm.kvack.org archive mirror
* [PATCH v2] mm/sparse: Remove sparse buffer pre-allocation mechanism
@ 2026-04-10  9:24 Muchun Song
  2026-04-12 16:26 ` Mike Rapoport
  0 siblings, 1 reply; 2+ messages in thread
From: Muchun Song @ 2026-04-10  9:24 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand
  Cc: Muchun Song, Muchun Song, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	linux-mm, linux-kernel

Commit 9bdac9142407 ("sparsemem: Put mem map for one node together.")
introduced a mechanism to pre-allocate a large memory block to hold all
memmaps for a NUMA node upfront.

However, the original commit message did not clearly state the actual
benefits or the necessity of explicitly pre-allocating a single chunk
for all memmap areas of a given node.

One of the concerns about removing this pre-allocation is that the
subsequent per-section memmap allocations could become scattered around,
and might turn too many memory blocks/sections into an "un-offlinable"
state. However, tests show that even without the explicit node-wide
pre-allocation, memblock still allocates memory closely and
back-to-back. When tracing vmemmap_set_pmd allocations, the physical
chunks allocated by memblock are strictly adjacent to each other in a
single contiguous physical range (allocated top-down). Because they are
naturally packed tightly together, they consume or pollute at most the
same number of memory blocks as the explicit pre-allocation did.
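A quick back-of-the-envelope check of why the per-section allocations pack
so tightly (a hedged sketch using illustrative x86-64 defaults, 4 KiB base
pages, 128 MiB sections, and a 64-byte struct page; these constants are
assumptions for illustration, not values read from the kernel):

```python
# Illustrative x86-64 defaults (assumptions, not read from the kernel).
PAGE_SIZE = 4096                       # 4 KiB base page
SECTION_SIZE = 128 * 1024 * 1024       # 128 MiB memory section
STRUCT_PAGE_SIZE = 64                  # sizeof(struct page)
PMD_SIZE = 2 * 1024 * 1024             # 2 MiB PMD mapping

pages_per_section = SECTION_SIZE // PAGE_SIZE            # 32768
section_map_size = pages_per_section * STRUCT_PAGE_SIZE  # memmap bytes/section

# Each per-section memmap is exactly one PMD-sized chunk, so consecutive
# top-down memblock allocations land physically back-to-back.
print(section_map_size == PMD_SIZE)
```

Under these assumptions every per-section memmap request is exactly PMD-sized
and PMD-aligned, which is consistent with the traced allocations being
strictly adjacent.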

Another concern is the boot performance impact of calling memmap_alloc()
multiple times compared to one large node-wide allocation. Tests on a
256GB VM showed that memmap allocation time increased from 199,555 ns
to 741,292 ns. Although this is 3.7x slower, even on a 1TB machine the
entire memmap allocation would still take only a few milliseconds, so
the boot performance difference is negligible.
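The milliseconds claim follows from scaling the 256GB measurement linearly
(an assumption: allocation cost grows with the number of sections):

```python
# Measured on the 256 GiB VM; scale linearly to 1 TiB (4x the sections).
ns_256g = 741_292
ns_1t = ns_256g * (1024 // 256)

print(ns_1t)        # total ns at 1 TiB
print(ns_1t / 1e6)  # the same figure in milliseconds, roughly 3 ms
```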

Since no negative impact on memory offlining behavior or noticeable
boot performance regression was found, this patch proposes removing
the explicit node-wide memmap pre-allocation mechanism to reduce the
maintenance burden.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
Changes in v2:
 - Addressed David Hildenbrand's and Mike Rapoport's concerns from the
   v1 discussion by incorporating the detailed memblock contiguous
   allocation analysis and the boot performance measurements directly
   into the commit message.
---
 include/linux/mm.h  |  1 -
 mm/sparse-vmemmap.c |  7 +-----
 mm/sparse.c         | 58 +--------------------------------------------
 3 files changed, 2 insertions(+), 64 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0b776907152e..1d676fef4303 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4855,7 +4855,6 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
-void *sparse_buffer_alloc(unsigned long size);
 unsigned long section_map_size(void);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 6eadb9d116e4..aca1b00e86dd 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -87,15 +87,10 @@ static void * __meminit altmap_alloc_block_buf(unsigned long size,
 void * __meminit vmemmap_alloc_block_buf(unsigned long size, int node,
 					 struct vmem_altmap *altmap)
 {
-	void *ptr;
-
 	if (altmap)
 		return altmap_alloc_block_buf(size, altmap);
 
-	ptr = sparse_buffer_alloc(size);
-	if (!ptr)
-		ptr = vmemmap_alloc_block(size, node);
-	return ptr;
+	return vmemmap_alloc_block(size, node);
 }
 
 static unsigned long __meminit vmem_altmap_next_pfn(struct vmem_altmap *altmap)
diff --git a/mm/sparse.c b/mm/sparse.c
index effdac6b0ab1..672e2ad396a8 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -241,12 +241,9 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
 		struct dev_pagemap *pgmap)
 {
 	unsigned long size = section_map_size();
-	struct page *map = sparse_buffer_alloc(size);
+	struct page *map;
 	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);
 
-	if (map)
-		return map;
-
 	map = memmap_alloc(size, size, addr, nid, false);
 	if (!map)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa\n",
@@ -256,55 +253,6 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
 }
 #endif /* !CONFIG_SPARSEMEM_VMEMMAP */
 
-static void *sparsemap_buf __meminitdata;
-static void *sparsemap_buf_end __meminitdata;
-
-static inline void __meminit sparse_buffer_free(unsigned long size)
-{
-	WARN_ON(!sparsemap_buf || size == 0);
-	memblock_free(sparsemap_buf, size);
-}
-
-static void __init sparse_buffer_init(unsigned long size, int nid)
-{
-	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);
-	WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
-	/*
-	 * Pre-allocated buffer is mainly used by __populate_section_memmap
-	 * and we want it to be properly aligned to the section size - this is
-	 * especially the case for VMEMMAP which maps memmap to PMDs
-	 */
-	sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
-	sparsemap_buf_end = sparsemap_buf + size;
-}
-
-static void __init sparse_buffer_fini(void)
-{
-	unsigned long size = sparsemap_buf_end - sparsemap_buf;
-
-	if (sparsemap_buf && size > 0)
-		sparse_buffer_free(size);
-	sparsemap_buf = NULL;
-}
-
-void * __meminit sparse_buffer_alloc(unsigned long size)
-{
-	void *ptr = NULL;
-
-	if (sparsemap_buf) {
-		ptr = (void *) roundup((unsigned long)sparsemap_buf, size);
-		if (ptr + size > sparsemap_buf_end)
-			ptr = NULL;
-		else {
-			/* Free redundant aligned space */
-			if ((unsigned long)(ptr - sparsemap_buf) > 0)
-				sparse_buffer_free((unsigned long)(ptr - sparsemap_buf));
-			sparsemap_buf = ptr + size;
-		}
-	}
-	return ptr;
-}
-
 void __weak __meminit vmemmap_populate_print_last(void)
 {
 }
@@ -362,8 +310,6 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
 		goto failed;
 	}
 
-	sparse_buffer_init(map_count * section_map_size(), nid);
-
 	sparse_vmemmap_init_nid_early(nid);
 
 	for_each_present_section_nr(pnum_begin, pnum) {
@@ -381,7 +327,6 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
 				       __func__, nid);
 				pnum_begin = pnum;
 				sparse_usage_fini();
-				sparse_buffer_fini();
 				goto failed;
 			}
 			memmap_boot_pages_add(DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
@@ -390,7 +335,6 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
 		}
 	}
 	sparse_usage_fini();
-	sparse_buffer_fini();
 	return;
 failed:
 	/*
-- 
2.20.1




* Re: [PATCH v2] mm/sparse: Remove sparse buffer pre-allocation mechanism
  2026-04-10  9:24 [PATCH v2] mm/sparse: Remove sparse buffer pre-allocation mechanism Muchun Song
@ 2026-04-12 16:26 ` Mike Rapoport
  0 siblings, 0 replies; 2+ messages in thread
From: Mike Rapoport @ 2026-04-12 16:26 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, David Hildenbrand, Muchun Song, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, linux-mm, linux-kernel

On Fri, Apr 10, 2026 at 05:24:19PM +0800, Muchun Song wrote:
> Commit 9bdac9142407 ("sparsemem: Put mem map for one node together.")
> introduced a mechanism to pre-allocate a large memory block to hold all
> memmaps for a NUMA node upfront.
> 
> However, the original commit message did not clearly state the actual
> benefits or the necessity of explicitly pre-allocating a single chunk
> for all memmap areas of a given node.
> 
> One of the concerns about removing this pre-allocation is that the
> subsequent per-section memmap allocations could become scattered around,
> and might turn too many memory blocks/sections into an "un-offlinable"
> state. However, tests show that even without the explicit node-wide
> pre-allocation, memblock still allocates memory closely and
> back-to-back. When tracing vmemmap_set_pmd allocations, the physical
> chunks allocated by memblock are strictly adjacent to each other in a
> single contiguous physical range (allocated top-down). Because they are
> naturally packed tightly together, they consume or pollute at most the
> same number of memory blocks as the explicit pre-allocation did.
> 
> Another concern is the boot performance impact of calling memmap_alloc()
> multiple times compared to one large node-wide allocation. Tests on a
> 256GB VM showed that memmap allocation time increased from 199,555 ns
> to 741,292 ns. Although this is 3.7x slower, even on a 1TB machine the
> entire memmap allocation would still take only a few milliseconds, so
> the boot performance difference is negligible.
> 
> Since no negative impact on memory offlining behavior or noticeable
> boot performance regression was found, this patch proposes removing
> the explicit node-wide memmap pre-allocation mechanism to reduce the
> maintenance burden.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

> ---
> Changes in v2:
>  - Addressed David Hildenbrand's and Mike Rapoport's concerns from the
>    v1 discussion by incorporating the detailed memblock contiguous
>    allocation analysis and the boot performance measurements directly
>    into the commit message.
> ---
>  include/linux/mm.h  |  1 -
>  mm/sparse-vmemmap.c |  7 +-----
>  mm/sparse.c         | 58 +--------------------------------------------
>  3 files changed, 2 insertions(+), 64 deletions(-)

-- 
Sincerely yours,
Mike.


