linux-mm.kvack.org archive mirror
* [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
@ 2026-04-21  4:24 Hrushikesh Salunke
  2026-04-21  9:27 ` Vlastimil Babka (SUSE)
  0 siblings, 1 reply; 2+ messages in thread
From: Hrushikesh Salunke @ 2026-04-21  4:24 UTC (permalink / raw)
  To: akpm, david, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	jackmanb, hannes, ziy
  Cc: linux-mm, linux-kernel, rkodsara, bharata, ankur.a.arora,
	shivankg, hsalunke

When init_on_alloc is enabled, kernel_init_pages() clears every page
one at a time via clear_highpage_kasan_tagged(), which incurs per-page
kmap_local_page()/kunmap_local() overhead and prevents the architecture
clearing primitive from operating on contiguous ranges.

Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
clearing helper that calls clear_pages() for the full contiguous range
on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
a single invocation of the arch clearing primitive across the entire
allocation. The HIGHMEM path falls back to per-page clearing since
those pages require kmap.

Use it in kernel_init_pages() to replace the per-page loop.

Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:

  Before: 0.445s
  After:  0.166s  (-62.7%, 2.68x faster)

Kernel time (sys) reduction per workload with init_on_alloc=1:

  Workload            Before       After       Change
  Graph500 64C128T    30m 41.8s    15m 14.8s   -50.3%
  Graph500 16C32T     15m 56.7s     9m 43.7s   -39.0%
  Pagerank 32T         1m 58.5s     1m 12.8s   -38.5%
  Pagerank 128T        2m 36.3s     1m 40.4s   -35.7%

Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
---
base-commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7

v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/

Changes since v1:
- Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
  kernel_init_pages() runs inside the page allocator and can be
  called from atomic context, making cond_resched() unsafe. The
  original code never had a cond_resched() here, and the
  performance gain comes from batching, not rescheduling.

- Moved the !HIGHMEM/HIGHMEM branching into a new
  clear_highpages_kasan_tagged() helper in highmem.h, per David's
  suggestion.

 include/linux/highmem.h | 12 ++++++++++++
 mm/page_alloc.c         |  5 +----
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index af03db851a1d..ad0f42d06ce6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
 	kunmap_local(kaddr);
 }
 
+static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
+{
+	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
+		clear_pages(kasan_reset_tag(page_address(page)), numpages);
+	} else {
+		int i;
+
+		for (i = 0; i < numpages; i++)
+			clear_highpage_kasan_tagged(page + i);
+	}
+}
+
 #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
 
 /* Return false to let people know we did not initialize the pages */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b1c5430cad4e..1aaf7f839ff4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1220,12 +1220,9 @@ static inline bool should_skip_kasan_poison(struct page *page)
 
 static void kernel_init_pages(struct page *page, int numpages)
 {
-	int i;
-
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
-	for (i = 0; i < numpages; i++)
-		clear_highpage_kasan_tagged(page + i);
+	clear_highpages_kasan_tagged(page, numpages);
 	kasan_enable_current();
 }
 
-- 
2.43.0




* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
  2026-04-21  4:24 [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages() Hrushikesh Salunke
@ 2026-04-21  9:27 ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 2+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-04-21  9:27 UTC (permalink / raw)
  To: Hrushikesh Salunke, akpm, david, ljs, Liam.Howlett, rppt, surenb,
	mhocko, jackmanb, hannes, ziy
  Cc: linux-mm, linux-kernel, rkodsara, bharata, ankur.a.arora, shivankg

On 4/21/26 06:24, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
> 
> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
> clearing helper that calls clear_pages() for the full contiguous range
> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
> a single invocation of the arch clearing primitive across the entire
> allocation. The HIGHMEM path falls back to per-page clearing since
> those pages require kmap.
> 
> Use it in kernel_init_pages() to replace the per-page loop.
> 
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
> 
>   Before: 0.445s
>   After:  0.166s  (-62.7%, 2.68x faster)
> 
> Kernel time (sys) reduction per workload with init_on_alloc=1:
> 
>   Workload            Before       After       Change
>   Graph500 64C128T    30m 41.8s    15m 14.8s   -50.3%
>   Graph500 16C32T     15m 56.7s     9m 43.7s   -39.0%
>   Pagerank 32T         1m 58.5s     1m 12.8s   -38.5%
>   Pagerank 128T        2m 36.3s     1m 40.4s   -35.7%
> 
> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
> ---
> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
> 
> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
> 
> Changes since v1:
> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
>   kernel_init_pages() runs inside the page allocator and can be
>   called from atomic context, making cond_resched() unsafe. The 
>   original code never had a cond_resched() here, and the 
>   performance gain comes from batching, not rescheduling.
> 
> - Moved the !HIGHMEM/HIGHMEM branching into a new
>   clear_highpages_kasan_tagged() helper in highmem.h, per David's
>   suggestion.
> 
>  include/linux/highmem.h | 12 ++++++++++++
>  mm/page_alloc.c         |  5 +----
>  2 files changed, 13 insertions(+), 4 deletions(-)

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>



