* [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
@ 2026-04-21 4:24 Hrushikesh Salunke
2026-04-21 9:27 ` Vlastimil Babka (SUSE)
2026-04-21 13:44 ` Zi Yan
0 siblings, 2 replies; 6+ messages in thread
From: Hrushikesh Salunke @ 2026-04-21 4:24 UTC (permalink / raw)
To: akpm, david, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
jackmanb, hannes, ziy
Cc: linux-mm, linux-kernel, rkodsara, bharata, ankur.a.arora,
shivankg, hsalunke
When init_on_alloc is enabled, kernel_init_pages() clears every page
one at a time via clear_highpage_kasan_tagged(), which incurs per-page
kmap_local_page()/kunmap_local() overhead and prevents the architecture
clearing primitive from operating on contiguous ranges.
Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
clearing helper that calls clear_pages() for the full contiguous range
on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
a single invocation of the arch clearing primitive across the entire
allocation. The HIGHMEM path falls back to per-page clearing since
those pages require kmap.
Use it in kernel_init_pages() to replace the per-page loop.
Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
Before: 0.445s
After: 0.166s (-62.7%, 2.68x faster)
Kernel time (sys) reduction per workload with init_on_alloc=1:
Workload          Before      After       Change
Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
Graph500 16C32T   15m 56.7s   9m 43.7s    -39.0%
Pagerank 32T      1m 58.5s    1m 12.8s    -38.5%
Pagerank 128T     2m 36.3s    1m 40.4s    -35.7%
Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
---
base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
Changes since v1:
- Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
kernel_init_pages() runs inside the page allocator and can be
called from atomic context, making cond_resched() unsafe. The
original code never had a cond_resched() here, and the
performance gain comes from batching, not rescheduling.
- Moved the !HIGHMEM/HIGHMEM branching into a new
clear_highpages_kasan_tagged() helper in highmem.h, per David's
suggestion.
include/linux/highmem.h | 12 ++++++++++++
mm/page_alloc.c | 5 +----
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index af03db851a1d..ad0f42d06ce6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
 	kunmap_local(kaddr);
 }
+static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
+{
+	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
+		clear_pages(kasan_reset_tag(page_address(page)), numpages);
+	} else {
+		int i;
+
+		for (i = 0; i < numpages; i++)
+			clear_highpage_kasan_tagged(page + i);
+	}
+}
+
 #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
 /* Return false to let people know we did not initialize the pages */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b1c5430cad4e..1aaf7f839ff4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1220,12 +1220,9 @@ static inline bool should_skip_kasan_poison(struct page *page)
 static void kernel_init_pages(struct page *page, int numpages)
 {
-	int i;
-
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
-	for (i = 0; i < numpages; i++)
-		clear_highpage_kasan_tagged(page + i);
+	clear_highpages_kasan_tagged(page, numpages);
 	kasan_enable_current();
 }
--
2.43.0
* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
2026-04-21 4:24 [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages() Hrushikesh Salunke
@ 2026-04-21 9:27 ` Vlastimil Babka (SUSE)
2026-04-21 13:44 ` Zi Yan
1 sibling, 0 replies; 6+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-04-21 9:27 UTC (permalink / raw)
To: Hrushikesh Salunke, akpm, david, ljs, Liam.Howlett, rppt, surenb,
mhocko, jackmanb, hannes, ziy
Cc: linux-mm, linux-kernel, rkodsara, bharata, ankur.a.arora, shivankg
On 4/21/26 06:24, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
>
> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
> clearing helper that calls clear_pages() for the full contiguous range
> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
> a single invocation of the arch clearing primitive across the entire
> allocation. The HIGHMEM path falls back to per-page clearing since
> those pages require kmap.
>
> Use it in kernel_init_pages() to replace the per-page loop.
>
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>
> Before: 0.445s
> After: 0.166s (-62.7%, 2.68x faster)
>
> Kernel time (sys) reduction per workload with init_on_alloc=1:
>
> Workload          Before      After       Change
> Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
> Graph500 16C32T   15m 56.7s   9m 43.7s    -39.0%
> Pagerank 32T      1m 58.5s    1m 12.8s    -38.5%
> Pagerank 128T     2m 36.3s    1m 40.4s    -35.7%
>
> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
> ---
> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>
> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
>
> Changes since v1:
> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
> kernel_init_pages() runs inside the page allocator and can be
> called from atomic context, making cond_resched() unsafe. The
> original code never had a cond_resched() here, and the
> performance gain comes from batching, not rescheduling.
>
> - Moved the !HIGHMEM/HIGHMEM branching into a new
> clear_highpages_kasan_tagged() helper in highmem.h, per David's
> suggestion.
>
> include/linux/highmem.h | 12 ++++++++++++
> mm/page_alloc.c | 5 +----
> 2 files changed, 13 insertions(+), 4 deletions(-)
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index af03db851a1d..ad0f42d06ce6 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
> kunmap_local(kaddr);
> }
>
> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
> +{
> + if (!IS_ENABLED(CONFIG_HIGHMEM)) {
> + clear_pages(kasan_reset_tag(page_address(page)), numpages);
> + } else {
> + int i;
> +
> + for (i = 0; i < numpages; i++)
> + clear_highpage_kasan_tagged(page + i);
> + }
> +}
> +
> #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
>
> /* Return false to let people know we did not initialize the pages */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b1c5430cad4e..1aaf7f839ff4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1220,12 +1220,9 @@ static inline bool should_skip_kasan_poison(struct page *page)
>
> static void kernel_init_pages(struct page *page, int numpages)
> {
> - int i;
> -
> /* s390's use of memset() could override KASAN redzones. */
> kasan_disable_current();
> - for (i = 0; i < numpages; i++)
> - clear_highpage_kasan_tagged(page + i);
> + clear_highpages_kasan_tagged(page, numpages);
> kasan_enable_current();
> }
>
* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
2026-04-21 4:24 [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages() Hrushikesh Salunke
2026-04-21 9:27 ` Vlastimil Babka (SUSE)
@ 2026-04-21 13:44 ` Zi Yan
2026-04-21 13:57 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 6+ messages in thread
From: Zi Yan @ 2026-04-21 13:44 UTC (permalink / raw)
To: Hrushikesh Salunke
Cc: akpm, david, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
jackmanb, hannes, linux-mm, linux-kernel, rkodsara, bharata,
ankur.a.arora, shivankg
On 21 Apr 2026, at 0:24, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
>
> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
> clearing helper that calls clear_pages() for the full contiguous range
> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
> a single invocation of the arch clearing primitive across the entire
> allocation. The HIGHMEM path falls back to per-page clearing since
> those pages require kmap.
>
> Use it in kernel_init_pages() to replace the per-page loop.
>
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>
> Before: 0.445s
> After: 0.166s (-62.7%, 2.68x faster)
>
> Kernel time (sys) reduction per workload with init_on_alloc=1:
>
> Workload          Before      After       Change
> Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
> Graph500 16C32T   15m 56.7s   9m 43.7s    -39.0%
> Pagerank 32T      1m 58.5s    1m 12.8s    -38.5%
> Pagerank 128T     2m 36.3s    1m 40.4s    -35.7%
>
> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
> ---
> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>
> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
>
> Changes since v1:
> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
> kernel_init_pages() runs inside the page allocator and can be
> called from atomic context, making cond_resched() unsafe. The
> original code never had a cond_resched() here, and the
> performance gain comes from batching, not rescheduling.
>
> - Moved the !HIGHMEM/HIGHMEM branching into a new
> clear_highpages_kasan_tagged() helper in highmem.h, per David's
> suggestion.
>
> include/linux/highmem.h | 12 ++++++++++++
> mm/page_alloc.c | 5 +----
> 2 files changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index af03db851a1d..ad0f42d06ce6 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
> kunmap_local(kaddr);
> }
>
> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
> +{
> + if (!IS_ENABLED(CONFIG_HIGHMEM)) {
> + clear_pages(kasan_reset_tag(page_address(page)), numpages);
kasan_reset_tag() here removes the tag from page address, so that
clear_pages() can use the right kaddr. I thought each page needs
a kasan_reset_tag(). No need to respond here, as I am reading
the code and trying to understand how it works.
Acked-by: Zi Yan <ziy@nvidia.com>
> + } else {
> + int i;
> +
> + for (i = 0; i < numpages; i++)
> + clear_highpage_kasan_tagged(page + i);
> + }
> +}
> +
> #ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGES
>
> /* Return false to let people know we did not initialize the pages */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b1c5430cad4e..1aaf7f839ff4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1220,12 +1220,9 @@ static inline bool should_skip_kasan_poison(struct page *page)
>
> static void kernel_init_pages(struct page *page, int numpages)
> {
> - int i;
> -
> /* s390's use of memset() could override KASAN redzones. */
> kasan_disable_current();
> - for (i = 0; i < numpages; i++)
> - clear_highpage_kasan_tagged(page + i);
> + clear_highpages_kasan_tagged(page, numpages);
> kasan_enable_current();
> }
>
> --
> 2.43.0
Best Regards,
Yan, Zi
* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
2026-04-21 13:44 ` Zi Yan
@ 2026-04-21 13:57 ` David Hildenbrand (Arm)
2026-04-21 14:03 ` Zi Yan
0 siblings, 1 reply; 6+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-21 13:57 UTC (permalink / raw)
To: Zi Yan, Hrushikesh Salunke
Cc: akpm, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, jackmanb,
hannes, linux-mm, linux-kernel, rkodsara, bharata, ankur.a.arora,
shivankg
On 4/21/26 15:44, Zi Yan wrote:
> On 21 Apr 2026, at 0:24, Hrushikesh Salunke wrote:
>
>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
>> kmap_local_page()/kunmap_local() overhead and prevents the architecture
>> clearing primitive from operating on contiguous ranges.
>>
>> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
>> clearing helper that calls clear_pages() for the full contiguous range
>> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
>> a single invocation of the arch clearing primitive across the entire
>> allocation. The HIGHMEM path falls back to per-page clearing since
>> those pages require kmap.
>>
>> Use it in kernel_init_pages() to replace the per-page loop.
>>
>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>
>> Before: 0.445s
>> After: 0.166s (-62.7%, 2.68x faster)
>>
>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>
>> Workload          Before      After       Change
>> Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
>> Graph500 16C32T   15m 56.7s   9m 43.7s    -39.0%
>> Pagerank 32T      1m 58.5s    1m 12.8s    -38.5%
>> Pagerank 128T     2m 36.3s    1m 40.4s    -35.7%
>>
>> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
>> ---
>> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>>
>> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
>>
>> Changes since v1:
>> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
>> kernel_init_pages() runs inside the page allocator and can be
>> called from atomic context, making cond_resched() unsafe. The
>> original code never had a cond_resched() here, and the
>> performance gain comes from batching, not rescheduling.
>>
>> - Moved the !HIGHMEM/HIGHMEM branching into a new
>> clear_highpages_kasan_tagged() helper in highmem.h, per David's
>> suggestion.
>>
>> include/linux/highmem.h | 12 ++++++++++++
>> mm/page_alloc.c | 5 +----
>> 2 files changed, 13 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>> index af03db851a1d..ad0f42d06ce6 100644
>> --- a/include/linux/highmem.h
>> +++ b/include/linux/highmem.h
>> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>> kunmap_local(kaddr);
>> }
>>
>> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
>> +{
>> + if (!IS_ENABLED(CONFIG_HIGHMEM)) {
>> + clear_pages(kasan_reset_tag(page_address(page)), numpages);
>
> kasan_reset_tag() here removes the tag from page address, so that
> clear_pages() can use the right kaddr. I thought each page needs
> a kasan_reset_tag(). No need to respond here, as I am reading
> the code and trying to understand how it works.
It's all confusing. But we really just turn the pointer into an untagged
pointer here, once.
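Simplified illustration only (not the exact kernel code, which is
arch-specific): with SW/HW tags the KASAN tag sits in the top byte of
the kernel virtual address, so resetting it amounts to rewriting that
byte to the default kernel tag, roughly:

	/* strip whatever tag is there and install the default 0xff tag */
	kaddr = (void *)(((u64)kaddr & ~(0xffULL << 56)) | (0xffULL << 56));

Consecutive pages in the linear map only differ in the low address
bits, so adding page offsets to that untagged base pointer never
touches the tag byte, and one reset covers the whole range.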
So I think this is ok.
I do wonder, though, whether we want to move the
kasan_disable_current/kasan_enable_current into the
clear_highpages_kasan_tagged().
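Untested sketch, just to make the idea concrete (assuming
kasan_disable_current()/kasan_enable_current() from <linux/kasan.h>
are usable in highmem.h):

static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
{
	/* s390's use of memset() could override KASAN redzones. */
	kasan_disable_current();
	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
		clear_pages(kasan_reset_tag(page_address(page)), numpages);
	} else {
		int i;

		for (i = 0; i < numpages; i++)
			clear_highpage_kasan_tagged(page + i);
	}
	kasan_enable_current();
}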
--
Cheers,
David
* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
2026-04-21 13:57 ` David Hildenbrand (Arm)
@ 2026-04-21 14:03 ` Zi Yan
2026-04-21 14:06 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 6+ messages in thread
From: Zi Yan @ 2026-04-21 14:03 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Hrushikesh Salunke, akpm, ljs, Liam.Howlett, vbabka, rppt,
surenb, mhocko, jackmanb, hannes, linux-mm, linux-kernel,
rkodsara, bharata, ankur.a.arora, shivankg
On 21 Apr 2026, at 9:57, David Hildenbrand (Arm) wrote:
> On 4/21/26 15:44, Zi Yan wrote:
>> On 21 Apr 2026, at 0:24, Hrushikesh Salunke wrote:
>>
>>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>>> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
>>> kmap_local_page()/kunmap_local() overhead and prevents the architecture
>>> clearing primitive from operating on contiguous ranges.
>>>
>>> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
>>> clearing helper that calls clear_pages() for the full contiguous range
>>> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
>>> a single invocation of the arch clearing primitive across the entire
>>> allocation. The HIGHMEM path falls back to per-page clearing since
>>> those pages require kmap.
>>>
>>> Use it in kernel_init_pages() to replace the per-page loop.
>>>
>>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>>
>>> Before: 0.445s
>>> After: 0.166s (-62.7%, 2.68x faster)
>>>
>>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>>
>>> Workload          Before      After       Change
>>> Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
>>> Graph500 16C32T   15m 56.7s   9m 43.7s    -39.0%
>>> Pagerank 32T      1m 58.5s    1m 12.8s    -38.5%
>>> Pagerank 128T     2m 36.3s    1m 40.4s    -35.7%
>>>
>>> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
>>> ---
>>> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>>>
>>> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
>>>
>>> Changes since v1:
>>> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
>>> kernel_init_pages() runs inside the page allocator and can be
>>> called from atomic context, making cond_resched() unsafe. The
>>> original code never had a cond_resched() here, and the
>>> performance gain comes from batching, not rescheduling.
>>>
>>> - Moved the !HIGHMEM/HIGHMEM branching into a new
>>> clear_highpages_kasan_tagged() helper in highmem.h, per David's
>>> suggestion.
>>>
>>> include/linux/highmem.h | 12 ++++++++++++
>>> mm/page_alloc.c | 5 +----
>>> 2 files changed, 13 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>>> index af03db851a1d..ad0f42d06ce6 100644
>>> --- a/include/linux/highmem.h
>>> +++ b/include/linux/highmem.h
>>> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>>> kunmap_local(kaddr);
>>> }
>>>
>>> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
>>> +{
>>> + if (!IS_ENABLED(CONFIG_HIGHMEM)) {
>>> + clear_pages(kasan_reset_tag(page_address(page)), numpages);
>>
>> kasan_reset_tag() here removes the tag from page address, so that
>> clear_pages() can use the right kaddr. I thought each page needs
>> a kasan_reset_tag(). No need to respond here, as I am reading
>> the code and trying to understand how it works.
>
> It's all confusing. But we really just turn the pointer into an untagged
> pointer here, once.
Yes, I realized that after reading kasan_reset_tag() implementation.
>
> So I think this is ok.
>
> I do wonder, though, whether we want to move the
> kasan_disable_current/kasan_enable_current into the
> clear_highpages_kasan_tagged().
This sounds reasonable to me. And also replace kernel_init_pages()
with clear_highpages_kasan_tagged(), since kernel_init_pages()
will be a wrapper then.
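Roughly (illustrative sketch only, not an exact diff, and assuming the
kasan_disable_current()/kasan_enable_current() pair moves into the
helper as suggested above):

	/*
	 * e.g. in post_alloc_hook() / free_pages_prepare(), instead of
	 * kernel_init_pages(page, 1 << order):
	 */
	if (init)
		clear_highpages_kasan_tagged(page, 1 << order);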
Best Regards,
Yan, Zi
* Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
2026-04-21 14:03 ` Zi Yan
@ 2026-04-21 14:06 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-21 14:06 UTC (permalink / raw)
To: Zi Yan
Cc: Hrushikesh Salunke, akpm, ljs, Liam.Howlett, vbabka, rppt,
surenb, mhocko, jackmanb, hannes, linux-mm, linux-kernel,
rkodsara, bharata, ankur.a.arora, shivankg
On 4/21/26 16:03, Zi Yan wrote:
> On 21 Apr 2026, at 9:57, David Hildenbrand (Arm) wrote:
>
>> On 4/21/26 15:44, Zi Yan wrote:
>>>
>>>
>>> kasan_reset_tag() here removes the tag from page address, so that
>>> clear_pages() can use the right kaddr. I thought each page needs
>>> a kasan_reset_tag(). No need to respond here, as I am reading
>>> the code and trying to understand how it works.
>>
>> It's all confusing. But we really just turn the pointer into an untagged
>> pointer here, once.
>
> Yes, I realized that after reading kasan_reset_tag() implementation.
>
>>
>> So I think this is ok.
>>
>> I do wonder, though, whether we want to move the
>> kasan_disable_current/kasan_enable_current into the
>> clear_highpages_kasan_tagged().
>
> This sounds reasonable to me. And also replace kernel_init_pages()
> with clear_highpages_kasan_tagged(), since kernel_init_pages()
> will be a wrapper then.
Agreed.
--
Cheers,
David