linux-mm.kvack.org archive mirror
From: Zi Yan <ziy@nvidia.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: Hrushikesh Salunke <hsalunke@amd.com>,
	akpm@linux-foundation.org, ljs@kernel.org,
	Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, jackmanb@google.com,
	hannes@cmpxchg.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, rkodsara@amd.com, bharata@amd.com,
	ankur.a.arora@oracle.com, shivankg@amd.com
Subject: Re: [PATCH v2] mm/page_alloc: use batch page clearing in kernel_init_pages()
Date: Tue, 21 Apr 2026 10:03:51 -0400	[thread overview]
Message-ID: <23A49388-EF68-4055-B620-54FD92FF2EC8@nvidia.com> (raw)
In-Reply-To: <a6182e60-9b81-43cd-aa25-09e54d20d13f@kernel.org>

On 21 Apr 2026, at 9:57, David Hildenbrand (Arm) wrote:

> On 4/21/26 15:44, Zi Yan wrote:
>> On 21 Apr 2026, at 0:24, Hrushikesh Salunke wrote:
>>
>>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>>> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
>>> kmap_local_page()/kunmap_local() overhead and prevents the architecture
>>> clearing primitive from operating on contiguous ranges.
>>>
>>> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
>>> clearing helper that calls clear_pages() for the full contiguous range
>>> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
>>> a single invocation of the arch clearing primitive across the entire
>>> allocation. The HIGHMEM path falls back to per-page clearing since
>>> those pages require kmap.
>>>
>>> Use it in kernel_init_pages() to replace the per-page loop.
>>>
>>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>>
>>>   Before: 0.445s
>>>   After:  0.166s  (-62.7%, 2.68x faster)
>>>
>>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>>
>>>   Workload            Before       After       Change
>>>   Graph500 64C128T    30m 41.8s    15m 14.8s   -50.3%
>>>   Graph500 16C32T     15m 56.7s     9m 43.7s   -39.0%
>>>   Pagerank 32T         1m 58.5s     1m 12.8s   -38.5%
>>>   Pagerank 128T        2m 36.3s     1m 40.4s   -35.7%
>>>
>>> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
>>> ---
>>> base commit: f1541b40cd422d7e22273be9b7e9edfc9ea4f0d7
>>>
>>> v1: https://lore.kernel.org/all/20260408092441.435133-1-hsalunke@amd.com/
>>>
>>> Changes since v1:
>>> - Dropped cond_resched() and PROCESS_PAGES_NON_PREEMPT_BATCH as
>>>   kernel_init_pages() runs inside the page allocator and can be
>>>   called from atomic context, making cond_resched() unsafe. The
>>>   original code never had a cond_resched() here, and the
>>>   performance gain comes from batching, not rescheduling.
>>>
>>> - Moved the !HIGHMEM/HIGHMEM branching into a new
>>>   clear_highpages_kasan_tagged() helper in highmem.h, per David's
>>>   suggestion.
>>>
>>>  include/linux/highmem.h | 12 ++++++++++++
>>>  mm/page_alloc.c         |  5 +----
>>>  2 files changed, 13 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>>> index af03db851a1d..ad0f42d06ce6 100644
>>> --- a/include/linux/highmem.h
>>> +++ b/include/linux/highmem.h
>>> @@ -345,6 +345,18 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>>>  	kunmap_local(kaddr);
>>>  }
>>>
>>> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
>>> +{
>>> +	if (!IS_ENABLED(CONFIG_HIGHMEM)) {
>>> +		clear_pages(kasan_reset_tag(page_address(page)), numpages);
>>
>> kasan_reset_tag() here removes the tag from the page address so
>> that clear_pages() gets the right kaddr. I thought each page
>> needed its own kasan_reset_tag(). No need to respond here; I am
>> reading the code and trying to understand how it works.
>
> It's all confusing. But we really just turn the pointer into an untagged
> pointer here, once.

Yes, I realized that after reading kasan_reset_tag() implementation.

>
> So I think this is ok.
>
> I do wonder, though, whether we want to move the
> kasan_disable_current/kasan_enable_current into the
> clear_highpages_kasan_tagged().

This sounds reasonable to me. We could then also replace
kernel_init_pages() with clear_highpages_kasan_tagged(), since
kernel_init_pages() would just be a thin wrapper at that point.

Best Regards,
Yan, Zi


Thread overview: 6+ messages
2026-04-21  4:24 Hrushikesh Salunke
2026-04-21  9:27 ` Vlastimil Babka (SUSE)
2026-04-21 13:44 ` Zi Yan
2026-04-21 13:57   ` David Hildenbrand (Arm)
2026-04-21 14:03     ` Zi Yan [this message]
2026-04-21 14:06       ` David Hildenbrand (Arm)
