From: zhangqilong <zhangqilong3@huawei.com>
To: David Hildenbrand <david@redhat.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"lorenzo.stoakes@oracle.com" <lorenzo.stoakes@oracle.com>,
"Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"rppt@kernel.org" <rppt@kernel.org>,
"surenb@google.com" <surenb@google.com>,
"mhocko@suse.com" <mhocko@suse.com>,
"jannh@google.com" <jannh@google.com>,
"pfalcato@suse.de" <pfalcato@suse.de>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Wangkefeng (OS Kernel Lab)" <wangkefeng.wang@huawei.com>,
Sunnanyong <sunnanyong@huawei.com>
Subject: Re: [RFC PATCH 2/3] mm/mincore: Use can_pte_batch_count() in mincore_pte_range() for pte batch mincore_pte_range()
Date: Tue, 28 Oct 2025 11:32:31 +0000 [thread overview]
Message-ID: <1cd515972cbd4be9ac8b5abb635c052a@huawei.com> (raw)
> On 27.10.25 15:03, Zhang Qilong wrote:
> > In the current mincore_pte_range(), if pte_batch_hint() returns only
> > one pte it is not efficient, so call the newly added can_pte_batch_count().
> >
> > In ARM64 qemu, with 8 CPUs, 32G memory, a simple test demo like:
> > 1. mmap 1G anon memory
> > 2. write 1G data by 4k step
> > 3. mincore the mmaped 1G memory
> > 4. get the time consumed by mincore
> >
> > Tested the following cases:
> > - 4k, disabled all hugepage setting.
> > - 64k mTHP, only enable 64k hugepage setting.
> >
> > Before:
> >
> > Case status | Consumed time (us) |
> > ------------|--------------------|
> > 4k          | 7356               |
> > 64k mTHP    | 3670               |
> >
> > Patched:
> >
> > Case status | Consumed time (us) |
> > ------------|--------------------|
> > 4k          | 4419               |
> > 64k mTHP    | 3061               |
> >
>
> I assume you're only lucky in that benchmark because you got consecutive 4k
> pages / 64k mTHP from the buddy, right?
Yeah, the demo case is relatively simple, which likely results in stronger
contiguity of the allocated physical page addresses.
The case primarily aims to validate the optimization's effectiveness for
contiguous page addresses. We probably also need to measure any side effects
with non-contiguous page addresses.
>
> So I suspect that this will mostly just make a micro benchmark happy, because
> the reality where we allocate randomly over time, for the PCP, etc will look
> quite different.
>
> --
> Cheers
>
> David / dhildenb
>
2025-10-28 11:32 zhangqilong [this message]
2025-10-28 11:13 zhangqilong
2025-10-27 14:03 [RFC PATCH 0/3] mm: PTEs batch optimization in mincore and mremap Zhang Qilong
2025-10-27 14:03 ` [RFC PATCH 2/3] mm/mincore: Use can_pte_batch_count() in mincore_pte_range() for pte batch mincore_pte_range() Zhang Qilong
2025-10-27 19:27 ` David Hildenbrand
2025-10-27 19:34 ` Lorenzo Stoakes