From: Ankur Arora <ankur.a.arora@oracle.com>
To: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: Ankur Arora <ankur.a.arora@oracle.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
akpm@linux-foundation.org, bp@alien8.de,
dave.hansen@linux.intel.com, hpa@zytor.com, mingo@redhat.com,
mjguzik@gmail.com, luto@kernel.org, peterz@infradead.org,
tglx@linutronix.de, willy@infradead.org, raghavendra.kt@amd.com,
chleroy@kernel.org, ioworker0@gmail.com,
boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Subject: Re: [PATCH v10 7/8] mm, folio_zero_user: support clearing page ranges
Date: Thu, 18 Dec 2025 12:16:38 -0800
Message-ID: <87qzsrwno9.fsf@oracle.com>
In-Reply-To: <9ad1ad81-67e2-4274-afe3-5b9bfb5c7ad1@kernel.org>
David Hildenbrand (Red Hat) <david@kernel.org> writes:
> On 12/15/25 21:49, Ankur Arora wrote:
>> Clear contiguous page ranges in folio_zero_user() instead of clearing
>> a single page at a time. Exposing larger ranges enables extent based
>> processor optimizations.
>> However, because the underlying clearing primitives do not, or might
>> not be able to, call cond_resched() to check if preemption
>> is required, limit the worst case preemption latency by doing the
>> clearing in units of no more than PROCESS_PAGES_NON_PREEMPT_BATCH pages.
>> For architectures that define clear_pages(), we assume that the
>> clearing is fast and define PROCESS_PAGES_NON_PREEMPT_BATCH as 8MB
>> worth of pages. This should be large enough to allow the processor
>> to optimize the operation and yet small enough that we see reasonable
>> preemption latency for when this optimization is not possible
>> (e.g. slow microarchitectures, memory bandwidth saturation).
>> Architectures that don't define clear_pages() will continue to use
>> the base value (single page). Preemptible models don't need explicit
>> invocations of cond_resched(), so they don't care about the batch size.
>> The resultant performance depends on the kinds of optimizations
>> available to the CPU for the region size being cleared. Two classes
>> of optimizations:
>> - clearing iteration costs are amortized over a range larger
>> than a single page.
>> - cacheline allocation elision (seen on AMD Zen models).
>> Testing a demand fault workload shows an improved baseline from the
>> first optimization and a larger improvement when the region being
>> cleared is large enough for the second optimization.
>> AMD Milan (EPYC 7J13, boost=0, region=64GB on the local NUMA node):
>> $ perf bench mem mmap -p $pg-sz -f demand -s 64GB -l 5
>>                  page-at-a-time    contiguous clearing     change
>>                  (GB/s +- %stdev)  (GB/s +- %stdev)
>>
>>   pg-sz=2MB      12.92 +- 2.55%    17.03 +- 0.70%          + 31.8%  preempt=*
>>
>>   pg-sz=1GB      17.14 +- 2.27%    18.04 +- 1.05%          +  5.2%  preempt=none|voluntary
>>   pg-sz=1GB      17.26 +- 1.24%    42.17 +- 4.21% [#]      +144.3%  preempt=full|lazy
>> [#] Notice that we perform much better with preempt=full|lazy. As
>> mentioned above, preemptible models not needing explicit invocations
>> of cond_resched() allow clearing of the full extent (1GB) as a
>> single unit.
>> In comparison, the maximum extent used for preempt=none|voluntary is
>> PROCESS_PAGES_NON_PREEMPT_BATCH (8MB).
>> The larger extent allows the processor to elide cacheline
>> allocation (on Milan the threshold is LLC-size=32MB).
>> Also as mentioned earlier, the baseline improvement is not specific to
>> AMD Zen platforms. Intel Icelakex (pg-sz=2MB|1GB) sees a similar
>> improvement as the Milan pg-sz=2MB workload above (~30%).
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>> Reviewed-by: Raghavendra K T <raghavendra.kt@amd.com>
>> Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
>> ---
>> include/linux/mm.h | 38 +++++++++++++++++++++++++++++++++++++-
>> mm/memory.c | 46 +++++++++++++++++++++++++---------------------
>> 2 files changed, 62 insertions(+), 22 deletions(-)
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 12106ebf1a50..45e5e0ef620c 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -4194,7 +4194,6 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>> unsigned int order) {}
>> #endif /* CONFIG_DEBUG_PAGEALLOC */
>> -#ifndef clear_pages
>
> Why is that change part of this patch?
>
> Looks like this should either go into the patch introducing clear_pages() (#3
> ?).
Yeah, I think this was a mistake. There was no need to move the ifndef
below the comment as I did here. Will fix.
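
To be concrete: the guard should stay where patch 3 puts it, with this
patch only adding the comment text. Roughly (just a sketch, eliding the
unchanged kernel-doc lines):

#ifndef clear_pages
/**
 * clear_pages() - clear a page range for kernel-internal use.
 * ...
 * Under cooperatively scheduled models, however, the caller is expected to
 * limit @npages to no more than PROCESS_PAGES_NON_PREEMPT_BATCH.
 */
static inline void clear_pages(void *addr, unsigned int npages)
{
        ...
}
#endif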
>> /**
>> * clear_pages() - clear a page range for kernel-internal use.
>> * @addr: start address
>> @@ -4204,7 +4203,18 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>> * mapped to user space.
>> *
>> * Does absolutely no exception handling.
>> + *
>> + * Note that even though the clearing operation is preemptible, clear_pages()
>> + * does not (and on architectures where it reduces to a few long-running
>> + * instructions, might not be able to) call cond_resched() to check if
>> + * rescheduling is required.
>> + *
>> + * When running under preemptible models this is fine, since clear_pages(),
>> + * even when reduced to long-running instructions, is preemptible.
>> + * Under cooperatively scheduled models, however, the caller is expected to
>> + * limit @npages to no more than PROCESS_PAGES_NON_PREEMPT_BATCH.
>> */
>> +#ifndef clear_pages
>> static inline void clear_pages(void *addr, unsigned int npages)
>> {
>> do {
>> @@ -4214,6 +4224,32 @@ static inline void clear_pages(void *addr, unsigned int npages)
>> }
>> #endif
>> +#ifndef PROCESS_PAGES_NON_PREEMPT_BATCH
>> +#ifdef clear_pages
>> +/*
>> + * The architecture defines clear_pages(), and we assume that it is
>> + * generally "fast". So choose a batch size large enough to allow the processor
>> + * headroom for optimizing the operation and yet small enough that we see
>> + * reasonable preemption latency for when this optimization is not possible
>> + * (e.g. slow microarchitectures, memory bandwidth saturation).
>> + *
>> + * With a value of 8MB and assuming a memory bandwidth of ~10GBps, this should
>> + * result in worst case preemption latency of around 1ms when clearing pages.
>> + *
>> + * (See comment above clear_pages() for why preemption latency is a concern
>> + * here.)
>> + */
>> +#define PROCESS_PAGES_NON_PREEMPT_BATCH (8 << (20 - PAGE_SHIFT))
>> +#else /* !clear_pages */
>> +/*
>> + * The architecture does not provide a clear_pages() implementation. Assume
>> + * that clear_page() -- which clear_pages() will fall back to -- is relatively
>> + * slow and choose a small value for PROCESS_PAGES_NON_PREEMPT_BATCH.
>> + */
>> +#define PROCESS_PAGES_NON_PREEMPT_BATCH 1
>> +#endif
>> +#endif
>> +
>> #ifdef __HAVE_ARCH_GATE_AREA
>> extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
>> extern int in_gate_area_no_mm(unsigned long addr);
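
(As a cross-check of the latency figure in the comment above: with
PAGE_SHIFT=12, PROCESS_PAGES_NON_PREEMPT_BATCH = 8 << (20 - 12) = 2048
pages, i.e. 8MB; clearing 8MB at ~10GB/s takes ~0.8ms, which is where
the ~1ms worst case comes from.)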
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 2a55edc48a65..974c48db6089 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -7237,40 +7237,44 @@ static inline int process_huge_page(
>> return 0;
>> }
>> -static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>> -			       unsigned int nr_pages)
>> +static void clear_contig_highpages(struct page *page, unsigned long addr,
>> + unsigned int npages)
>> {
>> - unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>> - int i;
>> + unsigned int i, count, unit;
>> - might_sleep();
>> - for (i = 0; i < nr_pages; i++) {
>> + /*
>> + * When clearing we want to operate on the largest extent possible since
>> + * that allows for extent-based, architecture-specific optimizations.
>> + *
>> + * However, since the clearing interfaces (clear_user_highpages(),
>> + * clear_user_pages(), clear_pages()) do not call cond_resched(), we
>> + * limit the batch size when running under non-preemptible scheduling
>> + * models.
>> + */
>> + unit = preempt_model_preemptible() ? npages : PROCESS_PAGES_NON_PREEMPT_BATCH;
>> +
>> + for (i = 0; i < npages; i += count) {
>> cond_resched();
>> - clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
>> +
>> + count = min(unit, npages - i);
>> + clear_user_highpages(page + i,
>> + addr + i * PAGE_SIZE, count);
>
> I guess that logic could be pushed down for the
> clear_user_highpages()->clear_pages() implementation (arch or generic)
> to take care of that, so not every user would have to care about that.
You mean the preemptibility unit stuff?
> No strong opinion as we could do that later whenever we actually get more
> clear_pages() users :)
I remember thinking about this early on. It seemed to me that clear_pages(),
clear_user_pages(), and clear_user_highpages() are all structured
pretty similarly, and all of them leave the responsibility of when to
preempt to the caller.
I guess clear_user_highpages() is special since that's the logical entry
point for user pages.
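
For concreteness, I read the suggestion as something like the sketch below,
where __clear_user_highpages() stands in for a hypothetical non-preempting
inner helper:

static inline void clear_user_highpages(struct page *page, unsigned long addr,
                                        unsigned int npages)
{
        unsigned int i, count, unit;

        /* Same batching as in clear_contig_highpages(), pushed down a level. */
        unit = preempt_model_preemptible() ? npages
                                           : PROCESS_PAGES_NON_PREEMPT_BATCH;

        for (i = 0; i < npages; i += count) {
                cond_resched();
                count = min(unit, npages - i);
                __clear_user_highpages(page + i, addr + i * PAGE_SIZE, count);
        }
}

That would keep callers like folio_zero_user() unaware of the batch size,
at the cost of baking scheduling policy into each of the clearing
interfaces.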
I'd like to keep it as is for now, and revisit once clear_pages() has a
few more users.
>> -
>> /**
>> * folio_zero_user - Zero a folio which will be mapped to userspace.
>> * @folio: The folio to zero.
>> - * @addr_hint: The address will be accessed or the base address if uncelar.
>> + * @addr_hint: The address accessed by the user or the base address.
>> + *
>> + * Uses architectural support to clear page ranges.
>
> I think that comment can be dropped. Implementation detail :)
Ack.
Thanks for the reviews!
--
ankur