From: Ankur Arora <ankur.a.arora@oracle.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Cc: akpm@linux-foundation.org, david@redhat.com, bp@alien8.de,
dave.hansen@linux.intel.com, hpa@zytor.com, mingo@redhat.com,
mjguzik@gmail.com, luto@kernel.org, peterz@infradead.org,
acme@kernel.org, namhyung@kernel.org, tglx@linutronix.de,
willy@infradead.org, raghavendra.kt@amd.com,
boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
ankur.a.arora@oracle.com
Subject: [PATCH v7 16/16] x86/clear_pages: Support clearing of page-extents
Date: Wed, 17 Sep 2025 08:24:18 -0700
Message-ID: <20250917152418.4077386-17-ankur.a.arora@oracle.com>
In-Reply-To: <20250917152418.4077386-1-ankur.a.arora@oracle.com>
Define ARCH_PAGE_CONTIG_NR, which folio_zero_user() uses to decide the
maximum contiguous page range to be zeroed when running under
cooperative preemption models. This allows the processor -- when using
string instructions (REP; STOS) -- to optimize based on the size of the
region.
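For illustration, a minimal sketch of the intended pattern: clear a large
region in ARCH_PAGE_CONTIG_NR-sized extents with a resched point in
between. The helper name clear_pages_chunked() is made up for this sketch
and the clear_pages() signature is assumed from patch 15 of this series;
this is not the actual folio_zero_user() code.

  static void clear_pages_chunked(void *addr, unsigned long npages)
  {
          while (npages) {
                  unsigned long n = min(npages,
                                        (unsigned long)ARCH_PAGE_CONTIG_NR);

                  /* One contiguous extent; REP; STOS sees the full extent. */
                  clear_pages(addr, n);

                  /* Resched point for preempt=none|voluntary. */
                  cond_resched();

                  addr += n * PAGE_SIZE;
                  npages -= n;
          }
  }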
The resulting performance depends on the kinds of optimizations
available to the microarchitecture for the region being cleared. There
are two classes of optimizations:
- clearing iteration costs can be amortized over a range larger
than a single page.
- cacheline allocation elision (seen on AMD Zen models).
Testing a demand fault workload shows an improved baseline from the
first optimization and a larger improvement when the region being
cleared is large enough for the second optimization.
AMD Milan (EPYC 7J13, boost=0, region=64GB on the local NUMA node):
$ perf bench mem map -p $pg-sz -f demand -s 64GB -l 5
              mm/folio_zero_user      x86/folio_zero_user       change
              (GB/s +- %stdev)        (GB/s +- %stdev)

 pg-sz=2MB    11.82 +- 0.67%          16.48 +- 0.30%           + 39.4%   preempt=*
 pg-sz=1GB    17.14 +- 1.39%          17.42 +- 0.98% [#]       +  1.6%   preempt=none|voluntary
 pg-sz=1GB    17.51 +- 1.19%          43.23 +- 5.22%           +146.8%   preempt=full|lazy
[#] Milan uses a threshold of LLC-size (~32MB) for eliding cacheline
allocation, which is larger than ARCH_PAGE_CONTIG_NR, so
preempt=none|voluntary sees no improvement for pg-sz=1GB.
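For reference, the arithmetic behind that comparison, assuming 4KB base
pages (PAGE_SHIFT=12) and the ~10GB/s clearing bandwidth quoted in the
comment below:

  ARCH_PAGE_CONTIG_NR = 8 << (20 - PAGE_SHIFT) = 8 << 8 = 2048 pages
                      = 2048 * 4KB = 8MB per extent

  8MB / ~10GB/s ~= 0.8ms between resched points, and 8MB < ~32MB, so a
  single extent stays below Milan's LLC-size elision threshold under
  preempt=none|voluntary.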
The improvement due to the CPU eliding cacheline allocation for
pg-sz=1GB can be seen in the reduced L1-dcache-loads:
- 44,513,459,667 cycles # 2.420 GHz ( +- 0.44% ) (35.71%)
- 1,378,032,592 instructions # 0.03 insn per cycle
- 11,224,288,082 L1-dcache-loads # 610.187 M/sec ( +- 0.08% ) (35.72%)
- 5,373,473,118 L1-dcache-load-misses # 47.87% of all L1-dcache accesses ( +- 0.00% ) (35.71%)
+ 20,093,219,076 cycles # 2.421 GHz ( +- 3.64% ) (35.69%)
+ 1,378,032,592 instructions # 0.03 insn per cycle
+ 186,525,095 L1-dcache-loads # 22.479 M/sec ( +- 2.11% ) (35.74%)
+ 73,479,687 L1-dcache-load-misses # 39.39% of all L1-dcache accesses ( +- 3.03% ) (35.74%)
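The counter deltas above are the kind of output perf stat produces; a
hedged example invocation -- the event list and repeat count here are
assumptions, not taken from this posting:

  $ perf stat -r 5 -e cycles,instructions,L1-dcache-loads,L1-dcache-load-misses \
        perf bench mem map -p $pg-sz -f demand -s 64GB -l 5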
Also, as mentioned earlier, the baseline improvement is not specific to
AMD Zen*: Intel Icelakex (pg-sz=2MB|1GB) sees an improvement similar to
that of the Milan pg-sz=2MB workload above (~35%).
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
arch/x86/include/asm/page_64.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 289b31a4c910..2361066d175e 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -40,6 +40,13 @@ extern unsigned long __phys_addr_symbol(unsigned long);
#define __phys_reloc_hide(x) (x)
+/*
+ * When running under voluntary preemption models, limit the maximum extent
+ * being cleared to 8MB worth of pages. With a clearing bandwidth of ~10GB/s,
+ * this should result in a worst case scheduling latency of ~1ms.
+ */
+#define ARCH_PAGE_CONTIG_NR (8 << (20 - PAGE_SHIFT))
+
void memzero_page_aligned_unrolled(void *addr, u64 len);
/**
--
2.43.5