* [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64
@ 2025-02-17 14:07 Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 01/14] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms Ryan Roberts
                   ` (13 more replies)
  0 siblings, 14 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Hi All,

This series contains some perf improvements for hugetlb and vmalloc on arm64,
and is split out from v1 of a wider series at [1]. Although some of these
patches are core-mm, advice from Andrew was to go via the arm64 tree. Hopefully
I can get some ACKs from mm folks.

The 2 key performance improvements are 1) enabling the use of contpte-mapped
blocks in the vmalloc space when appropriate (which reduces TLB pressure); there
were already hooks for this (used by powerpc) but they required some tidying and
extending for arm64. And 2) batching up barriers when modifying the vmalloc
address space, for up to a 30% reduction in time taken in vmalloc().
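
To give a feel for the shape of improvement 2 (the details are in the
barrier patches later in the series), the idea is simply to write all the
ptes in a batch and then publish them to the table walker with a single
dsb/isb, rather than one pair per pte. A minimal sketch, not the exact
patch code:

  for (i = 0; i < nr; i++)
  	__set_pte_nosync(ptep + i, pte_advance_pfn(pte, i));
  dsb(ishst);	/* one barrier pair for the whole batch... */
  isb();	/* ...instead of one pair per pte */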

vmalloc() performance was measured using the test_vmalloc.ko module. Tested on
Apple M2 and Ampere Altra. Each test had its loop count set to 500000 and the
whole test was repeated 10 times. (perf results are against v1).

legend:
  - p: nr_pages (pages to allocate)
  - h: use_huge (vmalloc() vs vmalloc_huge())
  - (I): statistically significant improvement (95% CI does not overlap)
  - (R): statistically significant regression (95% CI does not overlap)
  - measurements are times; smaller is better

+--------------------------------------------------+-------------+-------------+
| Benchmark                                        |             |             |
|   Result Class                                   |    Apple M2 | Ampere Altra|
+==================================================+=============+=============+
| micromm/vmalloc                                  |             |             |
|   fix_align_alloc_test: p:1, h:0 (usec)          | (I) -12.93% |  (I) -7.89% |
|   fix_size_alloc_test: p:1, h:0 (usec)           |   (R) 4.00% |       1.40% |
|   fix_size_alloc_test: p:1, h:1 (usec)           |   (R) 5.28% |       1.46% |
|   fix_size_alloc_test: p:2, h:0 (usec)           |  (I) -3.04% |      -1.11% |
|   fix_size_alloc_test: p:2, h:1 (usec)           |      -3.24% |      -2.86% |
|   fix_size_alloc_test: p:4, h:0 (usec)           | (I) -11.77% |  (I) -4.48% |
|   fix_size_alloc_test: p:4, h:1 (usec)           |  (I) -9.19% |  (I) -4.45% |
|   fix_size_alloc_test: p:8, h:0 (usec)           | (I) -19.79% | (I) -11.63% |
|   fix_size_alloc_test: p:8, h:1 (usec)           | (I) -19.40% | (I) -11.11% |
|   fix_size_alloc_test: p:16, h:0 (usec)          | (I) -24.89% | (I) -15.26% |
|   fix_size_alloc_test: p:16, h:1 (usec)          | (I) -11.61% |   (R) 6.00% |
|   fix_size_alloc_test: p:32, h:0 (usec)          | (I) -26.54% | (I) -18.80% |
|   fix_size_alloc_test: p:32, h:1 (usec)          | (I) -15.42% |   (R) 5.82% |
|   fix_size_alloc_test: p:64, h:0 (usec)          | (I) -30.25% | (I) -20.80% |
|   fix_size_alloc_test: p:64, h:1 (usec)          | (I) -16.98% |   (R) 6.54% |
|   fix_size_alloc_test: p:128, h:0 (usec)         | (I) -32.56% | (I) -21.79% |
|   fix_size_alloc_test: p:128, h:1 (usec)         | (I) -18.39% |   (R) 5.91% |
|   fix_size_alloc_test: p:256, h:0 (usec)         | (I) -33.33% | (I) -22.22% |
|   fix_size_alloc_test: p:256, h:1 (usec)         | (I) -18.82% |   (R) 5.79% |
|   fix_size_alloc_test: p:512, h:0 (usec)         | (I) -33.27% | (I) -22.23% |
|   fix_size_alloc_test: p:512, h:1 (usec)         |       0.86% |      -0.71% |
|   full_fit_alloc_test: p:1, h:0 (usec)           |       2.49% |      -0.62% |
|   kvfree_rcu_1_arg_vmalloc_test: p:1, h:0 (usec) |       1.79% |      -1.25% |
|   kvfree_rcu_2_arg_vmalloc_test: p:1, h:0 (usec) |      -0.32% |       0.61% |
|   long_busy_list_alloc_test: p:1, h:0 (usec)     | (I) -31.06% | (I) -19.62% |
|   pcpu_alloc_test: p:1, h:0 (usec)               |       0.06% |       0.47% |
|   random_size_align_alloc_test: p:1, h:0 (usec)  | (I) -14.94% |  (I) -8.68% |
|   random_size_alloc_test: p:1, h:0 (usec)        | (I) -30.22% | (I) -19.59% |
|   vm_map_ram_test: p:1, h:0 (usec)               |       2.65% |   (R) 7.22% |
+--------------------------------------------------+-------------+-------------+

So there are some nice improvements but also some regressions to explain:

First, fix_size_alloc_test with h:1 and p:16,32,64,128,256 regresses by ~6% on
Altra. The regression is actually introduced by enabling contpte-mapped 64K
blocks in these tests, and it is reduced (from about 8% if memory serves) by
the barrier batching. I don't have a definite conclusion on the root cause,
but I've ruled out differences in the mapping paths in vmalloc. I believe it
is most likely due to the difference in the allocation path; 64K blocks are
not cached per-cpu so we have to go all the way to the buddy allocator. I'm
not sure why this doesn't show up on M2 though. Regardless, I'm going to
assert that a 16x reduction in TLB pressure is worth a 6% increase in the
vmalloc() allocation call duration.

Next, we have a ~4% regression on M2 when vmalloc'ing a single page (h is
irrelevant because a single page is too small for contpte). I assume this is
because there is some minor overhead in the barrier deferral mechanism and we
don't get to amortize it over multiple pages here. But I would assume
vmalloc'ing 1 page is uncommon because it doesn't buy you anything over
kmalloc?

Changes since v1 [1]
====================
- Split out the fixes into their own series
- Added Rbs from Anshuman - Thanks!
- Added patch to clean up the methods by which huge_pte size is determined
- Added "#ifndef __PAGETABLE_PMD_FOLDED" around PUD_SIZE in
  flush_hugetlb_tlb_range()
- Renamed ___set_ptes() -> set_ptes_anysz()
- Renamed ___ptep_get_and_clear() -> ptep_get_and_clear_anysz()
- Fixed typos in commit logs
- Refactored pXd_valid_not_user() for better reuse
- Removed TIF_KMAP_UPDATE_PENDING after concluding that a single flag is sufficient
- Concluded the extra isb() in __switch_to() is not required
- Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings

Thanks to Anshuman for the review!

Applies on top of my fixes series at [2], which applies on top of v6.14-rc3. All
mm selftests run and pass.

[1] https://lore.kernel.org/all/20250205151003.88959-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/all/20250217140419.1702389-1-ryan.roberts@arm.com/

Thanks,
Ryan

Ryan Roberts (14):
  arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
  arm64: hugetlb: Refine tlb maintenance scope
  mm/page_table_check: Batch-check pmds/puds just like ptes
  arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
  arm64: hugetlb: Use set_ptes_anysz() and ptep_get_and_clear_anysz()
  arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  arm64/mm: Avoid barriers for invalid or userspace mappings
  mm/vmalloc: Warn on improper use of vunmap_range()
  mm/vmalloc: Gracefully unmap huge ptes
  arm64/mm: Support huge pte-mapped pages in vmap
  mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently
  mm: Generalize arch_sync_kernel_mappings()
  mm: Only call arch_update_kernel_mappings_[begin|end]() for kernel
    mappings
  arm64/mm: Batch barriers when updating kernel mappings

 arch/arm64/include/asm/hugetlb.h     |  29 ++--
 arch/arm64/include/asm/pgtable.h     | 207 +++++++++++++++++++--------
 arch/arm64/include/asm/thread_info.h |   1 +
 arch/arm64/include/asm/vmalloc.h     |  46 ++++++
 arch/arm64/kernel/process.c          |   9 +-
 arch/arm64/mm/hugetlbpage.c          |  72 ++++------
 include/linux/page_table_check.h     |  30 ++--
 include/linux/pgtable.h              |  24 +---
 include/linux/pgtable_modmask.h      |  32 +++++
 include/linux/vmalloc.h              |  55 +++++++
 mm/memory.c                          |   7 +-
 mm/page_table_check.c                |  34 +++--
 mm/vmalloc.c                         |  93 +++++++-----
 13 files changed, 434 insertions(+), 205 deletions(-)
 create mode 100644 include/linux/pgtable_modmask.h

--
2.43.0




* [PATCH v2 01/14] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 02/14] arm64: hugetlb: Refine tlb maintenance scope Ryan Roberts
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Not all huge_pte helper APIs explicitly provide the size of the
huge_pte. So the helpers have to depend on various methods to determine
the size of the huge_pte. Some of these methods are dubious.

Let's clean up the code to use preferred methods and retire the dubious
ones. The options in order of preference:

 - If size is provided as parameter, use it together with
   num_contig_ptes(). This is explicit and works for both present and
   non-present ptes.

 - If vma is provided as a parameter, retrieve size via
   huge_page_size(hstate_vma(vma)) and use it together with
   num_contig_ptes(). This is explicit and works for both present and
   non-present ptes.

 - If the pte is present and contiguous, use find_num_contig() to walk
   the pgtable to find the level and infer the number of ptes from
   level. Only works for *present* ptes.

 - If the pte is present and not contiguous, you can infer from this
   that only 1 pte needs to be operated on. This is ok if you don't care
   about the absolute size and just want to know the number of ptes.

 - NEVER rely on resolving the PFN of a present pte to a folio and
   getting the folio's size. This is fragile at best, because there is
   nothing to stop the core-mm from allocating a folio twice as big as
   the huge_pte then mapping it across 2 consecutive huge_ptes. Or just
   partially mapping it.

Where we require that the pte is present, add warnings if it is not.
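
As an illustrative sketch only (the actual conversions are in the diff
below), the first three options boil down to:

  /* size passed in explicitly by the caller */
  ncontig = num_contig_ptes(sz, &pgsize);

  /* size derived from the vma */
  ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);

  /* present pte: walk the pgtable to infer the level */
  ncontig = find_num_contig(mm, addr, ptep, &pgsize);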

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 614b2feddba2..31ea826a8a09 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -136,7 +136,7 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 	if (!pte_present(orig_pte) || !pte_cont(orig_pte))
 		return orig_pte;
 
-	ncontig = num_contig_ptes(page_size(pte_page(orig_pte)), &pgsize);
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	for (i = 0; i < ncontig; i++, ptep++) {
 		pte_t pte = __ptep_get(ptep);
 
@@ -445,16 +445,19 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	pgprot_t hugeprot;
 	pte_t orig_pte;
 
+	VM_WARN_ON(!pte_present(pte));
+
 	if (!pte_cont(pte))
 		return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
 	dpfn = pgsize >> PAGE_SHIFT;
 
 	if (!__cont_access_flags_changed(ptep, pte, ncontig))
 		return 0;
 
 	orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+	VM_WARN_ON(!pte_present(orig_pte));
 
 	/* Make sure we don't lose the dirty or young state */
 	if (pte_dirty(orig_pte))
@@ -479,7 +482,10 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	size_t pgsize;
 	pte_t pte;
 
-	if (!pte_cont(__ptep_get(ptep))) {
+	pte = __ptep_get(ptep);
+	VM_WARN_ON(!pte_present(pte));
+
+	if (!pte_cont(pte)) {
 		__ptep_set_wrprotect(mm, addr, ptep);
 		return;
 	}
@@ -503,11 +509,15 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	size_t pgsize;
 	int ncontig;
+	pte_t pte;
+
+	pte = __ptep_get(ptep);
+	VM_WARN_ON(!pte_present(pte));
 
-	if (!pte_cont(__ptep_get(ptep)))
+	if (!pte_cont(pte))
 		return ptep_clear_flush(vma, addr, ptep);
 
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
 	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
-- 
2.43.0




* [PATCH v2 02/14] arm64: hugetlb: Refine tlb maintenance scope
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 01/14] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 03/14] mm/page_table_check: Batch-check pmds/puds just like ptes Ryan Roberts
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

When operating on contiguous blocks of ptes (or pmds) for some hugetlb
sizes, we must honour break-before-make requirements and clear down the
block to an invalid state in the pgtable, then invalidate the relevant
tlb entries before making the pgtable entries valid again.

However, the tlb maintenance is currently always done assuming the worst
case stride (PAGE_SIZE), last_level (false) and tlb_level
(TLBI_TTL_UNKNOWN). We can do much better with the hinting: in reality,
we know the stride from the huge_pte pgsize, we are always operating
only on the last level, and we always know the tlb_level, again based on
pgsize. So let's start providing these hints.

Additionally, avoid tlb maintenance in set_huge_pte_at().
Break-before-make is only required if we are transitioning the
contiguous pte block from valid -> valid. So let's elide the
clear-and-flush ("break") if the pte range was previously invalid.
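
As an example of the effect (sketch only, not part of the diff), tearing
down a CONT_PTE-sized huge_pte previously used the worst-case hints and
now gets a last-level, level-3 hinted flush:

  /* before: via flush_tlb_range() */
  __flush_tlb_range(vma, addr, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);

  /* after: stride/last_level/tlb_level derived from the huge_pte pgsize */
  __flush_hugetlb_tlb_range(vma, addr, end, CONT_PTE_SIZE, true);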

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/hugetlb.h | 29 +++++++++++++++++++----------
 arch/arm64/mm/hugetlbpage.c      |  9 ++++++---
 2 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 07fbf5bf85a7..2a8155c4a882 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -69,29 +69,38 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 #include <asm-generic/hugetlb.h>
 
-#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
-					   unsigned long start,
-					   unsigned long end)
+static inline void __flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					     unsigned long start,
+					     unsigned long end,
+					     unsigned long stride,
+					     bool last_level)
 {
-	unsigned long stride = huge_page_size(hstate_vma(vma));
-
 	switch (stride) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SIZE:
-		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		__flush_tlb_range(vma, start, end, PUD_SIZE, last_level, 1);
 		break;
 #endif
 	case CONT_PMD_SIZE:
 	case PMD_SIZE:
-		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		__flush_tlb_range(vma, start, end, PMD_SIZE, last_level, 2);
 		break;
 	case CONT_PTE_SIZE:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, 3);
 		break;
 	default:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, TLBI_TTL_UNKNOWN);
 	}
 }
 
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end)
+{
+	unsigned long stride = huge_page_size(hstate_vma(vma));
+
+	__flush_hugetlb_tlb_range(vma, start, end, stride, false);
+}
+
 #endif /* __ASM_HUGETLB_H */
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 31ea826a8a09..b7434ed1b93b 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -190,8 +190,9 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
 {
 	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	unsigned long end = addr + (pgsize * ncontig);
 
-	flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+	__flush_hugetlb_tlb_range(&vma, addr, end, pgsize, true);
 	return orig_pte;
 }
 
@@ -216,7 +217,7 @@ static void clear_flush(struct mm_struct *mm,
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
 		__ptep_get_and_clear(mm, addr, ptep);
 
-	flush_tlb_range(&vma, saddr, addr);
+	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
@@ -245,7 +246,9 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
-	clear_flush(mm, addr, ptep, pgsize, ncontig);
+	/* Only need to "break" if transitioning valid -> valid. */
+	if (pte_valid(__ptep_get(ptep)))
+		clear_flush(mm, addr, ptep, pgsize, ncontig);
 
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
 		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-- 
2.43.0




* [PATCH v2 03/14] mm/page_table_check: Batch-check pmds/puds just like ptes
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 01/14] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 02/14] arm64: hugetlb: Refine tlb maintenance scope Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 04/14] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear() Ryan Roberts
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Convert page_table_check_p[mu]d_set(...) to
page_table_check_p[mu]ds_set(..., nr) to allow checking a contiguous set
of pmds/puds in a single batch. We retain page_table_check_p[mu]d_set(...)
as macros that call the new batch functions with nr=1 for compatibility.

arm64 is about to reorganise its pte/pmd/pud helpers to reuse more code
and to allow the implementation for huge_pte to more efficiently set
ptes/pmds/puds in batches. We need these batch-helpers to make the
refactoring possible.
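
For example (illustrative usage only; the conversion happens in later
patches), a caller covering a contiguous block of pmds can make a single
check call, while existing callers are unchanged:

  /* existing single-entry callers keep working via the macro */
  page_table_check_pmd_set(mm, pmdp, pmd);

  /* batch callers (e.g. the arm64 huge_pte rework) can do */
  page_table_check_pmds_set(mm, pmdp, pmd, CONT_PMDS);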

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/page_table_check.h | 30 +++++++++++++++++-----------
 mm/page_table_check.c            | 34 +++++++++++++++++++-------------
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 6722941c7cb8..289620d4aad3 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -19,8 +19,10 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, pud_t pud);
 void __page_table_check_ptes_set(struct mm_struct *mm, pte_t *ptep, pte_t pte,
 		unsigned int nr);
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd);
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud);
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+		unsigned int nr);
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+		unsigned int nr);
 void __page_table_check_pte_clear_range(struct mm_struct *mm,
 					unsigned long addr,
 					pmd_t pmd);
@@ -74,22 +76,22 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
 	__page_table_check_ptes_set(mm, ptep, pte, nr);
 }
 
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
-					    pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+		pmd_t *pmdp, pmd_t pmd, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pmd_set(mm, pmdp, pmd);
+	__page_table_check_pmds_set(mm, pmdp, pmd, nr);
 }
 
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
-					    pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+		pud_t *pudp, pud_t pud, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pud_set(mm, pudp, pud);
+	__page_table_check_puds_set(mm, pudp, pud, nr);
 }
 
 static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
@@ -129,13 +131,13 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
-					    pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+		pmd_t *pmdp, pmd_t pmd, unsigned int nr)
 {
 }
 
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
-					    pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+		pud_t *pudp, pud_t pud, unsigned int nr)
 {
 }
 
@@ -146,4 +148,8 @@ static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
 }
 
 #endif /* CONFIG_PAGE_TABLE_CHECK */
+
+#define page_table_check_pmd_set(mm, pmdp, pmd)	page_table_check_pmds_set(mm, pmdp, pmd, 1)
+#define page_table_check_pud_set(mm, pudp, pud)	page_table_check_puds_set(mm, pudp, pud, 1)
+
 #endif /* __LINUX_PAGE_TABLE_CHECK_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 509c6ef8de40..dae4a7d776b3 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -234,33 +234,39 @@ static inline void page_table_check_pmd_flags(pmd_t pmd)
 		WARN_ON_ONCE(swap_cached_writable(pmd_to_swp_entry(pmd)));
 }
 
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd)
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+		unsigned int nr)
 {
+	unsigned int i;
+	unsigned long stride = PMD_SIZE >> PAGE_SHIFT;
+
 	if (&init_mm == mm)
 		return;
 
 	page_table_check_pmd_flags(pmd);
 
-	__page_table_check_pmd_clear(mm, *pmdp);
-	if (pmd_user_accessible_page(pmd)) {
-		page_table_check_set(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT,
-				     pmd_write(pmd));
-	}
+	for (i = 0; i < nr; i++)
+		__page_table_check_pmd_clear(mm, *(pmdp + i));
+	if (pmd_user_accessible_page(pmd))
+		page_table_check_set(pmd_pfn(pmd), stride * nr, pmd_write(pmd));
 }
-EXPORT_SYMBOL(__page_table_check_pmd_set);
+EXPORT_SYMBOL(__page_table_check_pmds_set);
 
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud)
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+		unsigned int nr)
 {
+	unsigned int i;
+	unsigned long stride = PUD_SIZE >> PAGE_SHIFT;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pud_clear(mm, *pudp);
-	if (pud_user_accessible_page(pud)) {
-		page_table_check_set(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT,
-				     pud_write(pud));
-	}
+	for (i = 0; i < nr; i++)
+		__page_table_check_pud_clear(mm, *(pudp + i));
+	if (pud_user_accessible_page(pud))
+		page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud));
 }
-EXPORT_SYMBOL(__page_table_check_pud_set);
+EXPORT_SYMBOL(__page_table_check_puds_set);
 
 void __page_table_check_pte_clear_range(struct mm_struct *mm,
 					unsigned long addr,
-- 
2.43.0




* [PATCH v2 04/14] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (2 preceding siblings ...)
  2025-02-17 14:07 ` [PATCH v2 03/14] mm/page_table_check: Batch-check pmds/puds just like ptes Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 05/14] arm64: hugetlb: Use set_ptes_anysz() and ptep_get_and_clear_anysz() Ryan Roberts
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Refactor __set_ptes(), set_pmd_at() and set_pud_at() so that they are
all thin wrappers around a new common set_ptes_anysz(), which takes a
pgsize parameter. Additionally, refactor __ptep_get_and_clear() and
pmdp_huge_get_and_clear() to use a new common ptep_get_and_clear_anysz()
which also takes a pgsize parameter.

These changes will permit the huge_pte API to efficiently batch-set
pgtable entries and take advantage of the future barrier optimizations.
Additionally, since the new *_anysz() helpers call the correct
page_table_check_*_set() API based on pgsize, huge_ptes will now get
proper coverage. Currently the huge_pte API always uses the pte API,
which assumes an entry only covers a single page.
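
As a usage sketch (the huge_pte conversion itself lands in the next
patch), setting and clearing entries of various sizes becomes:

  /* CONT_PTES contiguous ptes covering one CONT_PTE_SIZE block */
  set_ptes_anysz(mm, ptep, pte, CONT_PTES, PAGE_SIZE);

  /* a single pmd-sized leaf entry */
  set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), 1, PMD_SIZE);

  /* and the clear side */
  pte = ptep_get_and_clear_anysz(mm, ptep, PAGE_SIZE);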

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 108 +++++++++++++++++++------------
 1 file changed, 67 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b2a2ad1b9e8..e255a36380dc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -420,23 +420,6 @@ static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
 	return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
 }
 
-static inline void __set_ptes(struct mm_struct *mm,
-			      unsigned long __always_unused addr,
-			      pte_t *ptep, pte_t pte, unsigned int nr)
-{
-	page_table_check_ptes_set(mm, ptep, pte, nr);
-	__sync_cache_and_tags(pte, nr);
-
-	for (;;) {
-		__check_safe_pte_update(mm, ptep, pte);
-		__set_pte(ptep, pte);
-		if (--nr == 0)
-			break;
-		ptep++;
-		pte = pte_advance_pfn(pte, 1);
-	}
-}
-
 /*
  * Hugetlb definitions.
  */
@@ -641,30 +624,59 @@ static inline pgprot_t pud_pgprot(pud_t pud)
 	return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
 }
 
-static inline void __set_pte_at(struct mm_struct *mm,
-				unsigned long __always_unused addr,
-				pte_t *ptep, pte_t pte, unsigned int nr)
+static inline void set_ptes_anysz(struct mm_struct *mm, pte_t *ptep, pte_t pte,
+				  unsigned int nr, unsigned long pgsize)
 {
-	__sync_cache_and_tags(pte, nr);
-	__check_safe_pte_update(mm, ptep, pte);
-	__set_pte(ptep, pte);
+	unsigned long stride = pgsize >> PAGE_SHIFT;
+
+	switch (pgsize) {
+	case PAGE_SIZE:
+		page_table_check_ptes_set(mm, ptep, pte, nr);
+		break;
+	case PMD_SIZE:
+		page_table_check_pmds_set(mm, (pmd_t *)ptep, pte_pmd(pte), nr);
+		break;
+	case PUD_SIZE:
+		page_table_check_puds_set(mm, (pud_t *)ptep, pte_pud(pte), nr);
+		break;
+	default:
+		VM_WARN_ON(1);
+	}
+
+	__sync_cache_and_tags(pte, nr * stride);
+
+	for (;;) {
+		__check_safe_pte_update(mm, ptep, pte);
+		__set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = pte_advance_pfn(pte, stride);
+	}
 }
 
-static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
-			      pmd_t *pmdp, pmd_t pmd)
+static inline void __set_ptes(struct mm_struct *mm,
+			      unsigned long __always_unused addr,
+			      pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	page_table_check_pmd_set(mm, pmdp, pmd);
-	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd),
-						PMD_SIZE >> PAGE_SHIFT);
+	set_ptes_anysz(mm, ptep, pte, nr, PAGE_SIZE);
 }
 
-static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
-			      pud_t *pudp, pud_t pud)
+static inline void __set_pmds(struct mm_struct *mm,
+			      unsigned long __always_unused addr,
+			      pmd_t *pmdp, pmd_t pmd, unsigned int nr)
+{
+	set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), nr, PMD_SIZE);
+}
+#define set_pmd_at(mm, addr, pmdp, pmd) __set_pmds(mm, addr, pmdp, pmd, 1)
+
+static inline void __set_puds(struct mm_struct *mm,
+			      unsigned long __always_unused addr,
+			      pud_t *pudp, pud_t pud, unsigned int nr)
 {
-	page_table_check_pud_set(mm, pudp, pud);
-	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud),
-						PUD_SIZE >> PAGE_SHIFT);
+	set_ptes_anysz(mm, (pte_t *)pudp, pud_pte(pud), nr, PUD_SIZE);
 }
+#define set_pud_at(mm, addr, pudp, pud) __set_puds(mm, addr, pudp, pud, 1)
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
@@ -1276,16 +1288,34 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG */
 
-static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
-				       unsigned long address, pte_t *ptep)
+static inline pte_t ptep_get_and_clear_anysz(struct mm_struct *mm, pte_t *ptep,
+					     unsigned long pgsize)
 {
 	pte_t pte = __pte(xchg_relaxed(&pte_val(*ptep), 0));
 
-	page_table_check_pte_clear(mm, pte);
+	switch (pgsize) {
+	case PAGE_SIZE:
+		page_table_check_pte_clear(mm, pte);
+		break;
+	case PMD_SIZE:
+		page_table_check_pmd_clear(mm, pte_pmd(pte));
+		break;
+	case PUD_SIZE:
+		page_table_check_pud_clear(mm, pte_pud(pte));
+		break;
+	default:
+		VM_WARN_ON(1);
+	}
 
 	return pte;
 }
 
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return ptep_get_and_clear_anysz(mm, ptep, PAGE_SIZE);
+}
+
 static inline void __clear_full_ptes(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, unsigned int nr, int full)
 {
@@ -1322,11 +1352,7 @@ static inline pte_t __get_and_clear_full_ptes(struct mm_struct *mm,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address, pmd_t *pmdp)
 {
-	pmd_t pmd = __pmd(xchg_relaxed(&pmd_val(*pmdp), 0));
-
-	page_table_check_pmd_clear(mm, pmd);
-
-	return pmd;
+	return pte_pmd(ptep_get_and_clear_anysz(mm, (pte_t *)pmdp, PMD_SIZE));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-- 
2.43.0




* [PATCH v2 05/14] arm64: hugetlb: Use set_ptes_anysz() and ptep_get_and_clear_anysz()
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (3 preceding siblings ...)
  2025-02-17 14:07 ` [PATCH v2 04/14] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear() Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-17 14:07 ` [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop Ryan Roberts
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Refactor the huge_pte helpers to use the new common set_ptes_anysz() and
ptep_get_and_clear_anysz() APIs.

This provides 2 benefits: first, when page_table_check=on, hugetlb is
now properly/fully checked; previously only the first page of a hugetlb
folio was checked. Second, instead of having to call __set_ptes(nr=1)
for each pte in a loop, the whole contiguous batch can now be set in one
go, which enables some efficiencies and cleans up the code.

One detail to note is that huge_ptep_clear_flush() was previously
calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
block mapping). This has a couple of disadvantages: first,
ptep_clear_flush() calls ptep_get_and_clear(), which transparently
handles contpte. Given we only call it for non-contiguous ptes, it would
be safe, but a waste of effort. It's preferable to go straight to the layer
below. However, more problematic is that ptep_get_and_clear() is for
PAGE_SIZE entries so it calls page_table_check_pte_clear() and would not
clear the whole hugetlb folio. So let's stop special-casing the non-cont
case and just rely on get_clear_contig_flush() to do the right thing for
non-cont entries.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 52 +++++++------------------------------
 1 file changed, 10 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index b7434ed1b93b..8ac86cd180b3 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -166,12 +166,12 @@ static pte_t get_clear_contig(struct mm_struct *mm,
 	pte_t pte, tmp_pte;
 	bool present;
 
-	pte = __ptep_get_and_clear(mm, addr, ptep);
+	pte = ptep_get_and_clear_anysz(mm, ptep, pgsize);
 	present = pte_present(pte);
 	while (--ncontig) {
 		ptep++;
 		addr += pgsize;
-		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
+		tmp_pte = ptep_get_and_clear_anysz(mm, ptep, pgsize);
 		if (present) {
 			if (pte_dirty(tmp_pte))
 				pte = pte_mkdirty(pte);
@@ -215,7 +215,7 @@ static void clear_flush(struct mm_struct *mm,
 	unsigned long i, saddr = addr;
 
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
-		__ptep_get_and_clear(mm, addr, ptep);
+		ptep_get_and_clear_anysz(mm, ptep, pgsize);
 
 	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
@@ -226,32 +226,20 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	size_t pgsize;
 	int i;
 	int ncontig;
-	unsigned long pfn, dpfn;
-	pgprot_t hugeprot;
 
 	ncontig = num_contig_ptes(sz, &pgsize);
 
 	if (!pte_present(pte)) {
 		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
-			__set_ptes(mm, addr, ptep, pte, 1);
+			set_ptes_anysz(mm, ptep, pte, 1, pgsize);
 		return;
 	}
 
-	if (!pte_cont(pte)) {
-		__set_ptes(mm, addr, ptep, pte, 1);
-		return;
-	}
-
-	pfn = pte_pfn(pte);
-	dpfn = pgsize >> PAGE_SHIFT;
-	hugeprot = pte_pgprot(pte);
-
 	/* Only need to "break" if transitioning valid -> valid. */
-	if (pte_valid(__ptep_get(ptep)))
+	if (pte_cont(pte) && pte_valid(__ptep_get(ptep)))
 		clear_flush(mm, addr, ptep, pgsize, ncontig);
 
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+	set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
 }
 
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -441,11 +429,9 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize = 0;
-	unsigned long pfn = pte_pfn(pte), dpfn;
 	struct mm_struct *mm = vma->vm_mm;
-	pgprot_t hugeprot;
 	pte_t orig_pte;
 
 	VM_WARN_ON(!pte_present(pte));
@@ -454,7 +440,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 		return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
 	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
-	dpfn = pgsize >> PAGE_SHIFT;
 
 	if (!__cont_access_flags_changed(ptep, pte, ncontig))
 		return 0;
@@ -469,19 +454,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	if (pte_young(orig_pte))
 		pte = pte_mkyoung(pte);
 
-	hugeprot = pte_pgprot(pte);
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-
+	set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
 	return 1;
 }
 
 void huge_ptep_set_wrprotect(struct mm_struct *mm,
 			     unsigned long addr, pte_t *ptep)
 {
-	unsigned long pfn, dpfn;
-	pgprot_t hugeprot;
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize;
 	pte_t pte;
 
@@ -494,16 +474,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	}
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	dpfn = pgsize >> PAGE_SHIFT;
 
 	pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 	pte = pte_wrprotect(pte);
 
-	hugeprot = pte_pgprot(pte);
-	pfn = pte_pfn(pte);
-
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+	set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
 }
 
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
@@ -512,13 +487,6 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	size_t pgsize;
 	int ncontig;
-	pte_t pte;
-
-	pte = __ptep_get(ptep);
-	VM_WARN_ON(!pte_present(pte));
-
-	if (!pte_cont(pte))
-		return ptep_clear_flush(vma, addr, ptep);
 
 	ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
 	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
-- 
2.43.0




* [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (4 preceding siblings ...)
  2025-02-17 14:07 ` [PATCH v2 05/14] arm64: hugetlb: Use set_ptes_anysz() and ptep_get_and_clear_anysz() Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-22 11:56   ` Catalin Marinas
  2025-02-17 14:07 ` [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings Ryan Roberts
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

set_ptes_anysz() previously called __set_pte() for each PTE in the
range, which would conditionally issue a DSB and ISB to make the new PTE
value immediately visible to the table walker if the new PTE was valid
and a kernel mapping.

We can do better than this; let's hoist those barriers out of the loop
so that they are only issued once at the end of the loop. This divides
the barrier cost by the number of PTEs in the range.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e255a36380dc..e4b1946b261f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
 	WRITE_ONCE(*ptep, pte);
 }
 
-static inline void __set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte_complete(pte_t pte)
 {
-	__set_pte_nosync(ptep, pte);
-
 	/*
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
 	 * or update_mmu_cache() have the necessary barriers.
@@ -331,6 +329,12 @@ static inline void __set_pte(pte_t *ptep, pte_t pte)
 	}
 }
 
+static inline void __set_pte(pte_t *ptep, pte_t pte)
+{
+	__set_pte_nosync(ptep, pte);
+	__set_pte_complete(pte);
+}
+
 static inline pte_t __ptep_get(pte_t *ptep)
 {
 	return READ_ONCE(*ptep);
@@ -647,12 +651,14 @@ static inline void set_ptes_anysz(struct mm_struct *mm, pte_t *ptep, pte_t pte,
 
 	for (;;) {
 		__check_safe_pte_update(mm, ptep, pte);
-		__set_pte(ptep, pte);
+		__set_pte_nosync(ptep, pte);
 		if (--nr == 0)
 			break;
 		ptep++;
 		pte = pte_advance_pfn(pte, stride);
 	}
+
+	__set_pte_complete(pte);
 }
 
 static inline void __set_ptes(struct mm_struct *mm,
-- 
2.43.0




* [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (5 preceding siblings ...)
  2025-02-17 14:07 ` [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop Ryan Roberts
@ 2025-02-17 14:07 ` Ryan Roberts
  2025-02-20 16:54   ` Kevin Brodsky
  2025-02-22 13:17   ` Catalin Marinas
  2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
                   ` (6 subsequent siblings)
  13 siblings, 2 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:07 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

__set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are
used to write entries into pgtables. And they issue barriers (currently
dsb and isb) to ensure that the written values are observed by the table
walker prior to any program-order-future memory access to the mapped
location.

Over the years some of these functions have received optimizations: In
particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
table modifications") made it so that the barriers were only emitted for
valid-kernel mappings for set_pte() (now __set_pte_complete()). And
commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
barriers for valid mappings. set_p4d()/set_pgd() continue to emit the
barriers unconditionally.

This is all very confusing to the casual observer; surely the rules
should be invariant to the level? Let's change this so that every level
consistently emits the barriers only when setting valid, non-user
entries (both table and leaf).

It seems obvious that if it is ok to elide barriers for all but valid kernel
mappings at pte level, it must also be ok to do this for leaf entries at
other levels: If setting an entry to invalid, a tlb maintenance
operation must surely follow to synchronise the TLB and this contains
the required barriers. If setting a valid user mapping, the previous
mapping must have been invalid and there must have been a TLB
maintenance operation (complete with barriers) to honour
break-before-make. So the worst that can happen is we take an extra
fault (which will imply the DSB + ISB) and conclude that there is
nothing to do. These are the arguments for doing this optimization at
pte level and they also apply to leaf mappings at other levels.

For table entries, the same arguments hold: if unsetting a table entry,
TLB maintenance is required and this will emit the required barriers. If
setting a table entry, the previous value must have been invalid and the
table
walker must already be able to observe that. Additionally the contents
of the pgtable being pointed to in the newly set entry must be visible
before the entry is written and this is enforced via smp_wmb() (dmb) in
the pgtable allocation functions and in __split_huge_pmd_locked(). But
this last part could never have been enforced by the barriers in
set_pXd() because they occur after updating the entry. So ultimately,
the worst that can happen by eliding these barriers for user table
entries is an extra fault.

I observe roughly the same number of page faults (107M) with and without
this change when compiling the kernel on Apple M2.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 34 ++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e4b1946b261f..51128c2956f8 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -767,6 +767,19 @@ static inline bool in_swapper_pgdir(void *addr)
 	        ((unsigned long)swapper_pg_dir & PAGE_MASK);
 }
 
+static inline bool pmd_valid_not_user(pmd_t pmd)
+{
+	/*
+	 * User-space table entries always have (PXN && !UXN). All other
+	 * combinations indicate it's a table entry for kernel space.
+	 * Valid-not-user leaf entries follow the same rules as
+	 * pte_valid_not_user().
+	 */
+	if (pmd_table(pmd))
+		return !((pmd_val(pmd) & (PMD_TABLE_PXN | PMD_TABLE_UXN)) == PMD_TABLE_PXN);
+	return pte_valid_not_user(pmd_pte(pmd));
+}
+
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 #ifdef __PAGETABLE_PMD_FOLDED
@@ -778,7 +791,7 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 
 	WRITE_ONCE(*pmdp, pmd);
 
-	if (pmd_valid(pmd)) {
+	if (pmd_valid_not_user(pmd)) {
 		dsb(ishst);
 		isb();
 	}
@@ -833,6 +846,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
 #define pud_user(pud)		pte_user(pud_pte(pud))
 #define pud_user_exec(pud)	pte_user_exec(pud_pte(pud))
+#define pud_valid_not_user(pud)	pmd_valid_not_user(pte_pmd(pud_pte(pud)))
 
 static inline bool pgtable_l4_enabled(void);
 
@@ -845,7 +859,7 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 
 	WRITE_ONCE(*pudp, pud);
 
-	if (pud_valid(pud)) {
+	if (pud_valid_not_user(pud)) {
 		dsb(ishst);
 		isb();
 	}
@@ -916,6 +930,7 @@ static inline bool mm_pud_folded(const struct mm_struct *mm)
 #define p4d_none(p4d)		(pgtable_l4_enabled() && !p4d_val(p4d))
 #define p4d_bad(p4d)		(pgtable_l4_enabled() && !(p4d_val(p4d) & P4D_TABLE_BIT))
 #define p4d_present(p4d)	(!p4d_none(p4d))
+#define p4d_valid_not_user(p4d)	pmd_valid_not_user(pte_pmd(p4d_pte(p4d)))
 
 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
@@ -925,8 +940,11 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 	}
 
 	WRITE_ONCE(*p4dp, p4d);
-	dsb(ishst);
-	isb();
+
+	if (p4d_valid_not_user(p4d)) {
+		dsb(ishst);
+		isb();
+	}
 }
 
 static inline void p4d_clear(p4d_t *p4dp)
@@ -1043,6 +1061,7 @@ static inline bool mm_p4d_folded(const struct mm_struct *mm)
 #define pgd_none(pgd)		(pgtable_l5_enabled() && !pgd_val(pgd))
 #define pgd_bad(pgd)		(pgtable_l5_enabled() && !(pgd_val(pgd) & PGD_TABLE_BIT))
 #define pgd_present(pgd)	(!pgd_none(pgd))
+#define pgd_valid_not_user(pgd)	pmd_valid_not_user(pte_pmd(pgd_pte(pgd)))
 
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
@@ -1052,8 +1071,11 @@ static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 	}
 
 	WRITE_ONCE(*pgdp, pgd);
-	dsb(ishst);
-	isb();
+
+	if (pgd_valid_not_user(pgd)) {
+		dsb(ishst);
+		isb();
+	}
 }
 
 static inline void pgd_clear(pgd_t *pgdp)
-- 
2.43.0




* [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range()
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (6 preceding siblings ...)
  2025-02-17 14:07 ` [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-20  7:02   ` Anshuman Khandual
                     ` (2 more replies)
  2025-02-17 14:08 ` [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes Ryan Roberts
                   ` (5 subsequent siblings)
  13 siblings, 3 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
pud level. But it is possible to subsequently call vunmap_range() on a
sub-range of the mapped memory, which partially overlaps a pmd or pud.
In this case, vmalloc unmaps the entire pmd or pud so that the
non-overlapping portion is also unmapped. Clearly that would have a bad
outcome, but it's not something that any callers do today as far as I
can tell. So I guess it's just expected that callers will not do this.

However, it would be useful to know if this happened in future; let's
add a warning to cover the eventuality.
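
For illustration, the (hypothetical) misuse that the warning is intended
to catch looks something like this:

  /* may be mapped with pmd (or pud) block mappings */
  void *p = vmalloc_huge(16 * PMD_SIZE, GFP_KERNEL);

  /*
   * Unmapping a sub-range that partially overlaps a block mapping tears
   * down the whole block, unmapping memory outside the requested range;
   * this now triggers the WARN_ON().
   */
  vunmap_range((unsigned long)p + PAGE_SIZE, (unsigned long)p + PMD_SIZE);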

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/vmalloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 61981ee1c9d2..a7e34e6936d2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -374,8 +374,10 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 		if (cleared || pmd_bad(*pmd))
 			*mask |= PGTBL_PMD_MODIFIED;
 
-		if (cleared)
+		if (cleared) {
+			WARN_ON(next - addr < PMD_SIZE);
 			continue;
+		}
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
 		vunmap_pte_range(pmd, addr, next, mask);
@@ -399,8 +401,10 @@ static void vunmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 		if (cleared || pud_bad(*pud))
 			*mask |= PGTBL_PUD_MODIFIED;
 
-		if (cleared)
+		if (cleared) {
+			WARN_ON(next - addr < PUD_SIZE);
 			continue;
+		}
 		if (pud_none_or_clear_bad(pud))
 			continue;
 		vunmap_pmd_range(pud, addr, next, mask);
-- 
2.43.0




* [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (7 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-20 12:05   ` Uladzislau Rezki
  2025-02-24 12:04   ` Catalin Marinas
  2025-02-17 14:08 ` [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap Ryan Roberts
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
level in vmap") added support for this by reusing the set_huge_pte_at()
API, which is otherwise only used for user mappings. But when unmapping
those huge ptes, it continued to call ptep_get_and_clear(), which is a
layering violation. To date, the only arch to implement this support is
powerpc and it all happens to work ok for it.

But arm64's implementation of ptep_get_and_clear() can not be safely
used to clear a previous set_huge_pte_at(). So let's introduce a new
arch opt-in function, arch_vmap_pte_range_unmap_size(), which can
provide the size of a (present) pte. Then we can call
huge_ptep_get_and_clear() to tear it down properly.

Note that if vunmap_range() is called with a range that starts in the
middle of a huge pte-mapped page, we must unmap the entire huge page so
the behaviour is consistent with pmd and pud block mappings. In this
case emit a warning just like we do for pmd/pud mappings.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/vmalloc.h |  8 ++++++++
 mm/vmalloc.c            | 18 ++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..16dd4cba64f2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -113,6 +113,14 @@ static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, uns
 }
 #endif
 
+#ifndef arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							   pte_t *ptep)
+{
+	return PAGE_SIZE;
+}
+#endif
+
 #ifndef arch_vmap_pte_supported_shift
 static inline int arch_vmap_pte_supported_shift(unsigned long size)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a7e34e6936d2..68950b1824d0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -350,12 +350,26 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			     pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
+	pte_t ptent;
+	unsigned long size = PAGE_SIZE;
 
 	pte = pte_offset_kernel(pmd, addr);
 	do {
-		pte_t ptent = ptep_get_and_clear(&init_mm, addr, pte);
+#ifdef CONFIG_HUGETLB_PAGE
+		size = arch_vmap_pte_range_unmap_size(addr, pte);
+		if (size != PAGE_SIZE) {
+			if (WARN_ON(!IS_ALIGNED(addr, size))) {
+				addr = ALIGN_DOWN(addr, size);
+				pte = PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> PAGE_SHIFT));
+			}
+			ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
+			if (WARN_ON(end - addr < size))
+				size = end - addr;
+		} else
+#endif
+			ptent = ptep_get_and_clear(&init_mm, addr, pte);
 		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
-	} while (pte++, addr += PAGE_SIZE, addr != end);
+	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
 	*mask |= PGTBL_PTE_MODIFIED;
 }
 
-- 
2.43.0




* [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (8 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-24 12:03   ` Catalin Marinas
  2025-02-17 14:08 ` [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently Ryan Roberts
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Implement the required arch functions to enable use of contpte in the
vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
operations due to only having to issue a DSB and ISB per contpte block
instead of per pte. It also reduces TLB pressure, since only a single
TLB entry is needed for the whole contpte block.

Since vmap uses set_huge_pte_at() to set the contpte, that API is now
used for kernel mappings for the first time. Although in the vmap case
we never expect it to be called to modify a valid mapping (so
clear_flush() should never be called), it's still wise to make it robust
for the kernel case, so amend the tlb flush function to handle an mm
that is for kernel space.

Tested with vmalloc performance selftests:

  # kself/mm/test_vmalloc.sh \
	run_test_mask=1
	test_repeat_count=5
	nr_pages=256
	test_loop_count=100000
	use_huge=1

Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
reduction in time taken.
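
As an example (sketch only), an allocation that is at least CONT_PTE_SIZE
(64K with 4K base pages) but smaller than PMD_SIZE was previously always
pte-mapped; with this patch it can be contpte-mapped with no caller
changes:

  /* eligible for contpte mapping on arm64 after this patch */
  void *p = vmalloc_huge(SZ_128K, GFP_KERNEL);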

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/vmalloc.h | 46 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/hugetlbpage.c      |  5 +++-
 2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 38fafffe699f..40ebc664190b 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -23,6 +23,52 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
 	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
 }
 
+#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
+						unsigned long end, u64 pfn,
+						unsigned int max_page_shift)
+{
+	/*
+	 * If the block is at least CONT_PTE_SIZE in size, and is naturally
+	 * aligned in both virtual and physical space, then we can pte-map the
+	 * block using the PTE_CONT bit for more efficient use of the TLB.
+	 */
+
+	if (max_page_shift < CONT_PTE_SHIFT)
+		return PAGE_SIZE;
+
+	if (end - addr < CONT_PTE_SIZE)
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
+		return PAGE_SIZE;
+
+	return CONT_PTE_SIZE;
+}
+
+#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+							   pte_t *ptep)
+{
+	/*
+	 * The caller handles alignment so it's sufficient just to check
+	 * PTE_CONT.
+	 */
+	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
+}
+
+#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+static inline int arch_vmap_pte_supported_shift(unsigned long size)
+{
+	if (size >= CONT_PTE_SIZE)
+		return CONT_PTE_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
 #endif
 
 #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 8ac86cd180b3..a29f347fea54 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -217,7 +217,10 @@ static void clear_flush(struct mm_struct *mm,
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
 		ptep_get_and_clear_anysz(mm, ptep, pgsize);
 
-	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
+	if (mm == &init_mm)
+		flush_tlb_kernel_range(saddr, addr);
+	else
+		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-- 
2.43.0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (9 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-25 15:37   ` Catalin Marinas
  2025-02-17 14:08 ` [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings() Ryan Roberts
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

When page_shift is greater than PAGE_SIZE, __vmap_pages_range_noflush()
will call vmap_range_noflush() once for each individual huge page. But
vmap_range_noflush() previously called arch_sync_kernel_mappings()
directly, so the hook ended up being invoked once per huge page; for
example, a 32M area mapped with 2M huge pages could invoke it 16 times
instead of once.

We can do better than this; refactor the call into the outer
__vmap_pages_range_noflush() so that it is only called once for the
entire batch operation.

This will benefit performance on arm64, which is about to opt in to
using the hook.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/vmalloc.c | 60 ++++++++++++++++++++++++++--------------------------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 68950b1824d0..50fd44439875 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -285,40 +285,38 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 
 static int vmap_range_noflush(unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pgd_t *pgd;
-	unsigned long start;
 	unsigned long next;
 	int err;
-	pgtbl_mod_mask mask = 0;
 
 	might_sleep();
 	BUG_ON(addr >= end);
 
-	start = addr;
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot,
-					max_page_shift, &mask);
+					max_page_shift, mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
 	return err;
 }
 
 int vmap_page_range(unsigned long addr, unsigned long end,
 		    phys_addr_t phys_addr, pgprot_t prot)
 {
+	pgtbl_mod_mask mask = 0;
 	int err;
 
 	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
-				 ioremap_max_page_shift);
+				 ioremap_max_page_shift, &mask);
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(addr, end);
+
 	flush_cache_vmap(addr, end);
 	if (!err)
 		err = kmsan_ioremap_page_range(addr, end, phys_addr, prot,
@@ -587,29 +585,24 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 }
 
 static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
-		pgprot_t prot, struct page **pages)
+		pgprot_t prot, struct page **pages, pgtbl_mod_mask *mask)
 {
-	unsigned long start = addr;
 	pgd_t *pgd;
 	unsigned long next;
 	int err = 0;
 	int nr = 0;
-	pgtbl_mod_mask mask = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
-			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+			*mask |= PGTBL_PGD_MODIFIED;
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, mask);
 		if (err)
 			break;
 	} while (pgd++, addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
 	return err;
 }
 
@@ -626,26 +619,33 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+	unsigned long start = addr;
+	pgtbl_mod_mask mask = 0;
+	int err = 0;
 
 	WARN_ON(page_shift < PAGE_SHIFT);
 
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
-			page_shift == PAGE_SHIFT)
-		return vmap_small_pages_range_noflush(addr, end, prot, pages);
-
-	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
-		int err;
-
-		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
-					page_to_phys(pages[i]), prot,
-					page_shift);
-		if (err)
-			return err;
+			page_shift == PAGE_SHIFT) {
+		err = vmap_small_pages_range_noflush(addr, end, prot, pages,
+						&mask);
+	} else {
+		for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+			err = vmap_range_noflush(addr,
+						addr + (1UL << page_shift),
+						page_to_phys(pages[i]), prot,
+						page_shift, &mask);
+			if (err)
+				break;
 
-		addr += 1UL << page_shift;
+			addr += 1UL << page_shift;
+		}
 	}
 
-	return 0;
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, end);
+
+	return err;
 }
 
 int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
-- 
2.43.0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings()
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (10 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-25 17:10   ` Ryan Roberts
  2025-02-17 14:08 ` [PATCH v2 13/14] mm: Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings Ryan Roberts
  2025-02-17 14:08 ` [PATCH v2 14/14] arm64/mm: Batch barriers when updating " Ryan Roberts
  13 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

arch_sync_kernel_mappings() is an optional hook that allows arches to
synchronize certain levels of the kernel pgtables after modification.
arm64 would benefit from a similar hook, paired with a call issued
prior to starting the batch of modifications.

So let's introduce arch_update_kernel_mappings_begin() and
arch_update_kernel_mappings_end(). Both have a default implementation
which can be overridden by the arch code. The default for the former is
a nop, and the default for the latter is to call
arch_sync_kernel_mappings(), so the latter replaces the previous
arch_sync_kernel_mappings() callsites. By default, the resulting
behaviour is therefore unchanged.

To avoid include hell, the pgtbl_mod_mask type and its associated
macros are moved to their own header.

In a future patch, arm64 will opt in to overriding both functions.
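
For illustration, such an override is expected to look roughly like the
below (sketch only; the signatures mirror the generic declarations added
here, and the real arm64 version comes later in the series):

  #define arch_update_kernel_mappings_begin arch_update_kernel_mappings_begin
  static inline void arch_update_kernel_mappings_begin(unsigned long start,
                                                       unsigned long end)
  {
          /* e.g. enter a lazy mode so per-entry barriers can be deferred */
  }

  #define arch_update_kernel_mappings_end arch_update_kernel_mappings_end
  static inline void arch_update_kernel_mappings_end(unsigned long start,
                                                     unsigned long end,
                                                     pgtbl_mod_mask mask)
  {
          /* e.g. leave lazy mode; emit deferred barriers and sync pgtables */
  }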

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/pgtable.h         | 24 +----------------
 include/linux/pgtable_modmask.h | 32 ++++++++++++++++++++++
 include/linux/vmalloc.h         | 47 +++++++++++++++++++++++++++++++++
 mm/memory.c                     |  5 ++--
 mm/vmalloc.c                    | 15 ++++++-----
 5 files changed, 92 insertions(+), 31 deletions(-)
 create mode 100644 include/linux/pgtable_modmask.h

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 94d267d02372..7f70786a73b3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -4,6 +4,7 @@
 
 #include <linux/pfn.h>
 #include <asm/pgtable.h>
+#include <linux/pgtable_modmask.h>
 
 #define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
 #define PUD_ORDER	(PUD_SHIFT - PAGE_SHIFT)
@@ -1786,29 +1787,6 @@ static inline bool arch_has_pfn_modify_check(void)
 # define PAGE_KERNEL_EXEC PAGE_KERNEL
 #endif
 
-/*
- * Page Table Modification bits for pgtbl_mod_mask.
- *
- * These are used by the p?d_alloc_track*() set of functions an in the generic
- * vmalloc/ioremap code to track at which page-table levels entries have been
- * modified. Based on that the code can better decide when vmalloc and ioremap
- * mapping changes need to be synchronized to other page-tables in the system.
- */
-#define		__PGTBL_PGD_MODIFIED	0
-#define		__PGTBL_P4D_MODIFIED	1
-#define		__PGTBL_PUD_MODIFIED	2
-#define		__PGTBL_PMD_MODIFIED	3
-#define		__PGTBL_PTE_MODIFIED	4
-
-#define		PGTBL_PGD_MODIFIED	BIT(__PGTBL_PGD_MODIFIED)
-#define		PGTBL_P4D_MODIFIED	BIT(__PGTBL_P4D_MODIFIED)
-#define		PGTBL_PUD_MODIFIED	BIT(__PGTBL_PUD_MODIFIED)
-#define		PGTBL_PMD_MODIFIED	BIT(__PGTBL_PMD_MODIFIED)
-#define		PGTBL_PTE_MODIFIED	BIT(__PGTBL_PTE_MODIFIED)
-
-/* Page-Table Modification Mask */
-typedef unsigned int pgtbl_mod_mask;
-
 #endif /* !__ASSEMBLY__ */
 
 #if !defined(MAX_POSSIBLE_PHYSMEM_BITS) && !defined(CONFIG_64BIT)
diff --git a/include/linux/pgtable_modmask.h b/include/linux/pgtable_modmask.h
new file mode 100644
index 000000000000..5a21b1bb8df3
--- /dev/null
+++ b/include/linux/pgtable_modmask.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PGTABLE_MODMASK_H
+#define _LINUX_PGTABLE_MODMASK_H
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Page Table Modification bits for pgtbl_mod_mask.
+ *
+ * These are used by the p?d_alloc_track*() set of functions and in the generic
+ * vmalloc/ioremap code to track at which page-table levels entries have been
+ * modified. Based on that the code can better decide when vmalloc and ioremap
+ * mapping changes need to be synchronized to other page-tables in the system.
+ */
+#define		__PGTBL_PGD_MODIFIED	0
+#define		__PGTBL_P4D_MODIFIED	1
+#define		__PGTBL_PUD_MODIFIED	2
+#define		__PGTBL_PMD_MODIFIED	3
+#define		__PGTBL_PTE_MODIFIED	4
+
+#define		PGTBL_PGD_MODIFIED	BIT(__PGTBL_PGD_MODIFIED)
+#define		PGTBL_P4D_MODIFIED	BIT(__PGTBL_P4D_MODIFIED)
+#define		PGTBL_PUD_MODIFIED	BIT(__PGTBL_PUD_MODIFIED)
+#define		PGTBL_PMD_MODIFIED	BIT(__PGTBL_PMD_MODIFIED)
+#define		PGTBL_PTE_MODIFIED	BIT(__PGTBL_PTE_MODIFIED)
+
+/* Page-Table Modification Mask */
+typedef unsigned int pgtbl_mod_mask;
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _LINUX_PGTABLE_MODMASK_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 16dd4cba64f2..cb5d8f1965a1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -11,6 +11,7 @@
 #include <asm/page.h>		/* pgprot_t */
 #include <linux/rbtree.h>
 #include <linux/overflow.h>
+#include <linux/pgtable_modmask.h>
 
 #include <asm/vmalloc.h>
 
@@ -213,6 +214,26 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
 		     struct page **pages, unsigned int page_shift);
 
+#ifndef arch_update_kernel_mappings_begin
+/**
+ * arch_update_kernel_mappings_begin - A batch of kernel pgtable mappings are
+ * about to be updated.
+ * @start: Virtual address of start of range to be updated.
+ * @end: Virtual address of end of range to be updated.
+ *
+ * An optional hook to allow architecture code to prepare for a batch of kernel
+ * pgtable mapping updates. An architecture may use this to enter a lazy mode
+ * where some operations can be deferred until the end of the batch.
+ *
+ * Context: Called in task context and may be preemptible.
+ */
+static inline void arch_update_kernel_mappings_begin(unsigned long start,
+						     unsigned long end)
+{
+}
+#endif
+
+#ifndef arch_update_kernel_mappings_end
 /*
  * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
  * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
@@ -229,6 +250,32 @@ int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
  */
 void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
 
+/**
+ * arch_update_kernel_mappings_end - A batch of kernel pgtable mappings have
+ * been updated.
+ * @start: Virtual address of start of range that was updated.
+ * @end: Virtual address of end of range that was updated.
+ *
+ * An optional hook to inform architecture code that a batch update is complete.
+ * This balances a previous call to arch_update_kernel_mappings_begin().
+ *
+ * An architecture may override this for any purpose, such as exiting a lazy
+ * mode previously entered with arch_update_kernel_mappings_begin() or syncing
+ * kernel mappings to a secondary pgtable. The default implementation calls an
+ * arch-provided arch_sync_kernel_mappings() if any arch-defined pgtable level
+ * was updated.
+ *
+ * Context: Called in task context and may be preemptible.
+ */
+static inline void arch_update_kernel_mappings_end(unsigned long start,
+						   unsigned long end,
+						   pgtbl_mod_mask mask)
+{
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, end);
+}
+#endif
+
 /*
  *	Lowlevel-APIs (not for driver use!)
  */
diff --git a/mm/memory.c b/mm/memory.c
index a15f7dd500ea..f80930bc19f6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3035,6 +3035,8 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	if (WARN_ON(addr >= end))
 		return -EINVAL;
 
+	arch_update_kernel_mappings_begin(start, end);
+
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -3055,8 +3057,7 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 			break;
 	} while (pgd++, addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, start + size);
+	arch_update_kernel_mappings_end(start, end, mask);
 
 	return err;
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 50fd44439875..c5c51d86ef78 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -312,10 +312,10 @@ int vmap_page_range(unsigned long addr, unsigned long end,
 	pgtbl_mod_mask mask = 0;
 	int err;
 
+	arch_update_kernel_mappings_begin(addr, end);
 	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
 				 ioremap_max_page_shift, &mask);
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(addr, end);
+	arch_update_kernel_mappings_end(addr, end, mask);
 
 	flush_cache_vmap(addr, end);
 	if (!err)
@@ -463,6 +463,9 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end)
 	pgtbl_mod_mask mask = 0;
 
 	BUG_ON(addr >= end);
+
+	arch_update_kernel_mappings_begin(start, end);
+
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -473,8 +476,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end)
 		vunmap_p4d_range(pgd, addr, next, &mask);
 	} while (pgd++, addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
+	arch_update_kernel_mappings_end(start, end, mask);
 }
 
 void vunmap_range_noflush(unsigned long start, unsigned long end)
@@ -625,6 +627,8 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 
 	WARN_ON(page_shift < PAGE_SHIFT);
 
+	arch_update_kernel_mappings_begin(start, end);
+
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
 			page_shift == PAGE_SHIFT) {
 		err = vmap_small_pages_range_noflush(addr, end, prot, pages,
@@ -642,8 +646,7 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		}
 	}
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
+	arch_update_kernel_mappings_end(start, end, mask);
 
 	return err;
 }
-- 
2.43.0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 13/14] mm: Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (11 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings() Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  2025-02-17 14:08 ` [PATCH v2 14/14] arm64/mm: Batch barriers when updating " Ryan Roberts
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

arch_update_kernel_mappings_[begin|end]() is called from
__apply_to_page_range(), which operates on both kernel and user
mappings, so until now the hooks have been called unconditionally for
both types of mapping.

The existing arch implementations of arch_sync_kernel_mappings() (which
is called by the default implementation of
arch_update_kernel_mappings_end()) filter on kernel address ranges, so
this change is still correct for those users. But given
"kernel_mappings" is in the function names, we really shouldn't be
calling them for user mappings. This change will also make the upcoming
arm64 implementation simpler.
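
For example (the callers and arguments below are purely illustrative):

  /* kernel mapping: the hooks are still called */
  apply_to_page_range(&init_mm, kaddr, size, fn, data);

  /* user mapping: the hooks are now skipped */
  apply_to_page_range(vma->vm_mm, uaddr, size, fn, data);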

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f80930bc19f6..4e299d254a11 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3035,7 +3035,8 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	if (WARN_ON(addr >= end))
 		return -EINVAL;
 
-	arch_update_kernel_mappings_begin(start, end);
+	if (mm == &init_mm)
+		arch_update_kernel_mappings_begin(start, end);
 
 	pgd = pgd_offset(mm, addr);
 	do {
@@ -3057,7 +3058,8 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 			break;
 	} while (pgd++, addr = next, addr != end);
 
-	arch_update_kernel_mappings_end(start, end, mask);
+	if (mm == &init_mm)
+		arch_update_kernel_mappings_end(start, end, mask);
 
 	return err;
 }
-- 
2.43.0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 14/14] arm64/mm: Batch barriers when updating kernel mappings
  2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
                   ` (12 preceding siblings ...)
  2025-02-17 14:08 ` [PATCH v2 13/14] mm: Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings Ryan Roberts
@ 2025-02-17 14:08 ` Ryan Roberts
  13 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-17 14:08 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel

Because the kernel can't tolerate page faults for kernel mappings, when
setting a valid, kernel space pte (or pmd/pud/p4d/pgd), arm64 emits a
dsb(ishst) to ensure that the store to the pgtable is observed by the
table walker immediately. Additionally it emits an isb() to ensure that
any invalid-mapping fault that was already speculatively determined gets
canceled.

We can improve the performance of vmalloc operations by batching these
barriers until the end of a set of entry updates. The newly added
arch_update_kernel_mappings_begin() / arch_update_kernel_mappings_end()
provide the required hooks.

vmalloc() performance improves by up to 30% as a result.

A new TIF_ flag is created; TIF_KMAP_UPDATE_ACTIVE tells us if we are in
batch mode and can therefore defer any barriers until the end of the
batch.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h     | 73 ++++++++++++++++++++--------
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/kernel/process.c          |  9 ++--
 3 files changed, 59 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 51128c2956f8..f8866dbdfde7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -39,6 +39,49 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 #include <linux/page_table_check.h>
+#include <linux/pgtable_modmask.h>
+
+static inline void emit_pte_barriers(void)
+{
+	/*
+	 * These barriers are emitted under certain conditions after a pte entry
+	 * was modified (see e.g. __set_pte_complete()). The dsb makes the store
+	 * visible to the table walker. The isb ensures that any previous
+	 * speculative "invalid translation" marker that is in the CPU's
+	 * pipeline gets cleared, so that any access to that address after
+	 * setting the pte to valid won't cause a spurious fault. If the thread
+	 * gets preempted after storing to the pgtable but before emitting these
+	 * barriers, __switch_to() emits a dsb which ensure the walker gets to
+	 * barriers, __switch_to() emits a dsb which ensures the walker gets to
+	 * see the store. There is no guarantee of an isb being issued though.
+	 * potentially different CPU) when the thread starts running again,
+	 * before any access to the address.
+	 */
+	dsb(ishst);
+	isb();
+}
+
+static inline void queue_pte_barriers(void)
+{
+	if (!test_thread_flag(TIF_KMAP_UPDATE_ACTIVE))
+		emit_pte_barriers();
+}
+
+#define arch_update_kernel_mappings_begin arch_update_kernel_mappings_begin
+static inline void arch_update_kernel_mappings_begin(unsigned long start,
+						     unsigned long end)
+{
+	set_thread_flag(TIF_KMAP_UPDATE_ACTIVE);
+}
+
+#define arch_update_kernel_mappings_end arch_update_kernel_mappings_end
+static inline void arch_update_kernel_mappings_end(unsigned long start,
+						   unsigned long end,
+						   pgtbl_mod_mask mask)
+{
+	clear_thread_flag(TIF_KMAP_UPDATE_ACTIVE);
+	emit_pte_barriers();
+}
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
@@ -323,10 +366,8 @@ static inline void __set_pte_complete(pte_t pte)
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
 	 * or update_mmu_cache() have the necessary barriers.
 	 */
-	if (pte_valid_not_user(pte)) {
-		dsb(ishst);
-		isb();
-	}
+	if (pte_valid_not_user(pte))
+		queue_pte_barriers();
 }
 
 static inline void __set_pte(pte_t *ptep, pte_t pte)
@@ -791,10 +832,8 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 
 	WRITE_ONCE(*pmdp, pmd);
 
-	if (pmd_valid_not_user(pmd)) {
-		dsb(ishst);
-		isb();
-	}
+	if (pmd_valid_not_user(pmd))
+		queue_pte_barriers();
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
@@ -859,10 +898,8 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 
 	WRITE_ONCE(*pudp, pud);
 
-	if (pud_valid_not_user(pud)) {
-		dsb(ishst);
-		isb();
-	}
+	if (pud_valid_not_user(pud))
+		queue_pte_barriers();
 }
 
 static inline void pud_clear(pud_t *pudp)
@@ -941,10 +978,8 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 
 	WRITE_ONCE(*p4dp, p4d);
 
-	if (p4d_valid_not_user(p4d)) {
-		dsb(ishst);
-		isb();
-	}
+	if (p4d_valid_not_user(p4d))
+		queue_pte_barriers();
 }
 
 static inline void p4d_clear(p4d_t *p4dp)
@@ -1072,10 +1107,8 @@ static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 
 	WRITE_ONCE(*pgdp, pgd);
 
-	if (pgd_valid_not_user(pgd)) {
-		dsb(ishst);
-		isb();
-	}
+	if (pgd_valid_not_user(pgd))
+		queue_pte_barriers();
 }
 
 static inline void pgd_clear(pgd_t *pgdp)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 1114c1c3300a..3856e0759cc3 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -82,6 +82,7 @@ void arch_setup_new_exec(void);
 #define TIF_SME_VL_INHERIT	28	/* Inherit SME vl_onexec across exec */
 #define TIF_KERNEL_FPSTATE	29	/* Task is in a kernel mode FPSIMD section */
 #define TIF_TSC_SIGSEGV		30	/* SIGSEGV on counter-timer access */
+#define TIF_KMAP_UPDATE_ACTIVE	31	/* kernel map update in progress */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 42faebb7b712..45a55fe81788 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -680,10 +680,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	gcs_thread_switch(next);
 
 	/*
-	 * Complete any pending TLB or cache maintenance on this CPU in case
-	 * the thread migrates to a different CPU.
-	 * This full barrier is also required by the membarrier system
-	 * call.
+	 * Complete any pending TLB or cache maintenance on this CPU in case the
+	 * thread migrates to a different CPU. This full barrier is also
+	 * required by the membarrier system call. Additionally it makes any
+	 * in-progress pgtable writes visible to the table walker; See
+	 * emit_pte_barriers().
 	 */
 	dsb(ish);
 
-- 
2.43.0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range()
  2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
@ 2025-02-20  7:02   ` Anshuman Khandual
  2025-02-24 12:03   ` Catalin Marinas
  2025-02-24 12:04   ` Catalin Marinas
  2 siblings, 0 replies; 32+ messages in thread
From: Anshuman Khandual @ 2025-02-20  7:02 UTC (permalink / raw)
  To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
	Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
	David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Alexandre Ghiti, Kevin Brodsky
  Cc: linux-arm-kernel, linux-mm, linux-kernel

On 2/17/25 19:38, Ryan Roberts wrote:
> A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
> pud level. But it is possible to subsequently call vunmap_range() on a
> sub-range of the mapped memory, which partially overlaps a pmd or pud.
> In this case, vmalloc unmaps the entire pmd or pud so that the
> no-overlapping portion is also unmapped. Clearly that would have a bad
> outcome, but it's not something that any callers do today as far as I
> can tell. So I guess it's just expected that callers will not do this.
> 
> However, it would be useful to know if this happened in future; let's
> add a warning to cover the eventuality.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

LGTM and stands on its own independent of the series here.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  mm/vmalloc.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 61981ee1c9d2..a7e34e6936d2 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -374,8 +374,10 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>  		if (cleared || pmd_bad(*pmd))
>  			*mask |= PGTBL_PMD_MODIFIED;
>  
> -		if (cleared)
> +		if (cleared) {
> +			WARN_ON(next - addr < PMD_SIZE);
>  			continue;
> +		}
>  		if (pmd_none_or_clear_bad(pmd))
>  			continue;
>  		vunmap_pte_range(pmd, addr, next, mask);
> @@ -399,8 +401,10 @@ static void vunmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>  		if (cleared || pud_bad(*pud))
>  			*mask |= PGTBL_PUD_MODIFIED;
>  
> -		if (cleared)
> +		if (cleared) {
> +			WARN_ON(next - addr < PUD_SIZE);
>  			continue;
> +		}
>  		if (pud_none_or_clear_bad(pud))
>  			continue;
>  		vunmap_pmd_range(pud, addr, next, mask);


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes
  2025-02-17 14:08 ` [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes Ryan Roberts
@ 2025-02-20 12:05   ` Uladzislau Rezki
  2025-02-24 12:04   ` Catalin Marinas
  1 sibling, 0 replies; 32+ messages in thread
From: Uladzislau Rezki @ 2025-02-20 12:05 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:01PM +0000, Ryan Roberts wrote:
> Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
> level in vmap") added its support by reusing the set_huge_pte_at() API,
> which is otherwise only used for user mappings. But when unmapping those
> huge ptes, it continued to call ptep_get_and_clear(), which is a
> layering violation. To date, the only arch to implement this support is
> powerpc and it all happens to work ok for it.
> 
> But arm64's implementation of ptep_get_and_clear() can not be safely
> used to clear a previous set_huge_pte_at(). So let's introduce a new
> arch opt-in function, arch_vmap_pte_range_unmap_size(), which can
> provide the size of a (present) pte. Then we can call
> huge_ptep_get_and_clear() to tear it down properly.
> 
> Note that if vunmap_range() is called with a range that starts in the
> middle of a huge pte-mapped page, we must unmap the entire huge page so
> the behaviour is consistent with pmd and pud block mappings. In this
> case emit a warning just like we do for pmd/pud mappings.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/vmalloc.h |  8 ++++++++
>  mm/vmalloc.c            | 18 ++++++++++++++++--
>  2 files changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 31e9ffd936e3..16dd4cba64f2 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -113,6 +113,14 @@ static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, uns
>  }
>  #endif
>  
> +#ifndef arch_vmap_pte_range_unmap_size
> +static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
> +							   pte_t *ptep)
> +{
> +	return PAGE_SIZE;
> +}
> +#endif
> +
>  #ifndef arch_vmap_pte_supported_shift
>  static inline int arch_vmap_pte_supported_shift(unsigned long size)
>  {
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a7e34e6936d2..68950b1824d0 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -350,12 +350,26 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  			     pgtbl_mod_mask *mask)
>  {
>  	pte_t *pte;
> +	pte_t ptent;
> +	unsigned long size = PAGE_SIZE;
>  
>  	pte = pte_offset_kernel(pmd, addr);
>  	do {
> -		pte_t ptent = ptep_get_and_clear(&init_mm, addr, pte);
> +#ifdef CONFIG_HUGETLB_PAGE
> +		size = arch_vmap_pte_range_unmap_size(addr, pte);
> +		if (size != PAGE_SIZE) {
> +			if (WARN_ON(!IS_ALIGNED(addr, size))) {
> +				addr = ALIGN_DOWN(addr, size);
> +				pte = PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> PAGE_SHIFT));
> +			}
> +			ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
> +			if (WARN_ON(end - addr < size))
> +				size = end - addr;
> +		} else
> +#endif
> +			ptent = ptep_get_and_clear(&init_mm, addr, pte);
>  		WARN_ON(!pte_none(ptent) && !pte_present(ptent));
> -	} while (pte++, addr += PAGE_SIZE, addr != end);
> +	} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
>  	*mask |= PGTBL_PTE_MODIFIED;
>  }
>  
> -- 
> 2.43.0
> 
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>


Uladzislau Rezki


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
  2025-02-17 14:07 ` [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings Ryan Roberts
@ 2025-02-20 16:54   ` Kevin Brodsky
  2025-02-24 12:26     ` Ryan Roberts
  2025-02-22 13:17   ` Catalin Marinas
  1 sibling, 1 reply; 32+ messages in thread
From: Kevin Brodsky @ 2025-02-20 16:54 UTC (permalink / raw)
  To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
	Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
	David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti
  Cc: linux-arm-kernel, linux-mm, linux-kernel

On 17/02/2025 15:07, Ryan Roberts wrote:
> __set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are

Nit: it would be more accurate to say __set_pte() instead of
__set_pte_complete(), as it is the former that actually writes the PTE
(and then issues barriers).

> used to write entries into pgtables. And they issue barriers (currently
> dsb and isb) to ensure that the written values are observed by the table
> walker prior to any program-order-future memory access to the mapped
> location.
>
> Over the years some of these functions have received optimizations: In
> particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
> table modifications") made it so that the barriers were only emitted for
> valid-kernel mappings for set_pte() (now __set_pte_complete()). And
> commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
> in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
> barriers for valid mappings. set_p4d()/set_pgd() continue to emit the
> barriers unconditionally.
>
> This is all very confusing to the casual observer; surely the rules
> should be invariant to the level? Let's change this so that every level
> consistently emits the barriers only when setting valid, non-user
> entries (both table and leaf).
>
> It seems obvious that if it is ok to elide barriers all but valid kernel
> mappings at pte level, it must also be ok to do this for leaf entries at
> other levels: If setting an entry to invalid, a tlb maintenance
> operation must surely follow to synchronise the TLB and this contains
> the required barriers. If setting a valid user mapping, the previous
> mapping must have been invalid and there must have been a TLB
> maintenance operation (complete with barriers) to honour
> break-before-make. So the worst that can happen is we take an extra
> fault (which will imply the DSB + ISB) and conclude that there is
> nothing to do. These are the arguments for doing this optimization at
> pte level and they also apply to leaf mappings at other levels.
>
> For table entries, the same arguments hold: If unsetting a table entry,
> a TLB is required and this will emit the required barriers. If setting a

s/TLB/TLB maintenance/

> table entry, the previous value must have been invalid and the table
> walker must already be able to observe that. Additionally the contents
> of the pgtable being pointed to in the newly set entry must be visible
> before the entry is written and this is enforced via smp_wmb() (dmb) in
> the pgtable allocation functions and in __split_huge_pmd_locked(). But
> this last part could never have been enforced by the barriers in
> set_pXd() because they occur after updating the entry. So ultimately,
> the wost that can happen by eliding these barriers for user table

s/wost/worst/

- Kevin

> entries is an extra fault.
>
> [...]



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  2025-02-17 14:07 ` [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop Ryan Roberts
@ 2025-02-22 11:56   ` Catalin Marinas
  2025-02-24 12:18     ` Ryan Roberts
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-02-22 11:56 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:07:58PM +0000, Ryan Roberts wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index e255a36380dc..e4b1946b261f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
>  	WRITE_ONCE(*ptep, pte);
>  }
>  
> -static inline void __set_pte(pte_t *ptep, pte_t pte)
> +static inline void __set_pte_complete(pte_t pte)
>  {
> -	__set_pte_nosync(ptep, pte);
> -
>  	/*
>  	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
>  	 * or update_mmu_cache() have the necessary barriers.

Unrelated to this patch but I just realised that this comment is stale,
we no longer do anything in update_mmu_cache() since commit 120798d2e7d1
("arm64: mm: remove dsb from update_mmu_cache"). If you respin, please
remove the update_mmu_cache() part as well.

Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
  2025-02-17 14:07 ` [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings Ryan Roberts
  2025-02-20 16:54   ` Kevin Brodsky
@ 2025-02-22 13:17   ` Catalin Marinas
  2025-02-25 16:41     ` Ryan Roberts
  1 sibling, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-02-22 13:17 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:07:59PM +0000, Ryan Roberts wrote:
> __set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are
> used to write entries into pgtables. And they issue barriers (currently
> dsb and isb) to ensure that the written values are observed by the table
> walker prior to any program-order-future memory access to the mapped
> location.
> 
> Over the years some of these functions have received optimizations: In
> particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
> table modifications") made it so that the barriers were only emitted for
> valid-kernel mappings for set_pte() (now __set_pte_complete()). And
> commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
> in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
> barriers for valid mappings.

The assumption probably was that set_pmd/pud() are called a lot less
often than set_pte() as they cover larger address ranges (whether table
or leaf).

> set_p4d()/set_pgd() continue to emit the barriers unconditionally.

We probably missed them, should have been the same as set_pmd().

> This is all very confusing to the casual observer; surely the rules
> should be invariant to the level? Let's change this so that every level
> consistently emits the barriers only when setting valid, non-user
> entries (both table and leaf).

Also see commit d0b7a302d58a ("Revert "arm64: Remove unnecessary ISBs
from set_{pte,pmd,pud}"") why we added back the ISBs to the pmd/pud
accessors and the last paragraph on why we are ok with the spurious
faults for PTEs.

For user mappings, the translation fault is routed through the usual
path that can handle mapping new entries, so I think we are fine. But
it's worth double-checking Will's comment (unless he only referred to
kernel table entries).

> It seems obvious that if it is ok to elide barriers all but valid kernel
> mappings at pte level, it must also be ok to do this for leaf entries at
> other levels: If setting an entry to invalid, a tlb maintenance
> operation must surely follow to synchronise the TLB and this contains
> the required barriers.

Setting to invalid is fine indeed, handled by the TLB flushing code,
hence the pmd_valid() checks.

> If setting a valid user mapping, the previous
> mapping must have been invalid and there must have been a TLB
> maintenance operation (complete with barriers) to honour
> break-before-make.

That's not entirely true for change_protection() for example or the
fork() path when we make the entries read-only from writeable without
BBM. We could improve these cases as well, I haven't looked in detail.
ptep_modify_prot_commit() via change_pte_range() can defer the barriers
to tlb_end_vma(). Something similar on the copy_present_ptes() path.

> So the worst that can happen is we take an extra
> fault (which will imply the DSB + ISB) and conclude that there is
> nothing to do. These are the arguments for doing this optimization at
> pte level and they also apply to leaf mappings at other levels.

It's worth clarifying Will's comment in the commit I mentioned above.

> For table entries, the same arguments hold: If unsetting a table entry,
> a TLB is required and this will emit the required barriers. If setting a
> table entry, the previous value must have been invalid and the table
> walker must already be able to observe that. Additionally the contents
> of the pgtable being pointed to in the newly set entry must be visible
> before the entry is written and this is enforced via smp_wmb() (dmb) in
> the pgtable allocation functions and in __split_huge_pmd_locked(). But
> this last part could never have been enforced by the barriers in
> set_pXd() because they occur after updating the entry. So ultimately,
> the wost that can happen by eliding these barriers for user table
> entries is an extra fault.
> 
> I observe roughly the same number of page faults (107M) with and without
> this change when compiling the kernel on Apple M2.

That's microarch specific, highly dependent on timing, so you may never
see a difference.

> +static inline bool pmd_valid_not_user(pmd_t pmd)
> +{
> +	/*
> +	 * User-space table entries always have (PXN && !UXN). All other
> +	 * combinations indicate it's a table entry for kernel space.
> +	 * Valid-not-user leaf entries follow the same rules as
> +	 * pte_valid_not_user().
> +	 */
> +	if (pmd_table(pmd))
> +		return !((pmd_val(pmd) & (PMD_TABLE_PXN | PMD_TABLE_UXN)) == PMD_TABLE_PXN);
> +	return pte_valid_not_user(pmd_pte(pmd));
> +}

With the 128-bit format I think we lost the PXN/UXNTable bits, though we
have software bits if we need to. I just wonder whether it's worth the
hassle of skipping some barriers for user non-leaf entries. Did you see
any improvement in practice?

-- 
Catalin


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range()
  2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
  2025-02-20  7:02   ` Anshuman Khandual
@ 2025-02-24 12:03   ` Catalin Marinas
  2025-02-24 12:04   ` Catalin Marinas
  2 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2025-02-24 12:03 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:00PM +0000, Ryan Roberts wrote:
> A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
> pud level. But it is possible to subsequently call vunmap_range() on a
> sub-range of the mapped memory, which partially overlaps a pmd or pud.
> In this case, vmalloc unmaps the entire pmd or pud so that the
> no-overlapping portion is also unmapped. Clearly that would have a bad
> outcome, but it's not something that any callers do today as far as I
> can tell. So I guess it's just expected that callers will not do this.
> 
> However, it would be useful to know if this happened in future; let's
> add a warning to cover the eventuality.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap
  2025-02-17 14:08 ` [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap Ryan Roberts
@ 2025-02-24 12:03   ` Catalin Marinas
  2025-02-25 16:57     ` Ryan Roberts
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-02-24 12:03 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:02PM +0000, Ryan Roberts wrote:
> Implement the required arch functions to enable use of contpte in the
> vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
> operations due to only having to issue a DSB and ISB per contpte block
> instead of per pte.

For non-cont PTEs, do you happen to know how often vmap_pte_range() is
called for multiple entries? It might be worth changing that to use
set_ptes() directly and we get some benefit as well.
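
Something like the below perhaps (untested sketch, ignoring the
arch_vmap_pte_range_map_size() cases and assuming the pfns in the range
stay physically contiguous, as they do today):

	unsigned int nr = (end - addr) >> PAGE_SHIFT;

	/* lets the arch batch its barriers over the whole range */
	set_ptes(&init_mm, addr, pte, pfn_pte(pfn, prot), nr);
	pfn += nr;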

> It also reduces TLB pressure, since only a single
> TLB entry is needed for the whole contpte block.
> 
> Since vmap uses set_huge_pte_at() to set the contpte, that API is now
> used for kernel mappings for the first time. Although in the vmap case
> we never expect it to be called to modify a valid mapping so
> clear_flush() should never be called, it's still wise to make it robust
> for the kernel case, so amend the tlb flush function if the mm is for
> kernel space.
> 
> Tested with vmalloc performance selftests:
> 
>   # kself/mm/test_vmalloc.sh \
> 	run_test_mask=1
> 	test_repeat_count=5
> 	nr_pages=256
> 	test_loop_count=100000
> 	use_huge=1
> 
> Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
> reduction in time taken.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  arch/arm64/include/asm/vmalloc.h | 46 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/hugetlbpage.c      |  5 +++-
>  2 files changed, 50 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
> index 38fafffe699f..40ebc664190b 100644
> --- a/arch/arm64/include/asm/vmalloc.h
> +++ b/arch/arm64/include/asm/vmalloc.h
> @@ -23,6 +23,52 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
>  	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
>  }
>  
> +#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
> +static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
> +						unsigned long end, u64 pfn,
> +						unsigned int max_page_shift)
> +{
> +	/*
> +	 * If the block is at least CONT_PTE_SIZE in size, and is naturally
> +	 * aligned in both virtual and physical space, then we can pte-map the
> +	 * block using the PTE_CONT bit for more efficient use of the TLB.
> +	 */
> +

Nit: unnecessary empty line.

> +	if (max_page_shift < CONT_PTE_SHIFT)
> +		return PAGE_SIZE;
> +
> +	if (end - addr < CONT_PTE_SIZE)
> +		return PAGE_SIZE;
> +
> +	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
> +		return PAGE_SIZE;
> +
> +	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
> +		return PAGE_SIZE;
> +
> +	return CONT_PTE_SIZE;
> +}
> +
> +#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
> +static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
> +							   pte_t *ptep)
> +{
> +	/*
> +	 * The caller handles alignment so it's sufficient just to check
> +	 * PTE_CONT.
> +	 */
> +	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
> +}
> +
> +#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
> +static inline int arch_vmap_pte_supported_shift(unsigned long size)
> +{
> +	if (size >= CONT_PTE_SIZE)
> +		return CONT_PTE_SHIFT;
> +
> +	return PAGE_SHIFT;
> +}
> +
>  #endif
>  
>  #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 8ac86cd180b3..a29f347fea54 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -217,7 +217,10 @@ static void clear_flush(struct mm_struct *mm,
>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
>  		ptep_get_and_clear_anysz(mm, ptep, pgsize);
>  
> -	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
> +	if (mm == &init_mm)
> +		flush_tlb_kernel_range(saddr, addr);
> +	else
> +		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
>  }
>  
>  void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range()
  2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
  2025-02-20  7:02   ` Anshuman Khandual
  2025-02-24 12:03   ` Catalin Marinas
@ 2025-02-24 12:04   ` Catalin Marinas
  2 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2025-02-24 12:04 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:00PM +0000, Ryan Roberts wrote:
> A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
> pud level. But it is possible to subsequently call vunmap_range() on a
> sub-range of the mapped memory, which partially overlaps a pmd or pud.
> In this case, vmalloc unmaps the entire pmd or pud so that the
> no-overlapping portion is also unmapped. Clearly that would have a bad
> outcome, but it's not something that any callers do today as far as I
> can tell. So I guess it's just expected that callers will not do this.
> 
> However, it would be useful to know if this happened in future; let's
> add a warning to cover the eventuality.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes
  2025-02-17 14:08 ` [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes Ryan Roberts
  2025-02-20 12:05   ` Uladzislau Rezki
@ 2025-02-24 12:04   ` Catalin Marinas
  1 sibling, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2025-02-24 12:04 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:01PM +0000, Ryan Roberts wrote:
> Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
> level in vmap") added its support by reusing the set_huge_pte_at() API,
> which is otherwise only used for user mappings. But when unmapping those
> huge ptes, it continued to call ptep_get_and_clear(), which is a
> layering violation. To date, the only arch to implement this support is
> powerpc and it all happens to work ok for it.
> 
> But arm64's implementation of ptep_get_and_clear() can not be safely
> used to clear a previous set_huge_pte_at(). So let's introduce a new
> arch opt-in function, arch_vmap_pte_range_unmap_size(), which can
> provide the size of a (present) pte. Then we can call
> huge_ptep_get_and_clear() to tear it down properly.
> 
> Note that if vunmap_range() is called with a range that starts in the
> middle of a huge pte-mapped page, we must unmap the entire huge page so
> the behaviour is consistent with pmd and pud block mappings. In this
> case emit a warning just like we do for pmd/pud mappings.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  2025-02-22 11:56   ` Catalin Marinas
@ 2025-02-24 12:18     ` Ryan Roberts
  0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-24 12:18 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On 22/02/2025 11:56, Catalin Marinas wrote:
> On Mon, Feb 17, 2025 at 02:07:58PM +0000, Ryan Roberts wrote:
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index e255a36380dc..e4b1946b261f 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
>>  	WRITE_ONCE(*ptep, pte);
>>  }
>>  
>> -static inline void __set_pte(pte_t *ptep, pte_t pte)
>> +static inline void __set_pte_complete(pte_t pte)
>>  {
>> -	__set_pte_nosync(ptep, pte);
>> -
>>  	/*
>>  	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
>>  	 * or update_mmu_cache() have the necessary barriers.
> 
> Unrelated to this patch but I just realised that this comment is stale,
> we no longer do anything in update_mmu_cache() since commit 120798d2e7d1
> ("arm64: mm: remove dsb from update_mmu_cache"). If you respin, please
> remove the update_mmu_cache() part as well.

Will do!

> 
> Thanks.
> 



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
  2025-02-20 16:54   ` Kevin Brodsky
@ 2025-02-24 12:26     ` Ryan Roberts
  0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-24 12:26 UTC (permalink / raw)
  To: Kevin Brodsky, Catalin Marinas, Will Deacon, Pasha Tatashin,
	Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
	David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti
  Cc: linux-arm-kernel, linux-mm, linux-kernel

On 20/02/2025 16:54, Kevin Brodsky wrote:
> On 17/02/2025 15:07, Ryan Roberts wrote:
>> __set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are
> 
> Nit: it would be more accurate to say __set_pte() instead of
> __set_pte_complete(), as it is the former that actually writes the PTE
> (and then issues barriers).

Yeah, fair enough. Will fix in the next version.

> 
>> used to write entries into pgtables. And they issue barriers (currently
>> dsb and isb) to ensure that the written values are observed by the table
>> walker prior to any program-order-future memory access to the mapped
>> location.
>>
>> Over the years some of these functions have received optimizations: In
>> particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
>> table modifications") made it so that the barriers were only emitted for
>> valid-kernel mappings for set_pte() (now __set_pte_complete()). And
>> commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
>> in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
>> barriers for valid mappings. set_p4d()/set_pgd() continue to emit the
>> barriers unconditionally.
>>
>> This is all very confusing to the casual observer; surely the rules
>> should be invariant to the level? Let's change this so that every level
>> consistently emits the barriers only when setting valid, non-user
>> entries (both table and leaf).
>>
>> It seems obvious that if it is ok to elide barriers all but valid kernel
>> mappings at pte level, it must also be ok to do this for leaf entries at
>> other levels: If setting an entry to invalid, a tlb maintenance
>> operation must surely follow to synchronise the TLB and this contains
>> the required barriers. If setting a valid user mapping, the previous
>> mapping must have been invalid and there must have been a TLB
>> maintenance operation (complete with barriers) to honour
>> break-before-make. So the worst that can happen is we take an extra
>> fault (which will imply the DSB + ISB) and conclude that there is
>> nothing to do. These are the arguments for doing this optimization at
>> pte level and they also apply to leaf mappings at other levels.
>>
>> For table entries, the same arguments hold: If unsetting a table entry,
>> a TLB is required and this will emit the required barriers. If setting a
> 
> s/TLB/TLB maintenance/
> 
>> table entry, the previous value must have been invalid and the table
>> walker must already be able to observe that. Additionally the contents
>> of the pgtable being pointed to in the newly set entry must be visible
>> before the entry is written and this is enforced via smp_wmb() (dmb) in
>> the pgtable allocation functions and in __split_huge_pmd_locked(). But
>> this last part could never have been enforced by the barriers in
>> set_pXd() because they occur after updating the entry. So ultimately,
>> the wost that can happen by eliding these barriers for user table
> 
> s/wost/worst/
> 
> - Kevin
> 
>> entries is an extra fault.
>>
>> [...]
> 



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently
  2025-02-17 14:08 ` [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently Ryan Roberts
@ 2025-02-25 15:37   ` Catalin Marinas
  2025-02-25 16:58     ` Ryan Roberts
  0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-02-25 15:37 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Mon, Feb 17, 2025 at 02:08:03PM +0000, Ryan Roberts wrote:
> When page_shift is greater than PAGE_SIZE, __vmap_pages_range_noflush()

s/PAGE_SIZE/PAGE_SHIFT/

Otherwise it looks fine.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings
  2025-02-22 13:17   ` Catalin Marinas
@ 2025-02-25 16:41     ` Ryan Roberts
  0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-25 16:41 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On 22/02/2025 13:17, Catalin Marinas wrote:
> On Mon, Feb 17, 2025 at 02:07:59PM +0000, Ryan Roberts wrote:
>> __set_pte_complete(), set_pmd(), set_pud(), set_p4d() and set_pgd() are
>> used to write entries into pgtables. And they issue barriers (currently
>> dsb and isb) to ensure that the written values are observed by the table
>> walker prior to any program-order-future memory access to the mapped
>> location.
>>
>> Over the years some of these functions have received optimizations: In
>> particular, commit 7f0b1bf04511 ("arm64: Fix barriers used for page
>> table modifications") made it so that the barriers were only emitted for
>> valid-kernel mappings for set_pte() (now __set_pte_complete()). And
>> commit 0795edaf3f1f ("arm64: pgtable: Implement p[mu]d_valid() and check
>> in set_p[mu]d()") made it so that set_pmd()/set_pud() only emitted the
>> barriers for valid mappings.
> 
> The assumption probably was that set_pmd/pud() are called a lot less
> often than set_pte() as they cover larger address ranges (whether table
> or leaf).

Yes, you're probably right. From an optimization perspective I doubt we are
seeing much improvement from the reduction in barriers at the pmd/pud/p4d/pgd
levels; I made the change more for the benefit of uniformity.

> 
>> set_p4d()/set_pgd() continue to emit the barriers unconditionally.
> 
> We probably missed them, should have been the same as set_pmd().

So perhaps the middle ground is for pmd/pud/p4d/pgd to all have the same
implementation (emit barriers for valid mappings only). And then let pte level
continue with the additional optimization of only emitting barriers for valid
*kernel* mappings.

The only problem there is that it gets confusing when you bring set_pmd_at() and
set_pud_at() into the picture, which today end up calling __set_pte(). So
pmds/puds set by that route *already* have the same optimizations applied as
ptes do today.

Personally, I'd prefer to have a single pattern for all levels.
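
Concretely, the single pattern I have in mind looks something like the sketch
below (shape only, not the exact diff; the details of the real set_pmd() are
elided, and pmd_valid_not_user() is the helper this patch adds):

static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
        WRITE_ONCE(*pmdp, pmd);

        /*
         * Only emit the barriers for valid, non-user entries. Invalid and
         * user entries rely on subsequent TLB maintenance (or, at worst, a
         * spurious fault) for the required synchronisation.
         */
        if (pmd_valid_not_user(pmd)) {
                dsb(ishst);
                isb();
        }
}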

> 
>> This is all very confusing to the casual observer; surely the rules
>> should be invariant to the level? Let's change this so that every level
>> consistently emits the barriers only when setting valid, non-user
>> entries (both table and leaf).
> 
> Also see commit d0b7a302d58a ("Revert "arm64: Remove unnecessary ISBs
> from set_{pte,pmd,pud}"") for why we added back the ISBs to the pmd/pud
> accessors, and the last paragraph on why we are ok with the spurious
> faults for PTEs.

Yes, I was already aware of that behaviour (thanks to MarkR for patiently
explaining it to me!). But my changes should still be compatible with that
requirement; we always issue an ISB after updating the page table with a valid,
kernel mapping.

As an aside, my understanding is that if the system supports FEAT_ETS2, then the
CPU promises not to store that "faulting" state in the pipeline, and the ISB
should not be needed; we could patch that out in future with an alternative.
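
i.e. something along these lines (a sketch only; both the helper name and the
ARM64_HAS_ETS2 cpucap are made up for illustration, that capability doesn't
exist today):

/*
 * Hypothetical sketch: nop out the ISB at runtime when the (made-up)
 * ARM64_HAS_ETS2 capability is detected.
 */
static inline void emit_kernel_pte_barriers(void)
{
        dsb(ishst);
        asm volatile(ALTERNATIVE("isb", "nop", ARM64_HAS_ETS2));
}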

> 
> For user mappings, the translation fault is routed through the usual
> path that can handle mapping new entries, so I think we are fine. But
> it's worth double-checking Will's comment (unless he only referred to
> kernel table entries).
> 
>> It seems obvious that if it is ok to elide barriers all but valid kernel
>> mappings at pte level, it must also be ok to do this for leaf entries at
>> other levels: If setting an entry to invalid, a tlb maintenance
>> operation must surely follow to synchronise the TLB and this contains
>> the required barriers.
> 
> Setting to invalid is fine indeed, handled by the TLB flushing code,
> hence the pmd_valid() checks.
> 
>> If setting a valid user mapping, the previous
>> mapping must have been invalid and there must have been a TLB
>> maintenance operation (complete with barriers) to honour
>> break-before-make.
> 
> That's not entirely true for change_protection() for example or the
> fork() path when we make the entries read-only from writeable without

For fork() we use ptep_set_wrprotect() (or these days, wrprotect_ptes(), right?)

> BBM. We could improve these cases as well, I haven't looked in detail.
> ptep_modify_prot_commit() via change_pte_range() can defer the barriers
> to tlb_end_vma(). Something similar on the copy_present_ptes() path.

But yes, I agree my comments are not fully correct for all corner cases. The
point, though, is that for all of these corner cases the higher level causes the
appropriate TLBI operations to be issued, which include the required barriers.

> 
>> So the worst that can happen is we take an extra
>> fault (which will imply the DSB + ISB) and conclude that there is
>> nothing to do. These are the arguments for doing this optimization at
>> pte level and they also apply to leaf mappings at other levels.
> 
> It's worth clarifying Will's comment in the commit I mentioned above.

By my interpretation of Will's comment, it agrees with what I'm trying to say
here. Perhaps I've missed something?

> 
>> For table entries, the same arguments hold: If unsetting a table entry,
>> a TLB is required and this will emit the required barriers. If setting a
>> table entry, the previous value must have been invalid and the table
>> walker must already be able to observe that. Additionally the contents
>> of the pgtable being pointed to in the newly set entry must be visible
>> before the entry is written and this is enforced via smp_wmb() (dmb) in
>> the pgtable allocation functions and in __split_huge_pmd_locked(). But
>> this last part could never have been enforced by the barriers in
>> set_pXd() because they occur after updating the entry. So ultimately,
>> the wost that can happen by eliding these barriers for user table
>> entries is an extra fault.
>>
>> I observe roughly the same number of page faults (107M) with and without
>> this change when compiling the kernel on Apple M2.
> 
> That's microarch specific, highly dependent on timing, so you may never
> see a difference.

Fair. Are you advocating for keeping the higher levels' barriers unconditional
on user vs kernel? If so, then my preference would be to at least simplify to 2
patterns as I describe above.

> 
>> +static inline bool pmd_valid_not_user(pmd_t pmd)
>> +{
>> +	/*
>> +	 * User-space table entries always have (PXN && !UXN). All other
>> +	 * combinations indicate it's a table entry for kernel space.
>> +	 * Valid-not-user leaf entries follow the same rules as
>> +	 * pte_valid_not_user().
>> +	 */
>> +	if (pmd_table(pmd))
>> +		return !((pmd_val(pmd) & (PMD_TABLE_PXN | PMD_TABLE_UXN)) == PMD_TABLE_PXN);
>> +	return pte_valid_not_user(pmd_pte(pmd));
>> +}
> 
> With the 128-bit format I think we lost the PXN/UXNTable bits, though we
> have software bits if we need to. 

Indeed. I've already fed back to Anshuman that in my opinion, those bits need to
be maintained as SW bits. Otherwise it all gets a bit fragile if a SW agent
tries to check the value of those bits in D128 mode vs D64 mode.

> I just wonder whether it's worth the
> hassle of skipping some barriers for user non-leaf entries. Did you see
> any improvement in practice?

As I said, I doubt there is much of a performance improvement in practice; my
main motivation was unifying around a single pattern to simplify things.

Thanks,
Ryan




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap
  2025-02-24 12:03   ` Catalin Marinas
@ 2025-02-25 16:57     ` Ryan Roberts
  0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-25 16:57 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On 24/02/2025 12:03, Catalin Marinas wrote:
> On Mon, Feb 17, 2025 at 02:08:02PM +0000, Ryan Roberts wrote:
>> Implement the required arch functions to enable use of contpte in the
>> vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
>> operations due to only having to issue a DSB and ISB per contpte block
>> instead of per pte.
> 
> For non-cont PTEs, do you happen to know how often vmap_pte_range() is
> called for multiple entries? 

It's mostly when vmalloc_huge() is called and for remapping io regions I
believe. I don't have numbers though.

> It might be worth changing that to use
> set_ptes() directly and we get some benefit as well.

I thought about that, but it's a bit of a pain, given the current
arch_vmap_pte_range_map_size() opt-in for set_huge_pte_at().

Ideally I think we would just set_ptes() the whole lot (or at least in chunks of
max_page_shift so we honour the "must belong to same folio" requirement). Then
let contpte decide when to use contiguous mappings. But we currently block
contpte operations for kernel mappings because of the potential need to
invalidate, which can cause faults. So that would need work. And we would need
to rework powerpc somehow.

The alternative would be to keep the arch_vmap_pte_range_map_size() opt-in and
somehow expose the alignment/size requirements to vmap_pte_range() so it can do
set_ptes() in the gaps. But that all felt quite messy.

In the end, I decided we would be getting the benefits via arch_make_huge_pte()
for >=64K folios, and chose not to worry about the smaller (<64K) folio cases.
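
For reference, the shape of how the generic walker ends up consuming the
opt-in is roughly the below (paraphrased sketch, not the real mm/vmalloc.c
code); the else branch is where set_ptes() would have to slot in:

/* Paraphrased sketch of the pte-level vmap walker; not the real code. */
static int vmap_pte_range_sketch(pmd_t *pmd, unsigned long addr,
                                 unsigned long end, phys_addr_t phys_addr,
                                 pgprot_t prot, unsigned int max_page_shift)
{
        unsigned long size;
        u64 pfn = phys_addr >> PAGE_SHIFT;
        pte_t *pte = pte_offset_kernel(pmd, addr);

        do {
                size = arch_vmap_pte_range_map_size(addr, end, pfn,
                                                    max_page_shift);
                if (size != PAGE_SIZE) {
                        /* Whole block in one call; arm64 can set PTE_CONT. */
                        pte_t entry = arch_make_huge_pte(pfn_pte(pfn, prot),
                                                         ilog2(size), 0);

                        set_huge_pte_at(&init_mm, addr, pte, entry, size);
                } else {
                        /* Fallback: one pte at a time. */
                        set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
                }
                pfn += PFN_DOWN(size);
        } while (pte += PFN_DOWN(size), addr += size, addr != end);

        return 0;
}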

> 
>> But it also means that the TLB pressure reduces due
>> to only needing a single TLB entry for the whole contpte block.
>>
>> Since vmap uses set_huge_pte_at() to set the contpte, that API is now
>> used for kernel mappings for the first time. Although in the vmap case
>> we never expect it to be called to modify a valid mapping so
>> clear_flush() should never be called, it's still wise to make it robust
>> for the kernel case, so amend the tlb flush function if the mm is for
>> kernel space.
>>
>> Tested with vmalloc performance selftests:
>>
>>   # kself/mm/test_vmalloc.sh \
>> 	run_test_mask=1
>> 	test_repeat_count=5
>> 	nr_pages=256
>> 	test_loop_count=100000
>> 	use_huge=1
>>
>> Duration reduced from 1274243 usec to 1083553 usec on Apple M2 for 15%
>> reduction in time taken.
>>
>> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>  arch/arm64/include/asm/vmalloc.h | 46 ++++++++++++++++++++++++++++++++
>>  arch/arm64/mm/hugetlbpage.c      |  5 +++-
>>  2 files changed, 50 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 38fafffe699f..40ebc664190b 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -23,6 +23,52 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
>>  	return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
>>  }
>>  
>> +#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
>> +static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
>> +						unsigned long end, u64 pfn,
>> +						unsigned int max_page_shift)
>> +{
>> +	/*
>> +	 * If the block is at least CONT_PTE_SIZE in size, and is naturally
>> +	 * aligned in both virtual and physical space, then we can pte-map the
>> +	 * block using the PTE_CONT bit for more efficient use of the TLB.
>> +	 */
>> +
> 
> Nit: unnecessary empty line.

Will fix.

> 
>> +	if (max_page_shift < CONT_PTE_SHIFT)
>> +		return PAGE_SIZE;
>> +
>> +	if (end - addr < CONT_PTE_SIZE)
>> +		return PAGE_SIZE;
>> +
>> +	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
>> +		return PAGE_SIZE;
>> +
>> +	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
>> +		return PAGE_SIZE;
>> +
>> +	return CONT_PTE_SIZE;
>> +}
>> +
>> +#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
>> +static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
>> +							   pte_t *ptep)
>> +{
>> +	/*
>> +	 * The caller handles alignment so it's sufficient just to check
>> +	 * PTE_CONT.
>> +	 */
>> +	return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
>> +}
>> +
>> +#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
>> +static inline int arch_vmap_pte_supported_shift(unsigned long size)
>> +{
>> +	if (size >= CONT_PTE_SIZE)
>> +		return CONT_PTE_SHIFT;
>> +
>> +	return PAGE_SHIFT;
>> +}
>> +
>>  #endif
>>  
>>  #define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index 8ac86cd180b3..a29f347fea54 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -217,7 +217,10 @@ static void clear_flush(struct mm_struct *mm,
>>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
>>  		ptep_get_and_clear_anysz(mm, ptep, pgsize);
>>  
>> -	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
>> +	if (mm == &init_mm)
>> +		flush_tlb_kernel_range(saddr, addr);
>> +	else
>> +		__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
>>  }
>>  
>>  void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently
  2025-02-25 15:37   ` Catalin Marinas
@ 2025-02-25 16:58     ` Ryan Roberts
  0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-02-25 16:58 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On 25/02/2025 15:37, Catalin Marinas wrote:
> On Mon, Feb 17, 2025 at 02:08:03PM +0000, Ryan Roberts wrote:
>> When page_shift is greater than PAGE_SIZE, __vmap_pages_range_noflush()
> 
> s/PAGE_SIZE/PAGE_SHIFT/
> 
> Otherwise it looks fine.
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings()
  2025-02-17 14:08 ` [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings() Ryan Roberts
@ 2025-02-25 17:10   ` Ryan Roberts
  2025-02-25 17:52     ` Catalin Marinas
  0 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-02-25 17:10 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: linux-arm-kernel, linux-mm, linux-kernel

On 17/02/2025 14:08, Ryan Roberts wrote:
> arch_sync_kernel_mappings() is an optional hook for arches to allow them
> to synchronize certain levels of the kernel pgtables after modification.
> But arm64 could benefit from a hook similar to this, paired with a call
> prior to starting the batch of modifications.
> 
> So let's introduce arch_update_kernel_mappings_begin() and
> arch_update_kernel_mappings_end(). Both have a default implementation
> which can be overridden by the arch code. The default for the former is
> a nop, and the default for the latter is to call
> arch_sync_kernel_mappings(), so the latter replaces previous
> arch_sync_kernel_mappings() callsites. So by default, the resulting
> behaviour is unchanged.

Thanks to Kevin Brodsky; after some discussion we realised that while this works
on arm64 today, it isn't really robust in general.

arch_update_kernel_mappings_{begin|end}() are called at the outer scope of the
page table walker. It's possible that a pgtable page could be allocated within
that scope (if you get to the pmd and it's not yet present, for example). Some
arches will need to kmap that from highmem (I think?) and will need any PTE
manipulations to be "immediate" so we can access the pgtable. But since we are
in the arch_update_kernel_mappings_{begin|end}() scope, we are in lazy mode,
which could delay those PTE manipulations. arm64 doesn't use highmem, so it's
not an issue there, but it doesn't exactly feel robust. There are also some
other, similar interactions with Kevin's in-progress kpkeys series.

As an alternative, I'm proposing to remove this change (keeping
arch_sync_kernel_mappings() as it was), and instead start wrapping the vmap pte
table walker functions with
arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode(). These have a smaller
scope so there is no risk of nesting (pgtable allocations happen outside the
scope). arm64 will then use these lazy mmu hooks for its purpose of deferring
barriers. There might be a small amount of performance loss due to the reduced
scope, but I'm guessing most of the benefit comes from batching the operations
on a single pte table.

One wrinkle is that arm64 needs to know if we are operating on kernel or user
mappings in lazy mode. The lazy_mmu hooks apply to both kernel and user
mappings, unlike my previous method, which was kernel-only. So I'm proposing to
pass mm to arch_enter_lazy_mmu_mode().
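
So the rough shape would be something like this (sketch only; passing the mm
to the lazy_mmu hooks is the proposed extension and doesn't exist today):

/*
 * Sketch only: wrap the pte-level vmap walker in lazy-mmu mode so arm64
 * can defer its dsb/isb until the end of the batch. The mm argument to
 * arch_enter_lazy_mmu_mode() is the proposed (not yet existing) change.
 */
static int vmap_pte_range_lazy_sketch(pmd_t *pmd, unsigned long addr,
                                      unsigned long end, phys_addr_t phys_addr,
                                      pgprot_t prot)
{
        pte_t *pte = pte_offset_kernel(pmd, addr);
        u64 pfn = phys_addr >> PAGE_SHIFT;

        arch_enter_lazy_mmu_mode();             /* proposed: pass &init_mm */

        do {
                set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
                pfn++;
        } while (pte++, addr += PAGE_SIZE, addr != end);

        arch_leave_lazy_mmu_mode();     /* arm64: emit dsb/isb once here */

        return 0;
}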

Thanks,
Ryan

> 
> To avoid include hell, the pgtbl_mod_mask type and its associated
> macros are moved to their own header.
> 
> In a future patch, arm64 will opt-in to overriding both functions.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/pgtable.h         | 24 +----------------
>  include/linux/pgtable_modmask.h | 32 ++++++++++++++++++++++
>  include/linux/vmalloc.h         | 47 +++++++++++++++++++++++++++++++++
>  mm/memory.c                     |  5 ++--
>  mm/vmalloc.c                    | 15 ++++++-----
>  5 files changed, 92 insertions(+), 31 deletions(-)
>  create mode 100644 include/linux/pgtable_modmask.h
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 94d267d02372..7f70786a73b3 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -4,6 +4,7 @@
>  
>  #include <linux/pfn.h>
>  #include <asm/pgtable.h>
> +#include <linux/pgtable_modmask.h>
>  
>  #define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
>  #define PUD_ORDER	(PUD_SHIFT - PAGE_SHIFT)
> @@ -1786,29 +1787,6 @@ static inline bool arch_has_pfn_modify_check(void)
>  # define PAGE_KERNEL_EXEC PAGE_KERNEL
>  #endif
>  
> -/*
> - * Page Table Modification bits for pgtbl_mod_mask.
> - *
> - * These are used by the p?d_alloc_track*() set of functions an in the generic
> - * vmalloc/ioremap code to track at which page-table levels entries have been
> - * modified. Based on that the code can better decide when vmalloc and ioremap
> - * mapping changes need to be synchronized to other page-tables in the system.
> - */
> -#define		__PGTBL_PGD_MODIFIED	0
> -#define		__PGTBL_P4D_MODIFIED	1
> -#define		__PGTBL_PUD_MODIFIED	2
> -#define		__PGTBL_PMD_MODIFIED	3
> -#define		__PGTBL_PTE_MODIFIED	4
> -
> -#define		PGTBL_PGD_MODIFIED	BIT(__PGTBL_PGD_MODIFIED)
> -#define		PGTBL_P4D_MODIFIED	BIT(__PGTBL_P4D_MODIFIED)
> -#define		PGTBL_PUD_MODIFIED	BIT(__PGTBL_PUD_MODIFIED)
> -#define		PGTBL_PMD_MODIFIED	BIT(__PGTBL_PMD_MODIFIED)
> -#define		PGTBL_PTE_MODIFIED	BIT(__PGTBL_PTE_MODIFIED)
> -
> -/* Page-Table Modification Mask */
> -typedef unsigned int pgtbl_mod_mask;
> -
>  #endif /* !__ASSEMBLY__ */
>  
>  #if !defined(MAX_POSSIBLE_PHYSMEM_BITS) && !defined(CONFIG_64BIT)
> diff --git a/include/linux/pgtable_modmask.h b/include/linux/pgtable_modmask.h
> new file mode 100644
> index 000000000000..5a21b1bb8df3
> --- /dev/null
> +++ b/include/linux/pgtable_modmask.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_PGTABLE_MODMASK_H
> +#define _LINUX_PGTABLE_MODMASK_H
> +
> +#ifndef __ASSEMBLY__
> +
> +/*
> + * Page Table Modification bits for pgtbl_mod_mask.
> + *
> + * These are used by the p?d_alloc_track*() set of functions an in the generic
> + * vmalloc/ioremap code to track at which page-table levels entries have been
> + * modified. Based on that the code can better decide when vmalloc and ioremap
> + * mapping changes need to be synchronized to other page-tables in the system.
> + */
> +#define		__PGTBL_PGD_MODIFIED	0
> +#define		__PGTBL_P4D_MODIFIED	1
> +#define		__PGTBL_PUD_MODIFIED	2
> +#define		__PGTBL_PMD_MODIFIED	3
> +#define		__PGTBL_PTE_MODIFIED	4
> +
> +#define		PGTBL_PGD_MODIFIED	BIT(__PGTBL_PGD_MODIFIED)
> +#define		PGTBL_P4D_MODIFIED	BIT(__PGTBL_P4D_MODIFIED)
> +#define		PGTBL_PUD_MODIFIED	BIT(__PGTBL_PUD_MODIFIED)
> +#define		PGTBL_PMD_MODIFIED	BIT(__PGTBL_PMD_MODIFIED)
> +#define		PGTBL_PTE_MODIFIED	BIT(__PGTBL_PTE_MODIFIED)
> +
> +/* Page-Table Modification Mask */
> +typedef unsigned int pgtbl_mod_mask;
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +#endif /* _LINUX_PGTABLE_MODMASK_H */
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 16dd4cba64f2..cb5d8f1965a1 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -11,6 +11,7 @@
>  #include <asm/page.h>		/* pgprot_t */
>  #include <linux/rbtree.h>
>  #include <linux/overflow.h>
> +#include <linux/pgtable_modmask.h>
>  
>  #include <asm/vmalloc.h>
>  
> @@ -213,6 +214,26 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
>  int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
>  		     struct page **pages, unsigned int page_shift);
>  
> +#ifndef arch_update_kernel_mappings_begin
> +/**
> + * arch_update_kernel_mappings_begin - A batch of kernel pgtable mappings are
> + * about to be updated.
> + * @start: Virtual address of start of range to be updated.
> + * @end: Virtual address of end of range to be updated.
> + *
> + * An optional hook to allow architecture code to prepare for a batch of kernel
> + * pgtable mapping updates. An architecture may use this to enter a lazy mode
> + * where some operations can be deferred until the end of the batch.
> + *
> + * Context: Called in task context and may be preemptible.
> + */
> +static inline void arch_update_kernel_mappings_begin(unsigned long start,
> +						     unsigned long end)
> +{
> +}
> +#endif
> +
> +#ifndef arch_update_kernel_mappings_end
>  /*
>   * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
>   * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> @@ -229,6 +250,32 @@ int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
>   */
>  void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
>  
> +/**
> + * arch_update_kernel_mappings_end - A batch of kernel pgtable mappings have
> + * been updated.
> + * @start: Virtual address of start of range that was updated.
> + * @end: Virtual address of end of range that was updated.
> + *
> + * An optional hook to inform architecture code that a batch update is complete.
> + * This balances a previous call to arch_update_kernel_mappings_begin().
> + *
> + * An architecture may override this for any purpose, such as exiting a lazy
> + * mode previously entered with arch_update_kernel_mappings_begin() or syncing
> + * kernel mappings to a secondary pgtable. The default implementation calls an
> + * arch-provided arch_sync_kernel_mappings() if any arch-defined pgtable level
> + * was updated.
> + *
> + * Context: Called in task context and may be preemptible.
> + */
> +static inline void arch_update_kernel_mappings_end(unsigned long start,
> +						   unsigned long end,
> +						   pgtbl_mod_mask mask)
> +{
> +	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> +		arch_sync_kernel_mappings(start, end);
> +}
> +#endif
> +
>  /*
>   *	Lowlevel-APIs (not for driver use!)
>   */
> diff --git a/mm/memory.c b/mm/memory.c
> index a15f7dd500ea..f80930bc19f6 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3035,6 +3035,8 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>  	if (WARN_ON(addr >= end))
>  		return -EINVAL;
>  
> +	arch_update_kernel_mappings_begin(start, end);
> +
>  	pgd = pgd_offset(mm, addr);
>  	do {
>  		next = pgd_addr_end(addr, end);
> @@ -3055,8 +3057,7 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>  			break;
>  	} while (pgd++, addr = next, addr != end);
>  
> -	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> -		arch_sync_kernel_mappings(start, start + size);
> +	arch_update_kernel_mappings_end(start, end, mask);
>  
>  	return err;
>  }
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 50fd44439875..c5c51d86ef78 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -312,10 +312,10 @@ int vmap_page_range(unsigned long addr, unsigned long end,
>  	pgtbl_mod_mask mask = 0;
>  	int err;
>  
> +	arch_update_kernel_mappings_begin(addr, end);
>  	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
>  				 ioremap_max_page_shift, &mask);
> -	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> -		arch_sync_kernel_mappings(addr, end);
> +	arch_update_kernel_mappings_end(addr, end, mask);
>  
>  	flush_cache_vmap(addr, end);
>  	if (!err)
> @@ -463,6 +463,9 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end)
>  	pgtbl_mod_mask mask = 0;
>  
>  	BUG_ON(addr >= end);
> +
> +	arch_update_kernel_mappings_begin(start, end);
> +
>  	pgd = pgd_offset_k(addr);
>  	do {
>  		next = pgd_addr_end(addr, end);
> @@ -473,8 +476,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end)
>  		vunmap_p4d_range(pgd, addr, next, &mask);
>  	} while (pgd++, addr = next, addr != end);
>  
> -	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> -		arch_sync_kernel_mappings(start, end);
> +	arch_update_kernel_mappings_end(start, end, mask);
>  }
>  
>  void vunmap_range_noflush(unsigned long start, unsigned long end)
> @@ -625,6 +627,8 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
>  
>  	WARN_ON(page_shift < PAGE_SHIFT);
>  
> +	arch_update_kernel_mappings_begin(start, end);
> +
>  	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
>  			page_shift == PAGE_SHIFT) {
>  		err = vmap_small_pages_range_noflush(addr, end, prot, pages,
> @@ -642,8 +646,7 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
>  		}
>  	}
>  
> -	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> -		arch_sync_kernel_mappings(start, end);
> +	arch_update_kernel_mappings_end(start, end, mask);
>  
>  	return err;
>  }



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings()
  2025-02-25 17:10   ` Ryan Roberts
@ 2025-02-25 17:52     ` Catalin Marinas
  0 siblings, 0 replies; 32+ messages in thread
From: Catalin Marinas @ 2025-02-25 17:52 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel, linux-mm, linux-kernel

On Tue, Feb 25, 2025 at 05:10:10PM +0000, Ryan Roberts wrote:
> On 17/02/2025 14:08, Ryan Roberts wrote:
> > arch_sync_kernel_mappings() is an optional hook for arches to allow them
> > to synchronize certain levels of the kernel pgtables after modification.
> > But arm64 could benefit from a hook similar to this, paired with a call
> > prior to starting the batch of modifications.
> > 
> > So let's introduce arch_update_kernel_mappings_begin() and
> > arch_update_kernel_mappings_end(). Both have a default implementation
> > which can be overridden by the arch code. The default for the former is
> > a nop, and the default for the latter is to call
> > arch_sync_kernel_mappings(), so the latter replaces previous
> > arch_sync_kernel_mappings() callsites. So by default, the resulting
> > behaviour is unchanged.
> 
> Thanks to Kevin Brodsky; after some discussion we realised that while this works
> on arm64 today, it isn't really robust in general.
[...]
> As an alternative, I'm proposing to remove this change (keeping
> arch_sync_kernel_mappings() as it was), and instead start wrapping the vmap pte
> table walker functions with
> arch_enter_lazy_mmu_mode()/arch_exit_lazy_mmu_mode().

I came to the same conclusion while looking at the last three patches. I'm
also not a fan of relying on a TIF flag for batching.

> These have a smaller scope
> so there is no risk of the nesting (pgtable allocations happen outside the
> scope). arm64 will then use these lazy mmu hooks for it's purpose of deferring
> barriers. There might be a small amount of performance loss due to the reduced
> scope, but I'm guessing most of the performance is in batching the operations of
> a single pte table.
> 
> One wrinkle is that arm64 needs to know if we are operating on kernel or user
> mappings in lazy mode. The lazy_mmu hooks apply to both kernel and user
> mappings, unlike my previous method which were kernel only. So I'm proposing to
> pass mm to arch_enter_lazy_mmu_mode().

Note that we have efi_mm, which uses PAGE_KERNEL prot bits, while your
code only checks for init_mm after patch 13.
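
i.e. (illustrative only, the helper name below is made up) the deferral check
would need to be something like:

static inline bool mm_defers_kernel_barriers(struct mm_struct *mm)
{
        /* efi_mm maps with PAGE_KERNEL too, so init_mm alone isn't enough. */
        return mm == &init_mm || mm == &efi_mm;
}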

-- 
Catalin


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2025-02-25 17:52 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-02-17 14:07 [PATCH v2 00/14] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 01/14] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 02/14] arm64: hugetlb: Refine tlb maintenance scope Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 03/14] mm/page_table_check: Batch-check pmds/puds just like ptes Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 04/14] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear() Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 05/14] arm64: hugetlb: Use set_ptes_anysz() and ptep_get_and_clear_anysz() Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop Ryan Roberts
2025-02-22 11:56   ` Catalin Marinas
2025-02-24 12:18     ` Ryan Roberts
2025-02-17 14:07 ` [PATCH v2 07/14] arm64/mm: Avoid barriers for invalid or userspace mappings Ryan Roberts
2025-02-20 16:54   ` Kevin Brodsky
2025-02-24 12:26     ` Ryan Roberts
2025-02-22 13:17   ` Catalin Marinas
2025-02-25 16:41     ` Ryan Roberts
2025-02-17 14:08 ` [PATCH v2 08/14] mm/vmalloc: Warn on improper use of vunmap_range() Ryan Roberts
2025-02-20  7:02   ` Anshuman Khandual
2025-02-24 12:03   ` Catalin Marinas
2025-02-24 12:04   ` Catalin Marinas
2025-02-17 14:08 ` [PATCH v2 09/14] mm/vmalloc: Gracefully unmap huge ptes Ryan Roberts
2025-02-20 12:05   ` Uladzislau Rezki
2025-02-24 12:04   ` Catalin Marinas
2025-02-17 14:08 ` [PATCH v2 10/14] arm64/mm: Support huge pte-mapped pages in vmap Ryan Roberts
2025-02-24 12:03   ` Catalin Marinas
2025-02-25 16:57     ` Ryan Roberts
2025-02-17 14:08 ` [PATCH v2 11/14] mm/vmalloc: Batch arch_sync_kernel_mappings() more efficiently Ryan Roberts
2025-02-25 15:37   ` Catalin Marinas
2025-02-25 16:58     ` Ryan Roberts
2025-02-17 14:08 ` [PATCH v2 12/14] mm: Generalize arch_sync_kernel_mappings() Ryan Roberts
2025-02-25 17:10   ` Ryan Roberts
2025-02-25 17:52     ` Catalin Marinas
2025-02-17 14:08 ` [PATCH v2 13/14] mm: Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings Ryan Roberts
2025-02-17 14:08 ` [PATCH v2 14/14] arm64/mm: Batch barriers when updating " Ryan Roberts

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox