* [PATCH v4 01/11] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Not all huge_pte helper APIs explicitly provide the size of the
huge_pte, so the helpers have to depend on various methods to determine
it. Some of these methods are dubious.
Let's clean up the code to use preferred methods and retire the dubious
ones. The options in order of preference:
- If size is provided as parameter, use it together with
num_contig_ptes(). This is explicit and works for both present and
non-present ptes.
- If vma is provided as a parameter, retrieve size via
huge_page_size(hstate_vma(vma)) and use it together with
num_contig_ptes(). This is explicit and works for both present and
non-present ptes.
- If the pte is present and contiguous, use find_num_contig() to walk
the pgtable to find the level and infer the number of ptes from
level. Only works for *present* ptes.
- If the pte is present and not contiguous, you can infer from this
that only 1 pte needs to be operated on. This is ok if you don't care
about the absolute size, and just want to know the number of ptes.
- NEVER rely on resolving the PFN of a present pte to a folio and
getting the folio's size. This is fragile at best, because there is
nothing to stop the core-mm from allocating a folio twice as big as
the huge_pte then mapping it across 2 consecutive huge_ptes. Or just
partially mapping it.
Where we require that the pte is present, warn if it is not.
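For illustration only (not part of the patch), a minimal sketch of the
preferred patterns using the helpers touched by the diff below; variable
declarations (mm, vma, addr, ptep, sz, ncontig, pgsize) are elided:

  /* Size passed in explicitly: works for present and non-present ptes. */
  ncontig = num_contig_ptes(sz, &pgsize);

  /* Size derived from the vma: also works for non-present ptes. */
  ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);

  /* Present ptes only: walk the pgtable to infer the level. */
  ncontig = find_num_contig(mm, addr, ptep, &pgsize);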
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/hugetlbpage.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index cfe8cb8ba1cc..701394aa7734 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -129,7 +129,7 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
if (!pte_present(orig_pte) || !pte_cont(orig_pte))
return orig_pte;
- ncontig = num_contig_ptes(page_size(pte_page(orig_pte)), &pgsize);
+ ncontig = find_num_contig(mm, addr, ptep, &pgsize);
for (i = 0; i < ncontig; i++, ptep++) {
pte_t pte = __ptep_get(ptep);
@@ -438,16 +438,19 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
pgprot_t hugeprot;
pte_t orig_pte;
+ VM_WARN_ON(!pte_present(pte));
+
if (!pte_cont(pte))
return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+ ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
dpfn = pgsize >> PAGE_SHIFT;
if (!__cont_access_flags_changed(ptep, pte, ncontig))
return 0;
orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
+ VM_WARN_ON(!pte_present(orig_pte));
/* Make sure we don't lose the dirty or young state */
if (pte_dirty(orig_pte))
@@ -472,7 +475,10 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
size_t pgsize;
pte_t pte;
- if (!pte_cont(__ptep_get(ptep))) {
+ pte = __ptep_get(ptep);
+ VM_WARN_ON(!pte_present(pte));
+
+ if (!pte_cont(pte)) {
__ptep_set_wrprotect(mm, addr, ptep);
return;
}
@@ -496,11 +502,15 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
struct mm_struct *mm = vma->vm_mm;
size_t pgsize;
int ncontig;
+ pte_t pte;
+
+ pte = __ptep_get(ptep);
+ VM_WARN_ON(!pte_present(pte));
- if (!pte_cont(__ptep_get(ptep)))
+ if (!pte_cont(pte))
return ptep_clear_flush(vma, addr, ptep);
- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+ ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
}
--
2.43.0
* [PATCH v4 02/11] arm64: hugetlb: Refine tlb maintenance scope
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
When operating on contiguous blocks of ptes (or pmds) for some hugetlb
sizes, we must honour break-before-make requirements and clear down the
block to invalid state in the pgtable then invalidate the relevant tlb
entries before making the pgtable entries valid again.
However, the tlb maintenance is currently always done assuming the worst
case stride (PAGE_SIZE), last_level (false) and tlb_level
(TLBI_TTL_UNKNOWN). We can provide much better hints; in reality,
we know the stride from the huge_pte pgsize, we are always operating
only on the last level, and we always know the tlb_level, again based on
pgsize. So let's start providing these hints.
Additionally, avoid tlb maintenance in set_huge_pte_at().
Break-before-make is only required if we are transitioning the
contiguous pte block from valid -> valid. So let's elide the
clear-and-flush ("break") if the pte range was previously invalid.
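As a rough sketch of the effect (matching the get_clear_contig_flush()
hunk below), the contiguous-range flush can now pass the full set of
hints rather than the worst-case defaults:

  unsigned long end = addr + (pgsize * ncontig);

  /* stride == pgsize, always last level; tlb_level derived from pgsize. */
  __flush_hugetlb_tlb_range(&vma, addr, end, pgsize, true);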
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/hugetlb.h | 29 +++++++++++++++++++----------
arch/arm64/mm/hugetlbpage.c | 9 ++++++---
2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 07fbf5bf85a7..2a8155c4a882 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -69,29 +69,38 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
#include <asm-generic/hugetlb.h>
-#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
- unsigned long start,
- unsigned long end)
+static inline void __flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end,
+ unsigned long stride,
+ bool last_level)
{
- unsigned long stride = huge_page_size(hstate_vma(vma));
-
switch (stride) {
#ifndef __PAGETABLE_PMD_FOLDED
case PUD_SIZE:
- __flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+ __flush_tlb_range(vma, start, end, PUD_SIZE, last_level, 1);
break;
#endif
case CONT_PMD_SIZE:
case PMD_SIZE:
- __flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+ __flush_tlb_range(vma, start, end, PMD_SIZE, last_level, 2);
break;
case CONT_PTE_SIZE:
- __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, 3);
break;
default:
- __flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, TLBI_TTL_UNKNOWN);
}
}
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end)
+{
+ unsigned long stride = huge_page_size(hstate_vma(vma));
+
+ __flush_hugetlb_tlb_range(vma, start, end, stride, false);
+}
+
#endif /* __ASM_HUGETLB_H */
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 701394aa7734..087fc43381c6 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -183,8 +183,9 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
{
pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+ unsigned long end = addr + (pgsize * ncontig);
- flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+ __flush_hugetlb_tlb_range(&vma, addr, end, pgsize, true);
return orig_pte;
}
@@ -209,7 +210,7 @@ static void clear_flush(struct mm_struct *mm,
for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
__ptep_get_and_clear(mm, addr, ptep);
- flush_tlb_range(&vma, saddr, addr);
+ __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
@@ -238,7 +239,9 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
dpfn = pgsize >> PAGE_SHIFT;
hugeprot = pte_pgprot(pte);
- clear_flush(mm, addr, ptep, pgsize, ncontig);
+ /* Only need to "break" if transitioning valid -> valid. */
+ if (pte_valid(__ptep_get(ptep)))
+ clear_flush(mm, addr, ptep, pgsize, ncontig);
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
--
2.43.0
* [PATCH v4 03/11] mm/page_table_check: Batch-check pmds/puds just like ptes
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Convert page_table_check_p[mu]d_set(...) to
page_table_check_p[mu]ds_set(..., nr) to allow checking a contiguous set
of pmds/puds in a single batch. We retain page_table_check_p[mu]d_set(...)
as macros that call the new batch functions with nr=1 for compatibility.
arm64 is about to reorganise its pte/pmd/pud helpers to reuse more code
and to allow the implementation for huge_pte to more efficiently set
ptes/pmds/puds in batches. We need these batch-helpers to make the
refactoring possible.
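A minimal sketch of the resulting API shape (the call taking nr is a
hypothetical arch call site; the macro is taken from the hunk below):

  /* New: check nr contiguous pmds in a single batch. */
  page_table_check_pmds_set(mm, pmdp, pmd, nr);

  /* Existing callers keep the old name, which becomes a macro with nr == 1: */
  #define page_table_check_pmd_set(mm, pmdp, pmd) page_table_check_pmds_set(mm, pmdp, pmd, 1)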
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/page_table_check.h | 30 +++++++++++++++++-----------
mm/page_table_check.c | 34 +++++++++++++++++++-------------
2 files changed, 38 insertions(+), 26 deletions(-)
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 6722941c7cb8..289620d4aad3 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -19,8 +19,10 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd);
void __page_table_check_pud_clear(struct mm_struct *mm, pud_t pud);
void __page_table_check_ptes_set(struct mm_struct *mm, pte_t *ptep, pte_t pte,
unsigned int nr);
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd);
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud);
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+ unsigned int nr);
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+ unsigned int nr);
void __page_table_check_pte_clear_range(struct mm_struct *mm,
unsigned long addr,
pmd_t pmd);
@@ -74,22 +76,22 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
__page_table_check_ptes_set(mm, ptep, pte, nr);
}
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
- pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+ pmd_t *pmdp, pmd_t pmd, unsigned int nr)
{
if (static_branch_likely(&page_table_check_disabled))
return;
- __page_table_check_pmd_set(mm, pmdp, pmd);
+ __page_table_check_pmds_set(mm, pmdp, pmd, nr);
}
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
- pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+ pud_t *pudp, pud_t pud, unsigned int nr)
{
if (static_branch_likely(&page_table_check_disabled))
return;
- __page_table_check_pud_set(mm, pudp, pud);
+ __page_table_check_puds_set(mm, pudp, pud, nr);
}
static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
@@ -129,13 +131,13 @@ static inline void page_table_check_ptes_set(struct mm_struct *mm,
{
}
-static inline void page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp,
- pmd_t pmd)
+static inline void page_table_check_pmds_set(struct mm_struct *mm,
+ pmd_t *pmdp, pmd_t pmd, unsigned int nr)
{
}
-static inline void page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp,
- pud_t pud)
+static inline void page_table_check_puds_set(struct mm_struct *mm,
+ pud_t *pudp, pud_t pud, unsigned int nr)
{
}
@@ -146,4 +148,8 @@ static inline void page_table_check_pte_clear_range(struct mm_struct *mm,
}
#endif /* CONFIG_PAGE_TABLE_CHECK */
+
+#define page_table_check_pmd_set(mm, pmdp, pmd) page_table_check_pmds_set(mm, pmdp, pmd, 1)
+#define page_table_check_pud_set(mm, pudp, pud) page_table_check_puds_set(mm, pudp, pud, 1)
+
#endif /* __LINUX_PAGE_TABLE_CHECK_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 68109ee93841..4eeca782b888 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -218,33 +218,39 @@ static inline void page_table_check_pmd_flags(pmd_t pmd)
WARN_ON_ONCE(swap_cached_writable(pmd_to_swp_entry(pmd)));
}
-void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd)
+void __page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd,
+ unsigned int nr)
{
+ unsigned long stride = PMD_SIZE >> PAGE_SHIFT;
+ unsigned int i;
+
if (&init_mm == mm)
return;
page_table_check_pmd_flags(pmd);
- __page_table_check_pmd_clear(mm, *pmdp);
- if (pmd_user_accessible_page(pmd)) {
- page_table_check_set(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT,
- pmd_write(pmd));
- }
+ for (i = 0; i < nr; i++)
+ __page_table_check_pmd_clear(mm, *(pmdp + i));
+ if (pmd_user_accessible_page(pmd))
+ page_table_check_set(pmd_pfn(pmd), stride * nr, pmd_write(pmd));
}
-EXPORT_SYMBOL(__page_table_check_pmd_set);
+EXPORT_SYMBOL(__page_table_check_pmds_set);
-void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud)
+void __page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp, pud_t pud,
+ unsigned int nr)
{
+ unsigned long stride = PUD_SIZE >> PAGE_SHIFT;
+ unsigned int i;
+
if (&init_mm == mm)
return;
- __page_table_check_pud_clear(mm, *pudp);
- if (pud_user_accessible_page(pud)) {
- page_table_check_set(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT,
- pud_write(pud));
- }
+ for (i = 0; i < nr; i++)
+ __page_table_check_pud_clear(mm, *(pudp + i));
+ if (pud_user_accessible_page(pud))
+ page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud));
}
-EXPORT_SYMBOL(__page_table_check_pud_set);
+EXPORT_SYMBOL(__page_table_check_puds_set);
void __page_table_check_pte_clear_range(struct mm_struct *mm,
unsigned long addr,
--
2.43.0
* [PATCH v4 04/11] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Refactor __set_ptes(), set_pmd_at() and set_pud_at() so that they are
all a thin wrapper around a new common __set_ptes_anysz(), which takes
pgsize parameter. Additionally, refactor __ptep_get_and_clear() and
pmdp_huge_get_and_clear() to use a new common
__ptep_get_and_clear_anysz() which also takes a pgsize parameter.
These changes will permit the huge_pte API to efficiently batch-set
pgtable entries and take advantage of the future barrier optimizations.
Additionally, since the new *_anysz() helpers call the correct
page_table_check_*_set() API based on pgsize, this means that huge_ptes
will be able to get proper coverage. Currently the huge_pte API always
uses the pte API which assumes an entry only covers a single page.
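A minimal sketch of how the thin wrappers select behaviour via the
pgsize parameter (the exact definitions are in the hunk below):

  __set_ptes_anysz(mm, ptep, pte, nr, PAGE_SIZE);                 /* __set_ptes()  */
  __set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), 1, PMD_SIZE); /* set_pmd_at()  */
  __set_ptes_anysz(mm, (pte_t *)pudp, pud_pte(pud), 1, PUD_SIZE); /* set_pud_at()  */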
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 114 ++++++++++++++++++++-----------
1 file changed, 73 insertions(+), 41 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d3b538be1500..d80aa9ba0a16 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -423,23 +423,6 @@ static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
}
-static inline void __set_ptes(struct mm_struct *mm,
- unsigned long __always_unused addr,
- pte_t *ptep, pte_t pte, unsigned int nr)
-{
- page_table_check_ptes_set(mm, ptep, pte, nr);
- __sync_cache_and_tags(pte, nr);
-
- for (;;) {
- __check_safe_pte_update(mm, ptep, pte);
- __set_pte(ptep, pte);
- if (--nr == 0)
- break;
- ptep++;
- pte = pte_advance_pfn(pte, 1);
- }
-}
-
/*
* Hugetlb definitions.
*/
@@ -649,30 +632,62 @@ static inline pgprot_t pud_pgprot(pud_t pud)
return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
}
-static inline void __set_pte_at(struct mm_struct *mm,
- unsigned long __always_unused addr,
- pte_t *ptep, pte_t pte, unsigned int nr)
+static inline void __set_ptes_anysz(struct mm_struct *mm, pte_t *ptep,
+ pte_t pte, unsigned int nr,
+ unsigned long pgsize)
{
- __sync_cache_and_tags(pte, nr);
- __check_safe_pte_update(mm, ptep, pte);
- __set_pte(ptep, pte);
+ unsigned long stride = pgsize >> PAGE_SHIFT;
+
+ switch (pgsize) {
+ case PAGE_SIZE:
+ page_table_check_ptes_set(mm, ptep, pte, nr);
+ break;
+ case PMD_SIZE:
+ page_table_check_pmds_set(mm, (pmd_t *)ptep, pte_pmd(pte), nr);
+ break;
+#ifndef __PAGETABLE_PMD_FOLDED
+ case PUD_SIZE:
+ page_table_check_puds_set(mm, (pud_t *)ptep, pte_pud(pte), nr);
+ break;
+#endif
+ default:
+ VM_WARN_ON(1);
+ }
+
+ __sync_cache_and_tags(pte, nr * stride);
+
+ for (;;) {
+ __check_safe_pte_update(mm, ptep, pte);
+ __set_pte(ptep, pte);
+ if (--nr == 0)
+ break;
+ ptep++;
+ pte = pte_advance_pfn(pte, stride);
+ }
+}
+
+static inline void __set_ptes(struct mm_struct *mm,
+ unsigned long __always_unused addr,
+ pte_t *ptep, pte_t pte, unsigned int nr)
+{
+ __set_ptes_anysz(mm, ptep, pte, nr, PAGE_SIZE);
}
-static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp, pmd_t pmd)
+static inline void __set_pmds(struct mm_struct *mm,
+ unsigned long __always_unused addr,
+ pmd_t *pmdp, pmd_t pmd, unsigned int nr)
{
- page_table_check_pmd_set(mm, pmdp, pmd);
- return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd),
- PMD_SIZE >> PAGE_SHIFT);
+ __set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), nr, PMD_SIZE);
}
+#define set_pmd_at(mm, addr, pmdp, pmd) __set_pmds(mm, addr, pmdp, pmd, 1)
-static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
- pud_t *pudp, pud_t pud)
+static inline void __set_puds(struct mm_struct *mm,
+ unsigned long __always_unused addr,
+ pud_t *pudp, pud_t pud, unsigned int nr)
{
- page_table_check_pud_set(mm, pudp, pud);
- return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud),
- PUD_SIZE >> PAGE_SHIFT);
+ __set_ptes_anysz(mm, (pte_t *)pudp, pud_pte(pud), nr, PUD_SIZE);
}
+#define set_pud_at(mm, addr, pudp, pud) __set_puds(mm, addr, pudp, pud, 1)
#define __p4d_to_phys(p4d) __pte_to_phys(p4d_pte(p4d))
#define __phys_to_p4d_val(phys) __phys_to_pte_val(phys)
@@ -1301,16 +1316,37 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG */
-static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
- unsigned long address, pte_t *ptep)
+static inline pte_t __ptep_get_and_clear_anysz(struct mm_struct *mm,
+ pte_t *ptep,
+ unsigned long pgsize)
{
pte_t pte = __pte(xchg_relaxed(&pte_val(*ptep), 0));
- page_table_check_pte_clear(mm, pte);
+ switch (pgsize) {
+ case PAGE_SIZE:
+ page_table_check_pte_clear(mm, pte);
+ break;
+ case PMD_SIZE:
+ page_table_check_pmd_clear(mm, pte_pmd(pte));
+ break;
+#ifndef __PAGETABLE_PMD_FOLDED
+ case PUD_SIZE:
+ page_table_check_pud_clear(mm, pte_pud(pte));
+ break;
+#endif
+ default:
+ VM_WARN_ON(1);
+ }
return pte;
}
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long address, pte_t *ptep)
+{
+ return __ptep_get_and_clear_anysz(mm, ptep, PAGE_SIZE);
+}
+
static inline void __clear_full_ptes(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned int nr, int full)
{
@@ -1347,11 +1383,7 @@ static inline pte_t __get_and_clear_full_ptes(struct mm_struct *mm,
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long address, pmd_t *pmdp)
{
- pmd_t pmd = __pmd(xchg_relaxed(&pmd_val(*pmdp), 0));
-
- page_table_check_pmd_clear(mm, pmd);
-
- return pmd;
+ return pte_pmd(__ptep_get_and_clear_anysz(mm, (pte_t *)pmdp, PMD_SIZE));
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
--
2.43.0
* Re: [PATCH v4 04/11] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
From: Anshuman Khandual @ 2025-04-24 9:26 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
David Hildenbrand, Matthew Wilcox (Oracle),
Mark Rutland, Alexandre Ghiti, Kevin Brodsky
Cc: linux-arm-kernel, linux-mm, linux-kernel
On 4/22/25 13:48, Ryan Roberts wrote:
> Refactor __set_ptes(), set_pmd_at() and set_pud_at() so that they are
> all a thin wrapper around a new common __set_ptes_anysz(), which takes
> pgsize parameter. Additionally, refactor __ptep_get_and_clear() and
> pmdp_huge_get_and_clear() to use a new common
> __ptep_get_and_clear_anysz() which also takes a pgsize parameter.
>
> These changes will permit the huge_pte API to efficiently batch-set
> pgtable entries and take advantage of the future barrier optimizations.
> Additionally since the new *_anysz() helpers call the correct
> page_table_check_*_set() API based on pgsize, this means that huge_ptes
> will be able to get proper coverage. Currently the huge_pte API always
> uses the pte API which assumes an entry only covers a single page.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/include/asm/pgtable.h | 114 ++++++++++++++++++++-----------
> 1 file changed, 73 insertions(+), 41 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index d3b538be1500..d80aa9ba0a16 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -423,23 +423,6 @@ static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
> return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
> }
>
> -static inline void __set_ptes(struct mm_struct *mm,
> - unsigned long __always_unused addr,
> - pte_t *ptep, pte_t pte, unsigned int nr)
> -{
> - page_table_check_ptes_set(mm, ptep, pte, nr);
> - __sync_cache_and_tags(pte, nr);
> -
> - for (;;) {
> - __check_safe_pte_update(mm, ptep, pte);
> - __set_pte(ptep, pte);
> - if (--nr == 0)
> - break;
> - ptep++;
> - pte = pte_advance_pfn(pte, 1);
> - }
> -}
> -
> /*
> * Hugetlb definitions.
> */
> @@ -649,30 +632,62 @@ static inline pgprot_t pud_pgprot(pud_t pud)
> return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
> }
>
> -static inline void __set_pte_at(struct mm_struct *mm,
> - unsigned long __always_unused addr,
> - pte_t *ptep, pte_t pte, unsigned int nr)
> +static inline void __set_ptes_anysz(struct mm_struct *mm, pte_t *ptep,
> + pte_t pte, unsigned int nr,
> + unsigned long pgsize)
> {
> - __sync_cache_and_tags(pte, nr);
> - __check_safe_pte_update(mm, ptep, pte);
> - __set_pte(ptep, pte);
> + unsigned long stride = pgsize >> PAGE_SHIFT;
> +
> + switch (pgsize) {
> + case PAGE_SIZE:
> + page_table_check_ptes_set(mm, ptep, pte, nr);
> + break;
> + case PMD_SIZE:
> + page_table_check_pmds_set(mm, (pmd_t *)ptep, pte_pmd(pte), nr);
> + break;
> +#ifndef __PAGETABLE_PMD_FOLDED
> + case PUD_SIZE:
> + page_table_check_puds_set(mm, (pud_t *)ptep, pte_pud(pte), nr);
> + break;
> +#endif
> + default:
> + VM_WARN_ON(1);
> + }
> +
> + __sync_cache_and_tags(pte, nr * stride);
> +
> + for (;;) {
> + __check_safe_pte_update(mm, ptep, pte);
> + __set_pte(ptep, pte);
> + if (--nr == 0)
> + break;
> + ptep++;
> + pte = pte_advance_pfn(pte, stride);
> + }
> +}
> +
> +static inline void __set_ptes(struct mm_struct *mm,
> + unsigned long __always_unused addr,
> + pte_t *ptep, pte_t pte, unsigned int nr)
> +{
> + __set_ptes_anysz(mm, ptep, pte, nr, PAGE_SIZE);
> }
>
> -static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> - pmd_t *pmdp, pmd_t pmd)
> +static inline void __set_pmds(struct mm_struct *mm,
> + unsigned long __always_unused addr,
> + pmd_t *pmdp, pmd_t pmd, unsigned int nr)
> {
> - page_table_check_pmd_set(mm, pmdp, pmd);
> - return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd),
> - PMD_SIZE >> PAGE_SHIFT);
> + __set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), nr, PMD_SIZE);
> }
> +#define set_pmd_at(mm, addr, pmdp, pmd) __set_pmds(mm, addr, pmdp, pmd, 1)
>
> -static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
> - pud_t *pudp, pud_t pud)
> +static inline void __set_puds(struct mm_struct *mm,
> + unsigned long __always_unused addr,
> + pud_t *pudp, pud_t pud, unsigned int nr)
> {
> - page_table_check_pud_set(mm, pudp, pud);
> - return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud),
> - PUD_SIZE >> PAGE_SHIFT);
> + __set_ptes_anysz(mm, (pte_t *)pudp, pud_pte(pud), nr, PUD_SIZE);
> }
> +#define set_pud_at(mm, addr, pudp, pud) __set_puds(mm, addr, pudp, pud, 1)
>
> #define __p4d_to_phys(p4d) __pte_to_phys(p4d_pte(p4d))
> #define __phys_to_p4d_val(phys) __phys_to_pte_val(phys)
> @@ -1301,16 +1316,37 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> }
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG */
>
> -static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
> - unsigned long address, pte_t *ptep)
> +static inline pte_t __ptep_get_and_clear_anysz(struct mm_struct *mm,
> + pte_t *ptep,
> + unsigned long pgsize)
> {
> pte_t pte = __pte(xchg_relaxed(&pte_val(*ptep), 0));
>
> - page_table_check_pte_clear(mm, pte);
> + switch (pgsize) {
> + case PAGE_SIZE:
> + page_table_check_pte_clear(mm, pte);
> + break;
> + case PMD_SIZE:
> + page_table_check_pmd_clear(mm, pte_pmd(pte));
> + break;
> +#ifndef __PAGETABLE_PMD_FOLDED
> + case PUD_SIZE:
> + page_table_check_pud_clear(mm, pte_pud(pte));
> + break;
> +#endif
> + default:
> + VM_WARN_ON(1);
> + }
>
> return pte;
> }
>
> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
> + unsigned long address, pte_t *ptep)
> +{
> + return __ptep_get_and_clear_anysz(mm, ptep, PAGE_SIZE);
> +}
> +
> static inline void __clear_full_ptes(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep, unsigned int nr, int full)
> {
> @@ -1347,11 +1383,7 @@ static inline pte_t __get_and_clear_full_ptes(struct mm_struct *mm,
> static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
> unsigned long address, pmd_t *pmdp)
> {
> - pmd_t pmd = __pmd(xchg_relaxed(&pmd_val(*pmdp), 0));
> -
> - page_table_check_pmd_clear(mm, pmd);
> -
> - return pmd;
> + return pte_pmd(__ptep_get_and_clear_anysz(mm, (pte_t *)pmdp, PMD_SIZE));
> }
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
* [PATCH v4 05/11] arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Refactor the huge_pte helpers to use the new common __set_ptes_anysz()
and __ptep_get_and_clear_anysz() APIs.
This provides two benefits. First, when page_table_check=on, hugetlb is
now properly/fully checked. Previously only the first page of a hugetlb
folio was checked. Second, instead of having to call __set_ptes(nr=1)
for each pte in a loop, the whole contiguous batch can now be set in one
go, which enables some efficiencies and cleans up the code.
One detail to note is that huge_ptep_clear_flush() was previously
calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
block mapping). This has a couple of disadvantages: first,
ptep_clear_flush() calls ptep_get_and_clear() which transparently
handles contpte. Given we only call it for non-contiguous ptes, it would be
safe, but a waste of effort. It's preferable to go straight to the layer
below. However, more problematic is that ptep_get_and_clear() is for
PAGE_SIZE entries so it calls page_table_check_pte_clear() and would not
clear the whole hugetlb folio. So let's stop special-casing the non-cont
case and just rely on get_clear_contig_flush() to do the right thing for
non-cont entries.
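A minimal sketch of the resulting pattern in the huge_pte helpers (as in
the hunks below); variable declarations are elided:

  ncontig = num_contig_ptes(sz, &pgsize);

  /* Set the whole contiguous block in one call... */
  __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);

  /* ...and tear entries down via the size-aware helper. */
  pte = __ptep_get_and_clear_anysz(mm, ptep, pgsize);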
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/mm/hugetlbpage.c | 53 +++++++------------------------------
1 file changed, 10 insertions(+), 43 deletions(-)
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 087fc43381c6..d34703846ef4 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -159,12 +159,11 @@ static pte_t get_clear_contig(struct mm_struct *mm,
pte_t pte, tmp_pte;
bool present;
- pte = __ptep_get_and_clear(mm, addr, ptep);
+ pte = __ptep_get_and_clear_anysz(mm, ptep, pgsize);
present = pte_present(pte);
while (--ncontig) {
ptep++;
- addr += pgsize;
- tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
+ tmp_pte = __ptep_get_and_clear_anysz(mm, ptep, pgsize);
if (present) {
if (pte_dirty(tmp_pte))
pte = pte_mkdirty(pte);
@@ -208,7 +207,7 @@ static void clear_flush(struct mm_struct *mm,
unsigned long i, saddr = addr;
for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
- __ptep_get_and_clear(mm, addr, ptep);
+ __ptep_get_and_clear_anysz(mm, ptep, pgsize);
__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
}
@@ -219,32 +218,20 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
size_t pgsize;
int i;
int ncontig;
- unsigned long pfn, dpfn;
- pgprot_t hugeprot;
ncontig = num_contig_ptes(sz, &pgsize);
if (!pte_present(pte)) {
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
- __set_ptes(mm, addr, ptep, pte, 1);
+ __set_ptes_anysz(mm, ptep, pte, 1, pgsize);
return;
}
- if (!pte_cont(pte)) {
- __set_ptes(mm, addr, ptep, pte, 1);
- return;
- }
-
- pfn = pte_pfn(pte);
- dpfn = pgsize >> PAGE_SHIFT;
- hugeprot = pte_pgprot(pte);
-
/* Only need to "break" if transitioning valid -> valid. */
- if (pte_valid(__ptep_get(ptep)))
+ if (pte_cont(pte) && pte_valid(__ptep_get(ptep)))
clear_flush(mm, addr, ptep, pgsize, ncontig);
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+ __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
}
pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -434,11 +421,9 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty)
{
- int ncontig, i;
+ int ncontig;
size_t pgsize = 0;
- unsigned long pfn = pte_pfn(pte), dpfn;
struct mm_struct *mm = vma->vm_mm;
- pgprot_t hugeprot;
pte_t orig_pte;
VM_WARN_ON(!pte_present(pte));
@@ -447,7 +432,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
- dpfn = pgsize >> PAGE_SHIFT;
if (!__cont_access_flags_changed(ptep, pte, ncontig))
return 0;
@@ -462,19 +446,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
if (pte_young(orig_pte))
pte = pte_mkyoung(pte);
- hugeprot = pte_pgprot(pte);
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-
+ __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
return 1;
}
void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
- unsigned long pfn, dpfn;
- pgprot_t hugeprot;
- int ncontig, i;
+ int ncontig;
size_t pgsize;
pte_t pte;
@@ -487,16 +466,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
}
ncontig = find_num_contig(mm, addr, ptep, &pgsize);
- dpfn = pgsize >> PAGE_SHIFT;
pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
pte = pte_wrprotect(pte);
- hugeprot = pte_pgprot(pte);
- pfn = pte_pfn(pte);
-
- for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
- __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+ __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
}
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
@@ -505,13 +479,6 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
struct mm_struct *mm = vma->vm_mm;
size_t pgsize;
int ncontig;
- pte_t pte;
-
- pte = __ptep_get(ptep);
- VM_WARN_ON(!pte_present(pte));
-
- if (!pte_cont(pte))
- return ptep_clear_flush(vma, addr, ptep);
ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
--
2.43.0
* Re: [PATCH v4 05/11] arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
From: Anshuman Khandual @ 2025-04-24 9:40 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
David Hildenbrand, Matthew Wilcox (Oracle),
Mark Rutland, Alexandre Ghiti, Kevin Brodsky
Cc: linux-arm-kernel, linux-mm, linux-kernel
On 4/22/25 13:48, Ryan Roberts wrote:
> Refactor the huge_pte helpers to use the new common __set_ptes_anysz()
> and __ptep_get_and_clear_anysz() APIs.
>
> This provides 2 benefits; First, when page_table_check=on, hugetlb is
> now properly/fully checked. Previously only the first page of a hugetlb
> folio was checked. Second, instead of having to call __set_ptes(nr=1)
> for each pte in a loop, the whole contiguous batch can now be set in one
> go, which enables some efficiencies and cleans up the code.
>
> One detail to note is that huge_ptep_clear_flush() was previously
> calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
> block mapping). This has a couple of disadvantages; first
> ptep_clear_flush() calls ptep_get_and_clear() which transparently
> handles contpte. Given we only call for non-contiguous ptes, it would be
> safe, but a waste of effort. It's preferable to go straight to the layer
> below. However, more problematic is that ptep_get_and_clear() is for
> PAGE_SIZE entries so it calls page_table_check_pte_clear() and would not
> clear the whole hugetlb folio. So let's stop special-casing the non-cont
> case and just rely on get_clear_contig_flush() to do the right thing for
> non-cont entries.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/mm/hugetlbpage.c | 53 +++++++------------------------------
> 1 file changed, 10 insertions(+), 43 deletions(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 087fc43381c6..d34703846ef4 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -159,12 +159,11 @@ static pte_t get_clear_contig(struct mm_struct *mm,
> pte_t pte, tmp_pte;
> bool present;
>
> - pte = __ptep_get_and_clear(mm, addr, ptep);
> + pte = __ptep_get_and_clear_anysz(mm, ptep, pgsize);
> present = pte_present(pte);
> while (--ncontig) {
> ptep++;
> - addr += pgsize;
> - tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
> + tmp_pte = __ptep_get_and_clear_anysz(mm, ptep, pgsize);
> if (present) {
> if (pte_dirty(tmp_pte))
> pte = pte_mkdirty(pte);
> @@ -208,7 +207,7 @@ static void clear_flush(struct mm_struct *mm,
> unsigned long i, saddr = addr;
>
> for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
> - __ptep_get_and_clear(mm, addr, ptep);
> + __ptep_get_and_clear_anysz(mm, ptep, pgsize);
>
> __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
> }
> @@ -219,32 +218,20 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
> size_t pgsize;
> int i;
> int ncontig;
> - unsigned long pfn, dpfn;
> - pgprot_t hugeprot;
>
> ncontig = num_contig_ptes(sz, &pgsize);
>
> if (!pte_present(pte)) {
> for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
> - __set_ptes(mm, addr, ptep, pte, 1);
> + __set_ptes_anysz(mm, ptep, pte, 1, pgsize);
> return;
> }
>
> - if (!pte_cont(pte)) {
> - __set_ptes(mm, addr, ptep, pte, 1);
> - return;
> - }
> -
> - pfn = pte_pfn(pte);
> - dpfn = pgsize >> PAGE_SHIFT;
> - hugeprot = pte_pgprot(pte);
> -
> /* Only need to "break" if transitioning valid -> valid. */
> - if (pte_valid(__ptep_get(ptep)))
> + if (pte_cont(pte) && pte_valid(__ptep_get(ptep)))
> clear_flush(mm, addr, ptep, pgsize, ncontig);
>
> - for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> - __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> + __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
> }
>
> pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> @@ -434,11 +421,9 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> unsigned long addr, pte_t *ptep,
> pte_t pte, int dirty)
> {
> - int ncontig, i;
> + int ncontig;
> size_t pgsize = 0;
> - unsigned long pfn = pte_pfn(pte), dpfn;
> struct mm_struct *mm = vma->vm_mm;
> - pgprot_t hugeprot;
> pte_t orig_pte;
>
> VM_WARN_ON(!pte_present(pte));
> @@ -447,7 +432,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
>
> ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
> - dpfn = pgsize >> PAGE_SHIFT;
>
> if (!__cont_access_flags_changed(ptep, pte, ncontig))
> return 0;
> @@ -462,19 +446,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> if (pte_young(orig_pte))
> pte = pte_mkyoung(pte);
>
> - hugeprot = pte_pgprot(pte);
> - for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> - __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> -
> + __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
> return 1;
> }
>
> void huge_ptep_set_wrprotect(struct mm_struct *mm,
> unsigned long addr, pte_t *ptep)
> {
> - unsigned long pfn, dpfn;
> - pgprot_t hugeprot;
> - int ncontig, i;
> + int ncontig;
> size_t pgsize;
> pte_t pte;
>
> @@ -487,16 +466,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
> }
>
> ncontig = find_num_contig(mm, addr, ptep, &pgsize);
> - dpfn = pgsize >> PAGE_SHIFT;
>
> pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
> pte = pte_wrprotect(pte);
>
> - hugeprot = pte_pgprot(pte);
> - pfn = pte_pfn(pte);
> -
> - for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> - __set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> + __set_ptes_anysz(mm, ptep, pte, ncontig, pgsize);
> }
>
> pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> @@ -505,13 +479,6 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> struct mm_struct *mm = vma->vm_mm;
> size_t pgsize;
> int ncontig;
> - pte_t pte;
> -
> - pte = __ptep_get(ptep);
> - VM_WARN_ON(!pte_present(pte));
> -
> - if (!pte_cont(pte))
> - return ptep_clear_flush(vma, addr, ptep);
>
> ncontig = num_contig_ptes(huge_page_size(hstate_vma(vma)), &pgsize);
> return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
* [PATCH v4 06/11] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
set_ptes_anysz() previously called __set_pte() for each PTE in the
range, which would conditionally issue a DSB and ISB to make the new PTE
value immediately visible to the table walker if the new PTE was valid
and for kernel space.
We can do better than this; let's hoist those barriers out of the loop
so that they are only issued once at the end of the loop. This reduces
the barrier cost by a factor of the number of PTEs in the range.
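A minimal sketch of the reworked loop (simplified from the hunk below;
the per-pte safety check is omitted here): all ptes are written without
barriers and the conditional dsb/isb is issued once afterwards:

  for (;;) {
          __set_pte_nosync(ptep, pte);
          if (--nr == 0)
                  break;
          ptep++;
          pte = pte_advance_pfn(pte, stride);
  }

  /* One dsb(ishst)/isb() for the whole batch, only for valid kernel ptes. */
  __set_pte_complete(pte);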
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d80aa9ba0a16..39c331743b69 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -320,13 +320,11 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
WRITE_ONCE(*ptep, pte);
}
-static inline void __set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte_complete(pte_t pte)
{
- __set_pte_nosync(ptep, pte);
-
/*
* Only if the new pte is valid and kernel, otherwise TLB maintenance
- * or update_mmu_cache() have the necessary barriers.
+ * has the necessary barriers.
*/
if (pte_valid_not_user(pte)) {
dsb(ishst);
@@ -334,6 +332,12 @@ static inline void __set_pte(pte_t *ptep, pte_t pte)
}
}
+static inline void __set_pte(pte_t *ptep, pte_t pte)
+{
+ __set_pte_nosync(ptep, pte);
+ __set_pte_complete(pte);
+}
+
static inline pte_t __ptep_get(pte_t *ptep)
{
return READ_ONCE(*ptep);
@@ -658,12 +662,14 @@ static inline void __set_ptes_anysz(struct mm_struct *mm, pte_t *ptep,
for (;;) {
__check_safe_pte_update(mm, ptep, pte);
- __set_pte(ptep, pte);
+ __set_pte_nosync(ptep, pte);
if (--nr == 0)
break;
ptep++;
pte = pte_advance_pfn(pte, stride);
}
+
+ __set_pte_complete(pte);
}
static inline void __set_ptes(struct mm_struct *mm,
--
2.43.0
* [PATCH v4 07/11] mm/vmalloc: Warn on improper use of vunmap_range()
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
pud level. But it is possible to subsequently call vunmap_range() on a
sub-range of the mapped memory, which partially overlaps a pmd or pud.
In this case, vmalloc unmaps the entire pmd or pud so that the
non-overlapping portion is also unmapped. Clearly that would have a bad
outcome, but it's not something that any callers do today as far as I
can tell. So I guess it's just expected that callers will not do this.
However, it would be useful to know if this happened in future; let's
add a warning to cover the eventuality.
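For illustration, a hypothetical misuse that the new warning would catch
(the allocation size and offsets are made up for the example;
vmalloc_huge() may map the region at pmd level):

  void *p = vmalloc_huge(2 * PMD_SIZE, GFP_KERNEL);

  /* Unmaps a sub-range that only partially overlaps a pmd block. */
  vunmap_range((unsigned long)p + PAGE_SIZE,
               (unsigned long)p + PMD_SIZE + PAGE_SIZE);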
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/vmalloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ed720a787ec..d60d3a29d149 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -374,8 +374,10 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
if (cleared || pmd_bad(*pmd))
*mask |= PGTBL_PMD_MODIFIED;
- if (cleared)
+ if (cleared) {
+ WARN_ON(next - addr < PMD_SIZE);
continue;
+ }
if (pmd_none_or_clear_bad(pmd))
continue;
vunmap_pte_range(pmd, addr, next, mask);
@@ -399,8 +401,10 @@ static void vunmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
if (cleared || pud_bad(*pud))
*mask |= PGTBL_PUD_MODIFIED;
- if (cleared)
+ if (cleared) {
+ WARN_ON(next - addr < PUD_SIZE);
continue;
+ }
if (pud_none_or_clear_bad(pud))
continue;
vunmap_pmd_range(pud, addr, next, mask);
--
2.43.0
* [PATCH v4 08/11] mm/vmalloc: Gracefully unmap huge ptes
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
level in vmap") added its support by reusing the set_huge_pte_at() API,
which is otherwise only used for user mappings. But when unmapping those
huge ptes, it continued to call ptep_get_and_clear(), which is a
layering violation. To date, the only arch to implement this support is
powerpc and it all happens to work ok for it.
But arm64's implementation of ptep_get_and_clear() cannot be safely
used to clear a previous set_huge_pte_at(). So let's introduce a new
arch opt-in function, arch_vmap_pte_range_unmap_size(), which can
provide the size of a (present) pte. Then we can call
huge_ptep_get_and_clear() to tear it down properly.
Note that if vunmap_range() is called with a range that starts in the
middle of a huge pte-mapped page, we must unmap the entire huge page so
the behaviour is consistent with pmd and pud block mappings. In this
case emit a warning just like we do for pmd/pud mappings.
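A minimal sketch of how vunmap_pte_range() uses the new hook (matching
the hunk below):

  size = arch_vmap_pte_range_unmap_size(addr, pte);
  if (size != PAGE_SIZE)
          ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
  else
          ptent = ptep_get_and_clear(&init_mm, addr, pte);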
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/vmalloc.h | 8 ++++++++
mm/vmalloc.c | 18 ++++++++++++++++--
2 files changed, 24 insertions(+), 2 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..16dd4cba64f2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -113,6 +113,14 @@ static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, uns
}
#endif
+#ifndef arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+ pte_t *ptep)
+{
+ return PAGE_SIZE;
+}
+#endif
+
#ifndef arch_vmap_pte_supported_shift
static inline int arch_vmap_pte_supported_shift(unsigned long size)
{
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d60d3a29d149..fe2e2cc8da94 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -350,12 +350,26 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pgtbl_mod_mask *mask)
{
pte_t *pte;
+ pte_t ptent;
+ unsigned long size = PAGE_SIZE;
pte = pte_offset_kernel(pmd, addr);
do {
- pte_t ptent = ptep_get_and_clear(&init_mm, addr, pte);
+#ifdef CONFIG_HUGETLB_PAGE
+ size = arch_vmap_pte_range_unmap_size(addr, pte);
+ if (size != PAGE_SIZE) {
+ if (WARN_ON(!IS_ALIGNED(addr, size))) {
+ addr = ALIGN_DOWN(addr, size);
+ pte = PTR_ALIGN_DOWN(pte, sizeof(*pte) * (size >> PAGE_SHIFT));
+ }
+ ptent = huge_ptep_get_and_clear(&init_mm, addr, pte, size);
+ if (WARN_ON(end - addr < size))
+ size = end - addr;
+ } else
+#endif
+ ptent = ptep_get_and_clear(&init_mm, addr, pte);
WARN_ON(!pte_none(ptent) && !pte_present(ptent));
- } while (pte++, addr += PAGE_SIZE, addr != end);
+ } while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
*mask |= PGTBL_PTE_MODIFIED;
}
--
2.43.0
* [PATCH v4 09/11] arm64/mm: Support huge pte-mapped pages in vmap
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Implement the required arch functions to enable use of contpte in the
vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
operations due to only having to issue a DSB and ISB per contpte block
instead of per pte. But it also means that the TLB pressure reduces due
to only needing a single TLB entry for the whole contpte block.
Since vmap uses set_huge_pte_at() to set the contpte, that API is now
used for kernel mappings for the first time. Although in the vmap case
we never expect it to be called to modify a valid mapping (so
clear_flush() should never be called), it's still wise to make it robust
for the kernel case; so amend the tlb flush function to handle the case
where the mm is for kernel space.
Tested with vmalloc performance selftests:
# kself/mm/test_vmalloc.sh \
run_test_mask=1
test_repeat_count=5
nr_pages=256
test_loop_count=100000
use_huge=1
Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
reduction in time taken.
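For illustration only (not part of the patch), a hypothetical allocation
that could benefit, assuming 4K base pages where CONT_PTE_SIZE is 64K:

  /* Requested via the huge-vmap path; if the virtual and physical
   * addresses end up CONT_PTE_SIZE aligned, the range is mapped with
   * PTE_CONT and needs a single TLB entry. */
  void *p = vmalloc_huge(CONT_PTE_SIZE, GFP_KERNEL);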
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/vmalloc.h | 45 ++++++++++++++++++++++++++++++++
arch/arm64/mm/hugetlbpage.c | 5 +++-
2 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 38fafffe699f..12f534e8f3ed 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -23,6 +23,51 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
}
+#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
+ unsigned long end, u64 pfn,
+ unsigned int max_page_shift)
+{
+ /*
+ * If the block is at least CONT_PTE_SIZE in size, and is naturally
+ * aligned in both virtual and physical space, then we can pte-map the
+ * block using the PTE_CONT bit for more efficient use of the TLB.
+ */
+ if (max_page_shift < CONT_PTE_SHIFT)
+ return PAGE_SIZE;
+
+ if (end - addr < CONT_PTE_SIZE)
+ return PAGE_SIZE;
+
+ if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
+ return PAGE_SIZE;
+
+ if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
+ return PAGE_SIZE;
+
+ return CONT_PTE_SIZE;
+}
+
+#define arch_vmap_pte_range_unmap_size arch_vmap_pte_range_unmap_size
+static inline unsigned long arch_vmap_pte_range_unmap_size(unsigned long addr,
+ pte_t *ptep)
+{
+ /*
+ * The caller handles alignment so it's sufficient just to check
+ * PTE_CONT.
+ */
+ return pte_valid_cont(__ptep_get(ptep)) ? CONT_PTE_SIZE : PAGE_SIZE;
+}
+
+#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+static inline int arch_vmap_pte_supported_shift(unsigned long size)
+{
+ if (size >= CONT_PTE_SIZE)
+ return CONT_PTE_SHIFT;
+
+ return PAGE_SHIFT;
+}
+
#endif
#define arch_vmap_pgprot_tagged arch_vmap_pgprot_tagged
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index d34703846ef4..0c8737f4f2ce 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -209,7 +209,10 @@ static void clear_flush(struct mm_struct *mm,
for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
__ptep_get_and_clear_anysz(mm, ptep, pgsize);
- __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
+ if (mm == &init_mm)
+ flush_tlb_kernel_range(saddr, addr);
+ else
+ __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
--
2.43.0
* [PATCH v4 10/11] mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Wrap vmalloc's pte table manipulation loops with
arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode(). This provides
the arch code with the opportunity to optimize the pte manipulations.
Note that vmap_pfn() already uses lazy mmu mode since it delegates to
apply_to_page_range() which enters lazy mmu mode for both user and
kernel mappings.
These hooks will shortly be used by arm64 to improve vmalloc
performance.
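As an illustration of the pattern only (not part of the patch): example_map_ptes()
below is a made-up name, but the hooks and pte helpers are the real generic APIs.
On architectures that don't define __HAVE_ARCH_ENTER_LAZY_MMU_MODE,
<linux/pgtable.h> provides no-op fallbacks for both hooks, so the wrapping costs
nothing elsewhere.

static int example_map_ptes(pte_t *pte, unsigned long addr, unsigned long end,
			    unsigned long pfn, pgprot_t prot)
{
	arch_enter_lazy_mmu_mode();	/* arch may start deferring work */

	do {
		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
		pfn++;
	} while (pte++, addr += PAGE_SIZE, addr != end);

	arch_leave_lazy_mmu_mode();	/* arch flushes anything it deferred */
	return 0;
}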
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/vmalloc.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fe2e2cc8da94..24430160b37f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -104,6 +104,9 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pte = pte_alloc_kernel_track(pmd, addr, mask);
if (!pte)
return -ENOMEM;
+
+ arch_enter_lazy_mmu_mode();
+
do {
if (unlikely(!pte_none(ptep_get(pte)))) {
if (pfn_valid(pfn)) {
@@ -127,6 +130,8 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
pfn++;
} while (pte += PFN_DOWN(size), addr += size, addr != end);
+
+ arch_leave_lazy_mmu_mode();
*mask |= PGTBL_PTE_MODIFIED;
return 0;
}
@@ -354,6 +359,8 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
unsigned long size = PAGE_SIZE;
pte = pte_offset_kernel(pmd, addr);
+ arch_enter_lazy_mmu_mode();
+
do {
#ifdef CONFIG_HUGETLB_PAGE
size = arch_vmap_pte_range_unmap_size(addr, pte);
@@ -370,6 +377,8 @@ static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
ptent = ptep_get_and_clear(&init_mm, addr, pte);
WARN_ON(!pte_none(ptent) && !pte_present(ptent));
} while (pte += (size >> PAGE_SHIFT), addr += size, addr != end);
+
+ arch_leave_lazy_mmu_mode();
*mask |= PGTBL_PTE_MODIFIED;
}
@@ -515,6 +524,9 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
pte = pte_alloc_kernel_track(pmd, addr, mask);
if (!pte)
return -ENOMEM;
+
+ arch_enter_lazy_mmu_mode();
+
do {
struct page *page = pages[*nr];
@@ -528,6 +540,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
(*nr)++;
} while (pte++, addr += PAGE_SIZE, addr != end);
+
+ arch_leave_lazy_mmu_mode();
*mask |= PGTBL_PTE_MODIFIED;
return 0;
}
--
2.43.0
* [PATCH v4 11/11] arm64/mm: Batch barriers when updating kernel mappings
2025-04-22 8:18 [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
` (9 preceding siblings ...)
2025-04-22 8:18 ` [PATCH v4 10/11] mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes Ryan Roberts
@ 2025-04-22 8:18 ` Ryan Roberts
2025-04-24 9:13 ` Anshuman Khandual
2025-04-23 19:18 ` [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Luiz Capitulino
` (2 subsequent siblings)
13 siblings, 1 reply; 18+ messages in thread
From: Ryan Roberts @ 2025-04-22 8:18 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel
Because the kernel can't tolerate page faults for kernel mappings, when
setting a valid, kernel-space pte (or pmd/pud/p4d/pgd), the arch code
emits a dsb(ishst) to ensure that the store to the pgtable is observed
by the table walker immediately. Additionally it emits an isb() to
ensure that any fault the CPU has already speculatively determined,
based on the old invalid mapping, gets canceled.
We can improve the performance of vmalloc operations by batching these
barriers until the end of a set of entry updates.
arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() provide the
required hooks.
vmalloc improves by up to 30% as a result.
Two new TIF_ flags are created: TIF_LAZY_MMU tells us if the task is in
lazy mmu mode and can therefore defer any barriers until it exits that
mode. TIF_LAZY_MMU_PENDING remembers whether any pte operation that
required barriers was performed while in lazy mode. Then, when leaving
lazy mode, if that flag is set, we emit the barriers.
Since arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() are used
for both user and kernel mappings, we need the second flag to avoid
emitting barriers unnecessarily if only user mappings were updated.
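A condensed sketch of the flow (set_kernel_ptes() is a hypothetical caller,
loosely modelled on the existing __set_ptes() loop; the other helpers are the
ones introduced by the diff below):

static void set_kernel_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
{
	arch_enter_lazy_mmu_mode();		/* sets TIF_LAZY_MMU */

	for (; nr; nr--, ptep++) {
		/*
		 * For a valid kernel pte, queue_pte_barriers() sees
		 * TIF_LAZY_MMU and just sets TIF_LAZY_MMU_PENDING
		 * instead of issuing dsb(ishst)/isb() here.
		 */
		__set_pte(ptep, pte);
		pte = pte_advance_pfn(pte, 1);
	}

	/* Pending work exists, so emit one dsb(ishst) + isb() for the batch. */
	arch_leave_lazy_mmu_mode();
}

Note that only the first deferred update pays the atomic set_thread_flag();
subsequent updates see TIF_LAZY_MMU_PENDING already set and skip it.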
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 81 ++++++++++++++++++++++------
arch/arm64/include/asm/thread_info.h | 2 +
arch/arm64/kernel/process.c | 9 ++--
3 files changed, 72 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 39c331743b69..ab4a1b19e596 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -40,6 +40,63 @@
#include <linux/sched.h>
#include <linux/page_table_check.h>
+static inline void emit_pte_barriers(void)
+{
+ /*
+ * These barriers are emitted under certain conditions after a pte entry
+ * was modified (see e.g. __set_pte_complete()). The dsb makes the store
+ * visible to the table walker. The isb ensures that any previous
+ * speculative "invalid translation" marker that is in the CPU's
+ * pipeline gets cleared, so that any access to that address after
+ * setting the pte to valid won't cause a spurious fault. If the thread
+ * gets preempted after storing to the pgtable but before emitting these
+ * barriers, __switch_to() emits a dsb which ensures the walker gets to
+ * see the store. There is no guarantee of an isb being issued though.
+ * This is safe because it will still get issued (albeit on a
+ * potentially different CPU) when the thread starts running again,
+ * before any access to the address.
+ */
+ dsb(ishst);
+ isb();
+}
+
+static inline void queue_pte_barriers(void)
+{
+ unsigned long flags;
+
+ VM_WARN_ON(in_interrupt());
+ flags = read_thread_flags();
+
+ if (flags & BIT(TIF_LAZY_MMU)) {
+ /* Avoid the atomic op if already set. */
+ if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
+ set_thread_flag(TIF_LAZY_MMU_PENDING);
+ } else {
+ emit_pte_barriers();
+ }
+}
+
+#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+static inline void arch_enter_lazy_mmu_mode(void)
+{
+ VM_WARN_ON(in_interrupt());
+ VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));
+
+ set_thread_flag(TIF_LAZY_MMU);
+}
+
+static inline void arch_flush_lazy_mmu_mode(void)
+{
+ if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
+ emit_pte_barriers();
+}
+
+static inline void arch_leave_lazy_mmu_mode(void)
+{
+ arch_flush_lazy_mmu_mode();
+ clear_thread_flag(TIF_LAZY_MMU);
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
@@ -326,10 +383,8 @@ static inline void __set_pte_complete(pte_t pte)
* Only if the new pte is valid and kernel, otherwise TLB maintenance
* has the necessary barriers.
*/
- if (pte_valid_not_user(pte)) {
- dsb(ishst);
- isb();
- }
+ if (pte_valid_not_user(pte))
+ queue_pte_barriers();
}
static inline void __set_pte(pte_t *ptep, pte_t pte)
@@ -801,10 +856,8 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
WRITE_ONCE(*pmdp, pmd);
- if (pmd_valid(pmd)) {
- dsb(ishst);
- isb();
- }
+ if (pmd_valid(pmd))
+ queue_pte_barriers();
}
static inline void pmd_clear(pmd_t *pmdp)
@@ -869,10 +922,8 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
WRITE_ONCE(*pudp, pud);
- if (pud_valid(pud)) {
- dsb(ishst);
- isb();
- }
+ if (pud_valid(pud))
+ queue_pte_barriers();
}
static inline void pud_clear(pud_t *pudp)
@@ -951,8 +1002,7 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
}
WRITE_ONCE(*p4dp, p4d);
- dsb(ishst);
- isb();
+ queue_pte_barriers();
}
static inline void p4d_clear(p4d_t *p4dp)
@@ -1080,8 +1130,7 @@ static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
}
WRITE_ONCE(*pgdp, pgd);
- dsb(ishst);
- isb();
+ queue_pte_barriers();
}
static inline void pgd_clear(pgd_t *pgdp)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 1114c1c3300a..1fdd74b7b831 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -82,6 +82,8 @@ void arch_setup_new_exec(void);
#define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */
#define TIF_KERNEL_FPSTATE 29 /* Task is in a kernel mode FPSIMD section */
#define TIF_TSC_SIGSEGV 30 /* SIGSEGV on counter-timer access */
+#define TIF_LAZY_MMU 31 /* Task in lazy mmu mode */
+#define TIF_LAZY_MMU_PENDING 32 /* Ops pending for lazy mmu mode exit */
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 42faebb7b712..45a55fe81788 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -680,10 +680,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
gcs_thread_switch(next);
/*
- * Complete any pending TLB or cache maintenance on this CPU in case
- * the thread migrates to a different CPU.
- * This full barrier is also required by the membarrier system
- * call.
+ * Complete any pending TLB or cache maintenance on this CPU in case the
+ * thread migrates to a different CPU. This full barrier is also
+ * required by the membarrier system call. Additionally it makes any
+ * in-progress pgtable writes visible to the table walker; See
+ * emit_pte_barriers().
*/
dsb(ish);
--
2.43.0
* Re: [PATCH v4 11/11] arm64/mm: Batch barriers when updating kernel mappings
2025-04-22 8:18 ` [PATCH v4 11/11] arm64/mm: Batch barriers when updating kernel mappings Ryan Roberts
@ 2025-04-24 9:13 ` Anshuman Khandual
0 siblings, 0 replies; 18+ messages in thread
From: Anshuman Khandual @ 2025-04-24 9:13 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
David Hildenbrand, Matthew Wilcox (Oracle),
Mark Rutland, Alexandre Ghiti, Kevin Brodsky
Cc: linux-arm-kernel, linux-mm, linux-kernel
On 4/22/25 13:48, Ryan Roberts wrote:
> Because the kernel can't tolerate page faults for kernel mappings, when
> setting a valid, kernel-space pte (or pmd/pud/p4d/pgd), the arch code
> emits a dsb(ishst) to ensure that the store to the pgtable is observed
> by the table walker immediately. Additionally it emits an isb() to
> ensure that any fault the CPU has already speculatively determined,
> based on the old invalid mapping, gets canceled.
>
> We can improve the performance of vmalloc operations by batching these
> barriers until the end of a set of entry updates.
> arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() provide the
> required hooks.
>
> vmalloc improves by up to 30% as a result.
>
> Two new TIF_ flags are created: TIF_LAZY_MMU tells us if the task is in
> lazy mmu mode and can therefore defer any barriers until it exits that
> mode. TIF_LAZY_MMU_PENDING remembers whether any pte operation that
> required barriers was performed while in lazy mode. Then, when leaving
> lazy mode, if that flag is set, we emit the barriers.
>
> Since arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() are used
> for both user and kernel mappings, we need the second flag to avoid
> emitting barriers unnecessarily if only user mappings were updated.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/include/asm/pgtable.h | 81 ++++++++++++++++++++++------
> arch/arm64/include/asm/thread_info.h | 2 +
> arch/arm64/kernel/process.c | 9 ++--
> 3 files changed, 72 insertions(+), 20 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 39c331743b69..ab4a1b19e596 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -40,6 +40,63 @@
> #include <linux/sched.h>
> #include <linux/page_table_check.h>
>
> +static inline void emit_pte_barriers(void)
> +{
> + /*
> + * These barriers are emitted under certain conditions after a pte entry
> + * was modified (see e.g. __set_pte_complete()). The dsb makes the store
> + * visible to the table walker. The isb ensures that any previous
> + * speculative "invalid translation" marker that is in the CPU's
> + * pipeline gets cleared, so that any access to that address after
> + * setting the pte to valid won't cause a spurious fault. If the thread
> + * gets preempted after storing to the pgtable but before emitting these
> + * barriers, __switch_to() emits a dsb which ensures the walker gets to
> + * see the store. There is no guarantee of an isb being issued though.
> + * This is safe because it will still get issued (albeit on a
> + * potentially different CPU) when the thread starts running again,
> + * before any access to the address.
> + */
> + dsb(ishst);
> + isb();
> +}
> +
> +static inline void queue_pte_barriers(void)
> +{
> + unsigned long flags;
> +
> + VM_WARN_ON(in_interrupt());
> + flags = read_thread_flags();
> +
> + if (flags & BIT(TIF_LAZY_MMU)) {
> + /* Avoid the atomic op if already set. */
> + if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
> + set_thread_flag(TIF_LAZY_MMU_PENDING);
> + } else {
> + emit_pte_barriers();
> + }
> +}
> +
> +#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
> +static inline void arch_enter_lazy_mmu_mode(void)
> +{
> + VM_WARN_ON(in_interrupt());
> + VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));
> +
> + set_thread_flag(TIF_LAZY_MMU);
> +}
> +
> +static inline void arch_flush_lazy_mmu_mode(void)
> +{
> + if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
> + emit_pte_barriers();
> +}
> +
> +static inline void arch_leave_lazy_mmu_mode(void)
> +{
> + arch_flush_lazy_mmu_mode();
> + clear_thread_flag(TIF_LAZY_MMU);
> +}
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
>
> @@ -326,10 +383,8 @@ static inline void __set_pte_complete(pte_t pte)
> * Only if the new pte is valid and kernel, otherwise TLB maintenance
> * has the necessary barriers.
> */
> - if (pte_valid_not_user(pte)) {
> - dsb(ishst);
> - isb();
> - }
> + if (pte_valid_not_user(pte))
> + queue_pte_barriers();
> }
>
> static inline void __set_pte(pte_t *ptep, pte_t pte)
> @@ -801,10 +856,8 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>
> WRITE_ONCE(*pmdp, pmd);
>
> - if (pmd_valid(pmd)) {
> - dsb(ishst);
> - isb();
> - }
> + if (pmd_valid(pmd))
> + queue_pte_barriers();
> }
>
> static inline void pmd_clear(pmd_t *pmdp)
> @@ -869,10 +922,8 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
>
> WRITE_ONCE(*pudp, pud);
>
> - if (pud_valid(pud)) {
> - dsb(ishst);
> - isb();
> - }
> + if (pud_valid(pud))
> + queue_pte_barriers();
> }
>
> static inline void pud_clear(pud_t *pudp)
> @@ -951,8 +1002,7 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
> }
>
> WRITE_ONCE(*p4dp, p4d);
> - dsb(ishst);
> - isb();
> + queue_pte_barriers();
> }
>
> static inline void p4d_clear(p4d_t *p4dp)
> @@ -1080,8 +1130,7 @@ static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
> }
>
> WRITE_ONCE(*pgdp, pgd);
> - dsb(ishst);
> - isb();
> + queue_pte_barriers();
> }
>
> static inline void pgd_clear(pgd_t *pgdp)
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index 1114c1c3300a..1fdd74b7b831 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -82,6 +82,8 @@ void arch_setup_new_exec(void);
> #define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */
> #define TIF_KERNEL_FPSTATE 29 /* Task is in a kernel mode FPSIMD section */
> #define TIF_TSC_SIGSEGV 30 /* SIGSEGV on counter-timer access */
> +#define TIF_LAZY_MMU 31 /* Task in lazy mmu mode */
> +#define TIF_LAZY_MMU_PENDING 32 /* Ops pending for lazy mmu mode exit */
>
> #define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
> #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 42faebb7b712..45a55fe81788 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -680,10 +680,11 @@ struct task_struct *__switch_to(struct task_struct *prev,
> gcs_thread_switch(next);
>
> /*
> - * Complete any pending TLB or cache maintenance on this CPU in case
> - * the thread migrates to a different CPU.
> - * This full barrier is also required by the membarrier system
> - * call.
> + * Complete any pending TLB or cache maintenance on this CPU in case the
> + * thread migrates to a different CPU. This full barrier is also
> + * required by the membarrier system call. Additionally it makes any
> + * in-progress pgtable writes visible to the table walker; See
> + * emit_pte_barriers().
> */
> dsb(ish);
>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
* Re: [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64
2025-04-22 8:18 [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
` (10 preceding siblings ...)
2025-04-22 8:18 ` [PATCH v4 11/11] arm64/mm: Batch barriers when updating kernel mappings Ryan Roberts
@ 2025-04-23 19:18 ` Luiz Capitulino
2025-05-08 14:00 ` Ryan Roberts
2025-05-09 13:55 ` Will Deacon
13 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2025-04-23 19:18 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas, Will Deacon, Pasha Tatashin,
Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
David Hildenbrand, Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: linux-arm-kernel, linux-mm, linux-kernel
On 2025-04-22 04:18, Ryan Roberts wrote:
> Hi All,
>
> This is v4 of a series to improve performance for hugetlb and vmalloc on arm64.
> Although some of these patches are core-mm, advice from Andrew was to go via the
> arm64 tree. All patches are now acked/reviewed by relevant maintainers so I
> believe this should be good-to-go.
>
> The 2 key performance improvements are 1) enabling the use of contpte-mapped
> blocks in the vmalloc space when appropriate (which reduces TLB pressure). There
> were already hooks for this (used by powerpc) but they required some tidying and
> extending for arm64. And 2) batching up barriers when modifying the vmalloc
> address space for up to 30% reduction in time taken in vmalloc().
>
> vmalloc() performance was measured using the test_vmalloc.ko module. Tested on
> Apple M2 and Ampere Altra. Each test had loop count set to 500000 and the whole
> test was repeated 10 times.
>
> legend:
> - p: nr_pages (pages to allocate)
> - h: use_huge (vmalloc() vs vmalloc_huge())
> - (I): statistically significant improvement (95% CI does not overlap)
> - (R): statistically significant regression (95% CI does not overlap)
> - measurements are times; smaller is better
>
> +--------------------------------------------------+-------------+-------------+
> | Benchmark | | |
> | Result Class | Apple M2 | Ampere Altra |
> +==================================================+=============+=============+
> | micromm/vmalloc | | |
> | fix_align_alloc_test: p:1, h:0 (usec) | (I) -11.53% | -2.57% |
> | fix_size_alloc_test: p:1, h:0 (usec) | 2.14% | 1.79% |
> | fix_size_alloc_test: p:4, h:0 (usec) | (I) -9.93% | (I) -4.80% |
> | fix_size_alloc_test: p:16, h:0 (usec) | (I) -25.07% | (I) -14.24% |
> | fix_size_alloc_test: p:16, h:1 (usec) | (I) -14.07% | (R) 7.93% |
> | fix_size_alloc_test: p:64, h:0 (usec) | (I) -29.43% | (I) -19.30% |
> | fix_size_alloc_test: p:64, h:1 (usec) | (I) -16.39% | (R) 6.71% |
> | fix_size_alloc_test: p:256, h:0 (usec) | (I) -31.46% | (I) -20.60% |
> | fix_size_alloc_test: p:256, h:1 (usec) | (I) -16.58% | (R) 6.70% |
> | fix_size_alloc_test: p:512, h:0 (usec) | (I) -31.96% | (I) -20.04% |
> | fix_size_alloc_test: p:512, h:1 (usec) | 2.30% | 0.71% |
> | full_fit_alloc_test: p:1, h:0 (usec) | -2.94% | 1.77% |
> | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0 (usec) | -7.75% | 1.71% |
> | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0 (usec) | -9.07% | (R) 2.34% |
> | long_busy_list_alloc_test: p:1, h:0 (usec) | (I) -29.18% | (I) -17.91% |
> | pcpu_alloc_test: p:1, h:0 (usec) | -14.71% | -3.14% |
> | random_size_align_alloc_test: p:1, h:0 (usec) | (I) -11.08% | (I) -4.62% |
> | random_size_alloc_test: p:1, h:0 (usec) | (I) -30.25% | (I) -17.95% |
> | vm_map_ram_test: p:1, h:0 (usec) | 5.06% | (R) 6.63% |
> +--------------------------------------------------+-------------+-------------+
>
> So there are some nice improvements but also some regressions to explain:
>
> fix_size_alloc_test with h:1 and p:16,64,256 regress by ~6% on Altra. The
> regression is actually introduced by enabling contpte-mapped 64K blocks in these
> tests, and that regression is reduced (from about 8% if memory serves) by doing
> the barrier batching. I don't have a definite conclusion on the root cause, but
> I've ruled out the differences in the mapping paths in vmalloc. I strongly
> believe this is likely due to the difference in the allocation path; 64K blocks
> are not cached per-cpu so we have to go all the way to the buddy. I'm not sure
> why this doesn't show up on M2 though. Regardless, I'm going to assert that it's
> better to choose 16x reduction in TLB pressure vs 6% on the vmalloc allocation
> call duration.
I ran a couple of basic functional tests for HugeTLB with 1G, 2M, 32M and 64k
pages on Ampere Mt. Jade and Jetson AGX Orin systems and didn't
hit any issues, so:
Tested-by: Luiz Capitulino <luizcap@redhat.com>
>
> Changes since v3 [3]
> ====================
> - Applied R-bs (thanks all!)
> - Renamed set_ptes_anysz() -> __set_ptes_anysz() (Catalin)
> - Renamed ptep_get_and_clear_anysz() -> __ptep_get_and_clear_anysz() (Catalin)
> - Only set TIF_LAZY_MMU_PENDING if not already set to avoid atomic ops (Catalin)
> - Fix comment typos (Anshuman)
> - Fix build warnings when PMD is folded (buildbot)
> - Reverse xmas tree for variables in __page_table_check_p[mu]ds_set() (Pasha)
>
> Changes since v2 [2]
> ====================
> - Removed the new arch_update_kernel_mappings_[begin|end]() API
> - Switched to arch_[enter|leave]_lazy_mmu_mode() instead for barrier batching
> - Removed clean up to avoid barriers for invalid or user mappings
>
> Changes since v1 [1]
> ====================
> - Split out the fixes into their own series
> - Added Rbs from Anshuman - Thanks!
> - Added patch to clean up the methods by which huge_pte size is determined
> - Added "#ifndef __PAGETABLE_PMD_FOLDED" around PUD_SIZE in
> flush_hugetlb_tlb_range()
> - Renamed ___set_ptes() -> set_ptes_anysz()
> - Renamed ___ptep_get_and_clear() -> ptep_get_and_clear_anysz()
> - Fixed typos in commit logs
> - Refactored pXd_valid_not_user() for better reuse
> - Removed TIF_KMAP_UPDATE_PENDING after concluding that a single flag is sufficient
> - Concluded the extra isb() in __switch_to() is not required
> - Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings
>
> Applies on top of v6.15-rc3. All mm selftests run and no regressions observed.
>
> [1] https://lore.kernel.org/all/20250205151003.88959-1-ryan.roberts@arm.com/
> [2] https://lore.kernel.org/all/20250217140809.1702789-1-ryan.roberts@arm.com/
> [3] https://lore.kernel.org/all/20250304150444.3788920-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
> Ryan Roberts (11):
> arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
> arm64: hugetlb: Refine tlb maintenance scope
> mm/page_table_check: Batch-check pmds/puds just like ptes
> arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
> arm64: hugetlb: Use __set_ptes_anysz() and
> __ptep_get_and_clear_anysz()
> arm64/mm: Hoist barriers out of set_ptes_anysz() loop
> mm/vmalloc: Warn on improper use of vunmap_range()
> mm/vmalloc: Gracefully unmap huge ptes
> arm64/mm: Support huge pte-mapped pages in vmap
> mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
> arm64/mm: Batch barriers when updating kernel mappings
>
> arch/arm64/include/asm/hugetlb.h | 29 ++--
> arch/arm64/include/asm/pgtable.h | 209 +++++++++++++++++++--------
> arch/arm64/include/asm/thread_info.h | 2 +
> arch/arm64/include/asm/vmalloc.h | 45 ++++++
> arch/arm64/kernel/process.c | 9 +-
> arch/arm64/mm/hugetlbpage.c | 73 ++++------
> include/linux/page_table_check.h | 30 ++--
> include/linux/vmalloc.h | 8 +
> mm/page_table_check.c | 34 +++--
> mm/vmalloc.c | 40 ++++-
> 10 files changed, 329 insertions(+), 150 deletions(-)
>
> --
> 2.43.0
>
>
* Re: [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64
2025-04-22 8:18 [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
` (11 preceding siblings ...)
2025-04-23 19:18 ` [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Luiz Capitulino
@ 2025-05-08 14:00 ` Ryan Roberts
2025-05-09 13:55 ` Will Deacon
13 siblings, 0 replies; 18+ messages in thread
From: Ryan Roberts @ 2025-05-08 14:00 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
Cc: linux-arm-kernel, linux-mm, linux-kernel
Hi Will,
Just a bump on this; I believe it's been reviewed by all the relevant folks (and
has the R-b tags). I was hoping to get this into v6.16, but I'm getting nervous
that time is running out to soak it in linux-next. Any chance you could consider
pulling?
Thanks,
Ryan
On 22/04/2025 09:18, Ryan Roberts wrote:
> Hi All,
>
> This is v4 of a series to improve performance for hugetlb and vmalloc on arm64.
> Although some of these patches are core-mm, advice from Andrew was to go via the
> arm64 tree. All patches are now acked/reviewed by relevant maintainers so I
> believe this should be good-to-go.
>
> The 2 key performance improvements are 1) enabling the use of contpte-mapped
> blocks in the vmalloc space when appropriate (which reduces TLB pressure). There
> were already hooks for this (used by powerpc) but they required some tidying and
> extending for arm64. And 2) batching up barriers when modifying the vmalloc
> address space for up to 30% reduction in time taken in vmalloc().
>
> vmalloc() performance was measured using the test_vmalloc.ko module. Tested on
> Apple M2 and Ampere Altra. Each test had loop count set to 500000 and the whole
> test was repeated 10 times.
>
> legend:
> - p: nr_pages (pages to allocate)
> - h: use_huge (vmalloc() vs vmalloc_huge())
> - (I): statistically significant improvement (95% CI does not overlap)
> - (R): statistically significant regression (95% CI does not overlap)
> - measurements are times; smaller is better
>
> +--------------------------------------------------+-------------+-------------+
> | Benchmark | | |
> | Result Class | Apple M2 | Ampere Altra |
> +==================================================+=============+=============+
> | micromm/vmalloc | | |
> | fix_align_alloc_test: p:1, h:0 (usec) | (I) -11.53% | -2.57% |
> | fix_size_alloc_test: p:1, h:0 (usec) | 2.14% | 1.79% |
> | fix_size_alloc_test: p:4, h:0 (usec) | (I) -9.93% | (I) -4.80% |
> | fix_size_alloc_test: p:16, h:0 (usec) | (I) -25.07% | (I) -14.24% |
> | fix_size_alloc_test: p:16, h:1 (usec) | (I) -14.07% | (R) 7.93% |
> | fix_size_alloc_test: p:64, h:0 (usec) | (I) -29.43% | (I) -19.30% |
> | fix_size_alloc_test: p:64, h:1 (usec) | (I) -16.39% | (R) 6.71% |
> | fix_size_alloc_test: p:256, h:0 (usec) | (I) -31.46% | (I) -20.60% |
> | fix_size_alloc_test: p:256, h:1 (usec) | (I) -16.58% | (R) 6.70% |
> | fix_size_alloc_test: p:512, h:0 (usec) | (I) -31.96% | (I) -20.04% |
> | fix_size_alloc_test: p:512, h:1 (usec) | 2.30% | 0.71% |
> | full_fit_alloc_test: p:1, h:0 (usec) | -2.94% | 1.77% |
> | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0 (usec) | -7.75% | 1.71% |
> | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0 (usec) | -9.07% | (R) 2.34% |
> | long_busy_list_alloc_test: p:1, h:0 (usec) | (I) -29.18% | (I) -17.91% |
> | pcpu_alloc_test: p:1, h:0 (usec) | -14.71% | -3.14% |
> | random_size_align_alloc_test: p:1, h:0 (usec) | (I) -11.08% | (I) -4.62% |
> | random_size_alloc_test: p:1, h:0 (usec) | (I) -30.25% | (I) -17.95% |
> | vm_map_ram_test: p:1, h:0 (usec) | 5.06% | (R) 6.63% |
> +--------------------------------------------------+-------------+-------------+
>
> So there are some nice improvements but also some regressions to explain:
>
> fix_size_alloc_test with h:1 and p:16,64,256 regress by ~6% on Altra. The
> regression is actually introduced by enabling contpte-mapped 64K blocks in these
> tests, and that regression is reduced (from about 8% if memory serves) by doing
> the barrier batching. I don't have a definite conclusion on the root cause, but
> I've ruled out the differences in the mapping paths in vmalloc. I strongly
> believe this is likely due to the difference in the allocation path; 64K blocks
> are not cached per-cpu so we have to go all the way to the buddy. I'm not sure
> why this doesn't show up on M2 though. Regardless, I'm going to assert that it's
> better to choose 16x reduction in TLB pressure vs 6% on the vmalloc allocation
> call duration.
>
> Changes since v3 [3]
> ====================
> - Applied R-bs (thanks all!)
> - Renamed set_ptes_anysz() -> __set_ptes_anysz() (Catalin)
> - Renamed ptep_get_and_clear_anysz() -> __ptep_get_and_clear_anysz() (Catalin)
> - Only set TIF_LAZY_MMU_PENDING if not already set to avoid atomic ops (Catalin)
> - Fix comment typos (Anshuman)
> - Fix build warnings when PMD is folded (buildbot)
> - Reverse xmas tree for variables in __page_table_check_p[mu]ds_set() (Pasha)
>
> Changes since v2 [2]
> ====================
> - Removed the new arch_update_kernel_mappings_[begin|end]() API
> - Switched to arch_[enter|leave]_lazy_mmu_mode() instead for barrier batching
> - Removed clean up to avoid barriers for invalid or user mappings
>
> Changes since v1 [1]
> ====================
> - Split out the fixes into their own series
> - Added Rbs from Anshuman - Thanks!
> - Added patch to clean up the methods by which huge_pte size is determined
> - Added "#ifndef __PAGETABLE_PMD_FOLDED" around PUD_SIZE in
> flush_hugetlb_tlb_range()
> - Renamed ___set_ptes() -> set_ptes_anysz()
> - Renamed ___ptep_get_and_clear() -> ptep_get_and_clear_anysz()
> - Fixed typos in commit logs
> - Refactored pXd_valid_not_user() for better reuse
> - Removed TIF_KMAP_UPDATE_PENDING after concluding that a single flag is sufficient
> - Concluded the extra isb() in __switch_to() is not required
> - Only call arch_update_kernel_mappings_[begin|end]() for kernel mappings
>
> Applies on top of v6.15-rc3. All mm selftests run and no regressions observed.
>
> [1] https://lore.kernel.org/all/20250205151003.88959-1-ryan.roberts@arm.com/
> [2] https://lore.kernel.org/all/20250217140809.1702789-1-ryan.roberts@arm.com/
> [3] https://lore.kernel.org/all/20250304150444.3788920-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
> Ryan Roberts (11):
> arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
> arm64: hugetlb: Refine tlb maintenance scope
> mm/page_table_check: Batch-check pmds/puds just like ptes
> arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
> arm64: hugetlb: Use __set_ptes_anysz() and
> __ptep_get_and_clear_anysz()
> arm64/mm: Hoist barriers out of set_ptes_anysz() loop
> mm/vmalloc: Warn on improper use of vunmap_range()
> mm/vmalloc: Gracefully unmap huge ptes
> arm64/mm: Support huge pte-mapped pages in vmap
> mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
> arm64/mm: Batch barriers when updating kernel mappings
>
> arch/arm64/include/asm/hugetlb.h | 29 ++--
> arch/arm64/include/asm/pgtable.h | 209 +++++++++++++++++++--------
> arch/arm64/include/asm/thread_info.h | 2 +
> arch/arm64/include/asm/vmalloc.h | 45 ++++++
> arch/arm64/kernel/process.c | 9 +-
> arch/arm64/mm/hugetlbpage.c | 73 ++++------
> include/linux/page_table_check.h | 30 ++--
> include/linux/vmalloc.h | 8 +
> mm/page_table_check.c | 34 +++--
> mm/vmalloc.c | 40 ++++-
> 10 files changed, 329 insertions(+), 150 deletions(-)
>
> --
> 2.43.0
>
* Re: [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64
2025-04-22 8:18 [PATCH v4 00/11] Perf improvements for hugetlb and vmalloc on arm64 Ryan Roberts
` (12 preceding siblings ...)
2025-05-08 14:00 ` Ryan Roberts
@ 2025-05-09 13:55 ` Will Deacon
13 siblings, 0 replies; 18+ messages in thread
From: Will Deacon @ 2025-05-09 13:55 UTC (permalink / raw)
To: Catalin Marinas, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
Christoph Hellwig, David Hildenbrand, Matthew Wilcox (Oracle),
Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
Ryan Roberts
Cc: kernel-team, Will Deacon, linux-arm-kernel, linux-mm, linux-kernel
On Tue, 22 Apr 2025 09:18:08 +0100, Ryan Roberts wrote:
> This is v4 of a series to improve performance for hugetlb and vmalloc on arm64.
> Although some of these patches are core-mm, advice from Andrew was to go via the
> arm64 tree. All patches are now acked/reviewed by relevant maintainers so I
> believe this should be good-to-go.
>
> The 2 key performance improvements are 1) enabling the use of contpte-mapped
> blocks in the vmalloc space when appropriate (which reduces TLB pressure). There
> were already hooks for this (used by powerpc) but they required some tidying and
> extending for arm64. And 2) batching up barriers when modifying the vmalloc
> address space for up to 30% reduction in time taken in vmalloc().
>
> [...]
Sorry for the delay in getting to this series, it all looks good.
Applied to arm64 (for-next/mm), thanks!
[01/11] arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
https://git.kernel.org/arm64/c/29cb80519689
[02/11] arm64: hugetlb: Refine tlb maintenance scope
https://git.kernel.org/arm64/c/5b3f8917644e
[03/11] mm/page_table_check: Batch-check pmds/puds just like ptes
https://git.kernel.org/arm64/c/91e40668e70a
[04/11] arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
https://git.kernel.org/arm64/c/ef493d234362
[05/11] arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
https://git.kernel.org/arm64/c/a899b7d0673c
[06/11] arm64/mm: Hoist barriers out of set_ptes_anysz() loop
https://git.kernel.org/arm64/c/f89b399e8d6e
[07/11] mm/vmalloc: Warn on improper use of vunmap_range()
https://git.kernel.org/arm64/c/61ef8ddaa35e
[08/11] mm/vmalloc: Gracefully unmap huge ptes
https://git.kernel.org/arm64/c/2fba13371fe8
[09/11] arm64/mm: Support huge pte-mapped pages in vmap
https://git.kernel.org/arm64/c/06fc959fcff7
[10/11] mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
https://git.kernel.org/arm64/c/44562c71e2cf
[11/11] arm64/mm: Batch barriers when updating kernel mappings
https://git.kernel.org/arm64/c/5fdd05efa1cd
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev