* [PATCH v8 1/5] arm64: Enable permission change on arm64 kernel block mappings
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
@ 2025-09-17 19:02 ` Yang Shi
2025-09-17 19:02 ` [PATCH v8 2/5] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
` (4 subsequent siblings)
5 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-17 19:02 UTC (permalink / raw)
To: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel, linux-mm
From: Dev Jain <dev.jain@arm.com>
This patch paves the way to enabling huge mappings in vmalloc space and
linear map space by default on arm64. For this we must ensure that we
can handle any permission games on the kernel (init_mm) pagetable.
Previously, __change_memory_common() used apply_to_page_range(), which
does not support changing permissions for block mappings. We move away
from this by using the pagewalk API, similar to what riscv does right
now. It is the responsibility of the caller to ensure that the range
over which permissions are being changed falls on leaf mapping
boundaries. For systems with BBML2, this will be handled in future
patches by dynamically splitting the mappings when required.
Unlike apply_to_page_range(), the pagewalk API currently requires
init_mm.mmap_lock to be held. To avoid making the mmap_lock an
unnecessary bottleneck for our use case, this patch extends the generic
API so it can be used locklessly, retaining the existing behaviour for
changing permissions. Apart from this reason, it is noted at [1] that
KFENCE can manipulate kernel pgtable entries during softirqs. It does
this by calling set_memory_valid() -> __change_memory_common(). This
being a non-sleepable context, we cannot take the init_mm mmap lock.
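As a caller-side illustration of the lockless API, here is a minimal
sketch (the callback and helper names are hypothetical, not taken from
this patch; only walk_kernel_page_table_range_lockless() and the arm64
pte accessors are real):

static int fix_pte(pte_t *ptep, unsigned long addr, unsigned long next,
		   struct mm_walk *walk)
{
	/* No VMA exists for this range and no mmap_lock is held here. */
	__set_pte(ptep, pte_mkdirty(__ptep_get(ptep)));
	return 0;
}

static const struct mm_walk_ops fix_ops = {
	.pte_entry = fix_pte,
};

static int make_range_dirty(unsigned long start, unsigned long end)
{
	/*
	 * The caller must have exclusive control over [start, end), e.g.
	 * a vmalloc object it owns, since no lock is taken on its behalf.
	 */
	return walk_kernel_page_table_range_lockless(start, end, &fix_ops,
						     NULL, NULL);
}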
Add comments to highlight the conditions under which we can use the
lockless variant - no underlying VMA, and the user having exclusive
control over the range, thus guaranteeing no concurrent access.
We require that the start and end of a given range do not partially
overlap block mappings or cont mappings. Return -EINVAL in case a
partial block mapping is detected at any of the PGD/P4D/PUD/PMD levels;
add a corresponding comment in update_range_prot() to warn that
eliminating such a condition is the responsibility of the caller.
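Concretely, the check behind the WARN_ON_ONCE() in the PUD/PMD
callbacks below reduces to the following condition (PMD-level sketch):

	/*
	 * A PMD leaf covers exactly PMD_SIZE; if the walker clamped
	 * next to the range boundary, that boundary cuts through the
	 * middle of the block mapping.
	 */
	if (pmd_sect(val) && (next - addr) != PMD_SIZE)
		return -EINVAL;	/* the caller should have split this */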
Note that the pte-level callback may change permissions for a whole
contpte block, and that this will be done one pte at a time, as opposed
to the atomic operation used for block mappings. This is fine, as any
access will decode either the old or the new permission until the TLBI.
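In other words, the net effect over one contpte block is equivalent to
the loop below, where pte_modify_perms() is a stand-in for applying the
set/clear masks (it is not a real helper):

	for (i = 0; i < CONT_PTES; i++, ptep++)
		__set_pte(ptep, pte_modify_perms(__ptep_get(ptep)));
	/*
	 * A concurrent access may observe a mix of old and new entries
	 * until the subsequent TLBI; both encodings are valid
	 * translations of the same PA, so this is benign.
	 */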
apply_to_page_range() currently performs all pte level callbacks while
in lazy mmu mode. Since arm64 can optimize performance by batching
barriers when modifying kernel pgtables in lazy mmu mode, we would like
to continue to benefit from this optimisation. Unfortunately
walk_kernel_page_table_range() does not use lazy mmu mode. However,
since the pagewalk framework is not allocating any memory, we can safely
bracket the whole operation inside lazy mmu mode ourselves. Therefore,
wrap the call to walk_kernel_page_table_range() with the lazy MMU
helpers.
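Schematically, the bracketing described above looks like this (a
distilled sketch of update_range_prot() in the hunk below):

	arch_enter_lazy_mmu_mode();	/* defer barriers while batching */
	ret = walk_kernel_page_table_range_lockless(start, start + size,
						    &pageattr_ops, NULL,
						    &data);
	arch_leave_lazy_mmu_mode();	/* emit the batched barriers once */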
Link: https://lore.kernel.org/linux-arm-kernel/89d0ad18-4772-4d8f-ae8a-7c48d26a927e@arm.com/ [1]
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Yang Shi <yshi@os.amperecomputing.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/mm/pageattr.c | 119 +++++++++++++++++++++++++++++----------
include/linux/pagewalk.h | 3 +
mm/pagewalk.c | 36 ++++++++----
3 files changed, 115 insertions(+), 43 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 667aff1efe49..c0648764c403 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
#include <linux/mem_encrypt.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
#include <asm/cacheflush.h>
#include <asm/pgtable-prot.h>
@@ -20,6 +21,65 @@ struct page_change_data {
pgprot_t clear_mask;
};
+static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
+{
+ struct page_change_data *masks = walk->private;
+
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+ return val;
+}
+
+static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pud_t val = pudp_get(pud);
+
+ if (pud_sect(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+ set_pud(pud, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pmd_t val = pmdp_get(pmd);
+
+ if (pmd_sect(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+ set_pmd(pmd, val);
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return 0;
+}
+
+static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
+ unsigned long next, struct mm_walk *walk)
+{
+ pte_t val = __ptep_get(pte);
+
+ val = __pte(set_pageattr_masks(pte_val(val), walk));
+ __set_pte(pte, val);
+
+ return 0;
+}
+
+static const struct mm_walk_ops pageattr_ops = {
+ .pud_entry = pageattr_pud_entry,
+ .pmd_entry = pageattr_pmd_entry,
+ .pte_entry = pageattr_pte_entry,
+};
+
bool rodata_full __ro_after_init = true;
bool can_set_direct_map(void)
@@ -37,32 +97,35 @@ bool can_set_direct_map(void)
arm64_kfence_can_set_direct_map() || is_realm_world();
}
-static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
+static int update_range_prot(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data *cdata = data;
- pte_t pte = __ptep_get(ptep);
+ struct page_change_data data;
+ int ret;
- pte = clear_pte_bit(pte, cdata->clear_mask);
- pte = set_pte_bit(pte, cdata->set_mask);
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
- __set_pte(ptep, pte);
- return 0;
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The caller must ensure that the range we are operating on does not
+ * partially overlap a block mapping, or a cont mapping. Any such case
+ * must be eliminated by splitting the mapping.
+ */
+ ret = walk_kernel_page_table_range_lockless(start, start + size,
+ &pageattr_ops, NULL, &data);
+ arch_leave_lazy_mmu_mode();
+
+ return ret;
}
-/*
- * This function assumes that the range is mapped with PAGE_SIZE pages.
- */
static int __change_memory_common(unsigned long start, unsigned long size,
- pgprot_t set_mask, pgprot_t clear_mask)
+ pgprot_t set_mask, pgprot_t clear_mask)
{
- struct page_change_data data;
int ret;
- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
+ ret = update_range_prot(start, size, set_mask, clear_mask);
/*
* If the memory is being made valid without changing any other bits
@@ -174,32 +237,26 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
int set_direct_map_invalid_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(0),
- .clear_mask = __pgprot(PTE_VALID),
- };
+ pgprot_t clear_mask = __pgprot(PTE_VALID);
+ pgprot_t set_mask = __pgprot(0);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
int set_direct_map_default_noflush(struct page *page)
{
- struct page_change_data data = {
- .set_mask = __pgprot(PTE_VALID | PTE_WRITE),
- .clear_mask = __pgprot(PTE_RDONLY),
- };
+ pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+ pgprot_t clear_mask = __pgprot(PTE_RDONLY);
if (!can_set_direct_map())
return 0;
- return apply_to_page_range(&init_mm,
- (unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ return update_range_prot((unsigned long)page_address(page),
+ PAGE_SIZE, set_mask, clear_mask);
}
static int __set_memory_enc_dec(unsigned long addr,
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 682472c15495..88e18615dd72 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -134,6 +134,9 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
pgd_t *pgd, void *private);
+int walk_kernel_page_table_range_lockless(unsigned long start,
+ unsigned long end, const struct mm_walk_ops *ops,
+ pgd_t *pgd, void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 648038247a8d..936689d8bcac 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -606,10 +606,32 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
int walk_kernel_page_table_range(unsigned long start, unsigned long end,
const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
{
- struct mm_struct *mm = &init_mm;
+ /*
+ * Kernel intermediate page tables are usually not freed, so the mmap
+ * read lock is sufficient. But there are some exceptions.
+ * E.g. memory hot-remove. In which case, the mmap lock is insufficient
+ * to prevent the intermediate kernel pages tables belonging to the
+ * specified address range from being freed. The caller should take
+ * other actions to prevent this race.
+ */
+ mmap_assert_locked(&init_mm);
+
+ return walk_kernel_page_table_range_lockless(start, end, ops, pgd,
+ private);
+}
+
+/*
+ * Use this function to walk the kernel page tables locklessly. It should be
+ * guaranteed that the caller has exclusive access over the range they are
+ * operating on - that there should be no concurrent access, for example,
+ * changing permissions for vmalloc objects.
+ */
+int walk_kernel_page_table_range_lockless(unsigned long start, unsigned long end,
+ const struct mm_walk_ops *ops, pgd_t *pgd, void *private)
+{
struct mm_walk walk = {
.ops = ops,
- .mm = mm,
+ .mm = &init_mm,
.pgd = pgd,
.private = private,
.no_vma = true
@@ -620,16 +642,6 @@ int walk_kernel_page_table_range(unsigned long start, unsigned long end,
if (!check_ops_valid(ops))
return -EINVAL;
- /*
- * Kernel intermediate page tables are usually not freed, so the mmap
- * read lock is sufficient. But there are some exceptions.
- * E.g. memory hot-remove. In which case, the mmap lock is insufficient
- * to prevent the intermediate kernel pages tables belonging to the
- * specified address range from being freed. The caller should take
- * other actions to prevent this race.
- */
- mmap_assert_locked(mm);
-
return walk_pgd_range(start, end, &walk);
}
--
2.47.0
* [PATCH v8 2/5] arm64: cpufeature: add AmpereOne to BBML2 allow list
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-09-17 19:02 ` [PATCH v8 1/5] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
@ 2025-09-17 19:02 ` Yang Shi
2025-09-17 19:02 ` [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full Yang Shi
` (3 subsequent siblings)
5 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-17 19:02 UTC (permalink / raw)
To: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel, linux-mm
AmpereOne supports BBML2 without conflict abort, so add it to the allow
list.
Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/kernel/cpufeature.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ef269a5a37e1..ba07eeff2a8d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2235,6 +2235,8 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
static const struct midr_range supports_bbml2_noabort_list[] = {
MIDR_REV_RANGE(MIDR_CORTEX_X4, 0, 3, 0xf),
MIDR_REV_RANGE(MIDR_NEOVERSE_V3, 0, 2, 0xf),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+ MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
{}
};
--
2.47.0
* [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
2025-09-17 19:02 ` [PATCH v8 1/5] arm64: Enable permission change on arm64 kernel block mappings Yang Shi
2025-09-17 19:02 ` [PATCH v8 2/5] arm64: cpufeature: add AmpereOne to BBML2 allow list Yang Shi
@ 2025-09-17 19:02 ` Yang Shi
2025-11-01 16:14 ` Guenter Roeck
2025-09-17 19:02 ` [PATCH v8 4/5] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Yang Shi
` (2 subsequent siblings)
5 siblings, 1 reply; 32+ messages in thread
From: Yang Shi @ 2025-09-17 19:02 UTC (permalink / raw)
To: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel, linux-mm
When rodata=full is specified, the kernel linear mapping has to be
mapped at PTE level, since large block mappings can't be split due to
the break-before-make rule on ARM64.
This resulted in a couple of problems:
- performance degradation
- more TLB pressure
- memory waste for kernel page tables
With FEAT_BBM level 2 support, splitting a large block mapping into
smaller ones no longer requires making the page table entry invalid.
This allows the kernel to split large block mappings on the fly.
Add kernel page table split support and use large block mappings by
default for rodata=full when FEAT_BBM level 2 is supported. When
changing permissions on the kernel linear mapping, the page table will
be split to a smaller size.
Machines without FEAT_BBM level 2 will fall back to a PTE-mapped kernel
linear mapping when rodata=full.
With this we saw a significant performance boost with some benchmarks
and much less memory consumption on my AmpereOne machine (192 cores,
1P) with 256GB memory.
* Memory use after boot
Before:
MemTotal: 258988984 kB
MemFree: 254821700 kB
After:
MemTotal: 259505132 kB
MemFree: 255410264 kB
Around 500MB more memory is free to use. The larger the machine, the
more memory is saved.
* Memcached
We saw performance degradation when running the Memcached benchmark
with rodata=full vs rodata=on. Our profiling pointed to kernel TLB
pressure. With this patchset, ops/sec increased by around 3.5% and P99
latency was reduced by around 9.6%.
The gain mainly came from reduced kernel TLB misses. The kernel TLB
MPKI is reduced by 28.5%.
The benchmark data is now on par with rodata=on too.
* Disk encryption (dm-crypt) benchmark
Ran the fio benchmark with the below command on a 128G ramdisk (ext4)
with disk encryption (by dm-crypt).
fio --directory=/data --random_generator=lfsr --norandommap \
--randrepeat 1 --status-interval=999 --rw=write --bs=4k --loops=1 \
--ioengine=sync --iodepth=1 --numjobs=1 --fsync_on_close=1 \
--group_reporting --thread --name=iops-test-job --eta-newline=1 \
--size 100G
The IOPS is increased by 90% - 150% (the variance is high, but the
worst number from the good case is around 90% more than the best number
from the bad case). The bandwidth is increased and the avg clat is
reduced proportionally.
* Sequential file read
Read a 100G file sequentially on XFS (xfs_io read with page cache
populated). The bandwidth is increased by 150%.
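One detail of the splitting code below deserves a worked example: for
the common single-page case, split_kernel_leaf_mapping() splits only at
the less aligned of the two endpoints. A standalone illustration of
that alignment comparison (hedged: user-space C with invented
addresses; align_bits() merely mimics the kernel's __ffs()):

#include <stdio.h>

/*
 * For a nonzero address, the number of trailing zero bits is its
 * natural alignment; fewer trailing zeroes means less aligned.
 */
static int align_bits(unsigned long addr)
{
	return __builtin_ctzl(addr);
}

int main(void)
{
	/* One 4K page: start is 4K-aligned, end is 8K-aligned. */
	unsigned long start = 0xffff000041201000UL;
	unsigned long end   = 0xffff000041202000UL;

	/*
	 * Splitting the less aligned endpoint down to ptes necessarily
	 * leaves a leaf boundary at the other endpoint too, since both
	 * fall within the same contpte block.
	 */
	printf("split at %#lx\n",
	       align_bits(start) < align_bits(end) ? start : end);
	return 0;
}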
Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/include/asm/cpufeature.h | 2 +
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/pgtable.h | 5 +
arch/arm64/kernel/cpufeature.c | 7 +-
arch/arm64/mm/mmu.c | 264 +++++++++++++++++++++++++++-
arch/arm64/mm/pageattr.c | 4 +
6 files changed, 277 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bf13d676aae2..e223cbf350e4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -871,6 +871,8 @@ static inline bool system_supports_pmuv3(void)
return cpus_have_final_cap(ARM64_HAS_PMUV3);
}
+bool cpu_supports_bbml2_noabort(void);
+
static inline bool system_supports_bbml2_noabort(void)
{
return alternative_has_cap_unlikely(ARM64_HAS_BBML2_NOABORT);
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 49f1a810df16..a7cc95d97ceb 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -78,6 +78,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
pgprot_t prot, bool page_mappings_only);
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
+extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index abd2dee416b3..aa89c2e67ebc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -371,6 +371,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd)
return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
}
+static inline pmd_t pmd_mknoncont(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~PMD_SECT_CONT);
+}
+
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
static inline int pte_uffd_wp(pte_t pte)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ba07eeff2a8d..7dc092e33fcc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2218,7 +2218,7 @@ static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}
-static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+bool cpu_supports_bbml2_noabort(void)
{
/*
* We want to allow usage of BBML2 in as wide a range of kernel contexts
@@ -2252,6 +2252,11 @@ static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int sco
return true;
}
+static bool has_bbml2_noabort(const struct arm64_cpu_capabilities *caps, int scope)
+{
+ return cpu_supports_bbml2_noabort();
+}
+
#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 183801520740..fa09dd120626 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -474,6 +474,8 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
int flags);
#endif
+#define INVALID_PHYS_ADDR (-1ULL)
+
static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
enum pgtable_type pgtable_type)
{
@@ -481,7 +483,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
phys_addr_t pa;
- BUG_ON(!ptdesc);
+ if (!ptdesc)
+ return INVALID_PHYS_ADDR;
+
pa = page_to_phys(ptdesc_page(ptdesc));
switch (pgtable_type) {
@@ -502,16 +506,256 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
return pa;
}
+static phys_addr_t
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+{
+ return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+}
+
static phys_addr_t __maybe_unused
pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
}
static phys_addr_t
pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
- return __pgd_pgtable_alloc(NULL, pgtable_type);
+ phys_addr_t pa;
+
+ pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ BUG_ON(pa == INVALID_PHYS_ADDR);
+ return pa;
+}
+
+static void split_contpte(pte_t *ptep)
+{
+ int i;
+
+ ptep = PTR_ALIGN_DOWN(ptep, sizeof(*ptep) * CONT_PTES);
+ for (i = 0; i < CONT_PTES; i++, ptep++)
+ __set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
+}
+
+static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+{
+ pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
+ unsigned long pfn = pmd_pfn(pmd);
+ pgprot_t prot = pmd_pgprot(pmd);
+ phys_addr_t pte_phys;
+ pte_t *ptep;
+ int i;
+
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ if (pte_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ ptep = (pte_t *)phys_to_virt(pte_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
+ __set_pte(ptep, pfn_pte(pfn, prot));
+
+ /*
+ * Ensure the pte entries are visible to the table walker by the time
+ * the pmd entry that points to the ptes is visible.
+ */
+ dsb(ishst);
+ __pmd_populate(pmdp, pte_phys, tableprot);
+
+ return 0;
+}
+
+static void split_contpmd(pmd_t *pmdp)
+{
+ int i;
+
+ pmdp = PTR_ALIGN_DOWN(pmdp, sizeof(*pmdp) * CONT_PMDS);
+ for (i = 0; i < CONT_PMDS; i++, pmdp++)
+ set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
+}
+
+static int split_pud(pud_t *pudp, pud_t pud)
+{
+ pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
+ unsigned int step = PMD_SIZE >> PAGE_SHIFT;
+ unsigned long pfn = pud_pfn(pud);
+ pgprot_t prot = pud_pgprot(pud);
+ phys_addr_t pmd_phys;
+ pmd_t *pmdp;
+ int i;
+
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ if (pmd_phys == INVALID_PHYS_ADDR)
+ return -ENOMEM;
+ pmdp = (pmd_t *)phys_to_virt(pmd_phys);
+
+ if (pgprot_val(prot) & PMD_SECT_PXN)
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
+ set_pmd(pmdp, pfn_pmd(pfn, prot));
+
+ /*
+ * Ensure the pmd entries are visible to the table walker by the time
+ * the pud entry that points to the pmds is visible.
+ */
+ dsb(ishst);
+ __pud_populate(pudp, pmd_phys, tableprot);
+
+ return 0;
+}
+
+static int split_kernel_leaf_mapping_locked(unsigned long addr)
+{
+ pgd_t *pgdp, pgd;
+ p4d_t *p4dp, p4d;
+ pud_t *pudp, pud;
+ pmd_t *pmdp, pmd;
+ pte_t *ptep, pte;
+ int ret = 0;
+
+ /*
+ * PGD: If addr is PGD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, PGDIR_SIZE) == addr)
+ goto out;
+ pgdp = pgd_offset_k(addr);
+ pgd = pgdp_get(pgdp);
+ if (!pgd_present(pgd))
+ goto out;
+
+ /*
+ * P4D: If addr is P4D aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split.
+ */
+ if (ALIGN_DOWN(addr, P4D_SIZE) == addr)
+ goto out;
+ p4dp = p4d_offset(pgdp, addr);
+ p4d = p4dp_get(p4dp);
+ if (!p4d_present(p4d))
+ goto out;
+
+ /*
+ * PUD: If addr is PUD aligned then addr already describes a leaf
+ * boundary. If not present then there is nothing to split. Otherwise,
+ * if we have a pud leaf, split to contpmd.
+ */
+ if (ALIGN_DOWN(addr, PUD_SIZE) == addr)
+ goto out;
+ pudp = pud_offset(p4dp, addr);
+ pud = pudp_get(pudp);
+ if (!pud_present(pud))
+ goto out;
+ if (pud_leaf(pud)) {
+ ret = split_pud(pudp, pud);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPMD: If addr is CONTPMD aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpmd leaf, split to pmd.
+ */
+ if (ALIGN_DOWN(addr, CONT_PMD_SIZE) == addr)
+ goto out;
+ pmdp = pmd_offset(pudp, addr);
+ pmd = pmdp_get(pmdp);
+ if (!pmd_present(pmd))
+ goto out;
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ /*
+ * PMD: If addr is PMD aligned then addr already describes a
+ * leaf boundary. Otherwise, split to contpte.
+ */
+ if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
+ goto out;
+ ret = split_pmd(pmdp, pmd);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * CONTPTE: If addr is CONTPTE aligned then addr already describes a
+ * leaf boundary. If not present then there is nothing to split.
+ * Otherwise, if we have a contpte leaf, split to pte.
+ */
+ if (ALIGN_DOWN(addr, CONT_PTE_SIZE) == addr)
+ goto out;
+ ptep = pte_offset_kernel(pmdp, addr);
+ pte = __ptep_get(ptep);
+ if (!pte_present(pte))
+ goto out;
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+out:
+ return ret;
+}
+
+static DEFINE_MUTEX(pgtable_split_lock);
+
+int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
+{
+ int ret;
+
+ /*
+ * !BBML2_NOABORT systems should not be trying to change permissions on
+ * anything that is not pte-mapped in the first place. Just return early
+ * and let the permission change code raise a warning if not already
+ * pte-mapped.
+ */
+ if (!system_supports_bbml2_noabort())
+ return 0;
+
+ /*
+ * Ensure start and end are at least page-aligned since this is the
+ * finest granularity we can split to.
+ */
+ if (start != PAGE_ALIGN(start) || end != PAGE_ALIGN(end))
+ return -EINVAL;
+
+ mutex_lock(&pgtable_split_lock);
+ arch_enter_lazy_mmu_mode();
+
+ /*
+ * The split_kernel_leaf_mapping_locked() may sleep, it is not a
+ * problem for ARM64 since ARM64's lazy MMU implementation allows
+ * sleeping.
+ *
+ * Optimize for the common case of splitting out a single page from a
+ * larger mapping. Here we can just split on the "least aligned" of
+ * start and end and this will guarantee that there must also be a split
+ * on the more aligned address since the both addresses must be in the
+ * same contpte block and it must have been split to ptes.
+ */
+ if (end - start == PAGE_SIZE) {
+ start = __ffs(start) < __ffs(end) ? start : end;
+ ret = split_kernel_leaf_mapping_locked(start);
+ } else {
+ ret = split_kernel_leaf_mapping_locked(start);
+ if (!ret)
+ ret = split_kernel_leaf_mapping_locked(end);
+ }
+
+ arch_leave_lazy_mmu_mode();
+ mutex_unlock(&pgtable_split_lock);
+ return ret;
}
/*
@@ -633,6 +877,16 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
+static inline bool force_pte_mapping(void)
+{
+ bool bbml2 = system_capabilities_finalized() ?
+ system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
+
+ return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+ is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -658,7 +912,7 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1360,7 +1614,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
- if (can_set_direct_map())
+ if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index c0648764c403..5135f2d66958 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -106,6 +106,10 @@ static int update_range_prot(unsigned long start, unsigned long size,
data.set_mask = set_mask;
data.clear_mask = clear_mask;
+ ret = split_kernel_leaf_mapping(start, start + size);
+ if (WARN_ON_ONCE(ret))
+ return ret;
+
arch_enter_lazy_mmu_mode();
/*
--
2.47.0
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-09-17 19:02 ` [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-11-01 16:14 ` Guenter Roeck
2025-11-02 10:31 ` Ryan Roberts
0 siblings, 1 reply; 32+ messages in thread
From: Guenter Roeck @ 2025-11-01 16:14 UTC (permalink / raw)
To: Yang Shi
Cc: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl, linux-arm-kernel,
linux-kernel, linux-mm
Hi,
On Wed, Sep 17, 2025 at 12:02:09PM -0700, Yang Shi wrote:
> [...]
With lock debugging enabled, we see a large number of "BUG: sleeping
function called from invalid context at kernel/locking/mutex.c:580"
and "BUG: Invalid wait context:" backtraces when running v6.18-rc3.
Please see example below.
Bisect points to this patch.
Please let me know if there is anything I can do to help track down
the problem.
Thanks,
Guenter
---
Example log:
[ 0.537499] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:580
[ 0.537501] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0
[ 0.537502] preempt_count: 1, expected: 0
[ 0.537504] 2 locks held by swapper/0/1:
[ 0.537505] #0: ffffb60b01211960 (sched_domains_mutex){+.+.}-{4:4}, at: sched_domains_mutex_lock+0x24/0x38
[ 0.537510] #1: ffffb60b01595838 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x0/0x40
[ 0.537516] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.18.0-dbg-DEV #1 NONE
[ 0.537517] Call trace:
[ 0.537518] show_stack+0x20/0x38 (C)
[ 0.537520] __dump_stack+0x28/0x38
[ 0.537522] dump_stack_lvl+0xac/0xf0
[ 0.537525] dump_stack+0x18/0x3c
[ 0.537527] __might_resched+0x248/0x2a0
[ 0.537529] __might_sleep+0x40/0x90
[ 0.537531] __mutex_lock_common+0x70/0x1818
[ 0.537533] mutex_lock_nested+0x34/0x48
[ 0.537534] split_kernel_leaf_mapping+0x74/0x1a0
[ 0.537536] update_range_prot+0x40/0x150
[ 0.537537] __change_memory_common+0x30/0x148
[ 0.537538] __kernel_map_pages+0x70/0x88
[ 0.537540] __free_frozen_pages+0x6e4/0x7b8
[ 0.537542] free_frozen_pages+0x1c/0x30
[ 0.537544] __free_slab+0xf0/0x168
[ 0.537547] free_slab+0x2c/0xf8
[ 0.537549] free_to_partial_list+0x4e0/0x620
[ 0.537551] __slab_free+0x228/0x250
[ 0.537553] kfree+0x3c4/0x4c0
[ 0.537555] destroy_sched_domain+0xf8/0x140
[ 0.537557] cpu_attach_domain+0x17c/0x610
[ 0.537558] build_sched_domains+0x15a4/0x1718
[ 0.537560] sched_init_domains+0xbc/0xf8
[ 0.537561] sched_init_smp+0x30/0x98
[ 0.537562] kernel_init_freeable+0x148/0x230
[ 0.537564] kernel_init+0x28/0x148
[ 0.537566] ret_from_fork+0x10/0x20
[ 0.537569] =============================
[ 0.537569] [ BUG: Invalid wait context ]
[ 0.537571] 6.18.0-dbg-DEV #1 Tainted: G W
[ 0.537572] -----------------------------
[ 0.537572] swapper/0/1 is trying to lock:
[ 0.537573] ffffb60b011f3830 (pgtable_split_lock){+.+.}-{4:4}, at: split_kernel_leaf_mapping+0x74/0x1a0
[ 0.537576] other info that might help us debug this:
[ 0.537577] context-{5:5}
[ 0.537578] 2 locks held by swapper/0/1:
[ 0.537579] #0: ffffb60b01211960 (sched_domains_mutex){+.+.}-{4:4}, at: sched_domains_mutex_lock+0x24/0x38
[ 0.537582] #1: ffffb60b01595838 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x0/0x40
[ 0.537585] stack backtrace:
[ 0.537585] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.18.0-dbg-DEV #1 NONE
[ 0.537587] Tainted: [W]=WARN
[ 0.537588] Call trace:
[ 0.537589] show_stack+0x20/0x38 (C)
[ 0.537591] __dump_stack+0x28/0x38
[ 0.537593] dump_stack_lvl+0xac/0xf0
[ 0.537596] dump_stack+0x18/0x3c
[ 0.537598] __lock_acquire+0x980/0x2a20
[ 0.537600] lock_acquire+0x124/0x2b8
[ 0.537602] __mutex_lock_common+0xd8/0x1818
[ 0.537604] mutex_lock_nested+0x34/0x48
[ 0.537605] split_kernel_leaf_mapping+0x74/0x1a0
[ 0.537607] update_range_prot+0x40/0x150
[ 0.537608] __change_memory_common+0x30/0x148
[ 0.537609] __kernel_map_pages+0x70/0x88
[ 0.537610] __free_frozen_pages+0x6e4/0x7b8
[ 0.537613] free_frozen_pages+0x1c/0x30
[ 0.537615] __free_slab+0xf0/0x168
[ 0.537617] free_slab+0x2c/0xf8
[ 0.537619] free_to_partial_list+0x4e0/0x620
[ 0.537621] __slab_free+0x228/0x250
[ 0.537623] kfree+0x3c4/0x4c0
[ 0.537625] destroy_sched_domain+0xf8/0x140
[ 0.537627] cpu_attach_domain+0x17c/0x610
[ 0.537628] build_sched_domains+0x15a4/0x1718
[ 0.537630] sched_init_domains+0xbc/0xf8
[ 0.537631] sched_init_smp+0x30/0x98
[ 0.537632] kernel_init_freeable+0x148/0x230
[ 0.537633] kernel_init+0x28/0x148
[ 0.537635] ret_from_fork+0x10/0x20
---
bisect:
# bad: [3a8660878839faadb4f1a6dd72c3179c1df56787] Linux 6.18-rc1
# good: [e5f0a698b34ed76002dc5cff3804a61c80233a7a] Linux 6.17
git bisect start 'v6.18-rc1' 'v6.17'
# bad: [58809f614e0e3f4e12b489bddf680bfeb31c0a20] Merge tag 'drm-next-2025-10-01' of https://gitlab.freedesktop.org/drm/kernel
git bisect bad 58809f614e0e3f4e12b489bddf680bfeb31c0a20
# bad: [a8253f807760e9c80eada9e5354e1240ccf325f9] Merge tag 'soc-newsoc-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
git bisect bad a8253f807760e9c80eada9e5354e1240ccf325f9
# bad: [4b81e2eb9e4db8f6094c077d0c8b27c264901c1b] Merge tag 'timers-vdso-2025-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 4b81e2eb9e4db8f6094c077d0c8b27c264901c1b
# bad: [f1004b2f19d7e9add9d707f64d9fcbc50f67921b] Merge tag 'm68k-for-v6.18-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k
git bisect bad f1004b2f19d7e9add9d707f64d9fcbc50f67921b
# good: [a9401710a5f5681abd2a6f21f9e76bc9f2e81891] Merge tag 'v6.18-rc-part1-smb3-common' of git://git.samba.org/ksmbd
git bisect good a9401710a5f5681abd2a6f21f9e76bc9f2e81891
# good: [fe68bb2861808ed5c48d399bd7e670ab76829d55] Merge tag 'microblaze-v6.18' of git://git.monstr.eu/linux-2.6-microblaze
git bisect good fe68bb2861808ed5c48d399bd7e670ab76829d55
# bad: [f2d64a22faeeecff385b4c91fab5fe036ab00162] Merge branch 'for-next/perf' into for-next/core
git bisect bad f2d64a22faeeecff385b4c91fab5fe036ab00162
# good: [30f9386820cddbba59b48ae0670c3a1646dd440e] Merge branch 'for-next/misc' into for-next/core
git bisect good 30f9386820cddbba59b48ae0670c3a1646dd440e
# good: [43de0ac332b815cf56dbdce63687de9acfd35d49] drivers/perf: hisi: Relax the event ID check in the framework
git bisect good 43de0ac332b815cf56dbdce63687de9acfd35d49
# good: [5973a62efa34c80c9a4e5eac1fca6f6209b902af] arm64: map [_text, _stext) virtual address range non-executable+read-only
git bisect good 5973a62efa34c80c9a4e5eac1fca6f6209b902af
# good: [b3abb08d6f628a76c36bf7da9508e1a67bf186a0] drivers/perf: hisi: Refactor the event configuration of L3C PMU
git bisect good b3abb08d6f628a76c36bf7da9508e1a67bf186a0
# good: [6d2f913fda5683fbd4c3580262e10386c1263dfb] Documentation: hisi-pmu: Add introduction to HiSilicon V3 PMU
git bisect good 6d2f913fda5683fbd4c3580262e10386c1263dfb
# good: [2084660ad288c998b6f0c885e266deb364f65fba] perf/dwc_pcie: Fix use of uninitialized variable
git bisect good 2084660ad288c998b6f0c885e266deb364f65fba
# bad: [77dfca70baefcb988318a72fe69eb99f6dabbbb1] Merge branch 'for-next/mm' into for-next/core
git bisect bad 77dfca70baefcb988318a72fe69eb99f6dabbbb1
# first bad commit: [77dfca70baefcb988318a72fe69eb99f6dabbbb1] Merge branch 'for-next/mm' into for-next/core
---
bisect into branch:
- git checkout -b testing 77dfca70baefcb988318a72fe69eb99f6dabbbb1
- git rebase 77dfca70baefcb988318a72fe69eb99f6dabbbb1~1
[ fix minor conflict similar to the conflict resolution in 77dfca70baefc]
- git diff 77dfca70baefcb988318a72fe69eb99f6dabbbb1
[ confirmed that there are no differences ]
- confirm that the problem is still seen at the tip of the rebase
- git bisect start HEAD 77dfca70baefcb988318a72fe69eb99f6dabbbb1~1
- run bisect
Results:
# bad: [47fc25df1ae3ae8412f1b812fb586c714d04a5e6] arm64: map [_text, _stext) virtual address range non-executable+read-only
# good: [30f9386820cddbba59b48ae0670c3a1646dd440e] Merge branch 'for-next/misc' into for-next/core
git bisect start 'HEAD' '77dfca70baefcb988318a72fe69eb99f6dabbbb1~1'
# good: [805491d19fc21271b5c27f4602f8f66b625c110f] arm64/Kconfig: Remove CONFIG_RODATA_FULL_DEFAULT_ENABLED
git bisect good 805491d19fc21271b5c27f4602f8f66b625c110f
# bad: [13c7d7426232cc4489df7cd2e1f646a22d3f6172] arm64: mm: support large block mapping when rodata=full
git bisect bad 13c7d7426232cc4489df7cd2e1f646a22d3f6172
# good: [a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8] arm64: Enable permission change on arm64 kernel block mappings
git bisect good a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8
# first bad commit: [13c7d7426232cc4489df7cd2e1f646a22d3f6172] arm64: mm: support large block mapping when rodata=full
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-01 16:14 ` Guenter Roeck
@ 2025-11-02 10:31 ` Ryan Roberts
2025-11-02 12:11 ` Ryan Roberts
0 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-11-02 10:31 UTC (permalink / raw)
To: Guenter Roeck, Yang Shi
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, linux-arm-kernel, linux-kernel, linux-mm,
nd
On 01/11/2025 16:14, Guenter Roeck wrote:
> Hi,
>
> On Wed, Sep 17, 2025 at 12:02:09PM -0700, Yang Shi wrote:
>> [...]
>
> With lock debugging enabled, we see a large number of "BUG: sleeping
> function called from invalid context at kernel/locking/mutex.c:580"
> and "BUG: Invalid wait context:" backtraces when running v6.18-rc3.
> Please see example below.
>
> Bisect points to this patch.
>
> Please let me know if there is anything I can do to help track down
> the problem.
Thanks for the report - ouch!
I expect you're running on a system that supports BBML2_NOABORT and, based on
the stack trace, that you have CONFIG_DEBUG_PAGEALLOC enabled? That will cause
permission tricks to be played on the linear map at page allocation and free
time, which can happen in non-sleepable contexts. And with this patch we are
taking pgtable_split_lock (a mutex) in split_kernel_leaf_mapping(), which is
called as a result of the permission change request.
However, when CONFIG_DEBUG_PAGEALLOC is enabled we always force-map the linear
map by PTE, so split_kernel_leaf_mapping() is actually unnecessary and will
return without actually having to split anything. So we could add an early "if
(force_pte_mapping()) return 0;" to bypass the function entirely in this case,
and I *think* that should solve it.
But I'm also concerned about KFENCE. I can't remember its exact semantics off
the top of my head, so I'm concerned we could see similar problems there (where
we only force pte mapping for the KFENCE pool).
I'll investigate fully tomorrow and hopefully provide a fix.
Yang Shi, Do you have any additional thoughts?
Thanks,
Ryan
> [...]
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 10:31 ` Ryan Roberts
@ 2025-11-02 12:11 ` Ryan Roberts
2025-11-02 15:13 ` Guenter Roeck
` (4 more replies)
0 siblings, 5 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-11-02 12:11 UTC (permalink / raw)
To: Guenter Roeck, Yang Shi
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, linux-arm-kernel, linux-kernel, linux-mm,
nd
On 02/11/2025 10:31, Ryan Roberts wrote:
> On 01/11/2025 16:14, Guenter Roeck wrote:
>> Hi,
>>
>> On Wed, Sep 17, 2025 at 12:02:09PM -0700, Yang Shi wrote:
>>> [...]
>>
>> With lock debugging enabled, we see a large number of "BUG: sleeping
>> function called from invalid context at kernel/locking/mutex.c:580"
>> and "BUG: Invalid wait context:" backtraces when running v6.18-rc3.
>> Please see example below.
>>
>> Bisect points to this patch.
>>
>> Please let me know if there is anything I can do to help track down
>> the problem.
>
> Thanks for the report - ouch!
>
> I expect you're running on a system that supports BBML2_NOABORT and, based on
> the stack trace, that you have CONFIG_DEBUG_PAGEALLOC enabled? That will cause
> permission tricks to be played on the linear map at page allocation and free
> time, which can happen in non-sleepable contexts. And with this patch we are
> taking pgtable_split_lock (a mutex) in split_kernel_leaf_mapping(), which is
> called as a result of the permission change request.
>
> However, when CONFIG_DEBUG_PAGEALLOC is enabled we always force-map the linear
> map by PTE, so split_kernel_leaf_mapping() is actually unnecessary and will
> return without actually having to split anything. So we could add an early "if
> (force_pte_mapping()) return 0;" to bypass the function entirely in this case,
> and I *think* that should solve it.
>
> But I'm also concerned about KFENCE. I can't remember its exact semantics off
> the top of my head, so I'm concerned we could see similar problems there (where
> we only force pte mapping for the KFENCE pool).
>
> I'll investigate fully tomorrow and hopefully provide a fix.
Here's a proposed fix, although I can't get access to a system with BBML2 until
tomorrow at the earliest. Guenter, I wonder if you could check that this
resolves your issue?
---8<---
commit 602ec2db74e5abfb058bd03934475ead8558eb72
Author: Ryan Roberts <ryan.roberts@arm.com>
Date: Sun Nov 2 11:45:18 2025 +0000
arm64: mm: Don't attempt to split known pte-mapped regions
It has been reported that split_kernel_leaf_mapping() is trying to sleep
in non-sleepable context. It does this when acquiring the
pgtable_split_lock mutex when either CONFIG_DEBUG_PAGEALLOC or
CONFIG_KFENCE is enabled; both can change linear map permissions within
softirq context during memory allocation and/or freeing.
But it turns out that the memory for which these features may attempt to
modify the permissions is always mapped by pte, so there is no need to
attempt to split the mapping. So let's exit early in these cases and
avoid attempting to take the mutex.
Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b8d37eb037fc..6e26f070bb49 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -708,6 +708,16 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
return ret;
}
+static inline bool force_pte_mapping(void)
+{
+ bool bbml2 = system_capabilities_finalized() ?
+ system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
+
+ return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
+ is_realm_world())) ||
+ debug_pagealloc_enabled();
+}
+
static DEFINE_MUTEX(pgtable_split_lock);
int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
@@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
if (!system_supports_bbml2_noabort())
return 0;
+ /*
+ * If the region is within a pte-mapped area, there is no need to try to
* split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may change
+ * permissions from softirq context so for those cases (which are always
+ * pte-mapped), we must not go any further because taking the mutex
+ * below may sleep.
+ */
+ if (force_pte_mapping() || is_kfence_address((void *)start))
+ return 0;
+
/*
* Ensure start and end are at least page-aligned since this is the
* finest granularity we can split to.
@@ -1009,16 +1029,6 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
#endif /* CONFIG_KFENCE */
-static inline bool force_pte_mapping(void)
-{
- bool bbml2 = system_capabilities_finalized() ?
- system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
-
- return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
- is_realm_world())) ||
- debug_pagealloc_enabled();
-}
-
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
---8<---
Thanks,
Ryan
>
> Yang Shi, Do you have any additional thoughts?
>
> Thanks,
> Ryan
>
>>
>> Thanks,
>> Guenter
>>
>> ---
>> Example log:
>>
>> [ 0.537499] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:580
>> [ 0.537501] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0
>> [ 0.537502] preempt_count: 1, expected: 0
>> [ 0.537504] 2 locks held by swapper/0/1:
>> [ 0.537505] #0: ffffb60b01211960 (sched_domains_mutex){+.+.}-{4:4}, at: sched_domains_mutex_lock+0x24/0x38
>> [ 0.537510] #1: ffffb60b01595838 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x0/0x40
>> [ 0.537516] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.18.0-dbg-DEV #1 NONE
>> [ 0.537517] Call trace:
>> [ 0.537518] show_stack+0x20/0x38 (C)
>> [ 0.537520] __dump_stack+0x28/0x38
>> [ 0.537522] dump_stack_lvl+0xac/0xf0
>> [ 0.537525] dump_stack+0x18/0x3c
>> [ 0.537527] __might_resched+0x248/0x2a0
>> [ 0.537529] __might_sleep+0x40/0x90
>> [ 0.537531] __mutex_lock_common+0x70/0x1818
>> [ 0.537533] mutex_lock_nested+0x34/0x48
>> [ 0.537534] split_kernel_leaf_mapping+0x74/0x1a0
>> [ 0.537536] update_range_prot+0x40/0x150
>> [ 0.537537] __change_memory_common+0x30/0x148
>> [ 0.537538] __kernel_map_pages+0x70/0x88
>> [ 0.537540] __free_frozen_pages+0x6e4/0x7b8
>> [ 0.537542] free_frozen_pages+0x1c/0x30
>> [ 0.537544] __free_slab+0xf0/0x168
>> [ 0.537547] free_slab+0x2c/0xf8
>> [ 0.537549] free_to_partial_list+0x4e0/0x620
>> [ 0.537551] __slab_free+0x228/0x250
>> [ 0.537553] kfree+0x3c4/0x4c0
>> [ 0.537555] destroy_sched_domain+0xf8/0x140
>> [ 0.537557] cpu_attach_domain+0x17c/0x610
>> [ 0.537558] build_sched_domains+0x15a4/0x1718
>> [ 0.537560] sched_init_domains+0xbc/0xf8
>> [ 0.537561] sched_init_smp+0x30/0x98
>> [ 0.537562] kernel_init_freeable+0x148/0x230
>> [ 0.537564] kernel_init+0x28/0x148
>> [ 0.537566] ret_from_fork+0x10/0x20
>> [ 0.537569] =============================
>> [ 0.537569] [ BUG: Invalid wait context ]
>> [ 0.537571] 6.18.0-dbg-DEV #1 Tainted: G W
>> [ 0.537572] -----------------------------
>> [ 0.537572] swapper/0/1 is trying to lock:
>> [ 0.537573] ffffb60b011f3830 (pgtable_split_lock){+.+.}-{4:4}, at: split_kernel_leaf_mapping+0x74/0x1a0
>> [ 0.537576] other info that might help us debug this:
>> [ 0.537577] context-{5:5}
>> [ 0.537578] 2 locks held by swapper/0/1:
>> [ 0.537579] #0: ffffb60b01211960 (sched_domains_mutex){+.+.}-{4:4}, at: sched_domains_mutex_lock+0x24/0x38
>> [ 0.537582] #1: ffffb60b01595838 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x0/0x40
>> [ 0.537585] stack backtrace:
>> [ 0.537585] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.18.0-dbg-DEV #1 NONE
>> [ 0.537587] Tainted: [W]=WARN
>> [ 0.537588] Call trace:
>> [ 0.537589] show_stack+0x20/0x38 (C)
>> [ 0.537591] __dump_stack+0x28/0x38
>> [ 0.537593] dump_stack_lvl+0xac/0xf0
>> [ 0.537596] dump_stack+0x18/0x3c
>> [ 0.537598] __lock_acquire+0x980/0x2a20
>> [ 0.537600] lock_acquire+0x124/0x2b8
>> [ 0.537602] __mutex_lock_common+0xd8/0x1818
>> [ 0.537604] mutex_lock_nested+0x34/0x48
>> [ 0.537605] split_kernel_leaf_mapping+0x74/0x1a0
>> [ 0.537607] update_range_prot+0x40/0x150
>> [ 0.537608] __change_memory_common+0x30/0x148
>> [ 0.537609] __kernel_map_pages+0x70/0x88
>> [ 0.537610] __free_frozen_pages+0x6e4/0x7b8
>> [ 0.537613] free_frozen_pages+0x1c/0x30
>> [ 0.537615] __free_slab+0xf0/0x168
>> [ 0.537617] free_slab+0x2c/0xf8
>> [ 0.537619] free_to_partial_list+0x4e0/0x620
>> [ 0.537621] __slab_free+0x228/0x250
>> [ 0.537623] kfree+0x3c4/0x4c0
>> [ 0.537625] destroy_sched_domain+0xf8/0x140
>> [ 0.537627] cpu_attach_domain+0x17c/0x610
>> [ 0.537628] build_sched_domains+0x15a4/0x1718
>> [ 0.537630] sched_init_domains+0xbc/0xf8
>> [ 0.537631] sched_init_smp+0x30/0x98
>> [ 0.537632] kernel_init_freeable+0x148/0x230
>> [ 0.537633] kernel_init+0x28/0x148
>> [ 0.537635] ret_from_fork+0x10/0x20
>>
>> ---
>> bisect:
>>
>> # bad: [3a8660878839faadb4f1a6dd72c3179c1df56787] Linux 6.18-rc1
>> # good: [e5f0a698b34ed76002dc5cff3804a61c80233a7a] Linux 6.17
>> git bisect start 'v6.18-rc1' 'v6.17'
>> # bad: [58809f614e0e3f4e12b489bddf680bfeb31c0a20] Merge tag 'drm-next-2025-10-01' of https://gitlab.freedesktop.org/drm/kernel
>> git bisect bad 58809f614e0e3f4e12b489bddf680bfeb31c0a20
>> # bad: [a8253f807760e9c80eada9e5354e1240ccf325f9] Merge tag 'soc-newsoc-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
>> git bisect bad a8253f807760e9c80eada9e5354e1240ccf325f9
>> # bad: [4b81e2eb9e4db8f6094c077d0c8b27c264901c1b] Merge tag 'timers-vdso-2025-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
>> git bisect bad 4b81e2eb9e4db8f6094c077d0c8b27c264901c1b
>> # bad: [f1004b2f19d7e9add9d707f64d9fcbc50f67921b] Merge tag 'm68k-for-v6.18-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k
>> git bisect bad f1004b2f19d7e9add9d707f64d9fcbc50f67921b
>> # good: [a9401710a5f5681abd2a6f21f9e76bc9f2e81891] Merge tag 'v6.18-rc-part1-smb3-common' of git://git.samba.org/ksmbd
>> git bisect good a9401710a5f5681abd2a6f21f9e76bc9f2e81891
>> # good: [fe68bb2861808ed5c48d399bd7e670ab76829d55] Merge tag 'microblaze-v6.18' of git://git.monstr.eu/linux-2.6-microblaze
>> git bisect good fe68bb2861808ed5c48d399bd7e670ab76829d55
>> # bad: [f2d64a22faeeecff385b4c91fab5fe036ab00162] Merge branch 'for-next/perf' into for-next/core
>> git bisect bad f2d64a22faeeecff385b4c91fab5fe036ab00162
>> # good: [30f9386820cddbba59b48ae0670c3a1646dd440e] Merge branch 'for-next/misc' into for-next/core
>> git bisect good 30f9386820cddbba59b48ae0670c3a1646dd440e
>> # good: [43de0ac332b815cf56dbdce63687de9acfd35d49] drivers/perf: hisi: Relax the event ID check in the framework
>> git bisect good 43de0ac332b815cf56dbdce63687de9acfd35d49
>> # good: [5973a62efa34c80c9a4e5eac1fca6f6209b902af] arm64: map [_text, _stext) virtual address range non-executable+read-only
>> git bisect good 5973a62efa34c80c9a4e5eac1fca6f6209b902af
>> # good: [b3abb08d6f628a76c36bf7da9508e1a67bf186a0] drivers/perf: hisi: Refactor the event configuration of L3C PMU
>> git bisect good b3abb08d6f628a76c36bf7da9508e1a67bf186a0
>> # good: [6d2f913fda5683fbd4c3580262e10386c1263dfb] Documentation: hisi-pmu: Add introduction to HiSilicon V3 PMU
>> git bisect good 6d2f913fda5683fbd4c3580262e10386c1263dfb
>> # good: [2084660ad288c998b6f0c885e266deb364f65fba] perf/dwc_pcie: Fix use of uninitialized variable
>> git bisect good 2084660ad288c998b6f0c885e266deb364f65fba
>> # bad: [77dfca70baefcb988318a72fe69eb99f6dabbbb1] Merge branch 'for-next/mm' into for-next/core
>> git bisect bad 77dfca70baefcb988318a72fe69eb99f6dabbbb1
>> # first bad commit: [77dfca70baefcb988318a72fe69eb99f6dabbbb1] Merge branch 'for-next/mm' into for-next/core
>>
>> ---
>> bisect into branch:
>>
>> - git checkout -b testing 77dfca70baefcb988318a72fe69eb99f6dabbbb1
>> - git rebase 77dfca70baefcb988318a72fe69eb99f6dabbbb1~1
>> [ fix minor conflict similar to the conflict resolution in 77dfca70baefc]
>> - git diff 77dfca70baefcb988318a72fe69eb99f6dabbbb1
>> [ confirmed that there are no differences ]
>> - confirm that the problem is still seen at the tip of the rebase
>> - git bisect start HEAD 77dfca70baefcb988318a72fe69eb99f6dabbbb1~1
>> - run bisect
>>
>> Results:
>>
>> # bad: [47fc25df1ae3ae8412f1b812fb586c714d04a5e6] arm64: map [_text, _stext) virtual address range non-executable+read-only
>> # good: [30f9386820cddbba59b48ae0670c3a1646dd440e] Merge branch 'for-next/misc' into for-next/core
>> git bisect start 'HEAD' '77dfca70baefcb988318a72fe69eb99f6dabbbb1~1'
>> # good: [805491d19fc21271b5c27f4602f8f66b625c110f] arm64/Kconfig: Remove CONFIG_RODATA_FULL_DEFAULT_ENABLED
>> git bisect good 805491d19fc21271b5c27f4602f8f66b625c110f
>> # bad: [13c7d7426232cc4489df7cd2e1f646a22d3f6172] arm64: mm: support large block mapping when rodata=full
>> git bisect bad 13c7d7426232cc4489df7cd2e1f646a22d3f6172
>> # good: [a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8] arm64: Enable permission change on arm64 kernel block mappings
>> git bisect good a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8
>> # first bad commit: [13c7d7426232cc4489df7cd2e1f646a22d3f6172] arm64: mm: support large block mapping when rodata=full
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 12:11 ` Ryan Roberts
@ 2025-11-02 15:13 ` Guenter Roeck
2025-11-02 17:46 ` Guenter Roeck
` (3 subsequent siblings)
4 siblings, 0 replies; 32+ messages in thread
From: Guenter Roeck @ 2025-11-02 15:13 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, catalin.marinas, will, akpm, david, lorenzo.stoakes,
ardb, dev.jain, scott, cl, linux-arm-kernel, linux-kernel,
linux-mm, nd
On Sun, Nov 02, 2025 at 12:11:11PM +0000, Ryan Roberts wrote:
...
>
> Here's a proposed fix, although I can't get access to a system with BBML2 until
> tomorrow at the earliest. Guenter, I wonder if you could check that this
> resolves your issue?
Testing.
Thanks!
Guenter
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 12:11 ` Ryan Roberts
2025-11-02 15:13 ` Guenter Roeck
@ 2025-11-02 17:46 ` Guenter Roeck
2025-11-02 17:49 ` Guenter Roeck
` (2 subsequent siblings)
4 siblings, 0 replies; 32+ messages in thread
From: Guenter Roeck @ 2025-11-02 17:46 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, catalin.marinas, will, akpm, david, lorenzo.stoakes,
ardb, dev.jain, scott, cl, linux-arm-kernel, linux-kernel,
linux-mm, nd
On Sun, Nov 2, 2025 at 7:09 AM Ryan Roberts <linux@roeck-us.net> wrote:
...
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 12:11 ` Ryan Roberts
2025-11-02 15:13 ` Guenter Roeck
2025-11-02 17:46 ` Guenter Roeck
@ 2025-11-02 17:49 ` Guenter Roeck
2025-11-02 17:52 ` Guenter Roeck
2025-11-03 0:47 ` Yang Shi
2025-11-03 5:53 ` Dev Jain
4 siblings, 1 reply; 32+ messages in thread
From: Guenter Roeck @ 2025-11-02 17:49 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, catalin.marinas, will, akpm, david, lorenzo.stoakes,
ardb, dev.jain, scott, cl, linux-arm-kernel, linux-kernel,
linux-mm, nd
On Sun, Nov 2, 2025 at 7:09 AM Ryan Roberts <linux@roeck-us.net> wrote:
...
> commit 602ec2db74e5abfb058bd03934475ead8558eb72
> Author: Ryan Roberts <ryan.roberts@arm.com>
> Date: Sun Nov 2 11:45:18 2025 +0000
>
> arm64: mm: Don't attempt to split known pte-mapped regions
>
> ...
>
> Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Guenter Roeck <groeck@google.com>
Thanks a lot for the quick turnaround!
Guenter
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 17:49 ` Guenter Roeck
@ 2025-11-02 17:52 ` Guenter Roeck
0 siblings, 0 replies; 32+ messages in thread
From: Guenter Roeck @ 2025-11-02 17:52 UTC (permalink / raw)
To: Ryan Roberts
Cc: Yang Shi, catalin.marinas, will, akpm, david, lorenzo.stoakes,
ardb, dev.jain, scott, cl, linux-arm-kernel, linux-kernel,
linux-mm, nd
On 11/2/25 09:49, Guenter Roeck wrote:
> On Sun, Nov 2, 2025 at 7:09 AM Ryan Roberts <linux@roeck-us.net> wrote:
Oops. That was sent from my Google address and got messed up.
Copying Ryan this time. Sorry for the noise.
Guenter
> ...
>> commit 602ec2db74e5abfb058bd03934475ead8558eb72
>> Author: Ryan Roberts <ryan.roberts@arm.com>
>> Date: Sun Nov 2 11:45:18 2025 +0000
>>
>> arm64: mm: Don't attempt to split known pte-mapped regions
>>
>> ...
>>
>> Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
>> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> Tested-by: Guenter Roeck <groeck@google.com>
>
> Thanks a lot for the quick turnaround!
>
> Guenter
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 12:11 ` Ryan Roberts
` (2 preceding siblings ...)
2025-11-02 17:49 ` Guenter Roeck
@ 2025-11-03 0:47 ` Yang Shi
2025-11-03 10:07 ` Ryan Roberts
2025-11-03 5:53 ` Dev Jain
4 siblings, 1 reply; 32+ messages in thread
From: Yang Shi @ 2025-11-03 0:47 UTC (permalink / raw)
To: Ryan Roberts, Guenter Roeck
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, linux-arm-kernel, linux-kernel, linux-mm,
nd
On 11/2/25 4:11 AM, Ryan Roberts wrote:
> On 02/11/2025 10:31, Ryan Roberts wrote:
>> On 01/11/2025 16:14, Guenter Roeck wrote:
>>> Hi,
>>>
>>> On Wed, Sep 17, 2025 at 12:02:09PM -0700, Yang Shi wrote:
>>>> ...
>>> With lock debugging enabled, we see a large number of "BUG: sleeping
>>> function called from invalid context at kernel/locking/mutex.c:580"
>>> and "BUG: Invalid wait context:" backtraces when running v6.18-rc3.
>>> Please see example below.
>>>
>>> Bisect points to this patch.
>>>
>>> Please let me know if there is anything I can do to help track
>>> down the problem.
>> ...
>>
>> But I'm also concerned about KFENCE. I can't remember its exact semantics off
>> the top of my head, so I'm concerned we could see similar problems there (where
>> we only force pte mapping for the KFENCE pool).
>>
>> I'll investigate fully tomorrow and hopefully provide a fix.
Hi Ryan,
Thanks a lot for the quick fix. I have some comments about kfence below.
> Here's a proposed fix, although I can't get access to a system with BBML2 until
> tomorrow at the earliest. Guenter, I wonder if you could check that this
> resolves your issue?
>
> ---8<---
> commit 602ec2db74e5abfb058bd03934475ead8558eb72
> Author: Ryan Roberts <ryan.roberts@arm.com>
> Date: Sun Nov 2 11:45:18 2025 +0000
>
> arm64: mm: Don't attempt to split known pte-mapped regions
>
> ...
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b8d37eb037fc..6e26f070bb49 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -708,6 +708,16 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> return ret;
> }
>
> +static inline bool force_pte_mapping(void)
> +{
> + bool bbml2 = system_capabilities_finalized() ?
> + system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
> +
> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> + is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static DEFINE_MUTEX(pgtable_split_lock);
>
> int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> @@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> if (!system_supports_bbml2_noabort())
> return 0;
>
> + /*
> + * If the region is within a pte-mapped area, there is no need to try to
> + * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may change
> + * permissions from softirq context so for those cases (which are always
> + * pte-mapped), we must not go any further because taking the mutex
> + * below may sleep.
> + */
> + if (force_pte_mapping() || is_kfence_address((void *)start))
IIUC this may break kfence late init? kfence_init_late() allocates
pages from the buddy allocator, then protects them (setting them to
invalid). But the protection requires splitting the page table, and this
check will prevent the kernel from splitting it because __kfence_pool is
initialized before the protection is done. So there is a kind of circular
dependency.
The below fix may work?
if (force_pte_mapping() || (READ_ONCE(kfence_enabled) &&
is_kfence_address((void *)start)))
kfence_enabled won't be set until the protection is done, so if it is
set, we know the kfence pool must already be mapped by PTE.
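For reference, the late-init ordering that makes kfence_enabled a safe gate,
sketched from mm/kfence/core.c (simplified; the helper names may differ
between kernel versions):

/* Simplified sketch: the pool is protected (and therefore fully split
 * down to ptes) before kfence_enabled is set, so seeing kfence_enabled
 * == true implies the pool is already pte-mapped. */
static int kfence_init_late(void)
{
	/* ... allocate __kfence_pool from the buddy allocator ... */

	if (!kfence_init_pool_late())	/* protects every other page:     */
		return -EBUSY;		/* -> set_memory_valid() -> split */

	kfence_init_enable();		/* WRITE_ONCE(kfence_enabled, true) */
	return 0;
}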
Thanks,
Yang
> + return 0;
> +
> /*
> * Ensure start and end are at least page-aligned since this is the
> * finest granularity we can split to.
> ...
> ---8<---
>
> Thanks,
> Ryan
>
>> Yang Shi, Do you have any additional thoughts?
>>
>> Thanks,
>> Ryan
>>
>>> ...
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-03 0:47 ` Yang Shi
@ 2025-11-03 10:07 ` Ryan Roberts
2025-11-03 16:21 ` Yang Shi
0 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-11-03 10:07 UTC (permalink / raw)
To: Yang Shi, Guenter Roeck
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, linux-arm-kernel, linux-kernel, linux-mm,
nd
On 03/11/2025 00:47, Yang Shi wrote:
>
>
[...]
>> @@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start,
>> unsigned long end)
>> if (!system_supports_bbml2_noabort())
>> return 0;
>> + /*
>> + * If the region is within a pte-mapped area, there is no need to try to
>> + * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may change
>> + * permissions from softirq context so for those cases (which are always
>> + * pte-mapped), we must not go any further because taking the mutex
>> + * below may sleep.
>> + */
>> + if (force_pte_mapping() || is_kfence_address((void *)start))
>
> IIUC this may break kfence late init? kfence_init_late() allocates pages
> from the buddy allocator, then protects them (setting them to invalid). But the
> protection requires splitting the page table, and this check will prevent the
> kernel from splitting it because __kfence_pool is initialized before the
> protection is done. So there is a kind of circular dependency.
I hadn't considered late init. But I guess the requirement is that the kfence
pool needs to be pte mapped whenever kfence is enabled.
For early init, that requirement is clearly met since we pte-map it in the arch
code. For late init, as far as I can tell, the memory is initially block mapped,
is allocated from the buddy, then every other page is protected via
kfence_protect() from kfence_init_pool(). This will have the effect of
splitting every page in the pool to pte mappings (as long as your suggested fix
below is applied).
It all feels a bit accidental though.
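Roughly, the accidental splitting works like this (an illustrative fragment,
not the actual loop in kfence_init_pool(); pool_start and pool_end are
placeholders):

/* Protecting every second page means no block mapping can survive
 * anywhere in the pool: each protection call must split down to the
 * single pte it changes, leaving the whole pool pte-mapped. */
for (addr = pool_start; addr < pool_end; addr += 2 * PAGE_SIZE)
	set_memory_valid(addr, 1, 0);	/* via kfence_protect_page() */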
>
> The below fix may work?
>
> if (force_pte_mapping() || (READ_ONCE(kfence_enabled) && is_kfence_address((void
> *)start)))
>
> kfence_enabled won't be set until the protection is done, so if it is set, we
> know the kfence pool must already be mapped by PTE.
I think it will work, but it feels a bit hacky, and kfence_enabled is currently
static in core.c.
I wonder if it would be preferable to explicitly do the pte mapping in
arch_kfence_init_pool()? It looks like that's how x86 does it...
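Something like this, perhaps (a hypothetical, untested sketch modelled on the
x86 approach; it would also need the kfence early-return above not to
short-circuit the split for pool addresses):

/* Hypothetical arm64 arch_kfence_init_pool(): explicitly split the
 * late-init pool down to ptes up front, while still in a sleepable
 * context, rather than relying on the protection loop to do it. */
static inline bool arch_kfence_init_pool(void)
{
	unsigned long addr = (unsigned long)__kfence_pool;
	unsigned long end = addr + KFENCE_POOL_SIZE;

	/* Forcing a leaf boundary at every page boundary leaves the
	 * whole pool pte-mapped. */
	for (; addr < end; addr += PAGE_SIZE) {
		if (split_kernel_leaf_mapping(addr, addr + PAGE_SIZE))
			return false;
	}

	return true;
}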
>
> Thanks,
> Yang
[...]
>>>> block mapping when rodata=full
>>>> git bisect bad 13c7d7426232cc4489df7cd2e1f646a22d3f6172
>>>> # good: [a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8] arm64: Enable permission
>>>> change on arm64 kernel block mappings
>>>> git bisect good a4d9c67e503f2b73c2d89d8e8209dfd241bdc8d8
>>>> # first bad commit: [13c7d7426232cc4489df7cd2e1f646a22d3f6172] arm64: mm:
>>>> support large block mapping when rodata=full
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-03 10:07 ` Ryan Roberts
@ 2025-11-03 16:21 ` Yang Shi
0 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-11-03 16:21 UTC (permalink / raw)
To: Ryan Roberts, Guenter Roeck
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, linux-arm-kernel, linux-kernel, linux-mm,
nd
On 11/3/25 2:07 AM, Ryan Roberts wrote:
> On 03/11/2025 00:47, Yang Shi wrote:
>>
> [...]
>
>>> @@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start,
>>> unsigned long end)
>>> if (!system_supports_bbml2_noabort())
>>> return 0;
>>> + /*
>>> + * If the region is within a pte-mapped area, there is no need to try to
>>> + * split. Additionally, CONFIG_DEBUG_ALLOC and CONFIG_KFENCE may change
>>> + * permissions from softirq context so for those cases (which are always
>>> + * pte-mapped), we must not go any further because taking the mutex
>>> + * below may sleep.
>>> + */
>>> + if (force_pte_mapping() || is_kfence_address((void *)start))
>> IIUC this may break kfence late init? kfence_late_init() allocates pages
>> from the buddy allocator, then protects them (setting them to invalid). But
>> the protection requires splitting the page table, and this check will prevent
>> the kernel from splitting it because __kfence_pool is initialized before the
>> protection is done. So there is a kind of circular dependency.
> I hadn't considered late init. But I guess the requirement is that the kfence
> pool needs to be pte mapped whenever kfence is enabled.
>
> For early init; that requirement is clearly met since we pte map it in the arch
> code. For late init, as far as I can tell, the memory is initially block mapped,
> is allocated from the buddy, then every other page is protected via
> kfence_init_pool(). This will have the effect of
> splitting every page in the pool to pte mappings (as long as your suggested fix
> below is applied).
>
> It all feels a bit accidental though.
Yeah, it is not that explicit and obvious.
>
>> The below fix may work?
>>
>> if (force_pte_mapping() || (READ_ONCE(kfence_enabled) && is_kfence_address((void
>> *)start)))
>>
>> The kfence_enabled won't be set until protection is done. So if it is set, we
>> know the kfence address must be mapped by PTE.
> I think it will work, but it feels a bit hacky, and kfence_enabled is currently
> static in core.c.
>
> I wonder if it would be preferable to explicitly do the pte mapping in
> arch_kfence_init_pool()? It looks like that's how x86 does it...
I agree, this looks better and explicitly conveys the PTE-mapping
requirement for the kfence pool.
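For illustration, a minimal sketch of that direction (hypothetical — it
assumes split_kernel_leaf_mapping() is made visible to the arm64 kfence
header and that its is_kfence_address() bail-out is dropped):

static inline bool arch_kfence_init_pool(void)
{
	unsigned long addr = (unsigned long)__kfence_pool;
	unsigned long end = addr + KFENCE_POOL_SIZE;

	/*
	 * Pre-split the whole pool to ptes while sleeping is still
	 * allowed, so that later protect/unprotect calls from softirq
	 * context never need to take the split mutex.
	 */
	for (; addr < end; addr += PAGE_SIZE) {
		if (split_kernel_leaf_mapping(addr, addr + PAGE_SIZE))
			return false;
	}

	return true;
}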
Thanks,
Yang
>
>> Thanks,
>> Yang
>>
>>
>>
>>
>>
>>> + return 0;
>>> +
>>> /*
>>> * Ensure start and end are at least page-aligned since this is the
>>> * finest granularity we can split to.
>>> @@ -1009,16 +1029,6 @@ static inline void arm64_kfence_map_pool(phys_addr_t
>>> kfence_pool, pgd_t *pgdp) {
>>> #endif /* CONFIG_KFENCE */
>>> -static inline bool force_pte_mapping(void)
>>> -{
>>> - bool bbml2 = system_capabilities_finalized() ?
>>> - system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
>>> -
>>> - return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
>>> - is_realm_world())) ||
>>> - debug_pagealloc_enabled();
>>> -}
>>> -
>>> static void __init map_mem(pgd_t *pgdp)
>>> {
>>> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>>> ---8<---
>>>
>>> Thanks,
>>> Ryan
>>>
>>>> Yang Shi, Do you have any additional thoughts?
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>> Thanks,
>>>>> Guenter
>>>>>
>>>>> ---
>>>>> Example log:
>>>>>
>>>>> [ 0.537499] BUG: sleeping function called from invalid context at kernel/
>>>>> locking/mutex.c:580
>>>>> [ 0.537501] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1,
>>>>> name: swapper/0
>>>>> [ 0.537502] preempt_count: 1, expected: 0
>>>>> [ 0.537504] 2 locks held by swapper/0/1:
>>>>> [ 0.537505] #0: ffffb60b01211960 (sched_domains_mutex){+.+.}-{4:4}, at:
>>>>> sched_domains_mutex_lock+0x24/0x38
>>>>> [ 0.537510] #1: ffffb60b01595838 (rcu_read_lock){....}-{1:3}, at:
>>>>> rcu_lock_acquire+0x0/0x40
>>>>> [ 0.537516] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.18.0-dbg-
>>>>> DEV #1 NONE
>>>>> [ 0.537517] Call trace:
>>>>> [ 0.537518] show_stack+0x20/0x38 (C)
>>>>> [ 0.537520] __dump_stack+0x28/0x38
>>>>> [ 0.537522] dump_stack_lvl+0xac/0xf0
>>>>> [ 0.537525] dump_stack+0x18/0x3c
>>>>> [ 0.537527] __might_resched+0x248/0x2a0
>>>>> [ 0.537529] __might_sleep+0x40/0x90
>>>>> [ 0.537531] __mutex_lock_common+0x70/0x1818
>>>>> [...]
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full
2025-11-02 12:11 ` Ryan Roberts
` (3 preceding siblings ...)
2025-11-03 0:47 ` Yang Shi
@ 2025-11-03 5:53 ` Dev Jain
4 siblings, 0 replies; 32+ messages in thread
From: Dev Jain @ 2025-11-03 5:53 UTC (permalink / raw)
To: Ryan Roberts, Guenter Roeck, Yang Shi
Cc: catalin.marinas, will, akpm, david, lorenzo.stoakes, ardb, scott,
cl, linux-arm-kernel, linux-kernel, linux-mm, nd
>>>>
>>> With lock debugging enabled, we see a large number of "BUG: sleeping
>>> function called from invalid context at kernel/locking/mutex.c:580"
>>> and "BUG: Invalid wait context:" backtraces when running v6.18-rc3.
>>> Please see example below.
>>>
>>> Bisect points to this patch.
>>>
>>> Please let me know if there is anything I can do to help track
>>> down the problem.
>> Thanks for the report - ouch!
>>
>> I expect you're running on a system that supports BBML2_NOABORT and, based on
>> the stack trace, that you have CONFIG_DEBUG_PAGEALLOC enabled? That will cause
>> permission tricks to be played on the linear map at page allocation and free
>> time, which can happen in non-sleepable contexts. And with this patch we are
>> taking pgtable_split_lock (a mutex) in split_kernel_leaf_mapping(), which is
>> called as a result of the permission change request.
>>
>> However, when CONFIG_DEBUG_PAGEALLOC is enabled we always force-map the linear
>> map by PTE, so split_kernel_leaf_mapping() is actually unnecessary and will return
>> without actually having to split anything. So we could add an early "if
>> (force_pte_mapping()) return 0;" to bypass the function entirely in this case,
>> and I *think* that should solve it.
>>
>> But I'm also concerned about KFENCE. I can't remember its exact semantics off
>> the top of my head, so I worry we could see similar problems there (where
>> we only force pte mapping for the KFENCE pool).
>>
>> I'll investigate fully tomorrow and hopefully provide a fix.
> Here's a proposed fix, although I can't get access to a system with BBML2 until
> tomorrow at the earliest. Guenter, I wonder if you could check that this
> resolves your issue?
>
> ---8<---
> commit 602ec2db74e5abfb058bd03934475ead8558eb72
> Author: Ryan Roberts <ryan.roberts@arm.com>
> Date: Sun Nov 2 11:45:18 2025 +0000
>
> arm64: mm: Don't attempt to split known pte-mapped regions
>
> It has been reported that split_kernel_leaf_mapping() is trying to sleep
> in non-sleepable context. It does this when acquiring the
> pgtable_split_lock mutex, when either CONFIG_DEBUG_ALLOC or
> CONFIG_KFENCE are enabled, which change linear map permissions within
> softirq context during memory allocation and/or freeing.
>
> But it turns out that the memory for which these features may attempt to
> modify the permissions is always mapped by pte, so there is no need to
> attempt to split the mapping. So let's exit early in these cases and
> avoid attempting to take the mutex.
>
> Closes: https://lore.kernel.org/all/f24b9032-0ec9-47b1-8b95-c0eeac7a31c5@roeck-us.net/
> Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b8d37eb037fc..6e26f070bb49 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -708,6 +708,16 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
> return ret;
> }
>
> +static inline bool force_pte_mapping(void)
> +{
> + bool bbml2 = system_capabilities_finalized() ?
> + system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
> +
> + return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> + is_realm_world())) ||
> + debug_pagealloc_enabled();
> +}
> +
> static DEFINE_MUTEX(pgtable_split_lock);
>
> int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> @@ -723,6 +733,16 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
> if (!system_supports_bbml2_noabort())
> return 0;
>
> + /*
> + * If the region is within a pte-mapped area, there is no need to try to
> + * split. Additionally, CONFIG_DEBUG_ALLOC and CONFIG_KFENCE may change
Nit: CONFIG_DEBUG_PAGEALLOC.
> + * permissions from softirq context so for those cases (which are always
> + * pte-mapped), we must not go any further because taking the mutex
> + * below may sleep.
> + */
> + if (force_pte_mapping() || is_kfence_address((void *)start))
> + return 0;
> +
> /*
> * Ensure start and end are at least page-aligned since this is the
> * finest granularity we can split to.
> @@ -1009,16 +1029,6 @@ static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) {
>
> #endif /* CONFIG_KFENCE */
>
> -static inline bool force_pte_mapping(void)
> -{
> - bool bbml2 = system_capabilities_finalized() ?
> - system_supports_bbml2_noabort() : cpu_supports_bbml2_noabort();
> -
> - return (!bbml2 && (rodata_full || arm64_kfence_can_set_direct_map() ||
> - is_realm_world())) ||
> - debug_pagealloc_enabled();
> -}
> -
Otherwise LGTM.
Reviewed-by: Dev Jain <dev.jain@arm.com>
> static void __init map_mem(pgd_t *pgdp)
> {
> static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> ---8<---
>
> Thanks,
> Ryan
>
>> Yang Shi, Do you have any additional thoughts?
>>
>> Thanks,
>> Ryan
>>
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v8 4/5] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
` (2 preceding siblings ...)
2025-09-17 19:02 ` [PATCH v8 3/5] arm64: mm: support large block mapping when rodata=full Yang Shi
@ 2025-09-17 19:02 ` Yang Shi
2025-09-17 19:02 ` [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page Yang Shi
2025-09-18 21:10 ` [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Will Deacon
5 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-17 19:02 UTC (permalink / raw)
To: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel, linux-mm
From: Ryan Roberts <ryan.roberts@arm.com>
The kernel linear mapping is painted at a very early stage of system
boot, before cpufeatures have been finalized. The linear mapping is
therefore determined by the capabilities of the boot CPU only: if the
boot CPU supports BBML2, large block mappings will be used for the
linear mapping.
But the secondary CPUs may not support BBML2, so once cpufeatures have
been finalized on all CPUs, repaint the linear mapping if large block
mappings were used and any secondary CPU doesn't support BBML2.
If the boot CPU doesn't support BBML2, or the secondary CPUs have the
same BBML2 capability as the boot CPU, repainting the linear mapping is
not needed.
Repainting is implemented by the boot CPU, which we know supports BBML2,
so it is safe for the live mapping size to change for this CPU. The
linear map region is walked using the pagewalk API and any discovered
large leaf mappings are split to pte mappings using the existing helper
functions. Since the repainting is performed inside of a stop_machine(),
we must use GFP_ATOMIC to allocate the extra intermediate pgtables. But
since we are still early in boot, it is expected that there is plenty of
memory available so we will never need to sleep for reclaim, and so
GFP_ATOMIC is acceptable here.
The secondary CPUs are all put into a waiting area with the idmap in
TTBR0 and reserved map in TTBR1 while this is performed since they
cannot be allowed to observe any size changes on the live mappings. Some
of this infrastructure is reused from the kpti case. Specifically we
share the same flag (was __idmap_kpti_flag, now idmap_kpti_bbml2_flag)
since it means we don't have to reserve any extra pgtable memory to
idmap the extra flag.
Co-developed-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm64/include/asm/mmu.h | 2 +
arch/arm64/kernel/cpufeature.c | 3 +
arch/arm64/mm/mmu.c | 182 +++++++++++++++++++++++++++++----
arch/arm64/mm/proc.S | 27 +++--
4 files changed, 187 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index a7cc95d97ceb..ff6fd0bbd7d2 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -79,6 +79,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
extern void mark_linear_text_alias_ro(void);
extern int split_kernel_leaf_mapping(unsigned long start, unsigned long end);
+extern void init_idmap_kpti_bbml2_flag(void);
+extern void linear_map_maybe_split_to_ptes(void);
/*
* This check is triggered during the early boot before the cpufeature
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7dc092e33fcc..dd3dbe4c9359 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -86,6 +86,7 @@
#include <asm/kvm_host.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
+#include <asm/mmu.h>
#include <asm/mte.h>
#include <asm/hypervisor.h>
#include <asm/processor.h>
@@ -2028,6 +2029,7 @@ static void __init kpti_install_ng_mappings(void)
if (arm64_use_ng_mappings)
return;
+ init_idmap_kpti_bbml2_flag();
stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask);
}
@@ -3955,6 +3957,7 @@ void __init setup_system_features(void)
{
setup_system_capabilities();
+ linear_map_maybe_split_to_ptes();
kpti_install_ng_mappings();
sve_setup();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fa09dd120626..ca3f02eb961e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -27,6 +27,8 @@
#include <linux/kfence.h>
#include <linux/pkeys.h>
#include <linux/mm_inline.h>
+#include <linux/pagewalk.h>
+#include <linux/stop_machine.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -476,11 +478,11 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
#define INVALID_PHYS_ADDR (-1ULL)
-static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
+static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm, gfp_t gfp,
enum pgtable_type pgtable_type)
{
/* Page is zeroed by init_clear_pgtable() so don't duplicate effort. */
- struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_ZERO, 0);
+ struct ptdesc *ptdesc = pagetable_alloc(gfp & ~__GFP_ZERO, 0);
phys_addr_t pa;
if (!ptdesc)
@@ -507,9 +509,9 @@ static phys_addr_t __pgd_pgtable_alloc(struct mm_struct *mm,
}
static phys_addr_t
-try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
+try_pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type, gfp_t gfp)
{
- return __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ return __pgd_pgtable_alloc(&init_mm, gfp, pgtable_type);
}
static phys_addr_t __maybe_unused
@@ -517,7 +519,7 @@ pgd_pgtable_alloc_init_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(&init_mm, pgtable_type);
+ pa = __pgd_pgtable_alloc(&init_mm, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -527,7 +529,7 @@ pgd_pgtable_alloc_special_mm(enum pgtable_type pgtable_type)
{
phys_addr_t pa;
- pa = __pgd_pgtable_alloc(NULL, pgtable_type);
+ pa = __pgd_pgtable_alloc(NULL, GFP_PGTABLE_KERNEL, pgtable_type);
BUG_ON(pa == INVALID_PHYS_ADDR);
return pa;
}
@@ -541,7 +543,7 @@ static void split_contpte(pte_t *ptep)
__set_pte(ptep, pte_mknoncont(__ptep_get(ptep)));
}
-static int split_pmd(pmd_t *pmdp, pmd_t pmd)
+static int split_pmd(pmd_t *pmdp, pmd_t pmd, gfp_t gfp, bool to_cont)
{
pmdval_t tableprot = PMD_TYPE_TABLE | PMD_TABLE_UXN | PMD_TABLE_AF;
unsigned long pfn = pmd_pfn(pmd);
@@ -550,7 +552,7 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
pte_t *ptep;
int i;
- pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE);
+ pte_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PTE, gfp);
if (pte_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
ptep = (pte_t *)phys_to_virt(pte_phys);
@@ -559,7 +561,9 @@ static int split_pmd(pmd_t *pmdp, pmd_t pmd)
tableprot |= PMD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PTE; i++, ptep++, pfn++)
__set_pte(ptep, pfn_pte(pfn, prot));
@@ -583,7 +587,7 @@ static void split_contpmd(pmd_t *pmdp)
set_pmd(pmdp, pmd_mknoncont(pmdp_get(pmdp)));
}
-static int split_pud(pud_t *pudp, pud_t pud)
+static int split_pud(pud_t *pudp, pud_t pud, gfp_t gfp, bool to_cont)
{
pudval_t tableprot = PUD_TYPE_TABLE | PUD_TABLE_UXN | PUD_TABLE_AF;
unsigned int step = PMD_SIZE >> PAGE_SHIFT;
@@ -593,7 +597,7 @@ static int split_pud(pud_t *pudp, pud_t pud)
pmd_t *pmdp;
int i;
- pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD);
+ pmd_phys = try_pgd_pgtable_alloc_init_mm(TABLE_PMD, gfp);
if (pmd_phys == INVALID_PHYS_ADDR)
return -ENOMEM;
pmdp = (pmd_t *)phys_to_virt(pmd_phys);
@@ -602,7 +606,9 @@ static int split_pud(pud_t *pudp, pud_t pud)
tableprot |= PUD_TABLE_PXN;
prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
- prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
for (i = 0; i < PTRS_PER_PMD; i++, pmdp++, pfn += step)
set_pmd(pmdp, pfn_pmd(pfn, prot));
@@ -660,7 +666,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
if (!pud_present(pud))
goto out;
if (pud_leaf(pud)) {
- ret = split_pud(pudp, pud);
+ ret = split_pud(pudp, pud, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -685,7 +691,7 @@ static int split_kernel_leaf_mapping_locked(unsigned long addr)
*/
if (ALIGN_DOWN(addr, PMD_SIZE) == addr)
goto out;
- ret = split_pmd(pmdp, pmd);
+ ret = split_pmd(pmdp, pmd, GFP_PGTABLE_KERNEL, true);
if (ret)
goto out;
}
@@ -758,6 +764,138 @@ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
return ret;
}
+static int __init split_to_ptes_pud_entry(pud_t *pudp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pud_t pud = pudp_get(pudp);
+ int ret = 0;
+
+ if (pud_leaf(pud))
+ ret = split_pud(pudp, pud, GFP_ATOMIC, false);
+
+ return ret;
+}
+
+static int __init split_to_ptes_pmd_entry(pmd_t *pmdp, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pmd_t pmd = pmdp_get(pmdp);
+ int ret = 0;
+
+ if (pmd_leaf(pmd)) {
+ if (pmd_cont(pmd))
+ split_contpmd(pmdp);
+ ret = split_pmd(pmdp, pmd, GFP_ATOMIC, false);
+
+ /*
+ * We have split the pmd directly to ptes so there is no need to
+ * visit each pte to check if they are contpte.
+ */
+ walk->action = ACTION_CONTINUE;
+ }
+
+ return ret;
+}
+
+static int __init split_to_ptes_pte_entry(pte_t *ptep, unsigned long addr,
+ unsigned long next,
+ struct mm_walk *walk)
+{
+ pte_t pte = __ptep_get(ptep);
+
+ if (pte_cont(pte))
+ split_contpte(ptep);
+
+ return 0;
+}
+
+static const struct mm_walk_ops split_to_ptes_ops __initconst = {
+ .pud_entry = split_to_ptes_pud_entry,
+ .pmd_entry = split_to_ptes_pmd_entry,
+ .pte_entry = split_to_ptes_pte_entry,
+};
+
+static bool linear_map_requires_bbml2 __initdata;
+
+u32 idmap_kpti_bbml2_flag;
+
+void __init init_idmap_kpti_bbml2_flag(void)
+{
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 1);
+ /* Must be visible to other CPUs before stop_machine() is called. */
+ smp_mb();
+}
+
+static int __init linear_map_split_to_ptes(void *__unused)
+{
+ /*
+ * Repainting the linear map must be done by CPU0 (the boot CPU) because
+ * that's the only CPU that we know supports BBML2. The other CPUs will
+ * be held in a waiting area with the idmap active.
+ */
+ if (!smp_processor_id()) {
+ unsigned long lstart = _PAGE_OFFSET(vabits_actual);
+ unsigned long lend = PAGE_END;
+ unsigned long kstart = (unsigned long)lm_alias(_stext);
+ unsigned long kend = (unsigned long)lm_alias(__init_begin);
+ int ret;
+
+ /*
+ * Wait for all secondary CPUs to be put into the waiting area.
+ */
+ smp_cond_load_acquire(&idmap_kpti_bbml2_flag, VAL == num_online_cpus());
+
+ /*
+ * Walk all of the linear map [lstart, lend), except the kernel
+ * linear map alias [kstart, kend), and split all mappings to
+ * PTE. The kernel alias remains static throughout runtime so
+ * can continue to be safely mapped with large mappings.
+ */
+ ret = walk_kernel_page_table_range_lockless(lstart, kstart,
+ &split_to_ptes_ops, NULL, NULL);
+ if (!ret)
+ ret = walk_kernel_page_table_range_lockless(kend, lend,
+ &split_to_ptes_ops, NULL, NULL);
+ if (ret)
+ panic("Failed to split linear map\n");
+ flush_tlb_kernel_range(lstart, lend);
+
+ /*
+ * Relies on dsb in flush_tlb_kernel_range() to avoid reordering
+ * before any page table split operations.
+ */
+ WRITE_ONCE(idmap_kpti_bbml2_flag, 0);
+ } else {
+ typedef void (wait_split_fn)(void);
+ extern wait_split_fn wait_linear_map_split_to_ptes;
+ wait_split_fn *wait_fn;
+
+ wait_fn = (void *)__pa_symbol(wait_linear_map_split_to_ptes);
+
+ /*
+ * At least one secondary CPU doesn't support BBML2 so cannot
+ * tolerate the size of the live mappings changing. So have the
+ * secondary CPUs wait for the boot CPU to make the changes
+ * with the idmap active and init_mm inactive.
+ */
+ cpu_install_idmap();
+ wait_fn();
+ cpu_uninstall_idmap();
+ }
+
+ return 0;
+}
+
+void __init linear_map_maybe_split_to_ptes(void)
+{
+ if (linear_map_requires_bbml2 && !system_supports_bbml2_noabort()) {
+ init_idmap_kpti_bbml2_flag();
+ stop_machine(linear_map_split_to_ptes, NULL, cpu_online_mask);
+ }
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -912,6 +1050,8 @@ static void __init map_mem(pgd_t *pgdp)
early_kfence_pool = arm64_kfence_alloc_pool();
+ linear_map_requires_bbml2 = !force_pte_mapping() && can_set_direct_map();
+
if (force_pte_mapping())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
@@ -1045,7 +1185,7 @@ void __pi_map_range(u64 *pgd, u64 start, u64 end, u64 pa, pgprot_t prot,
int level, pte_t *tbl, bool may_use_cont, u64 va_offset);
static u8 idmap_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init,
- kpti_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
+ kpti_bbml2_ptes[IDMAP_LEVELS - 1][PAGE_SIZE] __aligned(PAGE_SIZE) __ro_after_init;
static void __init create_idmap(void)
{
@@ -1057,15 +1197,17 @@ static void __init create_idmap(void)
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
- if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings) {
- extern u32 __idmap_kpti_flag;
- u64 pa = __pa_symbol(&__idmap_kpti_flag);
+ if (linear_map_requires_bbml2 ||
+ (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && !arm64_use_ng_mappings)) {
+ u64 pa = __pa_symbol(&idmap_kpti_bbml2_flag);
/*
* The KPTI G-to-nG conversion code needs a read-write mapping
- * of its synchronization flag in the ID map.
+ * of its synchronization flag in the ID map. This is also used
+ * when splitting the linear map to ptes if a secondary CPU
+ * doesn't support bbml2.
*/
- ptep = __pa_symbol(kpti_ptes);
+ ptep = __pa_symbol(kpti_bbml2_ptes);
__pi_map_range(&ptep, pa, pa + sizeof(u32), pa, PAGE_KERNEL,
IDMAP_ROOT_LEVEL, (pte_t *)idmap_pg_dir, false,
__phys_to_virt(ptep) - ptep);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8c75965afc9e..86818511962b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -245,10 +245,6 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
*
* Called exactly once from stop_machine context by each CPU found during boot.
*/
- .pushsection ".data", "aw", %progbits
-SYM_DATA(__idmap_kpti_flag, .long 1)
- .popsection
-
SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
cpu .req w0
temp_pte .req x0
@@ -273,7 +269,7 @@ SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
mov x5, x3 // preserve temp_pte arg
mrs swapper_ttb, ttbr1_el1
- adr_l flag_ptr, __idmap_kpti_flag
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
cbnz cpu, __idmap_kpti_secondary
@@ -416,7 +412,25 @@ alternative_else_nop_endif
__idmap_kpti_secondary:
/* Uninstall swapper before surgery begins */
__idmap_cpu_set_reserved_ttbr1 x16, x17
+ b secondary_cpu_wait
+
+ .unreq swapper_ttb
+ .unreq flag_ptr
+SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
+ .pushsection ".idmap.text", "a"
+SYM_TYPED_FUNC_START(wait_linear_map_split_to_ptes)
+ /* Must be same registers as in idmap_kpti_install_ng_mappings */
+ swapper_ttb .req x3
+ flag_ptr .req x4
+
+ mrs swapper_ttb, ttbr1_el1
+ adr_l flag_ptr, idmap_kpti_bbml2_flag
+ __idmap_cpu_set_reserved_ttbr1 x16, x17
+secondary_cpu_wait:
/* Increment the flag to let the boot CPU we're ready */
1: ldxr w16, [flag_ptr]
add w16, w16, #1
@@ -436,9 +450,8 @@ __idmap_kpti_secondary:
.unreq swapper_ttb
.unreq flag_ptr
-SYM_FUNC_END(idmap_kpti_install_ng_mappings)
+SYM_FUNC_END(wait_linear_map_split_to_ptes)
.popsection
-#endif
/*
* __cpu_setup
--
2.47.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
` (3 preceding siblings ...)
2025-09-17 19:02 ` [PATCH v8 4/5] arm64: mm: split linear mapping if BBML2 unsupported on secondary CPUs Yang Shi
@ 2025-09-17 19:02 ` Yang Shi
2025-09-18 12:48 ` Catalin Marinas
2025-09-18 21:10 ` [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Will Deacon
5 siblings, 1 reply; 32+ messages in thread
From: Yang Shi @ 2025-09-17 19:02 UTC (permalink / raw)
To: catalin.marinas, will, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: yang, linux-arm-kernel, linux-kernel, linux-mm
The kprobe page is allocated by execmem allocator with ROX permission.
It needs to call set_memory_rox() to set proper permission for the
direct map too. It was missed.
And the set_memory_rox() guarantees the direct map will be split if it
needs so that set_direct_map calls in vfree() won't fail.
Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page")
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
---
arch/arm64/kernel/probes/kprobes.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 0c5d408afd95..c4f8c4750f1e 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -10,6 +10,7 @@
#define pr_fmt(fmt) "kprobes: " fmt
+#include <linux/execmem.h>
#include <linux/extable.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
@@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
static void __kprobes
post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
+void *alloc_insn_page(void)
+{
+ void *page;
+
+ page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
+ if (!page)
+ return NULL;
+ set_memory_rox((unsigned long)page, 1);
+ return page;
+}
+
static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
{
kprobe_opcode_t *addr = p->ainsn.xol_insn;
--
2.47.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-17 19:02 ` [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page Yang Shi
@ 2025-09-18 12:48 ` Catalin Marinas
2025-09-18 15:05 ` Yang Shi
0 siblings, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-09-18 12:48 UTC (permalink / raw)
To: Yang Shi
Cc: will, ryan.roberts, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, linux-arm-kernel, linux-kernel, linux-mm
On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
> The kprobe page is allocated by execmem allocator with ROX permission.
> It needs to call set_memory_rox() to set proper permission for the
> direct map too. It was missed.
>
> And the set_memory_rox() guarantees the direct map will be split if it
> needs so that set_direct_map calls in vfree() won't fail.
>
> Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page")
> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
> ---
> arch/arm64/kernel/probes/kprobes.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> index 0c5d408afd95..c4f8c4750f1e 100644
> --- a/arch/arm64/kernel/probes/kprobes.c
> +++ b/arch/arm64/kernel/probes/kprobes.c
> @@ -10,6 +10,7 @@
>
> #define pr_fmt(fmt) "kprobes: " fmt
>
> +#include <linux/execmem.h>
> #include <linux/extable.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> static void __kprobes
> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>
> +void *alloc_insn_page(void)
> +{
> + void *page;
Nit: I'd call this 'addr'. 'page' makes me think of a struct page.
> +
> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
> + if (!page)
> + return NULL;
> + set_memory_rox((unsigned long)page, 1);
It's unfortunate that we change the attributes of the ROX vmap first to
RO, then back to ROX so that we get the linear map changed. Maybe
factor out some of the code in change_memory_common() to only change the
linear map.
Otherwise it looks fine.
--
Catalin
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-18 12:48 ` Catalin Marinas
@ 2025-09-18 15:05 ` Yang Shi
2025-09-18 15:30 ` Ryan Roberts
2025-09-18 15:32 ` Catalin Marinas
0 siblings, 2 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-18 15:05 UTC (permalink / raw)
To: Catalin Marinas
Cc: will, ryan.roberts, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, linux-arm-kernel, linux-kernel, linux-mm
On 9/18/25 5:48 AM, Catalin Marinas wrote:
> On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
>> The kprobe page is allocated by execmem allocator with ROX permission.
>> It needs to call set_memory_rox() to set proper permission for the
>> direct map too. It was missed.
>>
>> And the set_memory_rox() guarantees the direct map will be split if it
>> needs so that set_direct_map calls in vfree() won't fail.
>>
>> Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page")
>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>> ---
>> arch/arm64/kernel/probes/kprobes.c | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
>> index 0c5d408afd95..c4f8c4750f1e 100644
>> --- a/arch/arm64/kernel/probes/kprobes.c
>> +++ b/arch/arm64/kernel/probes/kprobes.c
>> @@ -10,6 +10,7 @@
>>
>> #define pr_fmt(fmt) "kprobes: " fmt
>>
>> +#include <linux/execmem.h>
>> #include <linux/extable.h>
>> #include <linux/kasan.h>
>> #include <linux/kernel.h>
>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>> static void __kprobes
>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>>
>> +void *alloc_insn_page(void)
>> +{
>> + void *page;
> Nit: I'd call this 'addr'. 'page' makes me think of a struct page.
Sure.
>
>> +
>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>> + if (!page)
>> + return NULL;
>> + set_memory_rox((unsigned long)page, 1);
> It's unfortunate that we change the attributes of the ROX vmap first to
> RO, then back to ROX so that we get the linear map changed. Maybe
> factor out some of the code in change_memory_common() to only change the
> linear map.
I want to make sure I understand you correctly, you meant
set_memory_rox() should do:
change linear map to RO (call a new helper, for example,
set_direct_map_ro())
change vmap to ROX (call change_memory_common())
Is it correct?
If so set_memory_ro() should do the similar thing.
And I think we should have the cleanup patch separate from this bug fix
patch because the bug fix patch should be applied to -stable release
too. Keeping it simpler makes the backport easier.
Shall I squash the cleanup patch into patch #1?
Thanks,
Yang
>
> Otherwise it looks fine.
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-18 15:05 ` Yang Shi
@ 2025-09-18 15:30 ` Ryan Roberts
2025-09-18 15:50 ` Yang Shi
2025-09-18 15:32 ` Catalin Marinas
1 sibling, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-09-18 15:30 UTC (permalink / raw)
To: Yang Shi, Catalin Marinas
Cc: will, akpm, david, lorenzo.stoakes, ardb, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel, linux-mm
On 18/09/2025 16:05, Yang Shi wrote:
>
>
> On 9/18/25 5:48 AM, Catalin Marinas wrote:
>> On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
>>> The kprobe page is allocated by execmem allocator with ROX permission.
>>> It needs to call set_memory_rox() to set proper permission for the
>>> direct map too. It was missed.
>>>
>>> And the set_memory_rox() guarantees the direct map will be split if it
>>> needs so that set_direct_map calls in vfree() won't fail.
>>>
>>> Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page")
>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>> ---
>>> arch/arm64/kernel/probes/kprobes.c | 12 ++++++++++++
>>> 1 file changed, 12 insertions(+)
>>>
>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>> kprobes.c
>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>> @@ -10,6 +10,7 @@
>>> #define pr_fmt(fmt) "kprobes: " fmt
>>> +#include <linux/execmem.h>
>>> #include <linux/extable.h>
>>> #include <linux/kasan.h>
>>> #include <linux/kernel.h>
>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>> static void __kprobes
>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>> *);
>>> +void *alloc_insn_page(void)
>>> +{
>>> + void *page;
>> Nit: I'd call this 'addr'. 'page' makes me think of a struct page.
>
> Sure.
>
>>
>>> +
>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>> + if (!page)
>>> + return NULL;
>>> + set_memory_rox((unsigned long)page, 1);
>> It's unfortunate that we change the attributes of the ROX vmap first to
>> RO, then back to ROX so that we get the linear map changed. Maybe
>> factor out some of the code in change_memory_common() to only change the
>> linear map.
>
> I want to make sure I understand you correctly, you meant set_memory_rox()
> should do:
>
> change linear map to RO (call a new helper, for example, set_direct_map_ro())
> change vmap to ROX (call change_memory_common())
>
> Is it correct?
>
> If so set_memory_ro() should do the similar thing.
>
> And I think we should have the cleanup patch separate from this bug fix patch
> because the bug fix patch should be applied to -stable release too. Keeping it
> simpler makes the backport easier.
>
> Shall I squash the cleanup patch into patch #1?
Personally I think we should drop this patch from the series and handle it
separately.
We worked out that the requirement is to either never call set_memory_*() or to
call set_memory_*() for the entire vmalloc'ed range prior to optionally calling
set_memory_*() for a sub-range in order to guarantee vm_reset_perms() works
correctly.
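
(Purely illustrative of that rule — hypothetical function and sizes, not
code from the series:)

static void *alloc_patched_area(void)
{
	void *addr = vmalloc(4 * PAGE_SIZE);

	if (!addr)
		return NULL;

	set_memory_ro((unsigned long)addr, 4);	/* whole range first */
	set_memory_rox((unsigned long)addr, 1);	/* sub-range is then fine */
	return addr;
}
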
Given this is only allocating a single page, it is impossible to call
set_memory_*() for a sub-range. So the requirement is met.
I agree it looks odd/wrong to have different permissions in the linear map vs
the vmap but that is an orthogonal bug that can be fixed separately.
What do you think?
Thanks,
Ryan
>
> Thanks,
> Yang
>
>>
>> Otherwise it looks fine.
>>
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-18 15:30 ` Ryan Roberts
@ 2025-09-18 15:50 ` Yang Shi
0 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-18 15:50 UTC (permalink / raw)
To: Ryan Roberts, Catalin Marinas
Cc: will, akpm, david, lorenzo.stoakes, ardb, dev.jain, scott, cl,
linux-arm-kernel, linux-kernel, linux-mm
On 9/18/25 8:30 AM, Ryan Roberts wrote:
> On 18/09/2025 16:05, Yang Shi wrote:
>>
>> On 9/18/25 5:48 AM, Catalin Marinas wrote:
>>> On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
>>>> The kprobe page is allocated by execmem allocator with ROX permission.
>>>> It needs to call set_memory_rox() to set proper permission for the
>>>> direct map too. It was missed.
>>>>
>>>> And the set_memory_rox() guarantees the direct map will be split if it
>>>> needs so that set_direct_map calls in vfree() won't fail.
>>>>
>>>> Fixes: 10d5e97c1bf8 ("arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page")
>>>> Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
>>>> ---
>>>> arch/arm64/kernel/probes/kprobes.c | 12 ++++++++++++
>>>> 1 file changed, 12 insertions(+)
>>>>
>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/
>>>> kprobes.c
>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>> @@ -10,6 +10,7 @@
>>>> #define pr_fmt(fmt) "kprobes: " fmt
>>>> +#include <linux/execmem.h>
>>>> #include <linux/extable.h>
>>>> #include <linux/kasan.h>
>>>> #include <linux/kernel.h>
>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>> static void __kprobes
>>>> post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs
>>>> *);
>>>> +void *alloc_insn_page(void)
>>>> +{
>>>> + void *page;
>>> Nit: I'd call this 'addr'. 'page' makes me think of a struct page.
>> Sure.
>>
>>>> +
>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>> + if (!page)
>>>> + return NULL;
>>>> + set_memory_rox((unsigned long)page, 1);
>>> It's unfortunate that we change the attributes of the ROX vmap first to
>>> RO, then back to ROX so that we get the linear map changed. Maybe
>>> factor out some of the code in change_memory_common() to only change the
>>> linear map.
>> I want to make sure I understand you correctly, you meant set_memory_rox()
>> should do:
>>
>> change linear map to RO (call a new helper, for example, set_direct_map_ro())
>> change vmap to ROX (call change_memory_common())
>>
>> Is it correct?
>>
>> If so set_memory_ro() should do the similar thing.
>>
>> And I think we should have the cleanup patch separate from this bug fix patch
>> because the bug fix patch should be applied to -stable release too. Keeping it
>> simpler makes the backport easier.
>>
>> Shall I squash the cleanup patch into patch #1?
>
> Personally I think we should drop this patch from the series and handle it
> separately.
>
> We worked out that the requirement is to either never call set_memory_*() or to
> call set_memory_*() for the entire vmalloc'ed range prior to optionally calling
> set_memory_*() for a sub-range in order to guarantee vm_reset_perms() works
> correctly.
>
> Given this is only allocating a single page, it is impossible to call
> set_memory_*() for a sub-range. So the requirement is met.
>
> I agree it looks odd/wrong to have different permissions in the linear map vs
> the vmap but that is an orthogonal bug that can be fixed separately.
>
> What do you think?
Yeah, sounds good to me.
Thanks,
Yang
>
> Thanks,
> Ryan
>
>
>> Thanks,
>> Yang
>>
>>> Otherwise it looks fine.
>>>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-18 15:05 ` Yang Shi
2025-09-18 15:30 ` Ryan Roberts
@ 2025-09-18 15:32 ` Catalin Marinas
2025-09-18 15:48 ` Yang Shi
1 sibling, 1 reply; 32+ messages in thread
From: Catalin Marinas @ 2025-09-18 15:32 UTC (permalink / raw)
To: Yang Shi
Cc: will, ryan.roberts, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, linux-arm-kernel, linux-kernel, linux-mm
On Thu, Sep 18, 2025 at 08:05:55AM -0700, Yang Shi wrote:
> On 9/18/25 5:48 AM, Catalin Marinas wrote:
> > On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
> > > + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
> > > + if (!page)
> > > + return NULL;
> > > + set_memory_rox((unsigned long)page, 1);
> > It's unfortunate that we change the attributes of the ROX vmap first to
> > RO, then back to ROX so that we get the linear map changed. Maybe
> > factor out some of the code in change_memory_common() to only change the
> > linear map.
>
> I want to make sure I understand you correctly, you meant set_memory_rox()
> should do:
>
> change linear map to RO (call a new helper, for example,
> set_direct_map_ro())
> change vmap to ROX (call change_memory_common())
set_memory_rox() is correct. What I meant is that in alloc_insn_page(),
execmem_alloc() already returns RX memory. Calling set_memory_rox() does
indeed change the linear map to RO, but it also changes the vmap memory
to RO and then back to RX. There's no need for alloc_insn_page() to do
this, but we shouldn't change set_memory_rox() itself; the latter is
correct. I was thinking of alloc_insn_page() calling a new function that
only changes the linear map.
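
For illustration, such a helper might look like the sketch below — names
are hypothetical (Yang's suggested set_direct_map_ro() is borrowed), and
it assumes the helper lives next to the currently-static
__change_memory_common() in arch/arm64/mm/pageattr.c, reusing the linear
map walk that change_memory_common() already performs:

static int set_direct_map_ro(void *addr, int numpages)
{
	struct vm_struct *area = find_vm_area(addr);
	int i, ret;

	if (!area || !(area->flags & VM_ALLOC) || numpages > area->nr_pages)
		return -EINVAL;

	/* Only touch the linear map alias of each backing page. */
	for (i = 0; i < numpages; i++) {
		u64 lm_addr = (u64)page_address(area->pages[i]);

		ret = __change_memory_common(lm_addr, PAGE_SIZE,
					     __pgprot(PTE_RDONLY),
					     __pgprot(PTE_WRITE));
		if (ret)
			return ret;
	}

	return 0;
}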
> And I think we should have the cleanup patch separate from this bug fix
> patch because the bug fix patch should be applied to -stable release too.
> Keeping it simpler makes the backport easier.
Yes, for now you can leave it as is, that's not a critical path.
> Shall I squash the cleanup patch into patch #1?
No, I'd leave it as a separate fix, especially if we want to backport
it.
Anyway, for now, with the nitpick on the address variable name:
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page
2025-09-18 15:32 ` Catalin Marinas
@ 2025-09-18 15:48 ` Yang Shi
0 siblings, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-18 15:48 UTC (permalink / raw)
To: Catalin Marinas
Cc: will, ryan.roberts, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, linux-arm-kernel, linux-kernel, linux-mm
On 9/18/25 8:32 AM, Catalin Marinas wrote:
> On Thu, Sep 18, 2025 at 08:05:55AM -0700, Yang Shi wrote:
>> On 9/18/25 5:48 AM, Catalin Marinas wrote:
>>> On Wed, Sep 17, 2025 at 12:02:11PM -0700, Yang Shi wrote:
>>>> + page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>> + if (!page)
>>>> + return NULL;
>>>> + set_memory_rox((unsigned long)page, 1);
>>> It's unfortunate that we change the attributes of the ROX vmap first to
>>> RO, then to back to ROX so that we get the linear map changed. Maybe
>>> factor out some of the code in change_memory_common() to only change the
>>> linear map.
>> I want to make sure I understand you correctly, you meant set_memory_rox()
>> should do:
>>
>> change linear map to RO (call a new helper, for example,
>> set_direct_map_ro())
>> change vmap to ROX (call change_memory_common())
> set_memory_rox() is correct. What I meant is that in alloc_insn_page(),
> execmem_alloc() already returns RX memory. Calling set_memory_rox() does
> indeed change the linear map to RO but it also changes the vmap memory
> to RO and then to RX. There's no need for the alloc_insn_page() to do
> this but we shouldn't change set_memory_rox() for this, the latter is
> correct. I was thinking of alloc_insn_page() calling a new function that
> only changes the linear map.
Aha, I see. If we have the new helper, it also allows us to refactor
set_memory_rox() to what I said.
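
For example, with the kind of helper sketched above (again hypothetical,
not part of this series), alloc_insn_page() could skip the RO-then-ROX
round trip on the vmap entirely:

void *alloc_insn_page(void)
{
	void *addr;

	addr = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
	if (!addr)
		return NULL;

	/*
	 * execmem already maps the vmap area ROX; only the linear map
	 * alias still needs its write permission removed.
	 */
	if (set_direct_map_ro(addr, 1)) {
		execmem_free(addr);
		return NULL;
	}

	return addr;
}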
>
>> And I think we should have the cleanup patch separate from this bug fix
>> patch because the bug fix patch should be applied to -stable release too.
>> Keeping it simpler makes the backport easier.
> Yes, for now you can leave it as is, that's not a critical path.
Sure.
>
>> Shall I squash the cleanup patch into patch #1?
> No, I'd leave it as a separate fix, especially if we want to backport
> it.
I meant the potential cleanup patch. Anyway it can be handled in a
separate patch at anytime.
>
> Anyway, for now, with the nitpick on the address variable name:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thank you. Ryan also suggested separating the fix from this series. I
will fix the variable name nit, then post it separately instead of
posting a new series.
Yang
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-17 19:02 [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Yang Shi
` (4 preceding siblings ...)
2025-09-17 19:02 ` [PATCH v8 5/5] arm64: kprobes: call set_memory_rox() for kprobe page Yang Shi
@ 2025-09-18 21:10 ` Will Deacon
2025-09-19 10:08 ` Ryan Roberts
2025-09-19 14:55 ` Yang Shi
5 siblings, 2 replies; 32+ messages in thread
From: Will Deacon @ 2025-09-18 21:10 UTC (permalink / raw)
To: catalin.marinas, ryan.roberts, akpm, david, lorenzo.stoakes,
ardb, dev.jain, scott, cl, Yang Shi
Cc: kernel-team, Will Deacon, linux-arm-kernel, linux-kernel, linux-mm
On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
> with large blocks, even when rodata=full, and leads to some nice performance
> improvements.
>
> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
> modes by hacking the BBML2 feature detection code:
>
> [...]
Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
[1/5] arm64: Enable permission change on arm64 kernel block mappings
https://git.kernel.org/arm64/c/a660194dd101
[3/5] arm64: mm: support large block mapping when rodata=full
https://git.kernel.org/arm64/c/a166563e7ec3
I also picked up the BBML allow-list addition (second patch) on
for-next/cpufeature.
The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
on secondary CPUs") has some really horrible conflicts. These are partly
due to some of the type cleanups on for-next/mm but I think mainly due
to Kevin's kpti rework that landed after -rc1.
So I think the best bet might be to leave that one for next time, if
that's ok?
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-18 21:10 ` [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Will Deacon
@ 2025-09-19 10:08 ` Ryan Roberts
2025-09-19 11:27 ` Will Deacon
2025-09-19 14:55 ` Yang Shi
1 sibling, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-09-19 10:08 UTC (permalink / raw)
To: Will Deacon, catalin.marinas, akpm, david, lorenzo.stoakes, ardb,
dev.jain, scott, cl, Yang Shi
Cc: kernel-team, linux-arm-kernel, linux-kernel, linux-mm
On 18/09/2025 22:10, Will Deacon wrote:
> On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
>> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
>> with large blocks, even when rodata=full, and leads to some nice performance
>> improvements.
>>
>> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>> modes by hacking the BBML2 feature detection code:
>>
>> [...]
>
> Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
>
> [1/5] arm64: Enable permission change on arm64 kernel block mappings
> https://git.kernel.org/arm64/c/a660194dd101
> [3/5] arm64: mm: support large block mapping when rodata=full
> https://git.kernel.org/arm64/c/a166563e7ec3
>
> I also picked up the BBML allow-list addition (second patch) on
> for-next/cpufeature.
>
> The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
> on secondary CPUs") has some really horrible conflicts. These are partly
> due to some of the type cleanups on for-next/mm but I think mainly due
> to Kevin's kpti rework that landed after -rc1.
Thanks Will, although I'm nervous that without this patch, some platforms might
not boot; Wikipedia tells me that there are some Google, Mediatek and Qualcomm
SoCs that pair X4 CPUs (which is on the BBML2_NOABORT allow list) with A720
and/or A520 (which are not). See previous mail at [1].
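To make the risk concrete, here is a toy, self-contained sketch of the
constraint (the MIDR values and helpers are simplified stand-ins, not
the kernel's actual cpufeature code): BBML2_NOABORT can only be relied
on if every CPU that may come online is on the allow list, so a single
A520/A720 paired with an X4 disqualifies the whole SoC unless the
linear map can be split back to page granularity.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's MIDR allow list. */
static const uint32_t bbml2_noabort_allow_list[] = {
	0x410fd820,	/* Cortex-X4 (illustrative value) */
	0xc00fac30,	/* AmpereOne (illustrative value) */
};

static bool cpu_on_allow_list(uint32_t midr)
{
	for (size_t i = 0; i < sizeof(bbml2_noabort_allow_list) /
			   sizeof(bbml2_noabort_allow_list[0]); i++)
		if (midr == bbml2_noabort_allow_list[i])
			return true;
	return false;
}

/*
 * System-wide BBML2_NOABORT requires every possible CPU to be on the
 * allow list; otherwise the large-block linear map is unsafe once a
 * non-supporting secondary CPU comes online.
 */
static bool system_supports_bbml2_noabort(const uint32_t *midrs, int nr)
{
	for (int i = 0; i < nr; i++)
		if (!cpu_on_allow_list(midrs[i]))
			return false;
	return true;
}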
I'd be happy to rebase it if you can let me know the preferred base SHA/tree?
[1]
https://lore.kernel.org/linux-arm-kernel/11f84d00-8c76-402d-bbad-014a3542992f@arm.com/
Thanks,
Ryan
>
> So I think the best bet might be to leave that one for next time, if
> that's ok?
>
> Cheers,
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 10:08 ` Ryan Roberts
@ 2025-09-19 11:27 ` Will Deacon
2025-09-19 11:49 ` Ryan Roberts
0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2025-09-19 11:27 UTC (permalink / raw)
To: Ryan Roberts
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On Fri, Sep 19, 2025 at 11:08:47AM +0100, Ryan Roberts wrote:
> On 18/09/2025 22:10, Will Deacon wrote:
> > On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
> >> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
> >> with large blocks, even when rodata=full, and leads to some nice performance
> >> improvements.
> >>
> >> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
> >> modes by hacking the BBML2 feature detection code:
> >>
> >> [...]
> >
> > Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
> >
> > [1/5] arm64: Enable permission change on arm64 kernel block mappings
> > https://git.kernel.org/arm64/c/a660194dd101
> > [3/5] arm64: mm: support large block mapping when rodata=full
> > https://git.kernel.org/arm64/c/a166563e7ec3
> >
> > I also picked up the BBML allow-list addition (second patch) on
> > for-next/cpufeature.
> >
> > The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
> > on secondary CPUs") has some really horrible conflicts. These are partly
> > due to some of the type cleanups on for-next/mm but I think mainly due
> > to Kevin's kpti rework that landed after -rc1.
>
> Thanks Will, although I'm nervous that without this patch, some platforms might
> not boot; Wikipedia tells me that there are some Google, Mediatek and Qualcomm
> SoCs that pair X4 CPUs (which is on the BBML2_NOABORT allow list) with A720
> and/or A520 (which are not). See previous mail at [1].
I'd be surprised if these SoCs are booting on the X4 but who knows.
Lemme have another look at applying the patch with fresh eyes, but I do
wonder whether having X4 on the allow list really makes any sense. Are
there any SoCs out there that _don't_ pair it with CPUs that aren't on
the allow list? (apologies for the double negative).
Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 11:27 ` Will Deacon
@ 2025-09-19 11:49 ` Ryan Roberts
2025-09-19 11:56 ` Will Deacon
0 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-09-19 11:49 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On 19/09/2025 12:27, Will Deacon wrote:
> On Fri, Sep 19, 2025 at 11:08:47AM +0100, Ryan Roberts wrote:
>> On 18/09/2025 22:10, Will Deacon wrote:
>>> On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
>>>> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
>>>> with large blocks, even when rodata=full, and leads to some nice performance
>>>> improvements.
>>>>
>>>> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>>>> modes by hacking the BBML2 feature detection code:
>>>>
>>>> [...]
>>>
>>> Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
>>>
>>> [1/5] arm64: Enable permission change on arm64 kernel block mappings
>>> https://git.kernel.org/arm64/c/a660194dd101
>>> [3/5] arm64: mm: support large block mapping when rodata=full
>>> https://git.kernel.org/arm64/c/a166563e7ec3
>>>
>>> I also picked up the BBML allow-list addition (second patch) on
>>> for-next/cpufeature.
>>>
>>> The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
>>> on secondary CPUs") has some really horrible conflicts. These are partly
>>> due to some of the type cleanups on for-next/mm but I think mainly due
>>> to Kevin's kpti rework that landed after -rc1.
>>
>> Thanks Will, although I'm nervous that without this patch, some platforms might
>> not boot; Wikipedia tells me that there are some Google, Mediatek and Qualcomm
>> SoCs that pair X4 CPUs (which is on the BBML2_NOABORT allow list) with A720
>> and/or A520 (which are not). See previous mail at [1].
>
> I'd be surprised if these SoCs are booting on the X4 but who knows.
Ahh. You can probably tell I'm a bit naive about some of this system-level stuff...
I had assumed they would want to boot on the big CPU to reduce boot time.
>
> Lemme have another look at applying the patch with fresh eyes, but I do
> wonder whether having X4 on the allow list really makes any sense. Are
> there any SoCs out there that _don't_ pair it with CPUs that aren't on
> the allow list? (apologies for the double negative).
Hmm, that's a fair question. I'm not aware of any. So I guess the simplest
solution is to remove X4 from the allow list and ditch the fourth patch.
>
> Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 11:49 ` Ryan Roberts
@ 2025-09-19 11:56 ` Will Deacon
2025-09-19 12:00 ` Ryan Roberts
0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2025-09-19 11:56 UTC (permalink / raw)
To: Ryan Roberts
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On Fri, Sep 19, 2025 at 12:49:22PM +0100, Ryan Roberts wrote:
> On 19/09/2025 12:27, Will Deacon wrote:
> > On Fri, Sep 19, 2025 at 11:08:47AM +0100, Ryan Roberts wrote:
> >> On 18/09/2025 22:10, Will Deacon wrote:
> >>> On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
> >>>> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
> >>>> with large blocks, even when rodata=full, and leads to some nice performance
> >>>> improvements.
> >>>>
> >>>> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
> >>>> modes by hacking the BBML2 feature detection code:
> >>>>
> >>>> [...]
> >>>
> >>> Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
> >>>
> >>> [1/5] arm64: Enable permission change on arm64 kernel block mappings
> >>> https://git.kernel.org/arm64/c/a660194dd101
> >>> [3/5] arm64: mm: support large block mapping when rodata=full
> >>> https://git.kernel.org/arm64/c/a166563e7ec3
> >>>
> >>> I also picked up the BBML allow-list addition (second patch) on
> >>> for-next/cpufeature.
> >>>
> >>> The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
> >>> on secondary CPUs") has some really horrible conflicts. These are partly
> >>> due to some of the type cleanups on for-next/mm but I think mainly due
> >>> to Kevin's kpti rework that landed after -rc1.
> >>
> >> Thanks Will, although I'm nervous that without this patch, some platforms might
> >> not boot; Wikipedia tells me that there are some Google, Mediatek and Qualcomm
> >> SoCs that pair X4 CPUs (which is on the BBML2_NOABORT allow list) with A720
> >> and/or A520 (which are not). See previous mail at [1].
> >
> > I'd be surprised if these SoCs are booting on the X4 but who knows.
>
> Ahh. You can probably tell I'm a bit naive about some of this system-level stuff...
> I had assumed they would want to boot on the big CPU to reduce boot time.
One of the problems is that the boot CPU becomes CPU0 and that inevitably
means it ends up being responsible for a tonne of extra stuff (interrupts,
TZ, etc) and in many cases can't be offlined. So it's all a trade-off.
> > Lemme have another look at applying the patch with fresh eyes, but I do
> > wonder whether having X4 on the allow list really makes any sense. Are
> > there any SoCs out there that _don't_ pair it with CPUs that aren't on
> > the allow list? (apologies for the double negative).
>
> Hmm, that's a fair question. I'm not aware of any. So I guess the simplest
> solution is to remove X4 from the allow list and ditch the fourth patch.
That's probably a good idea but I have a horrible feeling we _are_ going
to need your patch once the errata start flying about :)
So how about we:
- Remove X4 from the list
- I try harder to apply your patch for secondary CPUs...
- ... if I fail, we can apply it next time around
Sound reasonable?
Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 11:56 ` Will Deacon
@ 2025-09-19 12:00 ` Ryan Roberts
2025-09-19 18:44 ` Will Deacon
0 siblings, 1 reply; 32+ messages in thread
From: Ryan Roberts @ 2025-09-19 12:00 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On 19/09/2025 12:56, Will Deacon wrote:
> On Fri, Sep 19, 2025 at 12:49:22PM +0100, Ryan Roberts wrote:
>> On 19/09/2025 12:27, Will Deacon wrote:
>>> On Fri, Sep 19, 2025 at 11:08:47AM +0100, Ryan Roberts wrote:
>>>> On 18/09/2025 22:10, Will Deacon wrote:
>>>>> On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
>>>>>> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
>>>>>> with large blocks, even when rodata=full, and leads to some nice performance
>>>>>> improvements.
>>>>>>
>>>>>> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>>>>>> modes by hacking the BBML2 feature detection code:
>>>>>>
>>>>>> [...]
>>>>>
>>>>> Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
>>>>>
>>>>> [1/5] arm64: Enable permission change on arm64 kernel block mappings
>>>>> https://git.kernel.org/arm64/c/a660194dd101
>>>>> [3/5] arm64: mm: support large block mapping when rodata=full
>>>>> https://git.kernel.org/arm64/c/a166563e7ec3
>>>>>
>>>>> I also picked up the BBML allow-list addition (second patch) on
>>>>> for-next/cpufeature.
>>>>>
>>>>> The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
>>>>> on secondary CPUs") has some really horrible conflicts. These are partly
>>>>> due to some of the type cleanups on for-next/mm but I think mainly due
>>>>> to Kevin's kpti rework that landed after -rc1.
>>>>
>>>> Thanks Will, although I'm nervous that without this patch, some platforms might
>>>> not boot; Wikipedia tells me that there are some Google, Mediatek and Qualcomm
>>>> SoCs that pair X4 CPUs (which is on the BBML2_NOABORT allow list) with A720
>>>> and/or A520 (which are not). See previous mail at [1].
>>>
>>> I'd be surprised if these SoCs are booting on the X4 but who knows.
>>
>> Ahh. You can probably tell I'm a bit naive about some of this system-level stuff...
>> I had assumed they would want to boot on the big CPU to reduce boot time.
>
> One of the problems is that the boot CPU becomes CPU0 and that inevitably
> means it ends up being responsible for a tonne of extra stuff (interrupts,
> TZ, etc) and in many cases can't be offlined. So it's all a trade-off.
>
>>> Lemme have another look at applying the patch with fresh eyes, but I do
>>> wonder whether having X4 on the allow list really makes any sense. Are
>>> there any SoCs out there that _don't_ pair it with CPUs that aren't on
>>> the allow list? (apologies for the double negative).
>>
>> Hmm, that's a fair question. I'm not aware of any. So I guess the simplest
>> solution is to remove X4 from the allow list and ditch the fourth patch.
>
> That's probably a good idea but I have a horrible feeling we _are_ going
> to need your patch once the errata start flying about :)
>
> So how about we:
>
> - Remove X4 from the list
> - I try harder to apply your patch for secondary CPUs...
> - ... if I fail, we can apply it next time around
>
> Sound reasonable?
Yeah that works for me. Cheers!
>
> Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 12:00 ` Ryan Roberts
@ 2025-09-19 18:44 ` Will Deacon
2025-09-23 7:15 ` Ryan Roberts
0 siblings, 1 reply; 32+ messages in thread
From: Will Deacon @ 2025-09-19 18:44 UTC (permalink / raw)
To: Ryan Roberts
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On Fri, Sep 19, 2025 at 01:00:49PM +0100, Ryan Roberts wrote:
> On 19/09/2025 12:56, Will Deacon wrote:
> > So how about we:
> >
> > - Remove X4 from the list
> > - I try harder to apply your patch for secondary CPUs...
> > - ... if I fail, we can apply it next time around
> >
> > Sound reasonable?
>
> Yeah that works for me. Cheers!
So after all that, the conflict was straightforward once I sat down and
looked at it properly.
Please can you check for-next/core? I forcefully triggered the
repainting path in qemu and it booted without any problems.
Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-19 18:44 ` Will Deacon
@ 2025-09-23 7:15 ` Ryan Roberts
0 siblings, 0 replies; 32+ messages in thread
From: Ryan Roberts @ 2025-09-23 7:15 UTC (permalink / raw)
To: Will Deacon
Cc: catalin.marinas, akpm, david, lorenzo.stoakes, ardb, dev.jain,
scott, cl, Yang Shi, kernel-team, linux-arm-kernel, linux-kernel,
linux-mm
On 19/09/2025 19:44, Will Deacon wrote:
> On Fri, Sep 19, 2025 at 01:00:49PM +0100, Ryan Roberts wrote:
>> On 19/09/2025 12:56, Will Deacon wrote:
>>> So how about we:
>>>
>>> - Remove X4 from the list
>>> - I try harder to apply your patch for secondary CPUs...
>>> - ... if I fail, we can apply it next time around
>>>
>>> Sound reasonable?
>>
>> Yeah that works for me. Cheers!
>
> So after all that, the conflict was straightforward once I sat down and
> looked at it properly.
>
> Please can you check for-next/core? I forcefully triggered the
> repainting path in qemu and it booted without any problems.
Thanks Will, I took a look and didn't spot any problems. Thanks for squeezing
this in.
>
> Will
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
2025-09-18 21:10 ` [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full Will Deacon
2025-09-19 10:08 ` Ryan Roberts
@ 2025-09-19 14:55 ` Yang Shi
1 sibling, 0 replies; 32+ messages in thread
From: Yang Shi @ 2025-09-19 14:55 UTC (permalink / raw)
To: Will Deacon, catalin.marinas, ryan.roberts, akpm, david,
lorenzo.stoakes, ardb, dev.jain, scott, cl
Cc: kernel-team, linux-arm-kernel, linux-kernel, linux-mm
On 9/18/25 2:10 PM, Will Deacon wrote:
> On Wed, 17 Sep 2025 12:02:06 -0700, Yang Shi wrote:
>> On systems with BBML2_NOABORT support, it causes the linear map to be mapped
>> with large blocks, even when rodata=full, and leads to some nice performance
>> improvements.
>>
>> Ryan tested v7 on an AmpereOne system (a VM with 12G RAM) in all 3 possible
>> modes by hacking the BBML2 feature detection code:
>>
>> [...]
> Applied patches 1 and 3 to arm64 (for-next/mm), thanks!
>
> [1/5] arm64: Enable permission change on arm64 kernel block mappings
> https://git.kernel.org/arm64/c/a660194dd101
> [3/5] arm64: mm: support large block mapping when rodata=full
> https://git.kernel.org/arm64/c/a166563e7ec3
>
> I also picked up the BBML allow-list addition (second patch) on
> for-next/cpufeature.
Hi Will,
Thank you so much!
>
> The fourth patch ("arm64: mm: split linear mapping if BBML2 unsupported
> on secondary CPUs") has some really horrible conflicts. These are partly
> due to some of the type cleanups on for-next/mm but I think mainly due
> to Kevin's kpti rework that landed after -rc1.
>
> So I think the best bet might be to leave that one for next time, if
> that's ok?
I saw you and Ryan just figured out how to move forward. You guys are
definitely more knowledgeable than me regarding asymmetric systems.
Thanks,
Yang
>
> Cheers,
^ permalink raw reply [flat|nested] 32+ messages in thread