* [PATCH v10 01/13] kasan: sw_tags: Use arithmetic shift for shadow computation
2026-02-04 19:18 [PATCH v10 00/13] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
@ 2026-02-04 19:19 ` Maciej Wieczor-Retman
2026-02-04 19:19 ` [PATCH v10 02/13] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Maciej Wieczor-Retman @ 2026-02-04 19:19 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Jonathan Corbet, Andrey Ryabinin,
Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Andrew Morton, Jan Kiszka, Kieran Bingham,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt
Cc: m.wieczorretman, Samuel Holland, Maciej Wieczor-Retman,
linux-arm-kernel, linux-doc, linux-kernel, kasan-dev, workflows,
linux-mm, llvm
From: Samuel Holland <samuel.holland@sifive.com>
Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.
For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.
However, for KASAN_SW_TAGS there is some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift in
the tag check fast path[2] but an sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.
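To see the two mappings side by side, here is a standalone sketch (not
part of the patch; the offsets are the old and new arm64 VA_BITS_48
SW_TAGS values from the Kconfig change below, and the shift of 4 is the
SW_TAGS granule shift). Both variants map a kernel address to the same
shadow address:

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	#define SCALE_SHIFT	4
	#define OLD_OFFSET	0xefff800000000000ULL	/* logical-shift era */
	#define NEW_OFFSET	0xffff800000000000ULL	/* arithmetic-shift era */

	int main(void)
	{
		uint64_t addr = 0xffff123456789ab0ULL;	/* canonical kernel address */

		/* Logical shift: clears the top bits; OLD_OFFSET is chosen so
		 * that the addition lands back in canonical space. */
		uint64_t logical = (addr >> SCALE_SHIFT) + OLD_OFFSET;

		/* Arithmetic shift: sign-extends, so the shifted pointer is a
		 * negative offset and NEW_OFFSET doubles as the shadow end. */
		uint64_t arith = (uint64_t)((int64_t)addr >> SCALE_SHIFT) + NEW_OFFSET;

		/* Both print 0xffff7123456789ab: the layout doesn't change. */
		printf("%#" PRIx64 "\n%#" PRIx64 "\n", logical, arith);
		return 0;
	}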
Use an arithmetic shift in kasan_mem_to_shadow() as it provides a number
of benefits:
1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.
2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.
3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.
These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent of the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
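To make benefits 2) and 3) concrete, a sketch (illustrative, not the
patch's code) of how the tag reset and the scaling shift compose once
both are sign-extending operations:

	#include <stdint.h>

	#define SCALE_SHIFT	4

	/* kasan_reset_tag() on arm64: drop the TBI tag byte by
	 * sign-extending from bit 55. */
	static inline int64_t reset_tag(uint64_t addr)
	{
		return (int64_t)(addr << 8) >> 8;
	}

	/* reset_tag() followed by the arithmetic shift keeps bits [55:4]
	 * sign-extended -- exactly one signed bitfield extract (sbfx on
	 * arm64), so the compiler can drop an instruction per inline tag
	 * check. On RISC-V the same composition folds the two consecutive
	 * right shifts into one. */
	static inline int64_t scale(uint64_t addr)
	{
		return reset_tag(addr) >> SCALE_SHIFT;
	}

	/* Benefit 2): the new offset 0xffff800000000000 is a single run of
	 * ones and so (assuming arm64 bitmask-immediate encoding rules)
	 * loads in one instruction, unlike the old 0xefff800000000000 --
	 * consistent with the text-size numbers quoted above. */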
Add an arch_kasan_non_canonical_hook() so the arch specific parts of the
non-canonical address reporting live in the relevant arch directories.
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Co-developed-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v10: (Maciej)
- Update the Documentation/dev-tools/kasan.rst file with the changed
kasan_mem_to_shadow().
Changelog v9: (Maciej)
- Take out the arm64 related code from mm/kasan/report.c and put it in
the arch specific directory in a new file so the kasan_mem_to_shadow()
function can be included.
- Reset addr tag bits in arm64's arch_kasan_non_canonical_hook() so the
inline mode can also work with that function (Andrey Ryabinin).
- Fix incorrect number of zeros in a comment in mm/kasan/report.c.
- Remove Catalin's acked-by since changes were made.
Changelog v7: (Maciej)
- Change UL to ULL in report.c to fix some compilation warnings.
Changelog v6: (Maciej)
- Add Catalin's acked-by.
- Move x86 gdb snippet here from the last patch.
Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c
Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
the comments in kasan_non_canonical_hook().
Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion. Settled on overflow on both ranges and separate checks for
x86 and arm.
Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
kasan_non_canonical_hook().
Documentation/arch/arm64/kasan-offsets.sh | 8 ++++--
Documentation/dev-tools/kasan.rst | 18 ++++++++----
MAINTAINERS | 2 +-
arch/arm64/Kconfig | 10 +++----
arch/arm64/include/asm/kasan.h | 5 ++++
arch/arm64/include/asm/memory.h | 14 ++++++++-
arch/arm64/mm/Makefile | 2 ++
arch/arm64/mm/kasan_init.c | 7 +++--
arch/arm64/mm/kasan_sw_tags.c | 35 +++++++++++++++++++++++
include/linux/kasan.h | 10 +++++--
mm/kasan/kasan.h | 7 +++++
mm/kasan/report.c | 15 ++++++++--
scripts/gdb/linux/kasan.py | 5 +++-
scripts/gdb/linux/mm.py | 5 ++--
14 files changed, 118 insertions(+), 25 deletions(-)
create mode 100644 arch/arm64/mm/kasan_sw_tags.c
diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
index 2dc5f9e18039..ce777c7c7804 100644
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@
print_kasan_offset () {
printf "%02d\t" $1
- printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
- - (1 << (64 - 32 - $2)) ))
+	if [[ $2 -ne 4 ]]; then
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+ - (1 << (64 - 32 - $2)) ))
+ else
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+ fi
}
echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index a034700da7c4..64dbf8b308bd 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -318,13 +318,19 @@ translate a memory address to its corresponding shadow address.
Here is the function which translates an address to its corresponding shadow
address::
- static inline void *kasan_mem_to_shadow(const void *addr)
- {
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
- }
+ static inline void *kasan_mem_to_shadow(const void *addr)
+ {
+ void *scaled;
-where ``KASAN_SHADOW_SCALE_SHIFT = 3``.
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
+ }
+
+where ``KASAN_SHADOW_SCALE_SHIFT = 3`` for Generic KASAN (and ``4`` for Software Tag-Based KASAN).
Compile-time instrumentation is used to insert memory access checks. Compiler
inserts function calls (``__asan_load*(addr)``, ``__asan_store*(addr)``) before
diff --git a/MAINTAINERS b/MAINTAINERS
index 0efa8cc6775b..bbcb5bf5e2c6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13587,7 +13587,7 @@ S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
F: arch/*/include/asm/*kasan.h
-F: arch/*/mm/kasan_init*
+F: arch/*/mm/kasan*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
F: mm/kasan/
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..c1b7261cdb96 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -434,11 +434,11 @@ config KASAN_SHADOW_OFFSET
default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
- default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
- default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
- default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
- default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
- default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+ default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+ default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+ default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+ default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+ default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b167e9d3da91..42d8e3092835 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -22,5 +22,10 @@ void kasan_init(void);
static inline void kasan_init(void) { }
#endif
+#ifdef CONFIG_KASAN_SW_TAGS
+bool __arch_kasan_non_canonical_hook(unsigned long addr);
+#define arch_kasan_non_canonical_hook(addr) __arch_kasan_non_canonical_hook(addr)
+#endif
+
#endif
#endif
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9d54b2ea49d6..f127fbf691ac 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
*
* KASAN_SHADOW_END is defined first as the shadow address that corresponds to
* the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
*
* KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
* memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
*/
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
+#endif
#define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
#define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
#define PAGE_END KASAN_SHADOW_START
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index c26489cf96cd..4658d59b7ea6 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -15,4 +15,6 @@ obj-$(CONFIG_ARM64_GCS) += gcs.o
KASAN_SANITIZE_physaddr.o += n
obj-$(CONFIG_KASAN) += kasan_init.o
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o
KASAN_SANITIZE_kasan_init.o := n
+KASAN_SANITIZE_kasan_sw_tags.o := n
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index abeb81bf6ebd..937f6eb8115b 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
/* The early shadow maps everything to a single page of zeroes */
asmlinkage void __init kasan_early_init(void)
{
- BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
- KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+ KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ else
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/arch/arm64/mm/kasan_sw_tags.c b/arch/arm64/mm/kasan_sw_tags.c
new file mode 100644
index 000000000000..d509db7bdc7e
--- /dev/null
+++ b/arch/arm64/mm/kasan_sw_tags.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This file contains ARM64 specific KASAN sw_tags code.
+ */
+
+#include <linux/kasan.h>
+
+bool __arch_kasan_non_canonical_hook(unsigned long addr)
+{
+ /*
+ * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+ * arithmetic shift. Normally, this would make checking for a possible
+ * shadow address complicated, as the shadow address computation
+ * operation would overflow only for some memory addresses. However, due
+	 * to the chosen KASAN_SHADOW_OFFSET values and the fact that
+	 * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+ * the overflow always happens.
+ *
+ * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+ * possible shadow addresses belong to a region that is the result of
+ * kasan_mem_to_shadow() applied to the memory range
+ * [0xFF00000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+ * resulting possible shadow region is contiguous, as the overflow
+ * happens for both 0xFF00000000000000 and 0xFFFFFFFFFFFFFFFF.
+ *
+	 * Reset the addr's tag bits so that the inline mode, which still
+	 * uses the logical shift, can work correctly. Otherwise the hook
+	 * would always return true because of the 'smaller than' comparison
+	 * below.
+ */
+ addr |= (0xFFULL << 56);
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFULL << 56)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
+ return true;
+ return false;
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 338a1921a50a..81c83dcfcebe 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -62,8 +62,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
+ void *scaled;
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
}
#endif
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index fc9169a54766..02574e53d980 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -558,6 +558,13 @@ static inline bool kasan_arch_is_ready(void) { return true; }
#error kasan_arch_is_ready only works in KASAN generic outline mode!
#endif
+#ifndef arch_kasan_non_canonical_hook
+static inline bool arch_kasan_non_canonical_hook(unsigned long addr)
+{
+ return false;
+}
+#endif
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
void kasan_kunit_test_suite_start(void);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..53152d148deb 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,10 +642,19 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;
/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * and never overflows with the chosen KASAN_SHADOW_OFFSET values. Thus,
+ * the possible shadow addresses (even for bogus pointers) belong to a
+ * single contiguous region that is the result of kasan_mem_to_shadow()
+ * applied to the whole address space.
*/
- if (addr < KASAN_SHADOW_OFFSET)
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
+ return;
+ }
+
+ if (arch_kasan_non_canonical_hook(addr))
return;
orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,8 @@
#
import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
+from ctypes import c_int64 as s64
def help():
t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
+        if constants.LX_CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
+            addr = s64(addr).value
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
self.KERNEL_END = gdb.parse_and_eval("_end")
if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+ self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
if constants.LX_CONFIG_KASAN_GENERIC:
self.KASAN_SHADOW_SCALE_SHIFT = 3
+ self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
else:
self.KASAN_SHADOW_SCALE_SHIFT = 4
- self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
- self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+ self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
else:
self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.53.0
* [PATCH v10 02/13] kasan: arm64: x86: Make special tags arch specific
2026-02-04 19:18 [PATCH v10 00/13] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-02-04 19:19 ` [PATCH v10 01/13] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2026-02-04 19:19 ` Maciej Wieczor-Retman
2026-02-04 19:19 ` [PATCH v10 04/13] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
2026-02-04 19:20 ` [PATCH v10 06/13] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
3 siblings, 0 replies; 5+ messages in thread
From: Maciej Wieczor-Retman @ 2026-02-04 19:19 UTC (permalink / raw)
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko
Cc: m.wieczorretman, Samuel Holland, Maciej Wieczor-Retman,
linux-kernel, kasan-dev, linux-arm-kernel, linux-mm
From: Samuel Holland <samuel.holland@sifive.com>
KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- Native kernel value. On arm64 it's 0xFF and it causes an early return
in the tag checking function.
- Invalid value. 0xFE marks an area as freed / unallocated. It's also
the value that is used to initialize regions of shadow memory.
- Min and max values. 0xFD is the highest value that can be randomly
generated for a new tag. 0 is the minimum value, with the exception of
arm64's hardware mode where it is equal to 0xF0.
A metadata macro is also defined:
- Tag width equal to 8.
Tag-based mode on x86 is going to use 4-bit wide tags, so all the above
values need to be changed accordingly.
Make tag width and native kernel tag arch specific for x86 and arm64.
Base the invalid tag value and the max value on the native kernel tag
since they follow the same pattern on both mentioned architectures.
Also generalize KASAN_SHADOW_INIT and 0xff used in various
page_kasan_tag* helpers.
Give KASAN_TAG_MIN the default value of zero, and move the special value
for hw_tags arm64 to its arch specific kasan-tags.h.
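For reference, the values the generalized definitions work out to (a
summary derived from this patch and the x86 header added later in the
series):

	/* include/linux/kasan-tags.h after this patch:
	 *   KASAN_TAG_INVALID = KASAN_TAG_KERNEL - 1
	 *   KASAN_TAG_MAX     = KASAN_TAG_KERNEL - 2
	 *
	 * arm64 SW_TAGS (8-bit): kernel 0xFF, invalid 0xFE, max 0xFD, min 0x00
	 * arm64 HW_TAGS (4-bit): kernel 0xFF, invalid 0xFE, max 0xFD, min 0xF0
	 * x86   SW_TAGS (4-bit): kernel 0xF,  invalid 0xE,  max 0xD,  min 0x0
	 */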
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Co-developed-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Will Deacon <will@kernel.org> (for the arm part)
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
---
Changelog v9:
- Add Andrey Ryabinin's Reviewed-by tag.
- Add Andrey Konovalov's Reviewed-by tag.
Changelog v8:
- Add Will's Acked-by tag.
Changelog v7:
- Reorder defines of arm64 tag width to prevent redefinition warnings.
- Remove KASAN_TAG_MASK so it's only defined in mmzone.h (Andrey
Konovalov)
- Merge the 'support tag widths less than 8 bits' with this patch since
they do similar things and overwrite each other. (Alexander)
Changelog v6:
- Add hardware tags KASAN_TAG_WIDTH value to the arm64 arch file.
- Keep KASAN_TAG_MASK in the mmzone.h.
- Remove ifndef from KASAN_SHADOW_INIT.
Changelog v5:
- Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
mode case.
Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.
Changelog v2:
- Remove risc-v from the patch.
MAINTAINERS | 2 +-
arch/arm64/include/asm/kasan-tags.h | 14 ++++++++++++++
arch/arm64/include/asm/kasan.h | 2 --
arch/arm64/include/asm/uaccess.h | 1 +
arch/x86/include/asm/kasan-tags.h | 9 +++++++++
include/linux/kasan-tags.h | 19 ++++++++++++++-----
include/linux/kasan.h | 3 +--
include/linux/mm.h | 6 +++---
include/linux/page-flags-layout.h | 9 +--------
9 files changed, 44 insertions(+), 21 deletions(-)
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
diff --git a/MAINTAINERS b/MAINTAINERS
index bbcb5bf5e2c6..e27bdb0f100d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13586,7 +13586,7 @@ L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
-F: arch/*/include/asm/*kasan.h
+F: arch/*/include/asm/*kasan*.h
F: arch/*/mm/kasan*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..259952677443
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#define KASAN_TAG_WIDTH 4
+#else
+#define KASAN_TAG_WIDTH 8
+#endif
+
+#endif /* __ASM_KASAN_TAGS_H */
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 42d8e3092835..ad10931ddae7 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,6 @@
#include <linux/linkage.h>
#include <asm/memory.h>
-#include <asm/mte-kasan.h>
-#include <asm/pgtable-types.h>
#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6490930deef8..ccd41a39e3a1 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
#include <asm/cpufeature.h>
#include <asm/mmu.h>
#include <asm/mte.h>
+#include <asm/mte-kasan.h>
#include <asm/ptrace.h>
#include <asm/memory.h>
#include <asm/extable.h>
diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 4
+
+#endif /* __ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..ad5c11950233 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,22 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH 0
+#endif
+
+#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID (KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX (KASAN_TAG_KERNEL - 2) /* maximum value for random tags */
-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
#endif
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 81c83dcfcebe..c6febd1362e8 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -40,8 +40,7 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
/* Software KASAN implementations use shadow memory. */
#ifdef CONFIG_KASAN_SW_TAGS
-/* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
#else
#define KASAN_SHADOW_INIT 0
#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f0d5be9dc736..e424e68569a2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1953,7 +1953,7 @@ static inline u8 page_kasan_tag(const struct page *page)
if (kasan_enabled()) {
tag = (page->flags.f >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
}
return tag;
@@ -1966,7 +1966,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
if (!kasan_enabled())
return;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
old_flags = READ_ONCE(page->flags.f);
do {
flags = old_flags;
@@ -1985,7 +1985,7 @@ static inline void page_kasan_tag_reset(struct page *page)
static inline u8 page_kasan_tag(const struct page *page)
{
- return 0xff;
+ return KASAN_TAG_KERNEL;
}
static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
#define PAGE_FLAGS_LAYOUT_H
#include <linux/numa.h>
+#include <linux/kasan-tags.h>
#include <generated/bounds.h>
/*
@@ -72,14 +73,6 @@
#define NODE_NOT_IN_PAGE_FLAGS 1
#endif
-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
#define LAST__PID_SHIFT 8
#define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
--
2.53.0
* [PATCH v10 04/13] x86/kasan: Add arch specific kasan functions
2026-02-04 19:18 [PATCH v10 00/13] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-02-04 19:19 ` [PATCH v10 01/13] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
2026-02-04 19:19 ` [PATCH v10 02/13] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
@ 2026-02-04 19:19 ` Maciej Wieczor-Retman
2026-02-04 19:20 ` [PATCH v10 06/13] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
3 siblings, 0 replies; 5+ messages in thread
From: Maciej Wieczor-Retman @ 2026-02-04 19:19 UTC (permalink / raw)
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Axel Rasmussen, Yuanchu Xie, Wei Xu
Cc: m.wieczorretman, Maciej Wieczor-Retman, kasan-dev, linux-kernel,
linux-mm
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - to set, retrieve and reset tags
in the top bits of a pointer.
Mimic functions currently used by arm64 but change the tag's position to
bits [60:57] in the pointer.
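A usage sketch (illustrative values only) of the helpers added below,
showing a tag round-trip through bits [60:57]:

	/* Set tag 0xA on a kernel pointer: the top byte 0xFF becomes 0xF5
	 * because bits [60:57] now hold 0b1010. */
	void *p = __tag_set((void *)0xffffffff12345678UL, 0xA);
					/* p == (void *)0xf5ffffff12345678 */
	u8 tag = __tag_get(p);		/* tag == 0xA */

	/* __tag_reset() sign-extends from bit 56, restoring the canonical
	 * pointer. */
	void *q = (void *)__tag_reset(p);
					/* q == (void *)0xffffffff12345678 */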
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
---
Changelog v9:
- Rename KASAN_TAG_BYTE_MASK to KASAN_TAG_BITS_MASK.
- Add Andrey Ryabinin's Reviewed-by tag.
Changelog v7:
- Add KASAN_TAG_BYTE_MASK to avoid circular includes and avoid removing
KASAN_TAG_MASK from mmzone.h.
- Remove Andrey's Acked-by tag.
Changelog v6:
- Remove empty line after ifdef CONFIG_KASAN_SW_TAGS
- Add ifdef 64 bit to avoid problems in vdso32.
- Add Andrey's Acked-by tag.
Changelog v4:
- Rewrite __tag_set() without pointless casts and make it more readable.
Changelog v3:
- Reorder functions so that __tag_*() etc are above the
arch_kasan_*() ones.
- Remove CONFIG_KASAN condition from __tag_set()
arch/x86/include/asm/kasan.h | 42 ++++++++++++++++++++++++++++++++++--
include/linux/kasan-tags.h | 2 ++
include/linux/mmzone.h | 2 +-
3 files changed, 43 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..c868ae734f68 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
#define _ASM_X86_KASAN_H
#include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#define KASAN_SHADOW_SCALE_SHIFT 3
@@ -24,8 +26,43 @@
KASAN_SHADOW_SCALE_SHIFT)))
#ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
+#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag) 0UL
+#define __tag_reset(addr) (addr)
+#define __tag_get(addr) 0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+#ifdef CONFIG_64BIT
+static inline void *__tag_set(const void *__addr, u8 tag)
+{
+ u64 addr = (u64)__addr;
+
+ addr &= ~__tag_shifted(KASAN_TAG_BITS_MASK);
+ addr |= __tag_shifted(tag & KASAN_TAG_BITS_MASK);
+
+ return (void *)addr;
+}
+#else
+static inline void *__tag_set(void *__addr, u8 tag)
+{
+ return __addr;
+}
+#endif
+
+#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr) __tag_reset(addr)
+#define arch_kasan_get_tag(addr) __tag_get(addr)
#ifdef CONFIG_KASAN
+
void __init kasan_early_init(void);
void __init kasan_init(void);
void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +71,9 @@ static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
int nid) { }
-#endif
-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */
#endif
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index ad5c11950233..36cc4f70674a 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -10,6 +10,8 @@
#define KASAN_TAG_WIDTH 0
#endif
+#define KASAN_TAG_BITS_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
+
#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
#endif
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fc5d6c88d2f0..72c958cb6f3b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1177,7 +1177,7 @@ static inline bool zone_is_empty(const struct zone *zone)
#define NODES_MASK ((1UL << NODES_WIDTH) - 1)
#define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1)
#define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1)
-#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
+#define KASAN_TAG_MASK KASAN_TAG_BITS_MASK
#define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1)
static inline enum zone_type memdesc_zonenum(memdesc_flags_t flags)
--
2.53.0
* [PATCH v10 06/13] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
2026-02-04 19:18 [PATCH v10 00/13] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (2 preceding siblings ...)
2026-02-04 19:19 ` [PATCH v10 04/13] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
@ 2026-02-04 19:20 ` Maciej Wieczor-Retman
3 siblings, 0 replies; 5+ messages in thread
From: Maciej Wieczor-Retman @ 2026-02-04 19:20 UTC (permalink / raw)
To: Andrew Morton, Mike Rapoport, Uladzislau Rezki
Cc: m.wieczorretman, Maciej Wieczor-Retman, Alexander Potapenko,
linux-mm, linux-kernel
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
vm_reset_perms() calculates a range's start and end addresses using the
min() and max() functions. To do that it compares pointers, but with the
KASAN software tags mode enabled some of them are tagged - the addr
variable is, while the start and end variables aren't. This can cause
the wrong address to be chosen and result in various errors in different
places.
Reset the tag in the address used as the function argument to min() and
max().
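A sketch of the failure mode (addresses illustrative, using the x86
helpers from earlier in the series): one tagged operand breaks the
untagged ordering that min()/max() are meant to compute:

	unsigned long start = 0xffffffffc0000000UL;	/* untagged */
	unsigned long addr  =
		(unsigned long)__tag_set((void *)0xffffffffc0001000UL, 0xA);

	/* addr == 0xf5ffffffc0001000: numerically smaller than start even
	 * though it points 0x1000 above it, so min()/max() would pick the
	 * wrong bound unless the tag is reset first. */
	unsigned long lo = min(start, (unsigned long)kasan_reset_tag((void *)addr));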
execmem_cache_add() adds tagged pointers to a maple tree structure,
which are then compared incorrectly when walking the tree. That results
in different pointers being returned later, and in page permission
violation errors that panic the kernel.
Reset tag of the address range inserted into the maple tree inside
execmem_vmalloc() which then gets propagated to execmem_cache_add().
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
Changelog v10:
- Add Mike's acked-by tag.
Changelog v7:
- Add Alexander's acked-by tag.
- Add comments on why these tag resets are needed (Alexander)
Changelog v6:
- Move back the tag reset from execmem_cache_add() to execmem_vmalloc()
(Mike Rapoport)
- Rewrite the changelogs to match the code changes from v6 and v5.
Changelog v5:
- Remove the within_range() change.
- arch_kasan_reset_tag -> kasan_reset_tag.
Changelog v4:
- Add patch to the series.
mm/execmem.c | 9 ++++++++-
mm/vmalloc.c | 7 ++++++-
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..dc7422222cf7 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -59,7 +59,14 @@ static void *execmem_vmalloc(struct execmem_range *range, size_t size,
return NULL;
}
- return p;
+ /*
+ * Resetting the tag here is necessary to avoid the tagged address
+	 * ending up in the maple tree structure. There its linear address
+	 * can be incorrectly compared with other addresses, which can result
+	 * in a wrong address being picked down the line and, for example, a
+	 * page permission violation error panicking the kernel.
+ */
+ return kasan_reset_tag(p);
}
struct vm_struct *execmem_vmap(size_t size)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e286c2d2068c..2304d095c579 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3354,7 +3354,12 @@ static void vm_reset_perms(struct vm_struct *area)
* the vm_unmap_aliases() flush includes the direct map.
*/
for (i = 0; i < area->nr_pages; i += 1U << page_order) {
- unsigned long addr = (unsigned long)page_address(area->pages[i]);
+ /*
+		 * The address's tag needs resetting so it can be properly used
+		 * in the min() and max() below. Otherwise the wrong start or
+		 * end value might be chosen.
+ */
+ unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
if (addr) {
unsigned long page_size;
--
2.53.0