From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: nathan@kernel.org, arnd@arndb.de, broonie@kernel.org,
Liam.Howlett@oracle.com, urezki@gmail.com, will@kernel.org,
kaleshsingh@google.com, rppt@kernel.org, leitao@debian.org,
coxu@redhat.com, surenb@google.com, akpm@linux-foundation.org,
luto@kernel.org, jpoimboe@kernel.org, changyuanl@google.com,
hpa@zytor.com, dvyukov@google.com, kas@kernel.org,
corbet@lwn.net, vincenzo.frascino@arm.com, smostafa@google.com,
nick.desaulniers+lkml@gmail.com, morbo@google.com,
andreyknvl@gmail.com, alexander.shishkin@linux.intel.com,
thiago.bauermann@linaro.org, catalin.marinas@arm.com,
ryabinin.a.a@gmail.com, jan.kiszka@siemens.com, jbohac@suse.cz,
dan.j.williams@intel.com, joel.granados@kernel.org,
baohua@kernel.org, kevin.brodsky@arm.com,
nicolas.schier@linux.dev, pcc@google.com,
andriy.shevchenko@linux.intel.com, wei.liu@kernel.org,
bp@alien8.de, ada.coupriediaz@arm.com, xin@zytor.com,
pankaj.gupta@amd.com, vbabka@suse.cz, glider@google.com,
jgross@suse.com, kees@kernel.org, jhubbard@nvidia.com,
joey.gouly@arm.com, ardb@kernel.org, thuth@redhat.com,
pasha.tatashin@soleen.com, kristina.martsenko@arm.com,
bigeasy@linutronix.de, maciej.wieczor-retman@intel.com,
lorenzo.stoakes@oracle.com, jason.andryuk@amd.com,
david@redhat.com, graf@amazon.com, wangkefeng.wang@huawei.com,
ziy@nvidia.com, mark.rutland@arm.com,
dave.hansen@linux.intel.com, samuel.holland@sifive.com,
kbingham@kernel.org, trintaeoitogc@gmail.com,
scott@os.amperecomputing.com, justinstitt@google.com,
kuan-ying.lee@canonical.com, maz@kernel.org, tglx@linutronix.de,
samitolvanen@google.com, mhocko@suse.com,
nunodasneves@linux.microsoft.com, brgerst@gmail.com,
willy@infradead.org, ubizjak@gmail.com, peterz@infradead.org,
mingo@redhat.com, sohil.mehta@intel.com
Cc: linux-mm@kvack.org, linux-kbuild@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, x86@kernel.org,
llvm@lists.linux.dev, kasan-dev@googlegroups.com,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/18] kasan: arm64: x86: Handle int3 for inline KASAN reports
Date: Tue, 12 Aug 2025 15:23:49 +0200
Message-ID: <9030d5a35eb5a3831319881cb8cb040aad65b7b6.1755004923.git.maciej.wieczor-retman@intel.com>
In-Reply-To: <cover.1755004923.git.maciej.wieczor-retman@intel.com>
Inline KASAN on x86 reports tag mismatches by passing the faulting
address and access metadata through the INT3 instruction - a scheme
set up in LLVM's compiler code (specifically HWAddressSanitizer.cpp).

Add a KASAN hook to the INT3 handling function.

Disable KASAN in an INT3 core kernel selftest function, since it can
raise a false tag mismatch report and potentially panic the kernel.

Make the part of that hook which decides whether to die or recover
from a tag mismatch arch-independent, to avoid duplicating a long
comment on both the x86 and arm64 architectures.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Make kasan_handler() a stub in a header file. Remove #ifdef from
traps.c.
- Consolidate the "recover" comment into one place.
- Make small changes to the patch message.
MAINTAINERS | 2 +-
arch/arm64/kernel/traps.c | 17 +----------------
arch/x86/include/asm/kasan.h | 26 ++++++++++++++++++++++++++
arch/x86/kernel/alternative.c | 4 +++-
arch/x86/kernel/traps.c | 4 ++++
arch/x86/mm/Makefile | 2 ++
arch/x86/mm/kasan_inline.c | 23 +++++++++++++++++++++++
include/linux/kasan.h | 24 ++++++++++++++++++++++++
8 files changed, 84 insertions(+), 18 deletions(-)
create mode 100644 arch/x86/mm/kasan_inline.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7ce8c6b86e3d..3daeeaf67546 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13167,7 +13167,7 @@ S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
F: arch/*/include/asm/*kasan*.h
-F: arch/*/mm/kasan_init*
+F: arch/*/mm/kasan_*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
F: mm/kasan/
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index f528b6041f6a..b9bdabc14ad1 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1068,22 +1068,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
kasan_report(addr, size, write, pc);
- /*
- * The instrumentation allows to control whether we can proceed after
- * a crash was detected. This is done by passing the -recover flag to
- * the compiler. Disabling recovery allows to generate more compact
- * code.
- *
- * Unfortunately disabling recovery doesn't work for the kernel right
- * now. KASAN reporting is disabled in some contexts (for example when
- * the allocator accesses slab object metadata; this is controlled by
- * current->kasan_depth). All these accesses are detected by the tool,
- * even though the reports for them are not printed.
- *
- * This is something that might be fixed at some point in the future.
- */
- if (!recover)
- die("Oops - KASAN", regs, esr);
+ kasan_inline_recover(recover, "Oops - KASAN", regs, esr, die);
/* If thread survives, skip over the brk instruction and continue: */
arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 1963eb2fcff3..5bf38bb836e1 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,7 +6,28 @@
#include <linux/kasan-tags.h>
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_SW_TAGS
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the INT3 instruction is used to carry metadata in RAX
+ * to the KASAN report.
+ *
+ * SIZE refers to how many bytes the faulty memory access
+ * requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_RAX_RECOVER 0x20
+#define KASAN_RAX_WRITE 0x10
+#define KASAN_RAX_SIZE_MASK 0x0f
+#define KASAN_RAX_SIZE(rax) (1 << ((rax) & KASAN_RAX_SIZE_MASK))
+
+#else
#define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
/*
* Compiler uses shadow offset assuming that addresses start
@@ -35,10 +56,15 @@
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+bool kasan_inline_handler(struct pt_regs *regs);
#else
#define __tag_shifted(tag) 0UL
#define __tag_reset(addr) (addr)
#define __tag_get(addr) 0
+static inline bool kasan_inline_handler(struct pt_regs *regs)
+{
+ return false;
+}
#endif /* CONFIG_KASAN_SW_TAGS */
static inline void *__tag_set(const void *__addr, u8 tag)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2a330566e62b..4cb085daad31 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2228,7 +2228,7 @@ int3_exception_notify(struct notifier_block *self, unsigned long val, void *data
}
/* Must be noinline to ensure uniqueness of int3_selftest_ip. */
-static noinline void __init int3_selftest(void)
+static noinline __no_sanitize_address void __init int3_selftest(void)
{
static __initdata struct notifier_block int3_exception_nb = {
.notifier_call = int3_exception_notify,
@@ -2236,6 +2236,7 @@ static noinline void __init int3_selftest(void)
};
unsigned int val = 0;
+ kasan_disable_current();
BUG_ON(register_die_notifier(&int3_exception_nb));
/*
@@ -2253,6 +2254,7 @@ static noinline void __init int3_selftest(void)
BUG_ON(val != 1);
+ kasan_enable_current();
unregister_die_notifier(&int3_exception_nb);
}
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 0f6f187b1a9e..2a119279980f 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -912,6 +912,10 @@ static bool do_int3(struct pt_regs *regs)
if (kprobe_int3_handler(regs))
return true;
#endif
+
+ if (kasan_inline_handler(regs))
+ return true;
+
res = notify_die(DIE_INT3, "int3", regs, 0, X86_TRAP_BP, SIGTRAP);
return res == NOTIFY_STOP;
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..1dc18090cbe7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP) += dump_pagetables.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o
KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_inline.o := n
obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_inline.o
KMSAN_SANITIZE_kmsan_shadow.o := n
obj-$(CONFIG_KMSAN) += kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
new file mode 100644
index 000000000000..9f85dfd1c38b
--- /dev/null
+++ b/arch/x86/mm/kasan_inline.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+
+bool kasan_inline_handler(struct pt_regs *regs)
+{
+ int metadata = regs->ax;
+ u64 addr = regs->di;
+ u64 pc = regs->ip;
+ bool recover = metadata & KASAN_RAX_RECOVER;
+ bool write = metadata & KASAN_RAX_WRITE;
+ size_t size = KASAN_RAX_SIZE(metadata);
+
+ if (user_mode(regs))
+ return false;
+
+ if (!kasan_report((void *)addr, size, write, pc))
+ return false;
+
+ kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
+
+ return true;
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 54481f8c30c5..8691ad870f3b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,4 +663,28 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_inline_recover(
+ bool recover, char *msg, struct pt_regs *regs, unsigned long err,
+ void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+ if (!recover)
+ die_fn(msg, regs, err);
+}
+#endif
+
#endif /* LINUX_KASAN_H */
--
2.50.1