linux-mm.kvack.org archive mirror
* [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs
@ 2026-04-13 18:28 Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers Alexis Lothoré (eBPF Foundation)
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Hello,
this series aims to bring basic support for KASAN checks to BPF JITed
programs. This follows the first RFC posted in [1].

KASAN spots memory management mistakes by reserving a fraction of memory
as "shadow memory" that maps to the rest of memory and allows monitoring
it. Each memory-accessing instruction is then instrumented at build time
to call an ASAN check function, which analyzes the corresponding bits in
shadow memory and, if it detects the access as invalid, triggers a
detailed report. The goal of this series is to replicate this mechanism
for BPF programs when they are JITed into native instructions: the
(runtime) JIT compiler is then in charge of inserting calls to the
corresponding KASAN checks when a program is loaded into the kernel.
This task involves:
- identifying at program load time the instructions performing memory
  accesses
- identifying those accesses' properties (size? read or write?) to
  select the relevant KASAN check function to call
- just before the identified instructions:
  - saving the basic context (ie: saving registers)
  - inserting a call to the relevant KASAN check function
  - restoring the context
- whenever the instrumented program executes, if it performs an invalid
  access, it triggers a KASAN report identical to those produced by the
  build-time instrumentation on the kernel side.

As discussed in [1], this series is based on some choices and
assumptions:
- it focuses on x86_64 for now, and so only on KASAN_GENERIC
- not all memory-accessing BPF instructions are instrumented:
  - it focuses on STX/LDX instructions
  - it discards instructions accessing BPF program stack (already
    monitored by page guards)
  - it discards possibly faulting instructions, like BPF_PROBE_MEM or
    BPF_PROBE_ATOMIC insns

The series is marked and sent as RFC:
- to allow collecting feedback early and make sure that the series goes
  in the right direction
- because it depends on Xu's work to pass data between the verifier and
  JIT compilers. This work is not merged yet, see [2]. I have been
  tracking the various revisions he sent on the ML and based my local
  branch on his work
- because tests brought by this series currently can't run on BPF CI:
  they expect kasan multishot to be enabled, otherwise the first test
  will make all other kasan-related tests fail.
- because some cases like atomic loads/stores are not instrumented yet
  (and are still making me scratch my head)
- because it will hopefully provide a good basis to discuss the topic at
  LSFMMBPF (see [3])

Despite this series not being ready for integration yet, anyone
interested in running it locally can perform the following steps to run
the JITed KASAN instrumentation selftests:
- rebase this series locally on [2]
- build and run the corresponding kernel with kasan_multi_shot
  enabled
- run `test_progs -a kasan`

This should produce a variety of KASAN tests executed for BPF programs:

  #162/1   kasan/bpf_kasan_uaf_read_1:OK
  #162/2   kasan/bpf_kasan_uaf_read_2:OK
  #162/3   kasan/bpf_kasan_uaf_read_4:OK
  #162/4   kasan/bpf_kasan_uaf_read_8:OK
  #162/5   kasan/bpf_kasan_uaf_write_1:OK
  #162/6   kasan/bpf_kasan_uaf_write_2:OK
  #162/7   kasan/bpf_kasan_uaf_write_4:OK
  #162/8   kasan/bpf_kasan_uaf_write_8:OK
  #162/9   kasan/bpf_kasan_oob_read_1:OK
  #162/10  kasan/bpf_kasan_oob_read_2:OK
  #162/11  kasan/bpf_kasan_oob_read_4:OK
  #162/12  kasan/bpf_kasan_oob_read_8:OK
  #162/13  kasan/bpf_kasan_oob_write_1:OK
  #162/14  kasan/bpf_kasan_oob_write_2:OK
  #162/15  kasan/bpf_kasan_oob_write_4:OK
  #162/16  kasan/bpf_kasan_oob_write_8:OK
  #162     kasan:OK
  Summary: 1/16 PASSED, 0 SKIPPED, 0 FAILED

[1] https://lore.kernel.org/bpf/DG7UG112AVBC.JKYISDTAM30T@bootlin.com/
[2] https://lore.kernel.org/bpf/cover.1776062885.git.xukuohai@hotmail.com/
[3] https://lore.kernel.org/bpf/DGGNCXX79H8O.2P6K8L1QW1M8K@bootlin.com/

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
Alexis Lothoré (eBPF Foundation) (8):
      kasan: expose generic kasan helpers
      bpf: mark instructions accessing program stack
      bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs
      bpf, x86: add helper to emit kasan checks in x86 JITed programs
      bpf, x86: emit KASAN checks into x86 JITed programs
      selftests/bpf: do not run verifier JIT tests when BPF_JIT_KASAN is enabled
      bpf, x86: enable KASAN for JITed programs on x86
      selftests/bpf: add tests to validate KASAN on JIT programs

 arch/x86/Kconfig                                   |   1 +
 arch/x86/net/bpf_jit_comp.c                        | 106 +++++++++++++
 include/linux/bpf.h                                |   2 +
 include/linux/bpf_verifier.h                       |   2 +
 include/linux/kasan.h                              |  13 ++
 kernel/bpf/Kconfig                                 |   9 ++
 kernel/bpf/core.c                                  |  10 ++
 kernel/bpf/verifier.c                              |   7 +
 mm/kasan/kasan.h                                   |  10 --
 tools/testing/selftests/bpf/prog_tests/kasan.c     | 165 +++++++++++++++++++++
 tools/testing/selftests/bpf/progs/kasan.c          | 146 ++++++++++++++++++
 .../testing/selftests/bpf/test_kmods/bpf_testmod.c |  79 ++++++++++
 tools/testing/selftests/bpf/test_loader.c          |   5 +
 tools/testing/selftests/bpf/unpriv_helpers.c       |   5 +
 tools/testing/selftests/bpf/unpriv_helpers.h       |   1 +
 15 files changed, 551 insertions(+), 10 deletions(-)
---
base-commit: 7990a071b32887a1a883952e8cf60134b6d6fea0
change-id: 20260126-kasan-fcd68f64cd7b

Best regards,
--  
Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 22:19   ` Andrey Konovalov
  2026-04-13 18:28 ` [PATCH RFC bpf-next 2/8] bpf: mark instructions accessing program stack Alexis Lothoré (eBPF Foundation)
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

In order to prepare KASAN helpers to be called from the eBPF subsystem
(to add KASAN instrumentation at runtime when JITing eBPF programs),
expose the __asan_{load,store}X functions in linux/kasan.h.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 include/linux/kasan.h | 13 +++++++++++++
 mm/kasan/kasan.h      | 10 ----------
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 338a1921a50a..6f580d4a39e4 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -710,4 +710,17 @@ void kasan_non_canonical_hook(unsigned long addr);
 static inline void kasan_non_canonical_hook(unsigned long addr) { }
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
+#ifdef CONFIG_KASAN_GENERIC
+void __asan_load1(void *p);
+void __asan_store1(void *p);
+void __asan_load2(void *p);
+void __asan_store2(void *p);
+void __asan_load4(void *p);
+void __asan_store4(void *p);
+void __asan_load8(void *p);
+void __asan_store8(void *p);
+void __asan_load16(void *p);
+void __asan_store16(void *p);
+#endif /* CONFIG_KASAN_GENERIC */
+
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index fc9169a54766..3bfce8eb3135 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -594,16 +594,6 @@ void __asan_handle_no_return(void);
 void __asan_alloca_poison(void *, ssize_t size);
 void __asan_allocas_unpoison(void *stack_top, ssize_t stack_bottom);
 
-void __asan_load1(void *);
-void __asan_store1(void *);
-void __asan_load2(void *);
-void __asan_store2(void *);
-void __asan_load4(void *);
-void __asan_store4(void *);
-void __asan_load8(void *);
-void __asan_store8(void *);
-void __asan_load16(void *);
-void __asan_store16(void *);
 void __asan_loadN(void *, ssize_t size);
 void __asan_storeN(void *, ssize_t size);
 

-- 
2.53.0




* [PATCH RFC bpf-next 2/8] bpf: mark instructions accessing program stack
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

In order to prepare to emit KASAN checks in JITed programs, JIT
compilers need to be aware of whether some load/store instructions
target the bpf program stack, as those should not be monitored (we
already have guard pages for that, and it is difficult anyway to
correctly monitor any kind of data passed on the stack).

To support this need, make the BPF verifier mark the instructions that
access program stack:
- add a setter that allows the verifier to mark instructions accessing
  the program stack
- add a getter that allows JIT compilers to check whether instructions
  being JITed are accessing the stack

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 include/linux/bpf.h          |  2 ++
 include/linux/bpf_verifier.h |  2 ++
 kernel/bpf/core.c            | 10 ++++++++++
 kernel/bpf/verifier.c        |  7 +++++++
 4 files changed, 21 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b4b703c90ca9..774a0395c498 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1543,6 +1543,8 @@ void bpf_jit_uncharge_modmem(u32 size);
 bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
 bool bpf_insn_is_indirect_target(const struct bpf_verifier_env *env, const struct bpf_prog *prog,
 				 int insn_idx);
+bool bpf_insn_accesses_stack(const struct bpf_verifier_env *env,
+			     const struct bpf_prog *prog, int insn_idx);
 #else
 static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 					   struct bpf_trampoline *tr,
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b148f816f25b..ab99ed4c4227 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -660,6 +660,8 @@ struct bpf_insn_aux_data {
 	u16 const_reg_map_mask;
 	u16 const_reg_subprog_mask;
 	u32 const_reg_vals[10];
+	/* instruction accesses stack */
+	bool accesses_stack;
 };
 
 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8b018ff48875..340abfdadbed 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1582,6 +1582,16 @@ bool bpf_insn_is_indirect_target(const struct bpf_verifier_env *env, const struc
 	insn_idx += prog->aux->subprog_start;
 	return env->insn_aux_data[insn_idx].indirect_target;
 }
+
+bool bpf_insn_accesses_stack(const struct bpf_verifier_env *env,
+			     const struct bpf_prog *prog, int insn_idx)
+{
+	if (!env)
+		return false;
+	insn_idx += prog->aux->subprog_start;
+	return env->insn_aux_data[insn_idx].accesses_stack;
+}
+
 #endif /* CONFIG_BPF_JIT */
 
 /* Base function for offset calculation. Needs to go into .text section,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1e36b9e91277..7bce4fb4e540 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3502,6 +3502,11 @@ static void mark_indirect_target(struct bpf_verifier_env *env, int idx)
 	env->insn_aux_data[idx].indirect_target = true;
 }
 
+static void mark_insn_accesses_stack(struct bpf_verifier_env *env, int idx)
+{
+	env->insn_aux_data[idx].accesses_stack = true;
+}
+
 #define LR_FRAMENO_BITS	3
 #define LR_SPI_BITS	6
 #define LR_ENTRY_BITS	(LR_SPI_BITS + LR_FRAMENO_BITS + 1)
@@ -6490,6 +6495,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		else
 			err = check_stack_write(env, regno, off, size,
 						value_regno, insn_idx);
+
+		mark_insn_accesses_stack(env, insn_idx);
 	} else if (reg_is_pkt_pointer(reg)) {
 		if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
 			verbose(env, "cannot write into packet\n");

-- 
2.53.0




* [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 2/8] bpf: mark instructions accessing program stack Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 22:20   ` Andrey Konovalov
  2026-04-13 18:28 ` [PATCH RFC bpf-next 4/8] bpf, x86: add helper to emit kasan checks in x86 " Alexis Lothoré (eBPF Foundation)
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Add a new Kconfig option CONFIG_BPF_JIT_KASAN that automatically enables
KASAN (Kernel Address Sanitizer) memory access checks for JIT-compiled
BPF programs when both KASAN and the JIT compiler are enabled. When
enabled, the JIT compiler will emit shadow memory checks before memory
loads and stores to detect use-after-free, out-of-bounds, and other
memory safety bugs at runtime. The option is gated behind
HAVE_EBPF_JIT_KASAN, as it needs a proper arch-specific implementation.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 kernel/bpf/Kconfig | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index eb3de35734f0..28392adb3d7e 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -17,6 +17,10 @@ config HAVE_CBPF_JIT
 config HAVE_EBPF_JIT
 	bool
 
+# KASAN support for JIT compiler
+config HAVE_EBPF_JIT_KASAN
+	bool
+
 # Used by archs to tell that they want the BPF JIT compiler enabled by
 # default for kernels that were compiled with BPF JIT support.
 config ARCH_WANT_DEFAULT_BPF_JIT
@@ -101,4 +105,9 @@ config BPF_LSM
 
 	  If you are unsure how to answer this question, answer N.
 
+config BPF_JIT_KASAN
+	bool
+	depends on HAVE_EBPF_JIT_KASAN
+	default y if BPF_JIT && KASAN_GENERIC
+
 endmenu # "BPF subsystem"

-- 
2.53.0




* [PATCH RFC bpf-next 4/8] bpf, x86: add helper to emit kasan checks in x86 JITed programs
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (2 preceding siblings ...)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 5/8] bpf, x86: emit KASAN checks into " Alexis Lothoré (eBPF Foundation)
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Add the emit_kasan_check() function that emits KASAN shadow memory
checks before memory accesses in JIT-compiled BPF programs. The
implementation relies on the existing __asan_{load,store}X functions
from the KASAN subsystem. The helper:
- ensures that the KASAN instrumentation is actually needed: if the
  instruction being processed accesses the program stack, we skip the
  instrumentation, as those accesses are already protected with page
  guards
- saves registers. This includes caller-saved registers, but also
  temporary registers, as those were possibly used by the
  affected program
- computes the accessed address and stores it in %rdi
- calls the relevant function, depending on whether the instruction is
  a load or a store, and on the size of the access
- restores registers

The special care needed when inserting this instrumentation comes at the
cost of a non-negligible increase in JITed code size. For example, a
bare

  mov 	0x0(%rsi),%rbx # Load into %rbx the content at the address in %rsi

becomes

  push    %rax
  push    %rcx
  push    %rdx
  push    %rsi
  push    %rdi
  push    %r8
  push    %r9
  push    %r10
  push    %r11
  sub     $0x8,%rsp
  mov     %rsi,%rdi
  call    0xffffffff81da0a60 <__asan_load8>
  add     $0x8,%rsp
  pop     %r11
  pop     %r10
  pop     %r9
  pop     %r8
  pop     %rdi
  pop     %rsi
  pop     %rdx
  pop     %rcx
  pop     %rax
  mov     0x0(%rsi),%rbx

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 arch/x86/net/bpf_jit_comp.c | 93 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ea9e707e8abf..b90103bd0080 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -20,6 +20,10 @@
 #include <asm/unwind.h>
 #include <asm/cfi.h>
 
+#ifdef CONFIG_BPF_JIT_KASAN
+#include <linux/kasan.h>
+#endif
+
 static bool all_callee_regs_used[4] = {true, true, true, true};
 
 static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
@@ -1301,6 +1305,95 @@ static void emit_store_stack_imm64(u8 **pprog, int reg, int stack_off, u64 imm64
 	emit_stx(pprog, BPF_DW, BPF_REG_FP, reg, stack_off);
 }
 
+static int emit_kasan_check(u8 **pprog, u32 addr_reg, struct bpf_insn *insn,
+			    u8 *ip, bool accesses_stack)
+{
+#ifdef CONFIG_BPF_JIT_KASAN
+	bool is_write = BPF_CLASS(insn->code) == BPF_STX;
+	u32 bpf_size = BPF_SIZE(insn->code);
+	s32 off = insn->off;
+	u8 *prog = *pprog;
+	void *kasan_func;
+
+	if (accesses_stack)
+		return 0;
+
+	/* Derive KASAN check function from access type and size */
+	switch (bpf_size) {
+	case BPF_B:
+		kasan_func = is_write ? __asan_store1 : __asan_load1;
+		break;
+	case BPF_H:
+		kasan_func = is_write ? __asan_store2 : __asan_load2;
+		break;
+	case BPF_W:
+		kasan_func = is_write ? __asan_store4 : __asan_load4;
+		break;
+	case BPF_DW:
+		kasan_func = is_write ? __asan_store8 : __asan_load8;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Save rax */
+	EMIT1(0x50);
+	/* Save rcx */
+	EMIT1(0x51);
+	/* Save rdx */
+	EMIT1(0x52);
+	/* Save rsi */
+	EMIT1(0x56);
+	/* Save rdi */
+	EMIT1(0x57);
+	/* Save r8 */
+	EMIT2(0x41, 0x50);
+	/* Save r9 */
+	EMIT2(0x41, 0x51);
+	/* Save r10 */
+	EMIT2(0x41, 0x52);
+	/* Save r11 */
+	EMIT2(0x41, 0x53);
+	/* We have pushed 72 bytes, realign stack to 16 bytes: sub rsp, 8 */
+	EMIT4(0x48, 0x83, 0xEC, 8);
+
+	/* mov rdi, addr_reg */
+	EMIT_mov(BPF_REG_1, addr_reg);
+
+	/* add rdi, off (if offset is non-zero) */
+	if (off) {
+		if (is_imm8(off)) {
+			/* add rdi, imm8 */
+			EMIT4(0x48, 0x83, 0xC7, (u8)off);
+		} else {
+			/* add rdi, imm32 */
+			EMIT3_off32(0x48, 0x81, 0xC7, off);
+		}
+	}
+
+	/* Adjust ip to account for the instrumentation generated so far */
+	ip += (prog - *pprog);
+	/* call kasan_func */
+	if (emit_call(&prog, kasan_func, ip))
+		return -ERANGE;
+
+	/* Restore registers */
+	EMIT4(0x48, 0x83, 0xC4, 8);
+	EMIT2(0x41, 0x5B);
+	EMIT2(0x41, 0x5A);
+	EMIT2(0x41, 0x59);
+	EMIT2(0x41, 0x58);
+	EMIT1(0x5F);
+	EMIT1(0x5E);
+	EMIT1(0x5A);
+	EMIT1(0x59);
+	EMIT1(0x58);
+
+	*pprog = prog;
+#endif /* CONFIG_BPF_JIT_KASAN */
+	return 0;
+}
+
 static int emit_atomic_rmw(u8 **pprog, u32 atomic_op,
 			   u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
 {

-- 
2.53.0




* [PATCH RFC bpf-next 5/8] bpf, x86: emit KASAN checks into x86 JITed programs
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (3 preceding siblings ...)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 4/8] bpf, x86: add helper to emit kasan checks in x86 " Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 6/8] selftests/bpf: do not run verifier JIT tests when BPF_JIT_KASAN is enabled Alexis Lothoré (eBPF Foundation)
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Insert KASAN shadow memory checks before memory load and store
operations in JIT-compiled BPF programs. This helps detect memory safety
bugs such as use-after-free and out-of-bounds accesses at runtime.

The main instructions being targeted are BPF_LDX and BPF_STX, but not
all of them are being instrumented:
- if the load/store instruction is in fact accessing the program stack,
  emit_kasan_check silently skips the instrumentation, as we already
  have page guards to monitor stack accesses. Stack accesses _could_ be
  monitored more finely by adding kasan checks, but it would need the
  JIT compiler to insert red zones around any variable on the stack,
  and we likely do not have enough info in the JIT compiler to do so.
- if the load/store instruction is a BPF_PROBE_MEM or a BPF_PROBE_ATOMIC
  instruction, we do not instrument it, as the passed address can fault
  (hence the custom fault management with BPF_PROBE_XXX instructions),
  and so the corresponding kasan check could fault as well.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
This RFC also ignores for now atomic operations, because I am not
perfectly clear yet about how they are JITed and so how much kasan
instrumentation is legitimate here.
---
 arch/x86/net/bpf_jit_comp.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index b90103bd0080..111fe1d55121 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1811,6 +1811,7 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
+		bool accesses_stack;
 		u8 b2 = 0, b3 = 0;
 		u8 *start_of_ldx;
 		s64 jmp_offset;
@@ -1831,6 +1832,7 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
 			EMIT_ENDBR();
 
 		ip = image + addrs[i - 1] + (prog - temp);
+		accesses_stack = bpf_insn_accesses_stack(env, bpf_prog, i - 1);
 
 		switch (insn->code) {
 			/* ALU */
@@ -2242,6 +2244,11 @@ st:			if (is_imm8(insn->off))
 		case BPF_STX | BPF_MEM | BPF_H:
 		case BPF_STX | BPF_MEM | BPF_W:
 		case BPF_STX | BPF_MEM | BPF_DW:
+			err = emit_kasan_check(&prog, dst_reg, insn,
+					       image + addrs[i - 1],
+					       accesses_stack);
+			if (err)
+				return err;
 			emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
 			break;
 
@@ -2390,6 +2397,12 @@ st:			if (is_imm8(insn->off))
 				/* populate jmp_offset for JAE above to jump to start_of_ldx */
 				start_of_ldx = prog;
 				end_of_jmp[-1] = start_of_ldx - end_of_jmp;
+			} else {
+				err = emit_kasan_check(&prog, src_reg, insn,
+						       image + addrs[i - 1],
+						       accesses_stack);
+				if (err)
+					return err;
 			}
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEMSX ||
 			    BPF_MODE(insn->code) == BPF_MEMSX)

-- 
2.53.0




* [PATCH RFC bpf-next 6/8] selftests/bpf: do not run verifier JIT tests when BPF_JIT_KASAN is enabled
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (4 preceding siblings ...)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 5/8] bpf, x86: emit KASAN checks into " Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 7/8] bpf, x86: enable KASAN for JITed programs on x86 Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs Alexis Lothoré (eBPF Foundation)
  7 siblings, 0 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Multiple verifier tests validate the exact list of JITed instructions.
Even if a test offers some flexibility in its checks (eg: not enforcing
that the first verified instruction sits right at the beginning of the
jited code, but rather searching for where the expected JIT instructions
could be located), it is confused by the new KASAN instrumentation JITed
into programs: this instrumentation can be inserted anywhere in-between
the searched instructions, leading to test failures despite the correct
instructions being generated.

Prevent those failures by skipping tests involving JITed instruction
checks when the kernel is built with KASAN _and_ JIT is enabled, as
those two conditions lead the JITed code to contain KASAN checks.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 tools/testing/selftests/bpf/test_loader.c    | 5 +++++
 tools/testing/selftests/bpf/unpriv_helpers.c | 5 +++++
 tools/testing/selftests/bpf/unpriv_helpers.h | 1 +
 3 files changed, 11 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index c4c34cae6102..d2c0062ef31a 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -1175,6 +1175,11 @@ void run_subtest(struct test_loader *tester,
 		return;
 	}
 
+	if (is_jit_enabled() && subspec->jited.cnt && get_kasan_jit_enabled()) {
+		test__skip();
+		return;
+	}
+
 	if (unpriv) {
 		if (!can_execute_unpriv(tester, spec)) {
 			test__skip();
diff --git a/tools/testing/selftests/bpf/unpriv_helpers.c b/tools/testing/selftests/bpf/unpriv_helpers.c
index f997d7ec8fd0..25bd08648f5f 100644
--- a/tools/testing/selftests/bpf/unpriv_helpers.c
+++ b/tools/testing/selftests/bpf/unpriv_helpers.c
@@ -142,3 +142,8 @@ bool get_unpriv_disabled(void)
 	}
 	return mitigations_off;
 }
+
+bool get_kasan_jit_enabled(void)
+{
+	return config_contains("CONFIG_BPF_JIT_KASAN=y");
+}
diff --git a/tools/testing/selftests/bpf/unpriv_helpers.h b/tools/testing/selftests/bpf/unpriv_helpers.h
index 151f67329665..bc5f4c953c9d 100644
--- a/tools/testing/selftests/bpf/unpriv_helpers.h
+++ b/tools/testing/selftests/bpf/unpriv_helpers.h
@@ -5,3 +5,4 @@
 #define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
 
 bool get_unpriv_disabled(void);
+bool get_kasan_jit_enabled(void);

-- 
2.53.0




* [PATCH RFC bpf-next 7/8] bpf, x86: enable KASAN for JITed programs on x86
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (5 preceding siblings ...)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 6/8] selftests/bpf: do not run verifier JIT tests when BPF_JIT_KASAN is enabled Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs Alexis Lothoré (eBPF Foundation)
  7 siblings, 0 replies; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Mark x86 as supporting KASAN checks in JITed programs so that the
JIT compiler inserts the corresponding checks into the translated
instructions.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e2df1b147184..a50aa9a0b93c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -234,6 +234,7 @@ config X86
 	select HAVE_SAMPLE_FTRACE_DIRECT	if X86_64
 	select HAVE_SAMPLE_FTRACE_DIRECT_MULTI	if X86_64
 	select HAVE_EBPF_JIT
+	select HAVE_EBPF_JIT_KASAN		if X86_64
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_EISA			if X86_32
 	select HAVE_EXIT_THREAD

-- 
2.53.0



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs
  2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
                   ` (6 preceding siblings ...)
  2026-04-13 18:28 ` [PATCH RFC bpf-next 7/8] bpf, x86: enable KASAN for JITed programs on x86 Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 18:28 ` Alexis Lothoré (eBPF Foundation)
  2026-04-13 22:20   ` Andrey Konovalov
  7 siblings, 1 reply; 12+ messages in thread
From: Alexis Lothoré (eBPF Foundation) @ 2026-04-13 18:28 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton
  Cc: ebpf, Bastien Curutchet, Thomas Petazzoni, Xu Kuohai, bpf,
	linux-kernel, netdev, linux-kselftest, linux-stm32,
	linux-arm-kernel, kasan-dev, linux-mm,
	Alexis Lothoré (eBPF Foundation)

Add a basic KASAN test runner that loads and test-runs programs that can
trigger memory management bugs. The test captures kernel logs and ensures
that the expected KASAN splat is emitted by searching for the
corresponding first lines of the report.

This version implements two faulty programs triggering either a
use-after-free or an out-of-bounds memory access. The bugs are
triggered through dedicated kfuncs in bpf_testmod.c, but two
different techniques are used, as some cases can be quite hard to
trigger in a pure "black box" approach:
- for reads, we can make the kfuncs return faulty pointers that the
  eBPF programs then dereference, generating legitimate KASAN reports
  as a consequence
- applying the same trick to faulty writes is harder, as eBPF programs
  can't write kernel data freely. Instead, eBPF programs call another
  dedicated testing kfunc that poisons the shadow memory matching some
  BPF-accessible memory (e.g. a map value). When the program then
  writes to the corresponding memory, it triggers a report as well.

Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
---
The way of bringing kasan_poison into bpf_testmod is definitely not
ideal. But I would like to validate the testing approach (triggering
real faulty accesses, which is hard in some cases, vs. manually
poisoning BPF-manipulated memory) before eventually making clean
bridges between KASAN APIs and bpf_testmod.c, if the latter approach
is the valid one.
---
 tools/testing/selftests/bpf/prog_tests/kasan.c     | 165 +++++++++++++++++++++
 tools/testing/selftests/bpf/progs/kasan.c          | 146 ++++++++++++++++++
 .../testing/selftests/bpf/test_kmods/bpf_testmod.c |  79 ++++++++++
 3 files changed, 390 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/kasan.c b/tools/testing/selftests/bpf/prog_tests/kasan.c
new file mode 100644
index 000000000000..fd628aaa8005
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/kasan.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+#include <bpf/bpf.h>
+#include <fcntl.h>
+#include <linux/if_ether.h>
+#include <sys/klog.h>
+#include <test_progs.h>
+#include <unpriv_helpers.h>
+#include "kasan.skel.h"
+
+#define SUBTEST_NAME_MAX_LEN	64
+#define SYSLOG_ACTION_READ_ALL	3
+#define SYSLOG_ACTION_CLEAR	5
+
+#define MAX_LOG_SIZE		(8*1024)
+#define READ_CHUNK_SIZE		128
+
+#define KASAN_PATTERN_SLAB_UAF "BUG: KASAN: slab-use-after-free in bpf_prog_"
+#define KASAN_PATTERN_GLOBAL_OOB "BUG: KASAN: global-out-of-bounds in bpf_prog_"
+
+static char klog_buffer[MAX_LOG_SIZE];
+
+static int read_kernel_logs(char *buf, size_t max_len)
+{
+	return klogctl(SYSLOG_ACTION_READ_ALL, buf, max_len);
+}
+
+static int clear_kernel_logs(void)
+{
+	return klogctl(SYSLOG_ACTION_CLEAR, NULL, 0);
+}
+
+static int kernel_logs_have_matching_kasan_report(char *buf, char *pattern,
+						  bool is_write, int size)
+{
+	char *access_desc_start, *access_desc_end, *tmp;
+	char access_log[READ_CHUNK_SIZE];
+	char *kasan_report_start;
+	int hsize, nsize;
+	/* Searched kasan report is valid if
+	 * - it contains the expected kasan pattern
+	 * - the next line is the description of the faulty access
+	 * - faulty access properties match the tested type and size
+	 */
+	kasan_report_start = strstr(buf, pattern);
+
+	if (!kasan_report_start)
+		return 1;
+
+	/* Find next line */
+	access_desc_start = strchr(kasan_report_start, '\n');
+	if (!access_desc_start)
+		return 1;
+	access_desc_start++;
+
+	access_desc_end = strchr(access_desc_start, '\n');
+	if (!access_desc_end)
+		return 1;
+
+	nsize = snprintf(access_log, READ_CHUNK_SIZE, "%s of size %d at addr",
+		 is_write ? "Write" : "Read", size);
+
+	hsize = access_desc_end - access_desc_start;
+	tmp = memmem(access_desc_start, hsize, access_log, nsize);
+
+	if (!tmp)
+		return 1;
+
+	return 0;
+}
+
+struct test_spec {
+	char *prog_name;
+	char *expected_report_pattern;
+};
+
+static struct test_spec tests[] = {
+	{
+		.prog_name = "bpf_kasan_uaf",
+		.expected_report_pattern = KASAN_PATTERN_SLAB_UAF
+	},
+	{
+		.prog_name = "bpf_kasan_oob",
+		.expected_report_pattern = KASAN_PATTERN_GLOBAL_OOB
+	}
+};
+
+static void run_test_with_type_and_size(struct kasan *skel,
+					struct test_spec *test, bool is_write,
+					int access_size)
+{
+	char subtest_name[SUBTEST_NAME_MAX_LEN];
+	struct bpf_program *prog;
+	uint8_t buf[ETH_HLEN];
+	int ret;
+
+	prog = bpf_object__find_program_by_name(skel->obj, test->prog_name);
+	if (!ASSERT_OK_PTR(prog, "find test prog"))
+		return;
+
+	snprintf(subtest_name, SUBTEST_NAME_MAX_LEN, "%s_%s_%d",
+		 test->prog_name, is_write ? "write" : "read", access_size);
+
+	if (!test__start_subtest(subtest_name))
+		return;
+
+	ret = clear_kernel_logs();
+	if (!ASSERT_OK(ret, "reset log buffer"))
+		return;
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	topts.sz = sizeof(struct bpf_test_run_opts);
+	topts.data_size_in = ETH_HLEN;
+	topts.data_in = buf;
+	skel->bss->is_write = is_write;
+	skel->bss->access_size = access_size;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
+	if (!ASSERT_OK(ret, "run prog"))
+		return;
+
+	ret = read_kernel_logs(klog_buffer, MAX_LOG_SIZE);
+	if (ASSERT_GE(ret, 0, "read kernel logs"))
+		ASSERT_OK(kernel_logs_have_matching_kasan_report(
+				  klog_buffer, test->expected_report_pattern,
+				  is_write, access_size),
+			  test->prog_name);
+}
+
+static void run_test_with_type(struct kasan *skel, struct test_spec *test,
+			       bool is_write)
+{
+	run_test_with_type_and_size(skel, test, is_write, 1);
+	run_test_with_type_and_size(skel, test, is_write, 2);
+	run_test_with_type_and_size(skel, test, is_write, 4);
+	run_test_with_type_and_size(skel, test, is_write, 8);
+}
+
+static void run_test(struct kasan *skel, struct test_spec *test)
+{
+	run_test_with_type(skel, test, false);
+	run_test_with_type(skel, test, true);
+}
+
+void test_kasan(void)
+{
+	struct test_spec *test;
+	struct kasan *skel;
+	int i;
+
+	if (!is_jit_enabled() || !get_kasan_jit_enabled()) {
+		test__skip();
+		return;
+	}
+
+	skel = kasan__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open and load prog"))
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+		test = &tests[i];
+
+		run_test(skel, test);
+	}
+
+	kasan__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/kasan.c b/tools/testing/selftests/bpf/progs/kasan.c
new file mode 100644
index 000000000000..f713c9b7c9ce
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kasan.c
@@ -0,0 +1,146 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#define KASAN_SLAB_FREE 0xFB
+#define KASAN_GLOBAL_REDZONE 0xF9
+
+extern __u8 *bpf_kfunc_kasan_uaf_1(void) __ksym;
+extern __u16 *bpf_kfunc_kasan_uaf_2(void) __ksym;
+extern __u32 *bpf_kfunc_kasan_uaf_4(void) __ksym;
+extern __u64 *bpf_kfunc_kasan_uaf_8(void) __ksym;
+extern __u8 *bpf_kfunc_kasan_oob_1(void) __ksym;
+extern __u16 *bpf_kfunc_kasan_oob_2(void) __ksym;
+extern __u32 *bpf_kfunc_kasan_oob_4(void) __ksym;
+extern __u64 *bpf_kfunc_kasan_oob_8(void) __ksym;
+extern void bpf_kfunc_kasan_poison(void *mem, __u32 mem__sz, __u8 byte) __ksym;
+
+int access_size;
+int is_write;
+
+struct kasan_write_val {
+	__u8 data_1;
+	__u16 data_2;
+	__u32 data_4;
+	__u64 data_8;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, struct kasan_write_val);
+} test_map SEC(".maps");
+
+static void bpf_kasan_faulty_write(int size, __u8 poison_byte)
+{
+	struct kasan_write_val *val;
+	__u32 key = 0;
+
+	val = bpf_map_lookup_elem(&test_map, &key);
+	if (!val)
+		return;
+
+	bpf_kfunc_kasan_poison(val, sizeof(struct kasan_write_val),
+			       poison_byte);
+	switch (size) {
+	case 1:
+		val->data_1 = 0xAA;
+		break;
+	case 2:
+		val->data_2 = 0xAA;
+		break;
+	case 4:
+		val->data_4 = 0xAA;
+		break;
+	case 8:
+		val->data_8 = 0xAA;
+		break;
+	}
+	bpf_kfunc_kasan_poison(val, sizeof(struct kasan_write_val), 0x00);
+}
+
+
+static int bpf_kasan_uaf_read(int size)
+{
+	__u8 *result_1;
+	__u16 *result_2;
+	__u32 *result_4;
+	__u64 *result_8;
+	int ret = 0;
+
+	switch (size) {
+	case 1:
+		result_1 = bpf_kfunc_kasan_uaf_1();
+		ret = result_1[0] ? 1 : 0;
+		break;
+	case 2:
+		result_2 = bpf_kfunc_kasan_uaf_2();
+		ret = result_2[0] ? 1 : 0;
+		break;
+	case 4:
+		result_4 = bpf_kfunc_kasan_uaf_4();
+		ret = result_4[0] ? 1 : 0;
+		break;
+	case 8:
+		result_8 = bpf_kfunc_kasan_uaf_8();
+		ret = result_8[0] ? 1 : 0;
+		break;
+	}
+	return ret;
+}
+
+SEC("tcx/ingress")
+int bpf_kasan_uaf(struct __sk_buff *skb)
+{
+	if (is_write) {
+		bpf_kasan_faulty_write(access_size, KASAN_SLAB_FREE);
+		return 0;
+	}
+
+	return bpf_kasan_uaf_read(access_size);
+}
+
+static int bpf_kasan_oob_read(int size)
+{
+	__u8 *result_1;
+	__u16 *result_2;
+	__u32 *result_4;
+	__u64 *result_8;
+	int ret = 0;
+
+	switch (size) {
+	case 1:
+		result_1 = bpf_kfunc_kasan_oob_1();
+		ret = result_1[0] ? 1 : 0;
+		break;
+	case 2:
+		result_2 = bpf_kfunc_kasan_oob_2();
+		ret = result_2[0] ? 1 : 0;
+		break;
+	case 4:
+		result_4 = bpf_kfunc_kasan_oob_4();
+		ret = result_4[0] ? 1 : 0;
+		break;
+	case 8:
+		result_8 = bpf_kfunc_kasan_oob_8();
+		ret = result_8[0] ? 1 : 0;
+		break;
+	}
+	return ret;
+}
+
+SEC("tcx/ingress")
+int bpf_kasan_oob(struct __sk_buff *skb)
+{
+	if (is_write) {
+		bpf_kasan_faulty_write(access_size, KASAN_GLOBAL_REDZONE);
+		return 0;
+	}
+
+	return bpf_kasan_oob_read(access_size);
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index d876314a4d67..01554bcbbbb0 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -271,6 +271,76 @@ __bpf_kfunc void bpf_kfunc_put_default_trusted_ptr_test(struct prog_test_member
 	 */
 }
 
+static void *kasan_uaf(void)
+{
+	void *p = kmalloc(64, GFP_ATOMIC);
+
+	if (!p)
+		return NULL;
+	memset(p, 0xAA, 64);
+	kfree(p);
+
+	return p;
+}
+
+#ifdef CONFIG_KASAN_GENERIC
+extern void kasan_poison(const void *addr, size_t size, u8 value, bool init);
+
+__bpf_kfunc void bpf_kfunc_kasan_poison(void *mem, u32 mem__sz, u8 byte)
+{
+	kasan_poison(mem, mem__sz, byte, false);
+}
+#else
+__bpf_kfunc void bpf_kfunc_kasan_poison(void *mem, u32 mem__sz, u8 byte) { }
+#endif
+
+__bpf_kfunc u8 *bpf_kfunc_kasan_uaf_1(void)
+{
+	return kasan_uaf();
+}
+
+__bpf_kfunc u16 *bpf_kfunc_kasan_uaf_2(void)
+{
+	return kasan_uaf();
+}
+
+__bpf_kfunc u32 *bpf_kfunc_kasan_uaf_4(void)
+{
+	return kasan_uaf();
+}
+
+__bpf_kfunc u64 *bpf_kfunc_kasan_uaf_8(void)
+{
+	return kasan_uaf();
+}
+
+static u8 test_oob_buffer[64];
+
+static void *bpf_kfunc_kasan_oob(void)
+{
+	return test_oob_buffer+64;
+}
+
+__bpf_kfunc u8 *bpf_kfunc_kasan_oob_1(void)
+{
+	return bpf_kfunc_kasan_oob();
+}
+
+__bpf_kfunc u16 *bpf_kfunc_kasan_oob_2(void)
+{
+	return bpf_kfunc_kasan_oob();
+}
+
+__bpf_kfunc u32 *bpf_kfunc_kasan_oob_4(void)
+{
+	return bpf_kfunc_kasan_oob();
+}
+
+__bpf_kfunc u64 *bpf_kfunc_kasan_oob_8(void)
+{
+	return bpf_kfunc_kasan_oob();
+}
+
 __bpf_kfunc struct bpf_testmod_ctx *
 bpf_testmod_ctx_create(int *err)
 {
@@ -740,6 +810,15 @@ BTF_ID_FLAGS(func, bpf_testmod_ops3_call_test_1)
 BTF_ID_FLAGS(func, bpf_testmod_ops3_call_test_2)
 BTF_ID_FLAGS(func, bpf_kfunc_get_default_trusted_ptr_test);
 BTF_ID_FLAGS(func, bpf_kfunc_put_default_trusted_ptr_test);
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_poison)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_1)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_2)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_4)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_8)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_1)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_2)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_4)
+BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_8)
 BTF_KFUNCS_END(bpf_testmod_common_kfunc_ids)
 
 BTF_ID_LIST(bpf_testmod_dtor_ids)

-- 
2.53.0



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers
  2026-04-13 18:28 ` [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 22:19   ` Andrey Konovalov
  0 siblings, 0 replies; 12+ messages in thread
From: Andrey Konovalov @ 2026-04-13 22:19 UTC (permalink / raw)
  To: Alexis Lothoré (eBPF Foundation)
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, ebpf, Bastien Curutchet, Thomas Petazzoni,
	Xu Kuohai, bpf, linux-kernel, netdev, linux-kselftest,
	linux-stm32, linux-arm-kernel, kasan-dev, linux-mm

On Mon, Apr 13, 2026 at 8:29 PM Alexis Lothoré (eBPF Foundation)
<alexis.lothore@bootlin.com> wrote:
>
> In order to prepare KASAN helpers to be called from the eBPF subsystem
> (to add KASAN instrumentation at runtime when JITing eBPF programs),
> expose the __asan_{load,store}X functions in linux/kasan.h
>
> Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
> ---
>  include/linux/kasan.h | 13 +++++++++++++
>  mm/kasan/kasan.h      | 10 ----------
>  2 files changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 338a1921a50a..6f580d4a39e4 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -710,4 +710,17 @@ void kasan_non_canonical_hook(unsigned long addr);
>  static inline void kasan_non_canonical_hook(unsigned long addr) { }
>  #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> +#ifdef CONFIG_KASAN_GENERIC
> +void __asan_load1(void *p);
> +void __asan_store1(void *p);
> +void __asan_load2(void *p);
> +void __asan_store2(void *p);
> +void __asan_load4(void *p);
> +void __asan_store4(void *p);
> +void __asan_load8(void *p);
> +void __asan_store8(void *p);
> +void __asan_load16(void *p);
> +void __asan_store16(void *p);
> +#endif /* CONFIG_KASAN_GENERIC */

This looks ugly, let's not do this unless it's really required.

You can just use kasan_check_read/write() instead - these are public
wrappers around the same shadow memory checking functions. And they
also work with the SW_TAGS mode, in case the BPF would want to use
that mode at some point. (For HW_TAGS, we only have kasan_check_byte()
that checks a single byte, but it can be extended in the future if
required to be used by BPF.)



> +
>  #endif /* LINUX_KASAN_H */
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index fc9169a54766..3bfce8eb3135 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -594,16 +594,6 @@ void __asan_handle_no_return(void);
>  void __asan_alloca_poison(void *, ssize_t size);
>  void __asan_allocas_unpoison(void *stack_top, ssize_t stack_bottom);
>
> -void __asan_load1(void *);
> -void __asan_store1(void *);
> -void __asan_load2(void *);
> -void __asan_store2(void *);
> -void __asan_load4(void *);
> -void __asan_store4(void *);
> -void __asan_load8(void *);
> -void __asan_store8(void *);
> -void __asan_load16(void *);
> -void __asan_store16(void *);
>  void __asan_loadN(void *, ssize_t size);
>  void __asan_storeN(void *, ssize_t size);
>
>
> --
> 2.53.0
>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs
  2026-04-13 18:28 ` [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 22:20   ` Andrey Konovalov
  0 siblings, 0 replies; 12+ messages in thread
From: Andrey Konovalov @ 2026-04-13 22:20 UTC (permalink / raw)
  To: Alexis Lothoré (eBPF Foundation)
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, ebpf, Bastien Curutchet, Thomas Petazzoni,
	Xu Kuohai, bpf, linux-kernel, netdev, linux-kselftest,
	linux-stm32, linux-arm-kernel, kasan-dev, linux-mm

On Mon, Apr 13, 2026 at 8:29 PM Alexis Lothoré (eBPF Foundation)
<alexis.lothore@bootlin.com> wrote:
>
> Add a new Kconfig option CONFIG_BPF_JIT_KASAN that automatically enables
> KASAN (Kernel Address Sanitizer) memory access checks for JIT-compiled
> BPF programs, when both KASAN and JIT compiler are enabled. When
> enabled, the JIT compiler will emit shadow memory checks before memory
> loads and stores to detect use-after-free, out-of-bounds, and other
> memory safety bugs at runtime. The option is gated behind
> HAVE_EBPF_JIT_KASAN, as it needs proper arch-specific implementation.
>
> Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
> ---
>  kernel/bpf/Kconfig | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
> index eb3de35734f0..28392adb3d7e 100644
> --- a/kernel/bpf/Kconfig
> +++ b/kernel/bpf/Kconfig
> @@ -17,6 +17,10 @@ config HAVE_CBPF_JIT
>  config HAVE_EBPF_JIT
>         bool
>
> +# KASAN support for JIT compiler
> +config HAVE_EBPF_JIT_KASAN
> +       bool
> +
>  # Used by archs to tell that they want the BPF JIT compiler enabled by
>  # default for kernels that were compiled with BPF JIT support.
>  config ARCH_WANT_DEFAULT_BPF_JIT
> @@ -101,4 +105,9 @@ config BPF_LSM
>
>           If you are unsure how to answer this question, answer N.
>
> +config BPF_JIT_KASAN
> +       bool
> +       depends on HAVE_EBPF_JIT_KASAN
> +       default y if BPF_JIT && KASAN_GENERIC

Should this be "depends on KASAN && KASAN_GENERIC"?
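If the suggestion is taken, the option would presumably end up looking
something like the following (a sketch, not the final form; note that
KASAN_GENERIC can only be selected when KASAN is enabled, so a single
dependency may suffice):

```kconfig
config BPF_JIT_KASAN
	bool
	depends on HAVE_EBPF_JIT_KASAN
	depends on KASAN_GENERIC
	default y if BPF_JIT
```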


> +
>  endmenu # "BPF subsystem"
>
> --
> 2.53.0
>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs
  2026-04-13 18:28 ` [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs Alexis Lothoré (eBPF Foundation)
@ 2026-04-13 22:20   ` Andrey Konovalov
  0 siblings, 0 replies; 12+ messages in thread
From: Andrey Konovalov @ 2026-04-13 22:20 UTC (permalink / raw)
  To: Alexis Lothoré (eBPF Foundation)
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
	Song Liu, Yonghong Song, Jiri Olsa, John Fastabend,
	David S. Miller, David Ahern, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	Maxime Coquelin, Alexandre Torgue, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino,
	Andrew Morton, ebpf, Bastien Curutchet, Thomas Petazzoni,
	Xu Kuohai, bpf, linux-kernel, netdev, linux-kselftest,
	linux-stm32, linux-arm-kernel, kasan-dev, linux-mm

On Mon, Apr 13, 2026 at 8:29 PM Alexis Lothoré (eBPF Foundation)
<alexis.lothore@bootlin.com> wrote:
>
> Add a basic KASAN test runner that loads and test-runs programs that can
> trigger memory management bugs. The test captures kernel logs and ensures
> that the expected KASAN splat is emitted by searching for the
> corresponding first lines of the report.
>
> This version implements two faulty programs triggering either a
> use-after-free or an out-of-bounds memory access. The bugs are
> triggered through dedicated kfuncs in bpf_testmod.c, but two
> different techniques are used, as some cases can be quite hard to
> trigger in a pure "black box" approach:
> - for reads, we can make the kfuncs return faulty pointers that the
>   eBPF programs then dereference, generating legitimate KASAN reports
>   as a consequence
> - applying the same trick to faulty writes is harder, as eBPF programs
>   can't write kernel data freely. Instead, eBPF programs call another
>   dedicated testing kfunc that poisons the shadow memory matching some
>   BPF-accessible memory (e.g. a map value). When the program then
>   writes to the corresponding memory, it triggers a report as well.
>
> Signed-off-by: Alexis Lothoré (eBPF Foundation) <alexis.lothore@bootlin.com>
> ---
> The way of bringing kasan_poison into bpf_testmod is definitely not
> ideal. But I would like to validate the testing approach (triggering
> real faulty accesses, which is hard in some cases, vs. manually
> poisoning BPF-manipulated memory) before eventually making clean
> bridges between KASAN APIs and bpf_testmod.c, if the latter approach
> is the valid one.

Would it make sense to put these tests into KASAN KUnit tests in
mm/kasan/kasan_test_c.c? I assume there is a kernel API to JIT BPF
programs from the kernel itself?

There, you can just call kasan_poison(), some tests already do this.
And you can also extend the KASAN KUnit test framework to find out
whether the bad access is a read or write, if you want to check this.



> ---
>  tools/testing/selftests/bpf/prog_tests/kasan.c     | 165 +++++++++++++++++++++
>  tools/testing/selftests/bpf/progs/kasan.c          | 146 ++++++++++++++++++
>  .../testing/selftests/bpf/test_kmods/bpf_testmod.c |  79 ++++++++++
>  3 files changed, 390 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/kasan.c b/tools/testing/selftests/bpf/prog_tests/kasan.c
> new file mode 100644
> index 000000000000..fd628aaa8005
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/kasan.c
> @@ -0,0 +1,165 @@
> +// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
> +#include <bpf/bpf.h>
> +#include <fcntl.h>
> +#include <linux/if_ether.h>
> +#include <sys/klog.h>
> +#include <test_progs.h>
> +#include <unpriv_helpers.h>
> +#include "kasan.skel.h"
> +
> +#define SUBTEST_NAME_MAX_LEN   64
> +#define SYSLOG_ACTION_READ_ALL 3
> +#define SYSLOG_ACTION_CLEAR    5
> +
> +#define MAX_LOG_SIZE           (8*1024)
> +#define READ_CHUNK_SIZE                128
> +
> +#define KASAN_PATTERN_SLAB_UAF "BUG: KASAN: slab-use-after-free in bpf_prog_"
> +#define KASAN_PATTERN_GLOBAL_OOB "BUG: KASAN: global-out-of-bounds in bpf_prog_"
> +
> +static char klog_buffer[MAX_LOG_SIZE];
> +
> +static int read_kernel_logs(char *buf, size_t max_len)
> +{
> +       return klogctl(SYSLOG_ACTION_READ_ALL, buf, max_len);
> +}
> +
> +static int clear_kernel_logs(void)
> +{
> +       return klogctl(SYSLOG_ACTION_CLEAR, NULL, 0);
> +}
> +
> +static int kernel_logs_have_matching_kasan_report(char *buf, char *pattern,
> +                                                 bool is_write, int size)
> +{
> +       char *access_desc_start, *access_desc_end, *tmp;
> +       char access_log[READ_CHUNK_SIZE];
> +       char *kasan_report_start;
> +       int hsize, nsize;
> +       /* Searched kasan report is valid if
> +        * - it contains the expected kasan pattern
> +        * - the next line is the description of the faulty access
> +        * - faulty access properties match the tested type and size
> +        */
> +       kasan_report_start = strstr(buf, pattern);
> +
> +       if (!kasan_report_start)
> +               return 1;
> +
> +       /* Find next line */
> +       access_desc_start = strchr(kasan_report_start, '\n');
> +       if (!access_desc_start)
> +               return 1;
> +       access_desc_start++;
> +
> +       access_desc_end = strchr(access_desc_start, '\n');
> +       if (!access_desc_end)
> +               return 1;
> +
> +       nsize = snprintf(access_log, READ_CHUNK_SIZE, "%s of size %d at addr",
> +                is_write ? "Write" : "Read", size);
> +
> +       hsize = access_desc_end - access_desc_start;
> +       tmp = memmem(access_desc_start, hsize, access_log, nsize);
> +
> +       if (!tmp)
> +               return 1;
> +
> +       return 0;
> +}
> +
> +struct test_spec {
> +       char *prog_name;
> +       char *expected_report_pattern;
> +};
> +
> +static struct test_spec tests[] = {
> +       {
> +               .prog_name = "bpf_kasan_uaf",
> +               .expected_report_pattern = KASAN_PATTERN_SLAB_UAF
> +       },
> +       {
> +               .prog_name = "bpf_kasan_oob",
> +               .expected_report_pattern = KASAN_PATTERN_GLOBAL_OOB
> +       }
> +};
> +
> +static void run_test_with_type_and_size(struct kasan *skel,
> +                                       struct test_spec *test, bool is_write,
> +                                       int access_size)
> +{
> +       char subtest_name[SUBTEST_NAME_MAX_LEN];
> +       struct bpf_program *prog;
> +       uint8_t buf[ETH_HLEN];
> +       int ret;
> +
> +       prog = bpf_object__find_program_by_name(skel->obj, test->prog_name);
> +       if (!ASSERT_OK_PTR(prog, "find test prog"))
> +               return;
> +
> +       snprintf(subtest_name, SUBTEST_NAME_MAX_LEN, "%s_%s_%d",
> +                test->prog_name, is_write ? "write" : "read", access_size);
> +
> +       if (!test__start_subtest(subtest_name))
> +               return;
> +
> +       ret = clear_kernel_logs();
> +       if (!ASSERT_OK(ret, "reset log buffer"))
> +               return;
> +
> +       LIBBPF_OPTS(bpf_test_run_opts, topts);
> +       topts.sz = sizeof(struct bpf_test_run_opts);
> +       topts.data_size_in = ETH_HLEN;
> +       topts.data_in = buf;
> +       skel->bss->is_write = is_write;
> +       skel->bss->access_size = access_size;
> +       ret = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
> +       if (!ASSERT_OK(ret, "run prog"))
> +               return;
> +
> +       ret = read_kernel_logs(klog_buffer, MAX_LOG_SIZE);
> +       if (ASSERT_GE(ret, 0, "read kernel logs"))
> +               ASSERT_OK(kernel_logs_have_matching_kasan_report(
> +                                 klog_buffer, test->expected_report_pattern,
> +                                 is_write, access_size),
> +                         test->prog_name);
> +}
> +
> +static void run_test_with_type(struct kasan *skel, struct test_spec *test,
> +                              bool is_write)
> +{
> +       run_test_with_type_and_size(skel, test, is_write, 1);
> +       run_test_with_type_and_size(skel, test, is_write, 2);
> +       run_test_with_type_and_size(skel, test, is_write, 4);
> +       run_test_with_type_and_size(skel, test, is_write, 8);
> +}
> +
> +static void run_test(struct kasan *skel, struct test_spec *test)
> +{
> +       run_test_with_type(skel, test, false);
> +       run_test_with_type(skel, test, true);
> +}
> +
> +void test_kasan(void)
> +{
> +       struct test_spec *test;
> +       struct kasan *skel;
> +       int i;
> +
> +       if (!is_jit_enabled() || !get_kasan_jit_enabled()) {
> +               test__skip();
> +               return;
> +       }
> +
> +       skel = kasan__open_and_load();
> +       if (!ASSERT_OK_PTR(skel, "open and load prog"))
> +               return;
> +
> +       for (i = 0; i < ARRAY_SIZE(tests); i++) {
> +               test = &tests[i];
> +
> +               run_test(skel, test);
> +       }
> +
> +       kasan__destroy(skel);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/kasan.c b/tools/testing/selftests/bpf/progs/kasan.c
> new file mode 100644
> index 000000000000..f713c9b7c9ce
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/kasan.c
> @@ -0,0 +1,146 @@
> +// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
> +
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +#define KASAN_SLAB_FREE 0xFB
> +#define KASAN_GLOBAL_REDZONE 0xF9
> +
> +extern __u8 *bpf_kfunc_kasan_uaf_1(void) __ksym;
> +extern __u16 *bpf_kfunc_kasan_uaf_2(void) __ksym;
> +extern __u32 *bpf_kfunc_kasan_uaf_4(void) __ksym;
> +extern __u64 *bpf_kfunc_kasan_uaf_8(void) __ksym;
> +extern __u8 *bpf_kfunc_kasan_oob_1(void) __ksym;
> +extern __u16 *bpf_kfunc_kasan_oob_2(void) __ksym;
> +extern __u32 *bpf_kfunc_kasan_oob_4(void) __ksym;
> +extern __u64 *bpf_kfunc_kasan_oob_8(void) __ksym;
> +extern void bpf_kfunc_kasan_poison(void *mem, __u32 mem__sz, __u8 byte) __ksym;
> +
> +int access_size;
> +int is_write;
> +
> +struct kasan_write_val {
> +       __u8 data_1;
> +       __u16 data_2;
> +       __u32 data_4;
> +       __u64 data_8;
> +};
> +
> +struct {
> +       __uint(type, BPF_MAP_TYPE_ARRAY);
> +       __uint(max_entries, 1);
> +       __type(key, __u32);
> +       __type(value, struct kasan_write_val);
> +} test_map SEC(".maps");
> +
> +static void bpf_kasan_faulty_write(int size, __u8 poison_byte)
> +{
> +       struct kasan_write_val *val;
> +       __u32 key = 0;
> +
> +       val = bpf_map_lookup_elem(&test_map, &key);
> +       if (!val)
> +               return;
> +
> +       bpf_kfunc_kasan_poison(val, sizeof(struct kasan_write_val),
> +                              poison_byte);
> +       switch (size) {
> +       case 1:
> +               val->data_1 = 0xAA;
> +               break;
> +       case 2:
> +               val->data_2 = 0xAA;
> +               break;
> +       case 4:
> +               val->data_4 = 0xAA;
> +               break;
> +       case 8:
> +               val->data_8 = 0xAA;
> +               break;
> +       }
> +       bpf_kfunc_kasan_poison(val, sizeof(struct kasan_write_val), 0x00);
> +}
> +
> +static int bpf_kasan_uaf_read(int size)
> +{
> +       __u8 *result_1;
> +       __u16 *result_2;
> +       __u32 *result_4;
> +       __u64 *result_8;
> +       int ret = 0;
> +
> +       switch (size) {
> +       case 1:
> +               result_1 = bpf_kfunc_kasan_uaf_1();
> +               ret = result_1[0] ? 1 : 0;
> +               break;
> +       case 2:
> +               result_2 = bpf_kfunc_kasan_uaf_2();
> +               ret = result_2[0] ? 1 : 0;
> +               break;
> +       case 4:
> +               result_4 = bpf_kfunc_kasan_uaf_4();
> +               ret = result_4[0] ? 1 : 0;
> +               break;
> +       case 8:
> +               result_8 = bpf_kfunc_kasan_uaf_8();
> +               ret = result_8[0] ? 1 : 0;
> +               break;
> +       }
> +       return ret;
> +}
> +
> +SEC("tcx/ingress")
> +int bpf_kasan_uaf(struct __sk_buff *skb)
> +{
> +       if (is_write) {
> +               bpf_kasan_faulty_write(access_size, KASAN_SLAB_FREE);
> +               return 0;
> +       }
> +
> +       return bpf_kasan_uaf_read(access_size);
> +}
> +
> +static int bpf_kasan_oob_read(int size)
> +{
> +       __u8 *result_1;
> +       __u16 *result_2;
> +       __u32 *result_4;
> +       __u64 *result_8;
> +       int ret = 0;
> +
> +       switch (size) {
> +       case 1:
> +               result_1 = bpf_kfunc_kasan_oob_1();
> +               ret = result_1[0] ? 1 : 0;
> +               break;
> +       case 2:
> +               result_2 = bpf_kfunc_kasan_oob_2();
> +               ret = result_2[0] ? 1 : 0;
> +               break;
> +       case 4:
> +               result_4 = bpf_kfunc_kasan_oob_4();
> +               ret = result_4[0] ? 1 : 0;
> +               break;
> +       case 8:
> +               result_8 = bpf_kfunc_kasan_oob_8();
> +               ret = result_8[0] ? 1 : 0;
> +               break;
> +       }
> +       return ret;
> +}
> +
> +SEC("tcx/ingress")
> +int bpf_kasan_oob(struct __sk_buff *skb)
> +{
> +       if (is_write) {
> +               bpf_kasan_faulty_write(access_size, KASAN_GLOBAL_REDZONE);
> +               return 0;
> +       }
> +
> +       return bpf_kasan_oob_read(access_size);
> +}
> +
> +char LICENSE[] SEC("license") = "GPL";
> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> index d876314a4d67..01554bcbbbb0 100644
> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> @@ -271,6 +271,76 @@ __bpf_kfunc void bpf_kfunc_put_default_trusted_ptr_test(struct prog_test_member
>          */
>  }
>
> +static void *kasan_uaf(void)
> +{
> +       void *p = kmalloc(64, GFP_ATOMIC);
> +
> +       if (!p)
> +               return NULL;
> +       memset(p, 0xAA, 64);
> +       kfree(p);
> +
> +       return p;
> +}
> +
> +#ifdef CONFIG_KASAN_GENERIC
> +extern void kasan_poison(const void *addr, size_t size, u8 value, bool init);
> +
> +__bpf_kfunc void bpf_kfunc_kasan_poison(void *mem, u32 mem__sz, u8 byte)
> +{
> +       kasan_poison(mem, mem__sz, byte, false);
> +}
> +#else
> +__bpf_kfunc void bpf_kfunc_kasan_poison(void *mem, u32 mem__sz, u8 byte) { }
> +#endif
> +
> +__bpf_kfunc u8 *bpf_kfunc_kasan_uaf_1(void)
> +{
> +       return kasan_uaf();
> +}
> +
> +__bpf_kfunc u16 *bpf_kfunc_kasan_uaf_2(void)
> +{
> +       return kasan_uaf();
> +}
> +
> +__bpf_kfunc u32 *bpf_kfunc_kasan_uaf_4(void)
> +{
> +       return kasan_uaf();
> +}
> +
> +__bpf_kfunc u64 *bpf_kfunc_kasan_uaf_8(void)
> +{
> +       return kasan_uaf();
> +}
> +
> +static u8 test_oob_buffer[64];
> +
> +static void *bpf_kfunc_kasan_oob(void)
> +{
> +       return test_oob_buffer + 64;
> +}
> +
> +__bpf_kfunc u8 *bpf_kfunc_kasan_oob_1(void)
> +{
> +       return bpf_kfunc_kasan_oob();
> +}
> +
> +__bpf_kfunc u16 *bpf_kfunc_kasan_oob_2(void)
> +{
> +       return bpf_kfunc_kasan_oob();
> +}
> +
> +__bpf_kfunc u32 *bpf_kfunc_kasan_oob_4(void)
> +{
> +       return bpf_kfunc_kasan_oob();
> +}
> +
> +__bpf_kfunc u64 *bpf_kfunc_kasan_oob_8(void)
> +{
> +       return bpf_kfunc_kasan_oob();
> +}
> +
>  __bpf_kfunc struct bpf_testmod_ctx *
>  bpf_testmod_ctx_create(int *err)
>  {
> @@ -740,6 +810,15 @@ BTF_ID_FLAGS(func, bpf_testmod_ops3_call_test_1)
>  BTF_ID_FLAGS(func, bpf_testmod_ops3_call_test_2)
>  BTF_ID_FLAGS(func, bpf_kfunc_get_default_trusted_ptr_test);
>  BTF_ID_FLAGS(func, bpf_kfunc_put_default_trusted_ptr_test);
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_poison)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_1)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_2)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_4)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_uaf_8)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_1)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_2)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_4)
> +BTF_ID_FLAGS(func, bpf_kfunc_kasan_oob_8)
>  BTF_KFUNCS_END(bpf_testmod_common_kfunc_ids)
>
>  BTF_ID_LIST(bpf_testmod_dtor_ids)
>
> --
> 2.53.0
>


Thread overview: 12+ messages
2026-04-13 18:28 [PATCH RFC bpf-next 0/8] bpf: add support for KASAN checks in JITed programs Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 1/8] kasan: expose generic kasan helpers Alexis Lothoré (eBPF Foundation)
2026-04-13 22:19   ` Andrey Konovalov
2026-04-13 18:28 ` [PATCH RFC bpf-next 2/8] bpf: mark instructions accessing program stack Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 3/8] bpf: add BPF_JIT_KASAN for KASAN instrumentation of JITed programs Alexis Lothoré (eBPF Foundation)
2026-04-13 22:20   ` Andrey Konovalov
2026-04-13 18:28 ` [PATCH RFC bpf-next 4/8] bpf, x86: add helper to emit kasan checks in x86 " Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 5/8] bpf, x86: emit KASAN checks into " Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 6/8] selftests/bpf: do not run verifier JIT tests when BPF_JIT_KASAN is enabled Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 7/8] bpf, x86: enable KASAN for JITed programs on x86 Alexis Lothoré (eBPF Foundation)
2026-04-13 18:28 ` [PATCH RFC bpf-next 8/8] selftests/bpf: add tests to validate KASAN on JIT programs Alexis Lothoré (eBPF Foundation)
2026-04-13 22:20   ` Andrey Konovalov
