linux-mm.kvack.org archive mirror
* [PATCH mm 00/11] kasan: assorted clean-ups
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Code clean-ups, nothing worthy of being backported to stable.

This series goes on top of the "kasan: save mempool stack traces" one.

Andrey Konovalov (11):
  kasan/arm64: improve comments for KASAN_SHADOW_START/END
  mm, kasan: use KASAN_TAG_KERNEL instead of 0xff
  kasan: improve kasan_non_canonical_hook
  kasan: clean up kasan_requires_meta
  kasan: update kasan_poison documentation comment
  kasan: clean up is_kfence_address checks
  kasan: respect CONFIG_KASAN_VMALLOC for kasan_flag_vmalloc
  kasan: check kasan_vmalloc_enabled in vmalloc tests
  kasan: export kasan_poison as GPL
  kasan: remove SLUB checks for page_alloc fallbacks in tests
  kasan: speed up match_all_mem_tag test for SW_TAGS

 arch/arm64/include/asm/kasan.h  | 22 +--------------
 arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++++-----
 arch/arm64/mm/kasan_init.c      |  5 ++++
 include/linux/kasan.h           |  1 +
 include/linux/mm.h              |  4 +--
 mm/kasan/common.c               | 26 +++++++++++-------
 mm/kasan/hw_tags.c              |  8 ++++++
 mm/kasan/kasan.h                | 48 ++++++++++++++++-----------------
 mm/kasan/kasan_test.c           | 45 ++++++++++++++-----------------
 mm/kasan/report.c               | 34 +++++++++++++----------
 mm/kasan/shadow.c               | 14 +---------
 mm/page_alloc.c                 |  2 +-
 12 files changed, 131 insertions(+), 116 deletions(-)

-- 
2.25.1




* [PATCH mm 01/11] kasan/arm64: improve comments for KASAN_SHADOW_START/END
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Unify and improve the comments for KASAN_SHADOW_START/END definitions
from include/asm/kasan.h and include/asm/memory.h.

Also put both definitions together in include/asm/memory.h.

Also clarify the related BUILD_BUG_ON checks in mm/kasan_init.c.
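
As a rough illustration (not part of the diff below), the documented mapping
boils down to the following sketch, where the helper name is hypothetical and
KASAN_SHADOW_SCALE_SHIFT is 3 for the Generic mode:

    /* Hypothetical helper mirroring the documented formula. */
    static inline void *example_mem_to_shadow(unsigned long addr)
    {
            /* One shadow byte covers (1 << KASAN_SHADOW_SCALE_SHIFT) bytes. */
            return (void *)((addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    }

Plugging the upper bound of the virtual address space, UL(1) << 64, into this
formula gives (UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET,
which is exactly how KASAN_SHADOW_END is defined below.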

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/kasan.h  | 22 +------------------
 arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++++++++++------
 arch/arm64/mm/kasan_init.c      |  5 +++++
 3 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 12d5f47f7dbe..7eefc525a9df 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -15,29 +15,9 @@
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
+asmlinkage void kasan_early_init(void);
 void kasan_init(void);
-
-/*
- * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
- * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
- * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- *
- * KASAN_SHADOW_OFFSET:
- * This value is used to map an address to the corresponding shadow
- * address by the following formula:
- *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
- *
- * (1 << (64 - KASAN_SHADOW_SCALE_SHIFT)) shadow addresses that lie in range
- * [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END) cover all 64-bits of virtual
- * addresses. So KASAN_SHADOW_OFFSET should satisfy the following equation:
- *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
- *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
- */
-#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(vabits_actual)
-
 void kasan_copy_shadow(pgd_t *pgdir);
-asmlinkage void kasan_early_init(void);
 
 #else
 static inline void kasan_init(void) { }
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index fde4186cc387..0f139cb4467b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -65,15 +65,41 @@
 #define KERNEL_END		_end
 
 /*
- * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
- * address space for the shadow region respectively. They can bloat the stack
- * significantly, so double the (minimum) stack size when they are in use.
+ * Generic and Software Tag-Based KASAN modes require 1/8th and 1/16th of the
+ * kernel virtual address space for storing the shadow memory respectively.
+ *
+ * The mapping between a virtual memory address and its corresponding shadow
+ * memory address is defined based on the formula:
+ *
+ *     shadow_addr = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
+ *
+ * where KASAN_SHADOW_SCALE_SHIFT is the order of the number of bits that map
+ * to a single shadow byte and KASAN_SHADOW_OFFSET is a constant that offsets
+ * the mapping. Note that KASAN_SHADOW_OFFSET does not point to the start of
+ * the shadow memory region.
+ *
+ * Based on this mapping, we define two constants:
+ *
+ *     KASAN_SHADOW_START: the start of the shadow memory region;
+ *     KASAN_SHADOW_END: the end of the shadow memory region.
+ *
+ * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
+ * the upper bound of possible virtual kernel memory addresses UL(1) << 64
+ * according to the mapping formula.
+ *
+ * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
+ * memory start must map to the lowest possible kernel virtual memory address
+ * and thus it depends on the actual bitness of the address space.
+ *
+ * As KASAN inserts redzones between stack variables, this increases the stack
+ * memory usage significantly. Thus, we double the (minimum) stack size.
  */
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
-#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
-					+ KASAN_SHADOW_OFFSET)
-#define PAGE_END		(KASAN_SHADOW_END - (1UL << (vabits_actual - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
+#define PAGE_END		KASAN_SHADOW_START
 #define KASAN_THREAD_SHIFT	1
 #else
 #define KASAN_THREAD_SHIFT	0
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 555285ebd5af..4c7ad574b946 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -170,6 +170,11 @@ asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
 		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+	/*
+	 * We cannot check the actual value of KASAN_SHADOW_START during build,
+	 * as it depends on vabits_actual. As a best-effort approach, check
+	 * potential values calculated based on VA_BITS and VA_BITS_MIN.
+	 */
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
-- 
2.25.1




* [PATCH mm 02/11] mm, kasan: use KASAN_TAG_KERNEL instead of 0xff
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Use the KASAN_TAG_KERNEL macro instead of open-coding 0xff in the mm
code. This macro is provided by include/linux/kasan-tags.h, which does
not include any other headers, so it's safe to include it into mm.h
without causing circular include dependencies.
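
For reference, include/linux/kasan-tags.h defines the macro along these lines
(quoted approximately, not part of this patch):

    #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */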

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/kasan.h | 1 +
 include/linux/mm.h    | 4 ++--
 mm/page_alloc.c       | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d49e3d4c099e..dbb06d789e74 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -4,6 +4,7 @@
 
 #include <linux/bug.h>
 #include <linux/kasan-enabled.h>
+#include <linux/kasan-tags.h>
 #include <linux/kernel.h>
 #include <linux/static_key.h>
 #include <linux/types.h>
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a422cc123a2d..8b2e4841e817 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1815,7 +1815,7 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
 
 static inline u8 page_kasan_tag(const struct page *page)
 {
-	u8 tag = 0xff;
+	u8 tag = KASAN_TAG_KERNEL;
 
 	if (kasan_enabled()) {
 		tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
@@ -1844,7 +1844,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
 static inline void page_kasan_tag_reset(struct page *page)
 {
 	if (kasan_enabled())
-		page_kasan_tag_set(page, 0xff);
+		page_kasan_tag_set(page, KASAN_TAG_KERNEL);
 }
 
 #else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7ea9c33320bf..51e85760877a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1059,7 +1059,7 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
 		return deferred_pages_enabled();
 
-	return page_kasan_tag(page) == 0xff;
+	return page_kasan_tag(page) == KASAN_TAG_KERNEL;
 }
 
 static void kernel_init_pages(struct page *page, int numpages)
-- 
2.25.1




* [PATCH mm 03/11] kasan: improve kasan_non_canonical_hook
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Make kasan_non_canonical_hook report with more confidence (i.e. say
"probably" instead of "maybe") when the address belongs to the shadow memory
region for kernel addresses.

Also use the kasan_shadow_to_mem helper to calculate the original address.

Also improve the comments in kasan_non_canonical_hook.
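
For context, kasan_shadow_to_mem is simply the inverse of the memory-to-shadow
mapping; its generic definition in mm/kasan/kasan.h is roughly:

    static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
    {
            return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
                    << KASAN_SHADOW_SCALE_SHIFT);
    }

so the hunk below computes orig_addr the same way as before, just without
open-coding the shift.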

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h  |  6 ++++++
 mm/kasan/report.c | 34 ++++++++++++++++++++--------------
 2 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 69e4f5e58e33..0e209b823b2c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -307,6 +307,12 @@ struct kasan_stack_ring {
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
+static __always_inline bool addr_in_shadow(const void *addr)
+{
+	return addr >= (void *)KASAN_SHADOW_START &&
+		addr < (void *)KASAN_SHADOW_END;
+}
+
 #ifndef kasan_shadow_to_mem
 static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
 {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a938237f6882..4bc7ac9fb37d 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -635,37 +635,43 @@ void kasan_report_async(void)
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 /*
- * With CONFIG_KASAN_INLINE, accesses to bogus pointers (outside the high
- * canonical half of the address space) cause out-of-bounds shadow memory reads
- * before the actual access. For addresses in the low canonical half of the
- * address space, as well as most non-canonical addresses, that out-of-bounds
- * shadow memory access lands in the non-canonical part of the address space.
- * Help the user figure out what the original bogus pointer was.
+ * With compiler-based KASAN modes, accesses to bogus pointers (outside of the
+ * mapped kernel address space regions) cause faults when KASAN tries to check
+ * the shadow memory before the actual memory access. This results in cryptic
+ * GPF reports, which are hard for users to interpret. This hook helps users to
+ * figure out what the original bogus pointer was.
  */
 void kasan_non_canonical_hook(unsigned long addr)
 {
 	unsigned long orig_addr;
 	const char *bug_type;
 
+	/*
+	 * All addresses that came as a result of the memory-to-shadow mapping
+	 * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+	 */
 	if (addr < KASAN_SHADOW_OFFSET)
 		return;
 
-	orig_addr = (addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
+	orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
+
 	/*
 	 * For faults near the shadow address for NULL, we can be fairly certain
 	 * that this is a KASAN shadow memory access.
-	 * For faults that correspond to shadow for low canonical addresses, we
-	 * can still be pretty sure - that shadow region is a fairly narrow
-	 * chunk of the non-canonical address space.
-	 * But faults that look like shadow for non-canonical addresses are a
-	 * really large chunk of the address space. In that case, we still
-	 * print the decoded address, but make it clear that this is not
-	 * necessarily what's actually going on.
+	 * For faults that correspond to the shadow for low or high canonical
+	 * addresses, we can still be pretty sure: these shadow regions are a
+	 * fairly narrow chunk of the address space.
+	 * But the shadow for non-canonical addresses is a really large chunk
+	 * of the address space. For this case, we still print the decoded
+	 * address, but make it clear that this is not necessarily what's
+	 * actually going on.
 	 */
 	if (orig_addr < PAGE_SIZE)
 		bug_type = "null-ptr-deref";
 	else if (orig_addr < TASK_SIZE)
 		bug_type = "probably user-memory-access";
+	else if (addr_in_shadow((void *)addr))
+		bug_type = "probably wild-memory-access";
 	else
 		bug_type = "maybe wild-memory-access";
 	pr_alert("KASAN: %s in range [0x%016lx-0x%016lx]\n", bug_type,
-- 
2.25.1




* [PATCH mm 04/11] kasan: clean up kasan_requires_meta
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Currently, for Generic KASAN mode, kasan_requires_meta is defined to
return kasan_stack_collection_enabled.

Even though the Generic mode does not support disabling stack trace
collection, kasan_requires_meta was implemented this way to make it easier
to add such support for the Generic mode in the future.

However, for the Generic mode, the per-object metadata also stores the
quarantine link. So even if disabling stack collection is implemented,
the per-object metadata will still be required.

Fix kasan_requires_meta to return true for the Generic mode and update
the related comments.

This change does not fix any observable bugs but rather just brings the
code to a cleaner state.
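
As a rough illustration of why the metadata stays required (the exact layout
in mm/kasan/kasan.h may differ, this is only a sketch), the Generic mode's
free metadata carries the quarantine linkage:

    struct kasan_free_meta {
            struct qlist_node quarantine_link;      /* links the object into the quarantine */
            struct kasan_track free_track;          /* free stack trace */
    };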

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0e209b823b2c..38af25b9c89c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -101,21 +101,21 @@ static inline bool kasan_sample_page_alloc(unsigned int order)
 
 #ifdef CONFIG_KASAN_GENERIC
 
-/* Generic KASAN uses per-object metadata to store stack traces. */
+/*
+ * Generic KASAN uses per-object metadata to store alloc and free stack traces
+ * and the quarantine link.
+ */
 static inline bool kasan_requires_meta(void)
 {
-	/*
-	 * Technically, Generic KASAN always collects stack traces right now.
-	 * However, let's use kasan_stack_collection_enabled() in case the
-	 * kasan.stacktrace command-line argument is changed to affect
-	 * Generic KASAN.
-	 */
-	return kasan_stack_collection_enabled();
+	return true;
 }
 
 #else /* CONFIG_KASAN_GENERIC */
 
-/* Tag-based KASAN modes do not use per-object metadata. */
+/*
+ * Tag-based KASAN modes do not use per-object metadata: they use the stack
+ * ring to store alloc and free stack traces and do not use quarantine.
+ */
 static inline bool kasan_requires_meta(void)
 {
 	return false;
-- 
2.25.1




* [PATCH mm 05/11] kasan: update kasan_poison documentation comment
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

The comment for kasan_poison says that the size argument gets aligned by
the function to KASAN_GRANULE_SIZE, which is wrong: the argument must
already be aligned when it is passed to the function.

Remove the invalid part of the comment.
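
In other words, callers are expected to do the rounding themselves, as in this
sketch of a typical slab call site:

    kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
                 KASAN_SLAB_FREE, init);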

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 38af25b9c89c..1c34511090d7 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -513,8 +513,6 @@ static inline bool kasan_byte_accessible(const void *addr)
  * @size - range size, must be aligned to KASAN_GRANULE_SIZE
  * @value - value that's written to metadata for the range
  * @init - whether to initialize the memory range (only for hardware tag-based)
- *
- * The size gets aligned to KASAN_GRANULE_SIZE before marking the range.
  */
 void kasan_poison(const void *addr, size_t size, u8 value, bool init);
 
-- 
2.25.1




* [PATCH mm 06/11] kasan: clean up is_kfence_address checks
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

1. Do not untag addresses that are passed to is_kfence_address: it
   tolerates tagged addresses.

2. Move is_kfence_address checks from internal KASAN functions
   (kasan_poison/unpoison, etc.) to external-facing ones.

   Note that kasan_poison/unpoison are no longer called outside of
   KASAN/slab code, so the comment about such external callers is outdated;
   drop it.

3. Simplify/reorganize the code around the updated checks.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/common.c | 26 +++++++++++++++++---------
 mm/kasan/kasan.h  | 16 ++--------------
 mm/kasan/shadow.c | 12 ------------
 3 files changed, 19 insertions(+), 35 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f4255e807b74..86adf80cc11a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -79,6 +79,9 @@ EXPORT_SYMBOL(kasan_disable_current);
 
 void __kasan_unpoison_range(const void *address, size_t size)
 {
+	if (is_kfence_address(address))
+		return;
+
 	kasan_unpoison(address, size, false);
 }
 
@@ -218,9 +221,6 @@ static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
 	tagged_object = object;
 	object = kasan_reset_tag(object);
 
-	if (is_kfence_address(object))
-		return false;
-
 	if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
 		kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_INVALID_FREE);
 		return true;
@@ -247,7 +247,12 @@ static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
 bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 				unsigned long ip, bool init)
 {
-	bool buggy_object = poison_slab_object(cache, object, ip, init);
+	bool buggy_object;
+
+	if (is_kfence_address(object))
+		return false;
+
+	buggy_object = poison_slab_object(cache, object, ip, init);
 
 	return buggy_object ? true : kasan_quarantine_put(cache, object);
 }
@@ -359,7 +364,7 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
 	if (unlikely(object == NULL))
 		return NULL;
 
-	if (is_kfence_address(kasan_reset_tag(object)))
+	if (is_kfence_address(object))
 		return (void *)object;
 
 	/* The object has already been unpoisoned by kasan_slab_alloc(). */
@@ -417,7 +422,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
 	if (unlikely(object == ZERO_SIZE_PTR))
 		return (void *)object;
 
-	if (is_kfence_address(kasan_reset_tag(object)))
+	if (is_kfence_address(object))
 		return (void *)object;
 
 	/*
@@ -483,6 +488,9 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 		return true;
 	}
 
+	if (is_kfence_address(ptr))
+		return false;
+
 	slab = folio_slab(folio);
 	return !poison_slab_object(slab->slab_cache, ptr, ip, false);
 }
@@ -492,9 +500,6 @@ void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 	struct slab *slab;
 	gfp_t flags = 0; /* Might be executing under a lock. */
 
-	if (is_kfence_address(kasan_reset_tag(ptr)))
-		return;
-
 	slab = virt_to_slab(ptr);
 
 	/*
@@ -507,6 +512,9 @@ void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 		return;
 	}
 
+	if (is_kfence_address(ptr))
+		return;
+
 	/* Unpoison the object and save alloc info for non-kmalloc() allocations. */
 	unpoison_slab_object(slab->slab_cache, ptr, size, flags);
 
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1c34511090d7..5fbcc1b805bc 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -466,35 +466,23 @@ static inline u8 kasan_random_tag(void) { return 0; }
 
 static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
-	addr = kasan_reset_tag(addr);
-
-	/* Skip KFENCE memory if called explicitly outside of sl*b. */
-	if (is_kfence_address(addr))
-		return;
-
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 	if (WARN_ON(size & KASAN_GRANULE_MASK))
 		return;
 
-	hw_set_mem_tag_range((void *)addr, size, value, init);
+	hw_set_mem_tag_range(kasan_reset_tag(addr), size, value, init);
 }
 
 static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 {
 	u8 tag = get_tag(addr);
 
-	addr = kasan_reset_tag(addr);
-
-	/* Skip KFENCE memory if called explicitly outside of sl*b. */
-	if (is_kfence_address(addr))
-		return;
-
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
-	hw_set_mem_tag_range((void *)addr, size, tag, init);
+	hw_set_mem_tag_range(kasan_reset_tag(addr), size, tag, init);
 }
 
 static inline bool kasan_byte_accessible(const void *addr)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 0154d200be40..30625303d01a 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -135,10 +135,6 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 	 */
 	addr = kasan_reset_tag(addr);
 
-	/* Skip KFENCE memory if called explicitly outside of sl*b. */
-	if (is_kfence_address(addr))
-		return;
-
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 	if (WARN_ON(size & KASAN_GRANULE_MASK))
@@ -175,14 +171,6 @@ void kasan_unpoison(const void *addr, size_t size, bool init)
 	 */
 	addr = kasan_reset_tag(addr);
 
-	/*
-	 * Skip KFENCE memory if called explicitly outside of sl*b. Also note
-	 * that calls to ksize(), where size is not a multiple of machine-word
-	 * size, would otherwise poison the invalid portion of the word.
-	 */
-	if (is_kfence_address(addr))
-		return;
-
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 
-- 
2.25.1




* [PATCH mm 07/11] kasan: respect CONFIG_KASAN_VMALLOC for kasan_flag_vmalloc
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Never enable the kasan_flag_vmalloc static branch unless
CONFIG_KASAN_VMALLOC is enabled.

This does not fix any observable bugs (vmalloc annotations for the
HW_TAGS mode are no-ops with CONFIG_KASAN_VMALLOC disabled) but rather
just cleans up the code.
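
For context, kasan_vmalloc_enabled() simply reads this static branch, and the
HW_TAGS vmalloc hooks gate themselves on it along the lines of:

    if (!kasan_vmalloc_enabled())
            return (void *)start;   /* vmalloc tagging is off: leave the area as-is */

With the flag defined as a false static key for !CONFIG_KASAN_VMALLOC, this
gate now always bails out early when vmalloc support is compiled out.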

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/hw_tags.c | 7 +++++++
 mm/kasan/kasan.h   | 1 +
 2 files changed, 8 insertions(+)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 06141bbc1e51..80f11a3eccd5 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -57,7 +57,11 @@ enum kasan_mode kasan_mode __ro_after_init;
 EXPORT_SYMBOL_GPL(kasan_mode);
 
 /* Whether to enable vmalloc tagging. */
+#ifdef CONFIG_KASAN_VMALLOC
 DEFINE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
+#else
+DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
+#endif
 
 #define PAGE_ALLOC_SAMPLE_DEFAULT	1
 #define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT	3
@@ -119,6 +123,9 @@ static int __init early_kasan_flag_vmalloc(char *arg)
 	if (!arg)
 		return -EINVAL;
 
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
+		return 0;
+
 	if (!strcmp(arg, "off"))
 		kasan_arg_vmalloc = KASAN_ARG_VMALLOC_OFF;
 	else if (!strcmp(arg, "on"))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 5fbcc1b805bc..dee105ba32dd 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -49,6 +49,7 @@ DECLARE_PER_CPU(long, kasan_page_alloc_skip);
 
 static inline bool kasan_vmalloc_enabled(void)
 {
+	/* Static branch is never enabled with CONFIG_KASAN_VMALLOC disabled. */
 	return static_branch_likely(&kasan_flag_vmalloc);
 }
 
-- 
2.25.1




* [PATCH mm 08/11] kasan: check kasan_vmalloc_enabled in vmalloc tests
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Check that vmalloc poisoning is not disabled via command line when
running the vmalloc-related KASAN tests. Skip the tests otherwise.
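
As a usage note, this matters when an HW_TAGS kernel is booted with vmalloc
tagging turned off on the command line:

    kasan.vmalloc=off

With this change, the affected tests are reported as skipped (with the
"Test requires kasan.vmalloc=on" message) rather than being run with vmalloc
tagging disabled.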

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/hw_tags.c    |  1 +
 mm/kasan/kasan.h      |  5 +++++
 mm/kasan/kasan_test.c | 11 ++++++++++-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 80f11a3eccd5..2b994092a2d4 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -62,6 +62,7 @@ DEFINE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
 #else
 DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
 #endif
+EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
 
 #define PAGE_ALLOC_SAMPLE_DEFAULT	1
 #define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT	3
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index dee105ba32dd..acc1a9410f0d 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -83,6 +83,11 @@ static inline bool kasan_sample_page_alloc(unsigned int order)
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
+static inline bool kasan_vmalloc_enabled(void)
+{
+	return IS_ENABLED(CONFIG_KASAN_VMALLOC);
+}
+
 static inline bool kasan_async_fault_possible(void)
 {
 	return false;
diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 1c77c73ff287..496154e38965 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1540,6 +1540,9 @@ static void vmalloc_helpers_tags(struct kunit *test)
 
 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
 
+	if (!kasan_vmalloc_enabled())
+		kunit_skip(test, "Test requires kasan.vmalloc=on");
+
 	ptr = vmalloc(PAGE_SIZE);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
@@ -1574,6 +1577,9 @@ static void vmalloc_oob(struct kunit *test)
 
 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
 
+	if (!kasan_vmalloc_enabled())
+		kunit_skip(test, "Test requires kasan.vmalloc=on");
+
 	v_ptr = vmalloc(size);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
 
@@ -1627,6 +1633,9 @@ static void vmap_tags(struct kunit *test)
 
 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
 
+	if (!kasan_vmalloc_enabled())
+		kunit_skip(test, "Test requires kasan.vmalloc=on");
+
 	p_page = alloc_pages(GFP_KERNEL, 1);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p_page);
 	p_ptr = page_address(p_page);
@@ -1745,7 +1754,7 @@ static void match_all_not_assigned(struct kunit *test)
 		free_pages((unsigned long)ptr, order);
 	}
 
-	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
+	if (!kasan_vmalloc_enabled())
 		return;
 
 	for (i = 0; i < 256; i++) {
-- 
2.25.1




* [PATCH mm 09/11] kasan: export kasan_poison as GPL
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

KASAN uses EXPORT_SYMBOL_GPL for symbols whose exporting is only required
for KASAN tests when they are built as a module.

kasan_poison is one of those symbols, so export it as GPL.
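
For context, the export is only needed when the KASAN tests are built as a
module, e.g. with:

    CONFIG_KASAN_KUNIT_TEST=m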

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/shadow.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 30625303d01a..9ef84f31833f 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -145,7 +145,7 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 
 	__memset(shadow_start, value, shadow_end - shadow_start);
 }
-EXPORT_SYMBOL(kasan_poison);
+EXPORT_SYMBOL_GPL(kasan_poison);
 
 #ifdef CONFIG_KASAN_GENERIC
 void kasan_poison_last_granule(const void *addr, size_t size)
-- 
2.25.1




* [PATCH mm 10/11] kasan: remove SLUB checks for page_alloc fallbacks in tests
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

A number of KASAN tests rely on the fact that calling kmalloc with a size
larger than an order-1 page falls back onto page_alloc.

This fallback was originally only implemented for SLUB, but since
commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1
page to page allocator"), it is also implemented for SLAB.

Thus, drop the SLUB checks from the tests.
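
For illustration (assuming 4K pages, where KMALLOC_MAX_CACHE_SIZE corresponds
to an order-1 allocation), the boundary the tests rely on now looks the same
for both SLUB and SLAB:

    char *ptr;

    /* Still served from a kmalloc slab cache: */
    ptr = kmalloc(KMALLOC_MAX_CACHE_SIZE, GFP_KERNEL);

    /* Larger than any kmalloc cache: falls back to the page allocator: */
    ptr = kmalloc(KMALLOC_MAX_CACHE_SIZE + 10, GFP_KERNEL);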

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan_test.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 496154e38965..798df4983858 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -215,7 +215,7 @@ static void kmalloc_node_oob_right(struct kunit *test)
 
 /*
  * Check that KASAN detects an out-of-bounds access for a big object allocated
- * via kmalloc(). But not as big as to trigger the page_alloc fallback for SLUB.
+ * via kmalloc(). But not as big as to trigger the page_alloc fallback.
  */
 static void kmalloc_big_oob_right(struct kunit *test)
 {
@@ -233,8 +233,7 @@ static void kmalloc_big_oob_right(struct kunit *test)
 /*
  * The kmalloc_large_* tests below use kmalloc() to allocate a memory chunk
  * that does not fit into the largest slab cache and therefore is allocated via
- * the page_alloc fallback for SLUB. SLAB has no such fallback, and thus these
- * tests are not supported for it.
+ * the page_alloc fallback.
  */
 
 static void kmalloc_large_oob_right(struct kunit *test)
@@ -242,8 +241,6 @@ static void kmalloc_large_oob_right(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
@@ -258,8 +255,6 @@ static void kmalloc_large_uaf(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 	kfree(ptr);
@@ -272,8 +267,6 @@ static void kmalloc_large_invalid_free(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
@@ -407,18 +400,12 @@ static void krealloc_less_oob(struct kunit *test)
 
 static void krealloc_large_more_oob(struct kunit *test)
 {
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
 					KMALLOC_MAX_CACHE_SIZE + 235);
 }
 
 static void krealloc_large_less_oob(struct kunit *test)
 {
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
 					KMALLOC_MAX_CACHE_SIZE + 201);
 }
@@ -1144,9 +1131,6 @@ static void mempool_kmalloc_large_uaf(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	void *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_uaf_helper(test, &pool, false);
@@ -1215,9 +1199,6 @@ static void mempool_kmalloc_large_double_free(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	char *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_double_free_helper(test, &pool);
@@ -1272,9 +1253,6 @@ static void mempool_kmalloc_large_invalid_free(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	char *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_kmalloc_invalid_free_helper(test, &pool);
-- 
2.25.1




* [PATCH mm 11/11] kasan: speed up match_all_mem_tag test for SW_TAGS
From: andrey.konovalov @ 2023-12-21 20:04 UTC (permalink / raw)
  To: Marco Elver
  Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Andrew Morton, linux-mm,
	linux-kernel, Andrey Konovalov

From: Andrey Konovalov <andreyknvl@google.com>

Checking all 256 possible tag values in the match_all_mem_tag KASAN test
is slow and produces 256 reports. Instead, just check the first 8 and the
last 8.
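
Assuming the SW_TAGS tag layout (KASAN_TAG_MIN == 0x00 and
KASAN_TAG_KERNEL == 0xFF), the tag values still exercised are:

    [KASAN_TAG_MIN, KASAN_TAG_MIN + 7]              /* 0x00..0x07 */
    [KASAN_TAG_KERNEL - 7, KASAN_TAG_KERNEL]        /* 0xf8..0xff */

i.e. at most 16 of the 256 possible tag values still trigger reports.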

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/kasan/kasan_test.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 798df4983858..9c3a1aaab21b 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1785,6 +1785,14 @@ static void match_all_mem_tag(struct kunit *test)
 
 	/* For each possible tag value not matching the pointer tag. */
 	for (tag = KASAN_TAG_MIN; tag <= KASAN_TAG_KERNEL; tag++) {
+		/*
+		 * For Software Tag-Based KASAN, skip the majority of tag
+		 * values to avoid the test printing too many reports.
+		 */
+		if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) &&
+		    tag >= KASAN_TAG_MIN + 8 && tag <= KASAN_TAG_KERNEL - 8)
+			continue;
+
 		if (tag == get_tag(ptr))
 			continue;
 
-- 
2.25.1


