From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: luto@kernel.org, xin@zytor.com, kirill.shutemov@linux.intel.com,
palmer@dabbelt.com, tj@kernel.org, andreyknvl@gmail.com,
brgerst@gmail.com, ardb@kernel.org, dave.hansen@linux.intel.com,
jgross@suse.com, will@kernel.org, akpm@linux-foundation.org,
arnd@arndb.de, corbet@lwn.net, maciej.wieczor-retman@intel.com,
dvyukov@google.com, richard.weiyang@gmail.com, ytcoode@gmail.com,
tglx@linutronix.de, hpa@zytor.com, seanjc@google.com,
paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
justinstitt@google.com, jason.andryuk@amd.com, glider@google.com,
ubizjak@gmail.com, jannh@google.com, bhe@redhat.com,
vincenzo.frascino@arm.com, rafael.j.wysocki@intel.com,
ndesaulniers@google.com, mingo@redhat.com,
catalin.marinas@arm.com, junichi.nomura@nec.com,
nathan@kernel.org, ryabinin.a.a@gmail.com, dennis@kernel.org,
bp@alien8.de, kevinloughlin@google.com, morbo@google.com,
dan.j.williams@intel.com, julian.stecklina@cyberus-technology.de,
peterz@infradead.org, cl@linux.com, kees@kernel.org
Cc: kasan-dev@googlegroups.com, x86@kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, llvm@lists.linux.dev,
linux-doc@vger.kernel.org
Subject: [PATCH 03/15] kasan: Vmalloc dense tag-based mode support
Date: Tue, 4 Feb 2025 18:33:44 +0100
Message-ID: <a8cfb5d8d93ba48fd5f2defcccac5d758ecd7f39.1738686764.git.maciej.wieczor-retman@intel.com>
In-Reply-To: <cover.1738686764.git.maciej.wieczor-retman@intel.com>
KASAN support for the vmalloc allocator is implemented through multiple
functions that operate on whole pages of memory. Many of these functions
are hardcoded to handle byte-aligned shadow memory regions by using
__memset().
With the introduction of the dense mode, tags won't necessarily occupy
whole bytes of shadow memory if the previously allocated memory isn't
aligned to 32 bytes - the amount of memory covered by one shadow byte.
Change the __memset() calls to kasan_poison(), which, with the dense
tag-based mode enabled, takes care of any unaligned tags in shadow
memory.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
mm/kasan/kasan.h | 2 +-
mm/kasan/shadow.c | 14 ++++++--------
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d29bd0e65020..a56aadd51485 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -135,7 +135,7 @@ static inline bool kasan_requires_meta(void)
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)
-#define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_GRANULE_SIZE << PAGE_SHIFT)
+#define KASAN_MEMORY_PER_SHADOW_PAGE (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT)
#ifdef CONFIG_KASAN_GENERIC
#define KASAN_PAGE_FREE 0xFF /* freed page */
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 368503f54b87..94f51046e6ae 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -332,7 +332,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
if (!page)
return -ENOMEM;
- __memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+ kasan_poison((void *)page, PAGE_SIZE, KASAN_VMALLOC_INVALID, false);
pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
spin_lock(&init_mm.page_table_lock);
@@ -357,9 +357,6 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
if (!is_vmalloc_or_module_addr((void *)addr))
return 0;
- shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
- shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
-
/*
* User Mode Linux maps enough shadow memory for all of virtual memory
* at boot, so doesn't need to allocate more on vmalloc, just clear it.
@@ -368,12 +365,12 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
* reason.
*/
if (IS_ENABLED(CONFIG_UML)) {
- __memset((void *)shadow_start, KASAN_VMALLOC_INVALID, shadow_end - shadow_start);
+ kasan_poison((void *)addr, size, KASAN_VMALLOC_INVALID, false);
return 0;
}
- shadow_start = PAGE_ALIGN_DOWN(shadow_start);
- shadow_end = PAGE_ALIGN(shadow_end);
+ shadow_start = PAGE_ALIGN_DOWN((unsigned long)kasan_mem_to_shadow((void *)addr));
+ shadow_end = PAGE_ALIGN((unsigned long)kasan_mem_to_shadow((void *)addr + size));
ret = apply_to_page_range(&init_mm, shadow_start,
shadow_end - shadow_start,
@@ -546,7 +543,8 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
if (shadow_end > shadow_start) {
size = shadow_end - shadow_start;
if (IS_ENABLED(CONFIG_UML)) {
- __memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+ kasan_poison((void *)region_start, region_end - region_start,
+ KASAN_VMALLOC_INVALID, false);
return;
}
apply_to_existing_page_range(&init_mm,
--
2.47.1