From: Andrey Konovalov
Date: Wed, 23 Oct 2024 20:41:57 +0200
Subject: Re: [PATCH v2 1/9] kasan: sw_tags: Use arithmetic shift for shadow computation
To: Samuel Holland
Cc: Palmer Dabbelt, linux-riscv@lists.infradead.org, Andrey Ryabinin,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino,
	kasan-dev@googlegroups.com, llvm@lists.linux.dev, Catalin Marinas,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexandre Ghiti,
	Will Deacon, Evgenii Stepanov, Andrew Morton,
	linux-arm-kernel@lists.infradead.org
In-Reply-To: <20241022015913.3524425-2-samuel.holland@sifive.com>
References: <20241022015913.3524425-1-samuel.holland@sifive.com>
	<20241022015913.3524425-2-samuel.holland@sifive.com>
On Tue, Oct 22, 2024 at 3:59 AM Samuel Holland wrote:
>
> Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
> canonical kernel addresses into non-canonical addresses by clearing the
> high KASAN_SHADOW_SCALE_SHIFT bits.
> The value of KASAN_SHADOW_OFFSET is
> then chosen so that the addition results in a canonical address for the
> shadow memory.
>
> For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
> because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
> checks[1], which must only attempt to dereference canonical addresses.
>
> However, for KASAN_SW_TAGS we have some freedom to change the algorithm
> without breaking the ABI. Because TBI is enabled for kernel addresses,
> the top bits of shadow memory addresses computed during tag checks are
> irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
> This is demonstrated by the fact that LLVM uses a logical right shift
> in the tag check fast path[2] but a sbfx (signed bitfield extract)
> instruction in the slow path[3] without causing any issues.
>
> Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
> benefits:
>
> 1) The memory layout is easier to understand. KASAN_SHADOW_OFFSET
> becomes a canonical memory address, and the shifted pointer becomes a
> negative offset, so KASAN_SHADOW_OFFSET == KASAN_SHADOW_END regardless
> of the shift amount or the size of the virtual address space.
>
> 2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
> instruction to load instead of two. Since it must be loaded in each
> function with a tag check, this decreases kernel text size by 0.5%.
>
> 3) This shift and the sign extension from kasan_reset_tag() can be
> combined into a single sbfx instruction. When this same algorithm change
> is applied to the compiler, it removes an instruction from each inline
> tag check, further reducing kernel text size by an additional 4.6%.
>
> These benefits extend to other architectures as well.
> On RISC-V, where
> the baseline ISA does not have shifted addition or an equivalent to the
> sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
> instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
> combines two consecutive right shifts.
>
> Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
> Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
> Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
> Signed-off-by: Samuel Holland
> ---
>
> Changes in v2:
> - Improve the explanation for how KASAN_SHADOW_END is derived
> - Update the range check in kasan_non_canonical_hook()
>
>  arch/arm64/Kconfig              | 10 +++++-----
>  arch/arm64/include/asm/memory.h | 17 +++++++++++++++--
>  arch/arm64/mm/kasan_init.c      |  7 +++++--
>  include/linux/kasan.h           | 10 ++++++++--
>  mm/kasan/report.c               | 22 ++++++++++++++++++----
>  scripts/gdb/linux/mm.py         |  5 +++--
>  6 files changed, 54 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index fd9df6dcc593..6a326908c941 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -418,11 +418,11 @@ config KASAN_SHADOW_OFFSET
>  	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
>  	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
>  	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
> -	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
> -	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
> -	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> -	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> -	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
> +	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
> +	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
> +	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> +	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> +	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>  	default 0xffffffffffffffff
>
>  config UNWIND_TABLES
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 0480c61dbb4f..a93fc9dc16f3 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -80,7 +80,8 @@
>   * where KASAN_SHADOW_SCALE_SHIFT is the order of the number of bits that map
>   * to a single shadow byte and KASAN_SHADOW_OFFSET is a constant that offsets
>   * the mapping. Note that KASAN_SHADOW_OFFSET does not point to the start of
> - * the shadow memory region.
> + * the shadow memory region, since not all possible addresses have shadow
> + * memory allocated for them.

I'm not sure this addition makes sense: the original statement was
meant to point out that KASAN_SHADOW_OFFSET and KASAN_SHADOW_START are
different values. Even if we were to map shadow for userspace,
KASAN_SHADOW_OFFSET would still be a weird offset value for Generic
KASAN.

>   *
>   * Based on this mapping, we define two constants:
>   *
> @@ -89,7 +90,15 @@
>   *
>   * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
>   * the upper bound of possible virtual kernel memory addresses UL(1) << 64
> - * according to the mapping formula.
> + * according to the mapping formula. For Generic KASAN, the address in the
> + * mapping formula is treated as unsigned (part of the compiler's ABI), so the
> + * end of the shadow memory region is at a large positive offset from
> + * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
> + * formula is treated as signed.
> + * Since all kernel addresses are negative, they
> + * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
> + * itself the end of the shadow memory region. (User pointers are positive and
> + * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
> + * not allocated for them.)

This looks good!

>   *
>   * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
>   * memory start must map to the lowest possible kernel virtual memory address
> @@ -100,7 +109,11 @@
>   */
>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>  #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +#ifdef CONFIG_KASAN_GENERIC
>  #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
> +#else
> +#define KASAN_SHADOW_END	KASAN_SHADOW_OFFSET
> +#endif
>  #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
>  #define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
>  #define PAGE_END		KASAN_SHADOW_START
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index b65a29440a0c..6836e571555c 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
>  /* The early shadow maps everything to a single page of zeroes */
>  asmlinkage void __init kasan_early_init(void)
>  {
> -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> -		     KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> +			     KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
> +	else
> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
>  	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 00a3bf7c0d8f..03b440658817 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -58,8 +58,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
>  #ifndef kasan_mem_to_shadow
>  static inline void *kasan_mem_to_shadow(const void *addr)
>  {
> -	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> -		+ KASAN_SHADOW_OFFSET;
> +	void *scaled;
> +
> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +		scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
> +	else
> +		scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
> +
> +	return KASAN_SHADOW_OFFSET + scaled;
>  }
>  #endif
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index b48c768acc84..c08097715686 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -644,15 +644,29 @@ void kasan_report_async(void)
>   */
>  void kasan_non_canonical_hook(unsigned long addr)
>  {
> +	unsigned long max_shadow_size = BIT(BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT);
>  	unsigned long orig_addr;
>  	const char *bug_type;
>
>  	/*
> -	 * All addresses that came as a result of the memory-to-shadow mapping
> -	 * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
> +	 * With the default kasan_mem_to_shadow() algorithm, all addresses
> +	 * returned by the memory-to-shadow mapping (even for bogus pointers)
> +	 * must be within a certain displacement from KASAN_SHADOW_OFFSET.
> +	 *
> +	 * For Generic KASAN, the displacement is unsigned, so
> +	 * KASAN_SHADOW_OFFSET is the smallest possible shadow address. For

This part of the comment doesn't seem correct: KASAN_SHADOW_OFFSET is
still a weird offset value for Generic KASAN, not the smallest
possible shadow address.

> +	 * Software Tag-Based KASAN, the displacement is signed, so
> +	 * KASAN_SHADOW_OFFSET is the center of the range.
>  	 */
> -	if (addr < KASAN_SHADOW_OFFSET)
> -		return;
> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
> +		if (addr < KASAN_SHADOW_OFFSET ||
> +		    addr >= KASAN_SHADOW_OFFSET + max_shadow_size)
> +			return;
> +	} else {
> +		if (addr < KASAN_SHADOW_OFFSET - max_shadow_size / 2 ||
> +		    addr >= KASAN_SHADOW_OFFSET + max_shadow_size / 2)
> +			return;

Hm, I might be wrong, but I think this check does not work. Let's say
we have non-canonical address 0x4242424242424242 and number of VA bits
is 48. Then:

KASAN_SHADOW_OFFSET == 0xffff800000000000
kasan_mem_to_shadow(0x4242424242424242) == 0x0423a42424242424
max_shadow_size == 0x1000000000000000
KASAN_SHADOW_OFFSET - max_shadow_size / 2 == 0xf7ff800000000000
KASAN_SHADOW_OFFSET + max_shadow_size / 2 == 0x07ff800000000000 (overflows)

0x0423a42424242424 is less than 0xf7ff800000000000, so the function
will wrongly return.

> +	}
>
>  	orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);

Just to double-check: kasan_shadow_to_mem() and addr_has_metadata()
don't need any changes, right?
> diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
> index 7571aebbe650..2e63f3dedd53 100644
> --- a/scripts/gdb/linux/mm.py
> +++ b/scripts/gdb/linux/mm.py
> @@ -110,12 +110,13 @@ class aarch64_page_ops():
>          self.KERNEL_END = gdb.parse_and_eval("_end")
>
>          if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
> +            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
>              if constants.LX_CONFIG_KASAN_GENERIC:
>                  self.KASAN_SHADOW_SCALE_SHIFT = 3
> +                self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
>              else:
>                  self.KASAN_SHADOW_SCALE_SHIFT = 4
> -            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
> -            self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
> +                self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
>              self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
>          else:
>              self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
> --
> 2.45.1
>

Could you also check that everything works with CONFIG_KASAN_SW_TAGS +
CONFIG_KASAN_OUTLINE? I think it should, but it makes sense to confirm.

Thank you!