From: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
To: hca@linux.ibm.com, christophe.leroy@csgroup.eu, andreyknvl@gmail.com,
	agordeev@linux.ibm.com, akpm@linux-foundation.org
Cc: ryabinin.a.a@gmail.com, glider@google.com, dvyukov@google.com,
	kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
	loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-um@lists.infradead.org, linux-mm@kvack.org, snovitoll@gmail.com
Subject: [PATCH v3 12/12] kasan: add shadow checks to wrappers and rename kasan_arch_is_ready
Date: Thu, 17 Jul 2025 19:27:32 +0500
Message-Id: <20250717142732.292822-13-snovitoll@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250717142732.292822-1-snovitoll@gmail.com>
References: <20250717142732.292822-1-snovitoll@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
This patch completes:

1. Adding kasan_shadow_initialized() checks to existing wrapper functions
2. Replacing kasan_arch_is_ready() calls with kasan_shadow_initialized()
3. Creating wrapper functions for internal functions that need shadow
   readiness checks
4. Removing the kasan_arch_is_ready() fallback definition

The two-level approach is now fully implemented:

- kasan_enabled() - controls whether KASAN is enabled at all
  (compile-time for most archs)
- kasan_shadow_initialized() - tracks shadow memory initialization
  (static key for ARCH_DEFER_KASAN archs, compile-time for others)

This completely eliminates kasan_arch_is_ready() calls from the KASAN
implementation, moving all shadow readiness logic into the wrapper
functions.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
Changes in v3:
- Addressed Andrey's feedback to move shadow checks to wrappers
- Renamed kasan_arch_is_ready() to kasan_shadow_initialized()
- Added kasan_shadow_initialized() checks to all necessary wrapper functions
- Eliminated all remaining kasan_arch_is_ready() usage per reviewer guidance
---
 include/linux/kasan.h | 36 +++++++++++++++++++++++++++---------
 mm/kasan/common.c     |  9 +++------
 mm/kasan/generic.c    | 12 +++---------
 mm/kasan/kasan.h      | 36 ++++++++++++++++++++++++++----------
 mm/kasan/shadow.c     | 32 +++++++------------------------
 5 files changed, 66 insertions(+), 59 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 51a8293d1af..292bd741d8d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -194,7 +194,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
 static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
 						void *object)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_slab_pre_free(s, object, _RET_IP_);
 	return false;
 }
@@ -229,7 +229,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 						void *object, bool init,
 						bool still_accessible)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_slab_free(s, object, init, still_accessible);
 	return false;
 }
@@ -237,7 +237,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 void __kasan_kfree_large(void *ptr, unsigned long ip);
 static __always_inline void kasan_kfree_large(void *ptr)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		__kasan_kfree_large(ptr, _RET_IP_);
 }
@@ -302,7 +302,7 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
 static __always_inline bool kasan_mempool_poison_pages(struct page *page,
 						unsigned int order)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_mempool_poison_pages(page, order, _RET_IP_);
 	return true;
 }
@@ -356,7 +356,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
  */
 static __always_inline bool kasan_mempool_poison_object(void *ptr)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_mempool_poison_object(ptr, _RET_IP_);
 	return true;
 }
@@ -568,11 +568,29 @@ static inline void kasan_init_hw_tags(void) { }
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 
 void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+{
+	if (!kasan_shadow_initialized())
+		return 0;
+	return __kasan_populate_vmalloc(addr, size);
+}
+
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end,
 			   unsigned long flags);
+static inline void kasan_release_vmalloc(unsigned long start,
+					 unsigned long end,
+					 unsigned long free_region_start,
+					 unsigned long free_region_end,
+					 unsigned long flags)
+{
+	if (kasan_shadow_initialized())
+		__kasan_release_vmalloc(start, end, free_region_start,
+					free_region_end, flags);
+}
 
 #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
@@ -598,7 +616,7 @@ static __always_inline void *kasan_unpoison_vmalloc(const void *start,
 						unsigned long size,
 						kasan_vmalloc_flags_t flags)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		return __kasan_unpoison_vmalloc(start, size, flags);
 	return (void *)start;
 }
@@ -607,7 +625,7 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size);
 static __always_inline void kasan_poison_vmalloc(const void *start,
 						 unsigned long size)
 {
-	if (kasan_enabled())
+	if (kasan_enabled() && kasan_shadow_initialized())
 		__kasan_poison_vmalloc(start, size);
 }

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index c3a6446404d..b561734767d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -259,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
 bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
 				unsigned long ip)
 {
-	if (!kasan_arch_is_ready() || is_kfence_address(object))
+	if (is_kfence_address(object))
 		return false;
 	return check_slab_allocation(cache, object, ip);
 }
@@ -267,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
 bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
 		       bool still_accessible)
 {
-	if (!kasan_arch_is_ready() || is_kfence_address(object))
+	if (is_kfence_address(object))
 		return false;
 
 	poison_slab_object(cache, object, init, still_accessible);
@@ -291,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
 
 static inline bool check_page_allocation(void *ptr, unsigned long ip)
 {
-	if (!kasan_arch_is_ready())
-		return false;
-
 	if (ptr != page_address(virt_to_head_page(ptr))) {
 		kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
 		return true;
@@ -520,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 		return true;
 	}
 
-	if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+	if (is_kfence_address(ptr))
 		return true;
 
 	slab = folio_slab(folio);

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 03b6d322ff6..1d20b925b9d 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -176,7 +176,7 @@ static __always_inline bool check_region_inline(const void *addr,
 						size_t size, bool write,
 						unsigned long ret_ip)
 {
-	if (!kasan_arch_is_ready())
+	if (!kasan_shadow_initialized())
 		return true;
 
 	if (unlikely(size == 0))
@@ -200,13 +200,10 @@ bool kasan_check_range(const void *addr, size_t size, bool write,
 	return check_region_inline(addr, size, write, ret_ip);
 }
 
-bool kasan_byte_accessible(const void *addr)
+bool __kasan_byte_accessible(const void *addr)
 {
 	s8 shadow_byte;
 
-	if (!kasan_arch_is_ready())
-		return true;
-
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr));
 
 	return shadow_byte >= 0 && shadow_byte < KASAN_GRANULE_SIZE;
@@ -506,9 +503,6 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
 
 static void release_free_meta(const void *object, struct kasan_free_meta *meta)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	/* Check if free meta is valid. */
 	if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREE_META)
 		return;
@@ -573,7 +567,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
 	kasan_save_track(&alloc_meta->alloc_track, flags);
 }
 
-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
 {
 	struct kasan_free_meta *free_meta;

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e6..67a0a1095d2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -398,7 +398,13 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
 void kasan_set_track(struct kasan_track *track, depot_stack_handle_t stack);
 void kasan_save_track(struct kasan_track *track, gfp_t flags);
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object);
+
+void __kasan_save_free_info(struct kmem_cache *cache, void *object);
+static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
+{
+	if (kasan_enabled() && kasan_shadow_initialized())
+		__kasan_save_free_info(cache, object);
+}
 
 #ifdef CONFIG_KASAN_GENERIC
 bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
@@ -499,6 +505,7 @@ static inline bool kasan_byte_accessible(const void *addr)
 
 #else /* CONFIG_KASAN_HW_TAGS */
 
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init);
 /**
  * kasan_poison - mark the memory range as inaccessible
  * @addr: range start address, must be aligned to KASAN_GRANULE_SIZE
@@ -506,7 +513,11 @@ static inline bool kasan_byte_accessible(const void *addr)
  * @value: value that's written to metadata for the range
  * @init: whether to initialize the memory range (only for hardware tag-based)
  */
-void kasan_poison(const void *addr, size_t size, u8 value, bool init);
+static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+{
+	if (kasan_shadow_initialized())
+		__kasan_poison(addr, size, value, init);
+}
 
 /**
  * kasan_unpoison - mark the memory range as accessible
@@ -521,12 +532,19 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init);
  */
 void kasan_unpoison(const void *addr, size_t size, bool init);
 
-bool kasan_byte_accessible(const void *addr);
+bool __kasan_byte_accessible(const void *addr);
+static inline bool kasan_byte_accessible(const void *addr)
+{
+	if (!kasan_shadow_initialized())
+		return true;
+	return __kasan_byte_accessible(addr);
+}
 
 #endif /* CONFIG_KASAN_HW_TAGS */
 
 #ifdef CONFIG_KASAN_GENERIC
 
+void __kasan_poison_last_granule(const void *address, size_t size);
 /**
  * kasan_poison_last_granule - mark the last granule of the memory range as
  * inaccessible
@@ -536,7 +554,11 @@ bool kasan_byte_accessible(const void *addr);
  * This function is only available for the generic mode, as it's the only mode
  * that has partially poisoned memory granules.
  */
-void kasan_poison_last_granule(const void *address, size_t size);
+static inline void kasan_poison_last_granule(const void *address, size_t size)
+{
+	if (kasan_shadow_initialized())
+		__kasan_poison_last_granule(address, size);
+}
 
 #else /* CONFIG_KASAN_GENERIC */
 
@@ -544,12 +566,6 @@ static inline void kasan_poison_last_granule(const void *address, size_t size) {
 
 #endif /* CONFIG_KASAN_GENERIC */
 
-#ifndef kasan_arch_is_ready
-static inline bool kasan_arch_is_ready(void)	{ return true; }
-#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
-#error kasan_arch_is_ready only works in KASAN generic outline mode!
-#endif
-
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_kunit_test_suite_start(void);

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb..90c508cad63 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -121,13 +121,10 @@ void *__hwasan_memcpy(void *dest, const void *src, ssize_t len) __alias(__asan_m
 EXPORT_SYMBOL(__hwasan_memcpy);
 #endif
 
-void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init)
 {
 	void *shadow_start, *shadow_end;
 
-	if (!kasan_arch_is_ready())
-		return;
-
 	/*
 	 * Perform shadow offset calculation based on untagged address, as
 	 * some of the callers (e.g. kasan_poison_new_object) pass tagged
@@ -145,14 +142,11 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
 
 	__memset(shadow_start, value, shadow_end - shadow_start);
 }
-EXPORT_SYMBOL_GPL(kasan_poison);
+EXPORT_SYMBOL_GPL(__kasan_poison);
 
 #ifdef CONFIG_KASAN_GENERIC
-void kasan_poison_last_granule(const void *addr, size_t size)
+void __kasan_poison_last_granule(const void *addr, size_t size)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	if (size & KASAN_GRANULE_MASK) {
 		u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
 		*shadow = size & KASAN_GRANULE_MASK;
@@ -353,7 +347,7 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
 	return 0;
 }
 
-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+static int __kasan_populate_vmalloc_do(unsigned long start, unsigned long end)
 {
 	unsigned long nr_pages, nr_total = PFN_UP(end - start);
 	struct vmalloc_populate_data data;
@@ -385,14 +379,11 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
 	return ret;
 }
 
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 {
 	unsigned long shadow_start, shadow_end;
 	int ret;
 
-	if (!kasan_arch_is_ready())
-		return 0;
-
 	if (!is_vmalloc_or_module_addr((void *)addr))
 		return 0;
 
@@ -414,7 +405,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
 	shadow_end = PAGE_ALIGN(shadow_end);
 
-	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
+	ret = __kasan_populate_vmalloc_do(shadow_start, shadow_end);
 	if (ret)
 		return ret;
 
@@ -551,7 +542,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
  * pages entirely covered by the free region, we will not run in to any
  * trouble - any simultaneous allocations will be for disjoint regions.
  */
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end,
 			   unsigned long flags)
@@ -560,9 +551,6 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 	unsigned long region_start, region_end;
 	unsigned long size;
 
-	if (!kasan_arch_is_ready())
-		return;
-
 	region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
 	region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE);
 
@@ -611,9 +599,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	 * with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
 	 */
 
-	if (!kasan_arch_is_ready())
-		return (void *)start;
-
 	if (!is_vmalloc_or_module_addr(start))
 		return (void *)start;
 
@@ -636,9 +621,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
  */
 void __kasan_poison_vmalloc(const void *start, unsigned long size)
 {
-	if (!kasan_arch_is_ready())
-		return;
-
 	if (!is_vmalloc_or_module_addr(start))
 		return;
-- 
2.34.1