From: Marco Elver
Date: Wed, 5 Jul 2023 14:50:30 +0200
Subject: Re: [PATCH] kasan, slub: fix HW_TAGS zeroing with slub_debug
To: andrey.konovalov@linux.dev
Cc: Mark Rutland, Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov,
    Andrey Ryabinin, Vincenzo Frascino, kasan-dev@googlegroups.com,
    Andrew Morton, linux-mm@kvack.org, Catalin Marinas, Peter Collingbourne,
    Feng Tang, stable@vger.kernel.org, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-kernel@vger.kernel.org,
    Andrey Konovalov
In-Reply-To: <678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com>
On Wed, 5 Jul 2023 at 14:44, wrote:
>
> From: Andrey Konovalov
>
> Commit 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated
> kmalloc space than requested") added precise kmalloc redzone poisoning
> to the slub_debug functionality.
>
> However, this commit didn't account for HW_TAGS KASAN fully initializing
> the object via its built-in memory initialization feature. Even though
> HW_TAGS KASAN memory initialization contains special memory initialization
> handling for when slub_debug is enabled, it does not account for in-object
> slub_debug redzones. As a result, HW_TAGS KASAN can overwrite these
> redzones and cause false-positive slub_debug reports.
>
> To fix the issue, avoid HW_TAGS KASAN memory initialization when slub_debug
> is enabled altogether. Implement this by moving the __slub_debug_enabled
> check to slab_post_alloc_hook. Common slab code seems like a more
> appropriate place for a slub_debug check anyway.
>
> Fixes: 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
> Cc:
> Reported-by: Mark Rutland

Is it fixing this issue:
https://lore.kernel.org/all/20230628154714.GB22090@willie-the-truck/

Or some other issue?

> Signed-off-by: Andrey Konovalov

Other than the question above, it looks sane:

Acked-by: Marco Elver

> ---
>  mm/kasan/kasan.h | 12 ------------
>  mm/slab.h        | 16 ++++++++++++++--
>  2 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index b799f11e45dc..2e973b36fe07 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
>  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>  		return;
> -	/*
> -	 * Explicitly initialize the memory with the precise object size to
> -	 * avoid overwriting the slab redzone. This disables initialization in
> -	 * the arch code and may thus lead to performance penalty. This penalty
> -	 * does not affect production builds, as slab redzones are not enabled
> -	 * there.
> -	 */
> -	if (__slub_debug_enabled() &&
> -	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> -		init = false;
> -		memzero_explicit((void *)addr, size);
> -	}
>  	size = round_up(size, KASAN_GRANULE_SIZE);
>
>  	hw_set_mem_tag_range((void *)addr, size, tag, init);
> diff --git a/mm/slab.h b/mm/slab.h
> index 6a5633b25eb5..9c0e09d0f81f 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>  					unsigned int orig_size)
>  {
>  	unsigned int zero_size = s->object_size;
> +	bool kasan_init = init;
>  	size_t i;
>
>  	flags &= gfp_allowed_mask;
> @@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>  	    (s->flags & SLAB_KMALLOC))
>  		zero_size = orig_size;
>
> +	/*
> +	 * When slub_debug is enabled, avoid memory initialization integrated
> +	 * into KASAN and instead zero out the memory via the memset below with
> +	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
> +	 * cause false-positive reports. This does not lead to a performance
> +	 * penalty on production builds, as slub_debug is not intended to be
> +	 * enabled there.
> +	 */
> +	if (__slub_debug_enabled())
> +		kasan_init = false;
> +
>  	/*
>  	 * As memory initialization might be integrated into KASAN,
>  	 * kasan_slab_alloc and initialization memset must be
>  	 * kept together to avoid discrepancies in behavior.
>  	 *
>  	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
>  	 */
>  	for (i = 0; i < size; i++) {
> -		p[i] = kasan_slab_alloc(s, p[i], flags, init);
> -		if (p[i] && init && !kasan_has_integrated_init())
> +		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
> +		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
>  			memset(p[i], 0, zero_size);
>  		kmemleak_alloc_recursive(p[i], s->object_size, 1,
>  					 s->flags, flags);
> --
> 2.25.1
>