From: Andrey Konovalov <andreyknvl@gmail.com>
Date: Mon, 5 Jul 2021 13:23:33 +0200
Subject: Re: [PATCH v6 2/2] kasan: Add memzero int for unaligned size at DEBUG
To: yee.lee@mediatek.com
Cc: LKML, nicholas.Tang@mediatek.com, Kuan-Ying Lee, chinwen.chang@mediatek.com,
    Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Andrew Morton,
    Matthias Brugger, "open list:KASAN", "open list:MEMORY MANAGEMENT",
    "moderated list:ARM/Mediatek SoC support"
In-Reply-To: <20210705103229.8505-3-yee.lee@mediatek.com>
References: <20210705103229.8505-1-yee.lee@mediatek.com> <20210705103229.8505-3-yee.lee@mediatek.com>

On Mon, Jul 5, 2021 at 12:33 PM <yee.lee@mediatek.com> wrote:
>
> From: Yee Lee <yee.lee@mediatek.com>
>
> Issue: when SLUB debug is on, hwtag kasan_unpoison() would overwrite
> the redzone of an object with an unaligned size.
>
> An additional memzero_explicit() path is added, replacing init by the
> hwtag instruction for unaligned sizes in SLUB debug mode.
>
> The penalty is acceptable since the path is only taken in debug mode,
> not in production builds. A comment block is added as explanation.
>
> Cc: Andrey Ryabinin
> Cc: Alexander Potapenko
> Cc: Dmitry Vyukov
> Cc: Andrew Morton
> Suggested-by: Marco Elver
> Suggested-by: Andrey Konovalov
> Signed-off-by: Yee Lee <yee.lee@mediatek.com>
> ---
>  mm/kasan/kasan.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 98e3059bfea4..d739cdd1621a 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -9,6 +9,7 @@
>  #ifdef CONFIG_KASAN_HW_TAGS
>
>  #include <linux/static_key.h>
> +#include "../slab.h"
>
>  DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
>  extern bool kasan_flag_async __ro_after_init;
> @@ -387,6 +388,17 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
>
>  	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
>  		return;
> +	/*
> +	 * Explicitly initialize the memory with the precise object size to
> +	 * avoid overwriting the SLAB redzone. This disables initialization in
> +	 * the arch code and may thus lead to performance penalty. The penalty
> +	 * is accepted since SLAB redzones aren't enabled in production builds.
> +	 */
> +	if (__slub_debug_enabled() &&
> +	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
> +		init = false;
> +		memzero_explicit((void *)addr, size);
> +	}
>  	size = round_up(size, KASAN_GRANULE_SIZE);
>
>  	hw_set_mem_tag_range((void *)addr, size, tag, init);
> --
> 2.18.0
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
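
[Editorial illustration, not part of the original thread: a minimal user-space sketch of the arithmetic the patch addresses. The granule size and object size below are hypothetical example values; the real kernel uses KASAN_GRANULE_SIZE from mm/kasan/kasan.h.]

```c
/*
 * Sketch: why unpoisoning an unaligned object size can spill into the
 * SLUB redzone once the size is rounded up to the KASAN granule.
 * Values are illustrative only.
 */
#include <stdio.h>

#define GRANULE 16UL	/* stand-in for KASAN_GRANULE_SIZE */

/* power-of-two round_up(), as the kernel's round_up() behaves */
static unsigned long round_up_granule(unsigned long x)
{
	return (x + GRANULE - 1) & ~(GRANULE - 1);
}

int main(void)
{
	unsigned long object_size = 100;	/* unaligned object size */
	unsigned long covered = round_up_granule(object_size);

	/*
	 * Hardware-tag init works per granule, so it would zero 'covered'
	 * bytes; the bytes between object_size and covered are the SLUB
	 * debug redzone and would be clobbered.
	 */
	printf("object size:   %lu bytes\n", object_size);
	printf("init covers:   %lu bytes\n", covered);
	printf("redzone hit:   %lu bytes\n", covered - object_size);

	/*
	 * The patch avoids this by zeroing exactly object_size bytes with
	 * memzero_explicit() and passing init=false to the tagging path.
	 */
	return 0;
}
```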