Date: Mon, 8 Mar 2021 17:27:15 +0100
From: Marco Elver
To: Andrey Konovalov
Cc: Catalin Marinas, Vincenzo Frascino, Alexander Potapenko, Andrew Morton,
 Will Deacon, Dmitry Vyukov, Andrey Ryabinin, Peter Collingbourne,
 Evgenii Stepanov, Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 5/5] kasan, mm: integrate slab init_on_free with HW_TAGS

On Mon, Mar 08, 2021 at 04:55PM +0100, Andrey Konovalov wrote:
> This change uses the previously added memory initialization feature of
> the HW_TAGS KASAN routines for slab memory when init_on_free is enabled.
>
> With this change, the memory initialization memset() is no longer called
> when both HW_TAGS KASAN and init_on_free are enabled. Instead, memory is
> initialized by the KASAN runtime.
>
> For SLUB, the memory initialization memset() is moved into
> slab_free_hook(), which currently directly follows the initialization
> loop. A new argument is added to slab_free_hook() that indicates whether
> to initialize the memory or not.
>
> To avoid discrepancies in which memory gets initialized that could be
> caused by future changes, the KASAN hook and the initialization memset()
> are put together, and a warning comment is added.
>
> Combining the setting of allocation tags with memory initialization
> improves HW_TAGS KASAN performance when init_on_free is enabled.
>
> Signed-off-by: Andrey Konovalov

Reviewed-by: Marco Elver

But same as with the other patch: given the internal API change, let's see
if somebody else responds.
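Just to double-check my reading of the new calling convention, here is a
simplified sketch of the resulting free path (illustrative only, not the
literal mm/slab.c code; the wrapper name example_cache_free() is made up,
but slab_want_init_on_free(), kasan_has_integrated_init() and the
three-argument kasan_slab_free() are the ones from this patch):

  static void example_cache_free(struct kmem_cache *s, void *objp)
  {
          bool init = slab_want_init_on_free(s);

          /*
           * With integrated init (HW_TAGS), skip the separate memset():
           * kasan_slab_free() initializes the object while poisoning it.
           */
          if (init && !kasan_has_integrated_init())
                  memset(objp, 0, s->object_size);

          /* KASAN might put the object into quarantine, delaying reuse. */
          if (kasan_slab_free(s, objp, init))
                  return;

          /* ... continue with the actual free ... */
  }

I.e. the decision to initialize is made once, and the same value both
guards the memset() and is forwarded to KASAN, which is what keeps the two
from diverging.
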
> --- > include/linux/kasan.h | 10 ++++++---- > mm/kasan/common.c | 13 +++++++------ > mm/slab.c | 15 +++++++++++---- > mm/slub.c | 43 ++++++++++++++++++++++++------------------- > 4 files changed, 48 insertions(+), 33 deletions(-) > > diff --git a/include/linux/kasan.h b/include/linux/kasan.h > index 85f2a8786606..ed08c419a687 100644 > --- a/include/linux/kasan.h > +++ b/include/linux/kasan.h > @@ -203,11 +203,13 @@ static __always_inline void * __must_check kasan_init_slab_obj( > return (void *)object; > } > > -bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); > -static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object) > +bool __kasan_slab_free(struct kmem_cache *s, void *object, > + unsigned long ip, bool init); > +static __always_inline bool kasan_slab_free(struct kmem_cache *s, > + void *object, bool init) > { > if (kasan_enabled()) > - return __kasan_slab_free(s, object, _RET_IP_); > + return __kasan_slab_free(s, object, _RET_IP_, init); > return false; > } > > @@ -313,7 +315,7 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache, > { > return (void *)object; > } > -static inline bool kasan_slab_free(struct kmem_cache *s, void *object) > +static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init) > { > return false; > } > diff --git a/mm/kasan/common.c b/mm/kasan/common.c > index 7ea747b18c26..623cf94288a2 100644 > --- a/mm/kasan/common.c > +++ b/mm/kasan/common.c > @@ -322,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache, > return (void *)object; > } > > -static inline bool ____kasan_slab_free(struct kmem_cache *cache, > - void *object, unsigned long ip, bool quarantine) > +static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object, > + unsigned long ip, bool quarantine, bool init) > { > u8 tag; > void *tagged_object; > @@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, > } > > kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE), > - KASAN_KMALLOC_FREE, false); > + KASAN_KMALLOC_FREE, init); > > if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine)) > return false; > @@ -362,9 +362,10 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, > return kasan_quarantine_put(cache, object); > } > > -bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) > +bool __kasan_slab_free(struct kmem_cache *cache, void *object, > + unsigned long ip, bool init) > { > - return ____kasan_slab_free(cache, object, ip, true); > + return ____kasan_slab_free(cache, object, ip, true, init); > } > > static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip) > @@ -409,7 +410,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip) > return; > kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false); > } else { > - ____kasan_slab_free(page->slab_cache, ptr, ip, false); > + ____kasan_slab_free(page->slab_cache, ptr, ip, false, false); > } > } > > diff --git a/mm/slab.c b/mm/slab.c > index 936dd686dec9..3adfe5bc3e2e 100644 > --- a/mm/slab.c > +++ b/mm/slab.c > @@ -3425,17 +3425,24 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) > static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp, > unsigned long caller) > { > + bool init; > + > if (is_kfence_address(objp)) { > kmemleak_free_recursive(objp, cachep->flags); > __kfence_free(objp); > return; > } > > - if (unlikely(slab_want_init_on_free(cachep))) > + /* > + * 
As memory initialization might be integrated into KASAN, > + * kasan_slab_free and initialization memset must be > + * kept together to avoid discrepancies in behavior. > + */ > + init = slab_want_init_on_free(cachep); > + if (init && !kasan_has_integrated_init()) > memset(objp, 0, cachep->object_size); > - > - /* Put the object into the quarantine, don't touch it for now. */ > - if (kasan_slab_free(cachep, objp)) > + /* KASAN might put objp into memory quarantine, delaying its reuse. */ > + if (kasan_slab_free(cachep, objp, init)) > return; > > /* Use KCSAN to help debug racy use-after-free. */ > diff --git a/mm/slub.c b/mm/slub.c > index f53df23760e3..37afe6251bcc 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -1532,7 +1532,8 @@ static __always_inline void kfree_hook(void *x) > kasan_kfree_large(x); > } > > -static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x) > +static __always_inline bool slab_free_hook(struct kmem_cache *s, > + void *x, bool init) > { > kmemleak_free_recursive(x, s->flags); > > @@ -1558,8 +1559,25 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x) > __kcsan_check_access(x, s->object_size, > KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT); > > - /* KASAN might put x into memory quarantine, delaying its reuse */ > - return kasan_slab_free(s, x); > + /* > + * As memory initialization might be integrated into KASAN, > + * kasan_slab_free and initialization memset's must be > + * kept together to avoid discrepancies in behavior. > + * > + * The initialization memset's clear the object and the metadata, > + * but don't touch the SLAB redzone. > + */ > + if (init) { > + int rsize; > + > + if (!kasan_has_integrated_init()) > + memset(kasan_reset_tag(x), 0, s->object_size); > + rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0; > + memset((char *)kasan_reset_tag(x) + s->inuse, 0, > + s->size - s->inuse - rsize); > + } > + /* KASAN might put x into memory quarantine, delaying its reuse. */ > + return kasan_slab_free(s, x, init); > } > > static inline bool slab_free_freelist_hook(struct kmem_cache *s, > @@ -1569,10 +1587,9 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s, > void *object; > void *next = *head; > void *old_tail = *tail ? *tail : *head; > - int rsize; > > if (is_kfence_address(next)) { > - slab_free_hook(s, next); > + slab_free_hook(s, next, false); > return true; > } > > @@ -1584,20 +1601,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s, > object = next; > next = get_freepointer(s, object); > > - if (slab_want_init_on_free(s)) { > - /* > - * Clear the object and the metadata, but don't touch > - * the redzone. > - */ > - memset(kasan_reset_tag(object), 0, s->object_size); > - rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad > - : 0; > - memset((char *)kasan_reset_tag(object) + s->inuse, 0, > - s->size - s->inuse - rsize); > - > - } > /* If object's reuse doesn't have to be delayed */ > - if (!slab_free_hook(s, object)) { > + if (!slab_free_hook(s, object, slab_want_init_on_free(s))) { > /* Move object to the new freelist */ > set_freepointer(s, object, *head); > *head = object; > @@ -3235,7 +3240,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size, > } > > if (is_kfence_address(object)) { > - slab_free_hook(df->s, object); > + slab_free_hook(df->s, object, false); > __kfence_free(object); > p[size] = NULL; /* mark object processed */ > return size; > -- > 2.30.1.766.gb4fecdf3b7-goog >
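
For the SLUB side, condensing the hunks above into the new calling
convention as I understand it (again only a sketch, not the exact
mm/slub.c code):

  /* kfence-managed objects are never initialized by the hook. */
  if (is_kfence_address(object)) {
          slab_free_hook(s, object, false);
          __kfence_free(object);
          return;
  }

  /*
   * Regular objects: slab_free_hook() now does the init_on_free clearing
   * (object and metadata, redzone untouched) and calls kasan_slab_free()
   * with the same init value.
   */
  if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
          /* reuse is not delayed: move the object to the freelist */
  }

If I read the old code right, passing false on the kfence paths matches
the previous behavior, since those objects returned early before the
init_on_free memset() loop anyway.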