Date: Mon, 8 Mar 2021 12:36:59 +0100
From: Marco Elver
To: Andrey Konovalov
Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Will Deacon,
    Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin, Peter Collingbourne,
    Evgenii Stepanov, Branislav Rankov, Kevin Brodsky,
    kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/5] kasan, mm: integrate slab init_on_alloc with HW_TAGS

On Sat, Mar 06, 2021 at 01:15AM +0100, Andrey Konovalov wrote:
> This change uses the previously added memory initialization feature
> of HW_TAGS KASAN routines for slab memory when init_on_alloc is enabled.
>
> With this change, the memory initialization memset() is no longer called
> when both HW_TAGS KASAN and init_on_alloc are enabled. Instead, memory
> is initialized in the KASAN runtime.
>
> The memory initialization memset() is moved into slab_post_alloc_hook(),
> which currently directly follows the initialization loop. A new argument
> is added to slab_post_alloc_hook() that indicates whether to initialize
> the memory or not.
>
> To avoid discrepancies in which memory gets initialized that could be
> caused by future changes, the KASAN hook and the initialization memset()
> are put together and a warning comment is added.
>
> Combining setting allocation tags with memory initialization improves
> HW_TAGS KASAN performance when init_on_alloc is enabled.
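
To restate the resulting contract in one place: slab_post_alloc_hook() now
takes an init flag, and either the KASAN hook or the memset() that follows
it is meant to initialize the object. Below is a minimal, self-contained
userspace sketch of that shape (the model_* names and the MODEL_HW_TAGS
macro are made up here as stand-ins for the real kernel hooks and for
IS_ENABLED(CONFIG_KASAN_HW_TAGS); runtime toggling of KASAN is ignored):

/* Build with: cc -std=c99 -Wall model.c */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define OBJECT_SIZE 32
#define MODEL_HW_TAGS false     /* stand-in for IS_ENABLED(CONFIG_KASAN_HW_TAGS) */

/* Stand-in for kasan_slab_alloc(): with HW_TAGS, init is folded in here. */
static void *model_kasan_slab_alloc(void *object, bool init)
{
        if (MODEL_HW_TAGS && init)
                memset(object, 0, OBJECT_SIZE); /* done while setting tags */
        return object;
}

/* Stand-in for slab_post_alloc_hook() with the new init argument. */
static void model_post_alloc_hook(void **p, size_t size, bool init)
{
        for (size_t i = 0; i < size; i++) {
                p[i] = model_kasan_slab_alloc(p[i], init);
                /* In this model, exactly one of the two paths zeroes the object. */
                if (p[i] && init && !MODEL_HW_TAGS)
                        memset(p[i], 0, OBJECT_SIZE);
        }
}

int main(void)
{
        char obj[OBJECT_SIZE];
        void *p[1] = { obj };

        memset(obj, 0xaa, sizeof(obj));
        model_post_alloc_hook(p, 1, true);      /* init_on_alloc enabled */
        printf("obj[0] after hook: %#x\n", (unsigned char)obj[0]);
        return 0;
}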
>
> Signed-off-by: Andrey Konovalov
> ---
>  include/linux/kasan.h |  8 ++++----
>  mm/kasan/common.c     |  4 ++--
>  mm/slab.c             | 28 +++++++++++++---------------
>  mm/slab.h             | 17 +++++++++++++----
>  mm/slub.c             | 27 +++++++++++----------------
>  5 files changed, 43 insertions(+), 41 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 4c0f414a893b..bb756f6c73b5 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -216,12 +216,12 @@ static __always_inline void kasan_slab_free_mempool(void *ptr)
>  }
>
>  void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
> -                                       void *object, gfp_t flags);
> +                                       void *object, gfp_t flags, bool init);
>  static __always_inline void * __must_check kasan_slab_alloc(
> -                               struct kmem_cache *s, void *object, gfp_t flags)
> +                               struct kmem_cache *s, void *object, gfp_t flags, bool init)
>  {
>          if (kasan_enabled())
> -                return __kasan_slab_alloc(s, object, flags);
> +                return __kasan_slab_alloc(s, object, flags, init);
>          return object;
>  }
>
> @@ -306,7 +306,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
>  static inline void kasan_kfree_large(void *ptr) {}
>  static inline void kasan_slab_free_mempool(void *ptr) {}
>  static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> -                                   gfp_t flags)
> +                                   gfp_t flags, bool init)
>  {
>          return object;
>  }
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 6107c795611f..7ea747b18c26 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -428,7 +428,7 @@ static void set_alloc_info(struct kmem_cache *cache, void *object,
>  }
>
>  void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> -                                        void *object, gfp_t flags)
> +                                        void *object, gfp_t flags, bool init)
>  {
>          u8 tag;
>          void *tagged_object;
> @@ -453,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
>           * Unpoison the whole object.
>           * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
>           */
> -        kasan_unpoison(tagged_object, cache->object_size, false);
> +        kasan_unpoison(tagged_object, cache->object_size, init);
>
>          /* Save alloc info (if possible) for non-kmalloc() allocations. */
>          if (kasan_stack_collection_enabled())
> diff --git a/mm/slab.c b/mm/slab.c
> index 51fd424e0d6d..936dd686dec9 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3216,6 +3216,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
>          void *ptr;
>          int slab_node = numa_mem_id();
>          struct obj_cgroup *objcg = NULL;
> +        bool init = false;
>
>          flags &= gfp_allowed_mask;
>          cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
> @@ -3254,12 +3255,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
>  out:
>          local_irq_restore(save_flags);
>          ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
> -
> -        if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
> -                memset(ptr, 0, cachep->object_size);
> +        init = slab_want_init_on_alloc(flags, cachep);
>
>  out_hooks:
> -        slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
> +        slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
>          return ptr;
>  }
>
> @@ -3301,6 +3300,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
>          unsigned long save_flags;
>          void *objp;
>          struct obj_cgroup *objcg = NULL;
> +        bool init = false;
>
>          flags &= gfp_allowed_mask;
>          cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
> @@ -3317,12 +3317,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
>          local_irq_restore(save_flags);
>          objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
>          prefetchw(objp);
> -
> -        if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
> -                memset(objp, 0, cachep->object_size);
> +        init = slab_want_init_on_alloc(flags, cachep);
>
>  out:
> -        slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
> +        slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
>          return objp;
>  }
>
> @@ -3542,18 +3540,18 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>
>          cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
>
> -        /* Clear memory outside IRQ disabled section */
> -        if (unlikely(slab_want_init_on_alloc(flags, s)))
> -                for (i = 0; i < size; i++)
> -                        memset(p[i], 0, s->object_size);
> -
> -        slab_post_alloc_hook(s, objcg, flags, size, p);
> +        /*
> +         * memcg and kmem_cache debug support and memory initialization.
> +         * Done outside of the IRQ disabled section.
> +         */
> +        slab_post_alloc_hook(s, objcg, flags, size, p,
> +                                slab_want_init_on_alloc(flags, s));
>          /* FIXME: Trace call missing. Christoph would like a bulk variant */
>          return size;
>  error:
>          local_irq_enable();
>          cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
> -        slab_post_alloc_hook(s, objcg, flags, i, p);
> +        slab_post_alloc_hook(s, objcg, flags, i, p, false);
>          __kmem_cache_free_bulk(s, i, p);
>          return 0;
>  }
> diff --git a/mm/slab.h b/mm/slab.h
> index 076582f58f68..0116a314cd21 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -506,15 +506,24 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
>  }
>
>  static inline void slab_post_alloc_hook(struct kmem_cache *s,
> -                                        struct obj_cgroup *objcg,
> -                                        gfp_t flags, size_t size, void **p)
> +                                        struct obj_cgroup *objcg, gfp_t flags,
> +                                        size_t size, void **p, bool init)
>  {
>          size_t i;
>
>          flags &= gfp_allowed_mask;
> +
> +        /*
> +         * As memory initialization is integrated with hardware tag-based
> +         * KASAN, kasan_slab_alloc and initialization memset must be
> +         * kept together to avoid discrepancies in behavior.
> +         *
> +         * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
> +         */
>          for (i = 0; i < size; i++) {
> -                p[i] = kasan_slab_alloc(s, p[i], flags);
> -                /* As p[i] might get tagged, call kmemleak hook after KASAN. */
> +                p[i] = kasan_slab_alloc(s, p[i], flags, init);
> +                if (p[i] && init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))

Same as the other patch, it seems awkward to have
!IS_ENABLED(CONFIG_KASAN_HW_TAGS) rather than a kasan_*() helper that
returns this information. (A rough sketch of what such a helper could look
like follows after the quoted diff.)

> +                        memset(p[i], 0, s->object_size);
>                  kmemleak_alloc_recursive(p[i], s->object_size, 1,
>                                           s->flags, flags);
>          }
> diff --git a/mm/slub.c b/mm/slub.c
> index e26c274b4657..f53df23760e3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2822,6 +2822,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>          struct page *page;
>          unsigned long tid;
>          struct obj_cgroup *objcg = NULL;
> +        bool init = false;
>
>          s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
>          if (!s)
> @@ -2899,12 +2900,10 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>          }
>
>          maybe_wipe_obj_freeptr(s, object);
> -
> -        if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
> -                memset(kasan_reset_tag(object), 0, s->object_size);
> +        init = slab_want_init_on_alloc(gfpflags, s);
>
>  out:
> -        slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
> +        slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
>
>          return object;
>  }
> @@ -3356,20 +3355,16 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>          c->tid = next_tid(c->tid);
>          local_irq_enable();
>
> -        /* Clear memory outside IRQ disabled fastpath loop */
> -        if (unlikely(slab_want_init_on_alloc(flags, s))) {
> -                int j;
> -
> -                for (j = 0; j < i; j++)
> -                        memset(kasan_reset_tag(p[j]), 0, s->object_size);
> -        }
> -
> -        /* memcg and kmem_cache debug support */
> -        slab_post_alloc_hook(s, objcg, flags, size, p);
> +        /*
> +         * memcg and kmem_cache debug support and memory initialization.
> +         * Done outside of the IRQ disabled fastpath loop.
> +         */
> +        slab_post_alloc_hook(s, objcg, flags, size, p,
> +                                slab_want_init_on_alloc(flags, s));
>          return i;
>  error:
>          local_irq_enable();
> -        slab_post_alloc_hook(s, objcg, flags, i, p);
> +        slab_post_alloc_hook(s, objcg, flags, i, p, false);
>          __kmem_cache_free_bulk(s, i, p);
>          return 0;
>  }
> @@ -3579,7 +3574,7 @@ static void early_kmem_cache_node_alloc(int node)
>          init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>          init_tracking(kmem_cache_node, n);
>  #endif
> -        n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
> +        n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
>          page->freelist = get_freepointer(kmem_cache_node, n);
>          page->inuse = 1;
>          page->frozen = 0;
> --
> 2.30.1.766.gb4fecdf3b7-goog
>
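
A rough sketch of the kind of helper meant above, together with how the
call site would then read (illustrative, self-contained userspace C, not
real kernel code; the kasan_has_integrated_init() name and the
MODEL_HW_TAGS macro standing in for IS_ENABLED(CONFIG_KASAN_HW_TAGS) are
only placeholders):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MODEL_HW_TAGS false     /* stand-in for IS_ENABLED(CONFIG_KASAN_HW_TAGS) */

/* True when the KASAN allocation hook itself initializes the object. */
static inline bool kasan_has_integrated_init(void)
{
        return MODEL_HW_TAGS;
}

int main(void)
{
        char object[32];
        void *p = object;
        bool init = true;       /* result of slab_want_init_on_alloc() */

        /* Call site written with the helper instead of the open-coded check: */
        if (p && init && !kasan_has_integrated_init())
                memset(p, 0, sizeof(object));

        printf("caller needs memset: %s\n",
               kasan_has_integrated_init() ? "no" : "yes");
        return 0;
}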