Subject: Re: [PATCH RFC 04/10] mm, kfence: insert KFENCE hooks for SLAB
From: Dmitry Vyukov
Date: Fri, 11 Sep 2020 15:03:45 +0200
To: Marco Elver
Cc: Alexander Potapenko, Andrew Morton, Catalin Marinas, Christoph Lameter,
 David Rientjes, Joonsoo Kim, Mark Rutland, Pekka Enberg, "H. Peter Anvin",
 "Paul E. McKenney", Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski,
 Borislav Petkov, Dave Hansen, Eric Dumazet, Greg Kroah-Hartman, Ingo Molnar,
 Jann Horn, Jonathan Corbet, Kees Cook, Peter Zijlstra, Qian Cai,
 Thomas Gleixner, Will Deacon, the arch/x86 maintainers,
 "open list:DOCUMENTATION", LKML, kasan-dev, Linux ARM, Linux-MM
References: <20200907134055.2878499-1-elver@google.com>
 <20200907134055.2878499-5-elver@google.com>

On Fri, Sep 11, 2020 at 2:24 PM Marco Elver wrote:
> > > From: Alexander Potapenko
> > >
> > > Inserts KFENCE hooks into the SLAB allocator.
> > >
> > > We note the addition of the 'orig_size' argument to slab_alloc*()
> > > functions, to be able to pass the originally requested size to KFENCE.
> > > When KFENCE is disabled, there is no additional overhead, since these
> > > functions are __always_inline.
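(Aside: the "no additional overhead" claim relies on the usual kernel
pattern of a config-gated hook called from an __always_inline fast path.
Below is a minimal sketch of that pattern, with names modeled on the
patch; the series' actual header also checks a static key before taking
the slow path, which this sketch omits.)

#ifdef CONFIG_KFENCE
void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);

static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size,
                                          gfp_t flags)
{
        return __kfence_alloc(s, size, flags);
}
#else
/*
 * With KFENCE disabled, the stub constant-folds to NULL, so branches
 * like 'if (unlikely(ptr)) goto out_hooks;' in slab_alloc*() are
 * eliminated entirely by the compiler.
 */
static inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
{
        return NULL;
}
#endif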
> > >
> > > Co-developed-by: Marco Elver
> > > Signed-off-by: Marco Elver
> > > Signed-off-by: Alexander Potapenko
> > > ---
> > >  mm/slab.c        | 46 ++++++++++++++++++++++++++++++++++------------
> > >  mm/slab_common.c |  6 +++++-
> > >  2 files changed, 39 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/mm/slab.c b/mm/slab.c
> > > index 3160dff6fd76..30aba06ae02b 100644
> > > --- a/mm/slab.c
> > > +++ b/mm/slab.c
> > > @@ -100,6 +100,7 @@
> > >  #include <...>
> > >  #include <...>
> > >  #include <...>
> > > +#include <linux/kfence.h>
> > >  #include <...>
> > >  #include <...>
> > >  #include <...>
> > > @@ -3206,7 +3207,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
> > >  }
> > >
> > >  static __always_inline void *
> > > -slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > > +slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
> > >                 unsigned long caller)
> > >  {
> > >         unsigned long save_flags;
> > > @@ -3219,6 +3220,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > >         if (unlikely(!cachep))
> > >                 return NULL;
> > >
> > > +       ptr = kfence_alloc(cachep, orig_size, flags);
> > > +       if (unlikely(ptr))
> > > +               goto out_hooks;
> > > +
> > >         cache_alloc_debugcheck_before(cachep, flags);
> > >         local_irq_save(save_flags);
> > >
> > > @@ -3251,6 +3256,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > >         if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
> > >                 memset(ptr, 0, cachep->object_size);
> > >
> > > +out_hooks:
> > >         slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
> > >         return ptr;
> > >  }
> > > @@ -3288,7 +3294,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> > >  #endif /* CONFIG_NUMA */
> > >
> > >  static __always_inline void *
> > > -slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> > > +slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> > >  {
> > >         unsigned long save_flags;
> > >         void *objp;
> > > @@ -3299,6 +3305,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> > >         if (unlikely(!cachep))
> > >                 return NULL;
> > >
> > > +       objp = kfence_alloc(cachep, orig_size, flags);
> > > +       if (unlikely(objp))
> > > +               goto leave;
> > > +
> > >         cache_alloc_debugcheck_before(cachep, flags);
> > >         local_irq_save(save_flags);
> > >         objp = __do_cache_alloc(cachep, flags);
> > > @@ -3309,6 +3319,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> > >         if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
> > >                 memset(objp, 0, cachep->object_size);
> > >
> > > +leave:
> > >         slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
> > >         return objp;
> > >  }
> > > @@ -3414,6 +3425,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
> > >  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
> > >                                          unsigned long caller)
> > >  {
> > > +       if (kfence_free(objp)) {
> > > +               kmemleak_free_recursive(objp, cachep->flags);
> > > +               return;
> > > +       }
> > > +
> > >         /* Put the object into the quarantine, don't touch it for now. */
> > >         if (kasan_slab_free(cachep, objp, _RET_IP_))
> > >                 return;
> > > @@ -3479,7 +3495,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
> > >   */
> > >  void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> > >  {
> > > -       void *ret = slab_alloc(cachep, flags, _RET_IP_);
> > > +       void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
> >
> > It's kinda minor, but since we are talking about the malloc fast path:
> > would passing 0 instead of cachep->object_size (here and everywhere
> > else), and then using cachep->object_size on the slow path when 0 is
> > passed as the size, improve codegen?
>
> It doesn't save us much, maybe 1 instruction based on what I'm looking
> at right now. The main worry I have is that the 'orig_size' argument
> is now part of slab_alloc, and changing its semantics may cause
> problems in the future if it's no longer just passed to kfence_alloc().
> Today, we can do the 'size = size ?: cache->object_size' trick inside
> kfence_alloc(), but at the cost of breaking the intuitive semantics of
> slab_alloc's orig_size argument for future users. Is it worth it?

I don't have an answer to this question, so I will leave it to others. If
nobody strongly supports changing the semantics, let's leave it as is, but
maybe keep it in mind as potential ballast. FWIW, any misuse of the 0 size
for other future purposes would most likely manifest itself in a quite
straightforward way.
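(For reference, the 'size = size ?: cache->object_size' trick mentioned
above would look roughly like the sketch below. This is a hypothetical
variant for illustration, not code from the series; '__kfence_alloc' here
just names the slow path behind the inline kfence_alloc() wrapper.)

/*
 * Hypothetical variant discussed above: fast-path callers pass 0 for
 * orig_size, and the real size is recovered from the cache only once
 * we are committed to a KFENCE allocation.
 */
void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
{
        /* '?:' is the GNU C "a ? a : b" shorthand, common in the kernel. */
        size = size ?: s->object_size;

        /* ... carve out and return a guarded object of 'size' bytes ... */
        return NULL; /* placeholder for the real slow-path allocation */
}

The call-site win would be not loading cachep->object_size on the fast
path; the cost, as noted above, is that orig_size would no longer mean
"the originally requested size" at every call site.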