From: Alexander Potapenko
Date: Thu, 30 Sep 2021 17:39:44 +0200
Subject: Re: [PATCH] kfence: shorten critical sections of alloc/free
To: Marco Elver
Cc: Andrew Morton, Dmitry Vyukov, Jann Horn, LKML, Linux Memory Management List, kasan-dev
In-Reply-To: <20210930153706.2105471-1-elver@google.com>
On Thu, Sep 30, 2021 at 5:37 PM Marco Elver wrote:
>
> Initializing memory and setting/checking the canary bytes is relatively
> expensive, and doing so in the meta->lock critical sections unnecessarily
> extends the time spent with preemption and interrupts disabled.
>
> Reads of meta->addr and meta->size in kfence_guarded_alloc() and
> kfence_guarded_free() do not require holding meta->lock once the object
> has been removed from the freelist: only kfence_guarded_alloc() writes
> meta->addr and meta->size, and it does so only after taking the object
> off the freelist, which in turn requires a preceding
> kfence_guarded_free() to have returned it to the list (or the initial
> state).
>
> Therefore, move reads of meta->addr and meta->size, including the
> expensive memory initialization that uses them, out of the meta->lock
> critical sections.
>
> Signed-off-by: Marco Elver

Acked-by: Alexander Potapenko
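To make the reordering concrete, here is a minimal userspace sketch of the
pattern the diff below applies to kfence_guarded_free(). This is not kernel
code: a pthread mutex stands in for the meta->lock raw spinlock, and
struct meta, guarded_free_before()/guarded_free_after() and
want_init_on_free are illustrative names, not identifiers from
mm/kfence/core.c.

/*
 * Minimal sketch, assuming a pthread mutex in place of KFENCE's raw
 * spinlock; all names below are illustrative.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct meta {
        pthread_mutex_t lock;
        void *addr;             /* object address, written only on alloc */
        size_t size;            /* object size, written only on alloc */
        bool want_init_on_free; /* stand-in for slab_want_init_on_free() */
};

/* Before the patch: the expensive memset() runs with the lock held. */
static void guarded_free_before(struct meta *m)
{
        pthread_mutex_lock(&m->lock);
        if (m->want_init_on_free)
                memset(m->addr, 0, m->size); /* slow work inside critical section */
        /* ... mark object freed, return it to the freelist ... */
        pthread_mutex_unlock(&m->lock);
}

/* After the patch: snapshot under the lock, zero after dropping it. */
static void guarded_free_after(struct meta *m)
{
        bool init;

        pthread_mutex_lock(&m->lock);
        /* ... mark object freed ... */
        init = m->want_init_on_free; /* snapshot while still protected */
        pthread_mutex_unlock(&m->lock);

        /*
         * addr and size are safe to read here: only a later alloc
         * rewrites them, and that alloc cannot happen until this free
         * has put the object back on the freelist.
         */
        if (init)
                memset(m->addr, 0, m->size); /* slow work, lock dropped */
}

int main(void)
{
        static char buf[64] = "stale data";
        struct meta m = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .addr = buf,
                .size = sizeof(buf),
                .want_init_on_free = true,
        };

        guarded_free_before(&m);
        guarded_free_after(&m);
        return 0;
}

The snapshot is the interesting part: the init-on-free decision is still
read under the lock (meta->cache can presumably change after the object is
marked freed, e.g. during cache shutdown), while meta->addr and meta->size
stay stable until the next kfence_guarded_alloc() takes the object off the
freelist, so the expensive zeroing can safely run after the unlock, exactly
as the commit message argues.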
> ---
>  mm/kfence/core.c | 38 +++++++++++++++++++++-----------------
>  1 file changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index b61ef93d9f98..802905b1c89b 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -309,12 +309,19 @@ static inline bool set_canary_byte(u8 *addr)
>   /* Check canary byte at @addr. */
>   static inline bool check_canary_byte(u8 *addr)
>   {
> +        struct kfence_metadata *meta;
> +        unsigned long flags;
> +
>           if (likely(*addr == KFENCE_CANARY_PATTERN(addr)))
>                   return true;
>
>           atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
> -        kfence_report_error((unsigned long)addr, false, NULL, addr_to_metadata((unsigned long)addr),
> -                            KFENCE_ERROR_CORRUPTION);
> +
> +        meta = addr_to_metadata((unsigned long)addr);
> +        raw_spin_lock_irqsave(&meta->lock, flags);
> +        kfence_report_error((unsigned long)addr, false, NULL, meta, KFENCE_ERROR_CORRUPTION);
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
> +
>           return false;
>   }
>
> @@ -324,8 +331,6 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
>           const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
>           unsigned long addr;
>
> -        lockdep_assert_held(&meta->lock);
> -
>           /*
>            * We'll iterate over each canary byte per-side until fn() returns
>            * false. However, we'll still iterate over the canary bytes to the
> @@ -414,8 +419,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>           WRITE_ONCE(meta->cache, cache);
>           meta->size = size;
>           meta->alloc_stack_hash = alloc_stack_hash;
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
>
> -        for_each_canary(meta, set_canary_byte);
> +        alloc_covered_add(alloc_stack_hash, 1);
>
>           /* Set required struct page fields. */
>           page = virt_to_page(meta->addr);
> @@ -425,11 +431,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>           if (IS_ENABLED(CONFIG_SLAB))
>                   page->s_mem = addr;
>
> -        raw_spin_unlock_irqrestore(&meta->lock, flags);
> -
> -        alloc_covered_add(alloc_stack_hash, 1);
> -
>           /* Memory initialization. */
> +        for_each_canary(meta, set_canary_byte);
>
>           /*
>            * We check slab_want_init_on_alloc() ourselves, rather than letting
> @@ -454,6 +457,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>   {
>           struct kcsan_scoped_access assert_page_exclusive;
>           unsigned long flags;
> +        bool init;
>
>           raw_spin_lock_irqsave(&meta->lock, flags);
>
> @@ -481,6 +485,13 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>                   meta->unprotected_page = 0;
>           }
>
> +        /* Mark the object as freed. */
> +        metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
> +        init = slab_want_init_on_free(meta->cache);
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
> +
> +        alloc_covered_add(meta->alloc_stack_hash, -1);
> +
>           /* Check canary bytes for memory corruption. */
>           for_each_canary(meta, check_canary_byte);
>
> @@ -489,16 +500,9 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>            * data is still there, and after a use-after-free is detected, we
>            * unprotect the page, so the data is still accessible.
>            */
> -        if (!zombie && unlikely(slab_want_init_on_free(meta->cache)))
> +        if (!zombie && unlikely(init))
>                   memzero_explicit(addr, meta->size);
>
> -        /* Mark the object as freed. */
> -        metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
> -
> -        raw_spin_unlock_irqrestore(&meta->lock, flags);
> -
> -        alloc_covered_add(meta->alloc_stack_hash, -1);
> -
>           /* Protect to detect use-after-frees. */
>           kfence_protect((unsigned long)addr);
>
> --
> 2.33.0.685.g46640cef36-goog

-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Paul Manicle, Halimah DeLaine Prado
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg