From: Dmitry Vyukov
Date: Wed, 28 Oct 2020 17:55:36 +0100
Subject: Re: [PATCH RFC v2 16/21] kasan: optimize poisoning in kmalloc and krealloc
To: Andrey Konovalov
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, Alexander Potapenko,
 Marco Elver, Evgenii Stepanov, Kostya Serebryany, Peter Collingbourne,
 Serban Constantinescu, Andrey Ryabinin, Elena Petrova, Branislav Rankov,
 Kevin Brodsky, Andrew Morton, kasan-dev, Linux ARM, Linux-MM, LKML

On Thu, Oct 22, 2020 at 3:20 PM Andrey Konovalov wrote:
>
> Since kasan_kmalloc() always follows kasan_slab_alloc(), there's no need
> to reunpoison the object data, only to poison the redzone.
>
> This requires changing kasan annotation for early SLUB cache to
> kasan_slab_alloc(). Otherwise kasan_kmalloc() doesn't untag the object.
> This doesn't do any functional changes, as kmem_cache_node->object_size
> is equal to sizeof(struct kmem_cache_node).
>
> Similarly for kasan_krealloc(), as it's called after ksize(), which
> already unpoisoned the object, there's no need to do it again.

Have you considered doing this the other way around: make krealloc call
__ksize and do the unpoisoning in kasan_krealloc? This has the advantage
of more precise poisoning, as ksize will unpoison the whole underlying
object. But then maybe we will need to move the first checks in ksize
into __ksize, as we may need them in krealloc as well. A rough sketch of
what I mean is below the quoted patch.

> Signed-off-by: Andrey Konovalov
> Link: https://linux-review.googlesource.com/id/I4083d3b55605f70fef79bca9b90843c4390296f2
> ---
>  mm/kasan/common.c | 31 +++++++++++++++++++++----------
>  mm/slub.c         |  3 +--
>  2 files changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index c5ec60e1a4d2..a581937c2a44 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -360,8 +360,14 @@ static void *____kasan_kmalloc(struct kmem_cache *cache, const void *object,
>         if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || IS_ENABLED(CONFIG_KASAN_HW_TAGS))
>                 tag = assign_tag(cache, object, false, keep_tag);
>
> -       /* Tag is ignored in set_tag without CONFIG_KASAN_SW/HW_TAGS */
> -       kasan_unpoison_memory(set_tag(object, tag), size);
> +       /*
> +        * Don't unpoison the object when keeping the tag. Tag is kept for:
> +        * 1. krealloc(), and then the memory has already been unpoisoned via ksize();
> +        * 2. kmalloc(), and then the memory has already been unpoisoned by kasan_kmalloc().
> +        * Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS.
> +        */
> +       if (!keep_tag)
> +               kasan_unpoison_memory(set_tag(object, tag), size);
>         kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
>                 KASAN_KMALLOC_REDZONE);
>
> @@ -384,10 +390,9 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
>  }
>  EXPORT_SYMBOL(__kasan_kmalloc);
>
> -void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> -                                       gfp_t flags)
> +static void * __must_check ____kasan_kmalloc_large(struct page *page, const void *ptr,
> +                                       size_t size, gfp_t flags, bool realloc)
>  {
> -       struct page *page;
>         unsigned long redzone_start;
>         unsigned long redzone_end;
>
> @@ -397,18 +402,24 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
>         if (unlikely(ptr == NULL))
>                 return NULL;
>
> -       page = virt_to_page(ptr);
> -       redzone_start = round_up((unsigned long)(ptr + size),
> -                               KASAN_GRANULE_SIZE);
> +       redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
>         redzone_end = (unsigned long)ptr + page_size(page);
>
> -       kasan_unpoison_memory(ptr, size);
> +       /* ksize() in __do_krealloc() already unpoisoned the memory. */
> +       if (!realloc)
> +               kasan_unpoison_memory(ptr, size);
>         kasan_poison_memory((void *)redzone_start, redzone_end - redzone_start,
>                 KASAN_PAGE_REDZONE);
>
>         return (void *)ptr;
>  }
>
> +void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
> +                                       gfp_t flags)
> +{
> +       return ____kasan_kmalloc_large(virt_to_page(ptr), ptr, size, flags, false);
> +}
> +
>  void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flags)
>  {
>         struct page *page;
> @@ -419,7 +430,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
>         page = virt_to_head_page(object);
>
>         if (unlikely(!PageSlab(page)))
> -               return __kasan_kmalloc_large(object, size, flags);
> +               return ____kasan_kmalloc_large(page, object, size, flags, true);
>         else
>                 return ____kasan_kmalloc(page->slab_cache, object, size,
>                                         flags, true);
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d3f2355df3b..afb035b0bf2d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3535,8 +3535,7 @@ static void early_kmem_cache_node_alloc(int node)
>         init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
>         init_tracking(kmem_cache_node, n);
>  #endif
> -       n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
> -                       GFP_KERNEL);
> +       n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL);
>         page->freelist = get_freepointer(kmem_cache_node, n);
>         page->inuse = 1;
>         page->frozen = 0;
> --
> 2.29.0.rc1.297.gfa9743e501-goog
>
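
To make the suggestion above concrete, here is a rough, untested sketch
of what I mean. The shape of __do_krealloc() and the surrounding code are
only approximated from memory rather than copied from mm/slab_common.c,
so please treat it as an illustration of the idea, not a proposed patch:

/*
 * Sketch only: krealloc() asks __ksize() for the size of the underlying
 * allocation without touching the KASAN state, and kasan_krealloc()
 * does the unpoisoning itself (i.e. without the skip-unpoison special
 * case this patch adds for the krealloc() path).
 */
static __always_inline void *__do_krealloc(const void *p, size_t new_size,
					   gfp_t flags)
{
	void *ret;
	size_t ks = 0;

	if (likely(!ZERO_OR_NULL_PTR(p))) {
		/*
		 * __ksize() instead of ksize(): no unpoisoning here.  The
		 * checks at the start of ksize() would have to move into
		 * __ksize() (or be duplicated here).
		 */
		ks = __ksize(p);

		if (ks >= new_size)
			/*
			 * kasan_krealloc() unpoisons the requested size and
			 * poisons the rest of the object as a redzone,
			 * instead of relying on ksize() having unpoisoned
			 * the whole underlying object beforehand.
			 */
			return kasan_krealloc((void *)p, new_size, flags);
	}

	ret = kmalloc_track_caller(new_size, flags);
	if (ret && p)
		memcpy(ret, p, ks);

	return ret;
}

With something like this, kasan_krealloc() owns the (un)poisoning of the
reused object, so (unless I am missing something) the realloc/keep_tag
special cases that skip unpoisoning would not be needed for krealloc().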