From: glider@google.com
Date: Wed, 25 Mar 2020 17:12:29 +0100
Subject: [PATCH v5 18/38] kmsan: mm: call KMSAN hooks from SLUB code
Message-Id: <20200325161249.55095-19-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
To: Andrew Morton, Vegard Nossum, Dmitry Vyukov, Marco Elver,
	Andrey Konovalov, linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
	aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org,
	arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com,
	davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com,
	edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org,
	harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com,
	mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk,
	m.szyprowski@samsung.com, mark.rutland@arm.com,
	martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org,
	mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com,
	cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com,
	sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com,
	tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de

In order to report uninitialized memory coming from heap allocations,
KMSAN has to poison them unless they're created with __GFP_ZERO.

Conveniently, the KMSAN hooks are needed in the same places where the
init_on_alloc/init_on_free initialization is already performed.

Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko
Cc: Andrew Morton
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
---
v3:
 - reverted unrelated whitespace changes

Change-Id: I51103b7981d3aabed747d0c85cbdc85568665871
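
For readers new to KMSAN, here is a rough sketch of what a slab-allocation
hook along these lines conceptually does. It is not the actual
kmsan_slab_alloc() implementation from this series: example_slab_alloc_hook()
and kmsan_mark_uninitialized() are made-up names standing in for KMSAN's
internal "poison this range" primitive, while kmsan_unpoison_shadow() is the
same helper used in the sysfs hunks below.

	static void example_slab_alloc_hook(struct kmem_cache *s, void *object,
					    gfp_t flags)
	{
		if (!object)
			return;
		if (flags & __GFP_ZERO)
			/* Zeroed memory is initialized by definition. */
			kmsan_unpoison_shadow(object, s->object_size);
		else
			/*
			 * Mark the object uninitialized so that reads of it
			 * before a write are reported.
			 */
			kmsan_mark_uninitialized(object, s->object_size);
	}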
---
 mm/slub.c | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 332d4b459a907..67c7f76bee412 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -21,6 +21,8 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/kasan.h>
+#include <linux/kmsan.h>
+#include <linux/kmsan-checks.h> /* KMSAN_INIT_VALUE */
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/mempolicy.h>
@@ -283,17 +285,27 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 	prefetch(object + s->offset);
 }
 
+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, use KMSAN_INIT_VALUE() to force initialize the
+ * return value of get_freepointer_safe().
+ */
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled_static())
-		return get_freepointer(s, object);
+		return KMSAN_INIT_VALUE(get_freepointer(s, object));
 
 	freepointer_addr = (unsigned long)object + s->offset;
 	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
-	return freelist_ptr(s, p, freepointer_addr);
+	return KMSAN_INIT_VALUE(freelist_ptr(s, p, freepointer_addr));
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
@@ -1411,6 +1423,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }
 
@@ -1418,6 +1431,7 @@ static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x, _RET_IP_);
+	kmsan_kfree_large(x);
 }
 
 static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
@@ -1461,6 +1475,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	do {
 		object = next;
 		next = get_freepointer(s, object);
+		kmsan_slab_free(s, object);
 
 		if (slab_want_init_on_free(s)) {
 			/*
@@ -2784,6 +2799,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
 
+	kmsan_slab_alloc(s, object, gfpflags);
 	slab_post_alloc_hook(s, gfpflags, 1, &object);
 
 	return object;
@@ -3167,7 +3183,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			  void **p)
 {
 	struct kmem_cache_cpu *c;
-	int i;
+	int i, j;
 
 	/* memcg and kmem_cache debug support */
 	s = slab_pre_alloc_hook(s, flags);
@@ -3217,11 +3233,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	/* Clear memory outside IRQ disabled fastpath loop */
 	if (unlikely(slab_want_init_on_alloc(flags, s))) {
-		int j;
-
 		for (j = 0; j < i; j++)
 			memset(p[j], 0, s->object_size);
 	}
+	for (j = 0; j < i; j++)
+		kmsan_slab_alloc(s, p[j], flags);
 
 	/* memcg and kmem_cache debug support */
 	slab_post_alloc_hook(s, flags, size, p);
@@ -3829,6 +3845,7 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
+__no_sanitize_memory
 void *__kmalloc(size_t size, gfp_t flags)
 {
 	struct kmem_cache *s;
@@ -5725,6 +5742,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_shadow(name, p - name);
 	return name;
 }
 
@@ -5874,6 +5892,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_shadow(al, sizeof(struct saved_alias));
 	return 0;
 }
 
-- 
2.25.1.696.g5e7596f4ac-goog
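
A user-space analogy of the KMSAN_INIT_VALUE() workaround used in
get_freepointer_safe() above, for readers who know MemorySanitizer (MSan) but
not KMSAN. This is only a sketch of the idea: KMSAN_INIT_VALUE() is a
kernel-side macro from this series, not __msan_unpoison(), and
read_maybe_uninit() is a made-up example function.

	/* Build with: clang -fsanitize=memory -g demo.c */
	#include <sanitizer/msan_interface.h>
	#include <stdlib.h>

	static long read_maybe_uninit(void)
	{
		long v;	/* may be left uninitialized, like the racy freepointer */

		if (rand() & 1)
			v = 42;
		/*
		 * Mark v's shadow as initialized so the comparison in main()
		 * does not produce an MSan report even when the branch above
		 * was not taken.
		 */
		__msan_unpoison(&v, sizeof(v));
		return v;
	}

	int main(void)
	{
		return read_maybe_uninit() == 42 ? 0 : 1;
	}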