Date: Fri, 19 Jun 2020 12:02:47 -0700
From: Kees Cook
To: Andrew Morton
Cc: Vlastimil Babka, Roman Gushchin, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com,
	vinmenon@codeaurora.org, Matthew Garrett, Jann Horn,
	Vijayanand Jitta
Subject: Re: [PATCH 9/9] mm, slab/slub: move and improve cache_from_obj()
Message-ID: <202006191201.C30D8AAFB@keescook>
References: <20200610163135.17364-1-vbabka@suse.cz>
	<20200610163135.17364-10-vbabka@suse.cz>
	<202006171039.FBDF2D7F4A@keescook>
	<20200618200553.GE110603@carbon.dhcp.thefacebook.com>
In-Reply-To: <20200618200553.GE110603@carbon.dhcp.thefacebook.com>

On Thu, Jun 18, 2020 at 01:05:53PM -0700, Roman Gushchin wrote:
> On Thu, Jun 18, 2020 at 12:10:38PM +0200, Vlastimil Babka wrote:
> > To prevent the churn of your patch moving cache_from_obj() back to
> > slab.h, I think it's best if we modify my patch. The patch below
> > should be squashed into the current version in mmots, with the commit
> > log used for the whole result.
> > 
> > This will cause conflicts while reapplying Roman's
> > mm-memcg-slab-use-a-single-set-of-kmem_caches-for-all-allocations.patch,
> > which can be fixed by
> > a) throwing away the conflicting hunks for cache_from_obj() in slab.c
> >    and slub.c
> > b) applying this hunk instead:
> > 
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -455,12 +455,11 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >  	struct kmem_cache *cachep;
> >  
> >  	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> > -	    !memcg_kmem_enabled() &&
> >  	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> >  		return s;
> >  
> >  	cachep = virt_to_cache(x);
> > -	if (WARN(cachep && !slab_equal_or_root(cachep, s),
> > +	if (WARN(cachep && cachep != s,
> >  		 "%s: Wrong slab cache. %s but object is from %s\n",
> >  		 __func__, s->name, cachep->name))
> >  		print_tracking(cachep, x);
> > 
> > The fixup patch itself:
> > ----8<----
> > From b8df607d92b37e5329ce7bda62b2b364cc249893 Mon Sep 17 00:00:00 2001
> > From: Vlastimil Babka
> > Date: Thu, 18 Jun 2020 11:52:03 +0200
> > Subject: [PATCH] mm, slab/slub: improve error reporting and overhead of
> >  cache_from_obj()

Andrew, do you need this separately, or can you extract this fixup from
this thread?

-Kees

> > 
> > The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b:
> > always get the cache from its page in kmem_cache_free()") to support
> > kmemcg, where the per-memcg cache can be different from the root one, so
> > we can't use the kmem_cache pointer given to kmem_cache_free().
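
For illustration, the class of bug this lookup-and-check catches is an
object freed to a different cache than it was allocated from. A minimal
hypothetical module sketch -- the cache names and function names here are
invented for the example, not taken from the patch:

	#include <linux/module.h>
	#include <linux/slab.h>

	static struct kmem_cache *cache_a, *cache_b;

	static int __init mismatch_demo_init(void)
	{
		void *obj;

		cache_a = kmem_cache_create("demo_cache_a", 64, 0, 0, NULL);
		cache_b = kmem_cache_create("demo_cache_b", 64, 0, 0, NULL);
		if (!cache_a || !cache_b)
			goto fail;

		obj = kmem_cache_alloc(cache_a, GFP_KERNEL);
		if (!obj)
			goto fail;

		/*
		 * Wrong cache on free: with the check enabled, this warns
		 * "Wrong slab cache. demo_cache_b but object is from
		 * demo_cache_a", and the object is freed to the cache found
		 * via virt_to_cache() rather than the one passed in.
		 */
		kmem_cache_free(cache_b, obj);
		return 0;

	fail:
		kmem_cache_destroy(cache_a);	/* NULL-safe */
		kmem_cache_destroy(cache_b);
		return -ENOMEM;
	}
	module_init(mismatch_demo_init);
	MODULE_LICENSE("GPL");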
> > 
> > Prior to that commit, SLUB already had a debugging check and warning
> > that could be enabled to compare the given kmem_cache pointer to the one
> > referenced by the slab page where the object-to-be-freed resides. This
> > check was moved to cache_from_obj(). Later the check was also enabled
> > for SLAB_FREELIST_HARDENED configs by commit 598a0717a816 ("mm/slab:
> > validate cache membership under freelist hardening").
> > 
> > These checks and warnings can be especially useful for debugging, and
> > the reporting can be improved. Commit 598a0717a816 changed the pr_err()
> > with WARN_ON_ONCE() to WARN_ONCE(), so only the first hit is now
> > reported and later ones are silent. This patch changes it to WARN() so
> > that all errors are reported.
> > 
> > It's also useful to print SLUB allocation/free tracking info for the
> > offending object, if tracking is enabled. Thus, export the SLUB
> > print_tracking() function and provide an empty one for SLAB.
> > 
> > For SLUB we can also benefit from the static key check in
> > kmem_cache_debug_flags(), but we need to move this function to slab.h
> > and declare the static key there.
> > 
> > [1] https://lore.kernel.org/r/20200608230654.828134-18-guro@fb.com
> > 
> > Signed-off-by: Vlastimil Babka
> > Acked-by: Roman Gushchin
> 
> Thanks!
> 
> > ---
> >  mm/slab.c |  8 --------
> >  mm/slab.h | 45 +++++++++++++++++++++++++++++++++++++++++++++
> >  mm/slub.c | 38 +------------------------------------
> >  3 files changed, 46 insertions(+), 45 deletions(-)
> > 
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 6134c4c36d4c..9350062ffc1a 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3672,14 +3672,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
> >  }
> >  EXPORT_SYMBOL(__kmalloc_track_caller);
> >  
> > -static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> > -{
> > -	if (memcg_kmem_enabled())
> > -		return virt_to_cache(x);
> > -	else
> > -		return s;
> > -}
> > -
> >  /**
> >   * kmem_cache_free - Deallocate an object
> >   * @cachep: The cache the allocation was from.
> > diff --git a/mm/slab.h b/mm/slab.h
> > index a2696d306b62..a9f5ba9ce9a7 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -275,6 +275,34 @@ static inline int cache_vmstat_idx(struct kmem_cache *s)
> >  		NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE;
> >  }
> >  
> > +#ifdef CONFIG_SLUB_DEBUG
> > +#ifdef CONFIG_SLUB_DEBUG_ON
> > +DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
> > +#else
> > +DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
> > +#endif
> > +extern void print_tracking(struct kmem_cache *s, void *object);
> > +#else
> > +static inline void print_tracking(struct kmem_cache *s, void *object)
> > +{
> > +}
> > +#endif
> > +
> > +/*
> > + * Returns true if any of the specified slub_debug flags is enabled for the
> > + * cache. Use only for flags parsed by setup_slub_debug() as it also enables
> > + * the static key.
> > + */
> > +static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
> > +{
> > +	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
> > +#ifdef CONFIG_SLUB_DEBUG
> > +	if (static_branch_unlikely(&slub_debug_enabled))
> > +		return s->flags & flags;
> > +#endif
> > +	return false;
> > +}
> > +
> >  #ifdef CONFIG_MEMCG_KMEM
> >  
> >  /* List of all root caches. */
> > @@ -503,6 +531,23 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
> >  	memcg_uncharge_slab(page, order, s);
> >  }
> >  
> > +static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> > +{
> > +	struct kmem_cache *cachep;
> > +
> > +	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> > +	    !memcg_kmem_enabled() &&
> > +	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> > +		return s;
> > +
> > +	cachep = virt_to_cache(x);
> > +	if (WARN(cachep && !slab_equal_or_root(cachep, s),
> > +		 "%s: Wrong slab cache. %s but object is from %s\n",
> > +		 __func__, s->name, cachep->name))
> > +		print_tracking(cachep, x);
> > +	return cachep;
> > +}
> > +
> >  static inline size_t slab_ksize(const struct kmem_cache *s)
> >  {
> >  #ifndef CONFIG_SLUB
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 202fb423d195..0e635a8aa340 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -122,21 +122,6 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
> >  #endif
> >  #endif
> >  
> > -/*
> > - * Returns true if any of the specified slub_debug flags is enabled for the
> > - * cache. Use only for flags parsed by setup_slub_debug() as it also enables
> > - * the static key.
> > - */
> > -static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
> > -{
> > -	VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
> > -#ifdef CONFIG_SLUB_DEBUG
> > -	if (static_branch_unlikely(&slub_debug_enabled))
> > -		return s->flags & flags;
> > -#endif
> > -	return false;
> > -}
> > -
> >  static inline bool kmem_cache_debug(struct kmem_cache *s)
> >  {
> >  	return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
> > @@ -653,7 +638,7 @@ static void print_track(const char *s, struct track *t, unsigned long pr_time)
> >  #endif
> >  }
> >  
> > -static void print_tracking(struct kmem_cache *s, void *object)
> > +void print_tracking(struct kmem_cache *s, void *object)
> >  {
> >  	unsigned long pr_time = jiffies;
> >  	if (!(s->flags & SLAB_STORE_USER))
> > @@ -1525,10 +1510,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
> >  {
> >  	return false;
> >  }
> > -
> > -static void print_tracking(struct kmem_cache *s, void *object)
> > -{
> > -}
> >  #endif /* CONFIG_SLUB_DEBUG */
> >  
> >  /*
> > @@ -3180,23 +3161,6 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
> >  }
> >  #endif
> >  
> > -static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> > -{
> > -	struct kmem_cache *cachep;
> > -
> > -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> > -	    !memcg_kmem_enabled() &&
> > -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> > -		return s;
> > -
> > -	cachep = virt_to_cache(x);
> > -	if (WARN(cachep && !slab_equal_or_root(cachep, s),
> > -		 "%s: Wrong slab cache. %s but object is from %s\n",
> > -		 __func__, s->name, cachep->name))
> > -		print_tracking(cachep, x);
> > -	return cachep;
> > -}
> > -
> >  void kmem_cache_free(struct kmem_cache *s, void *x)
> >  {
> >  	s = cache_from_obj(s, x);
> > -- 
> > 2.27.0
> > 
> 

-- 
Kees Cook
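
A usage note on the WARN_ONCE() -> WARN() change described in the commit
log above: both macros evaluate and return their condition, and differ
only in how often they print. A standalone userspace analogue of the
reporting difference -- the MY_WARN* macros below are simplified
stand-ins, not the kernel implementations, and build with gcc/clang
(statement expressions are a GNU extension, as in the kernel):

	#include <stdbool.h>
	#include <stdio.h>

	/* Print on every hit, like WARN(). */
	#define MY_WARN(cond, ...) \
		({ bool __c = (cond); \
		   if (__c) fprintf(stderr, __VA_ARGS__); \
		   __c; })

	/* Print only on the first hit, like WARN_ONCE(). */
	#define MY_WARN_ONCE(cond, ...) \
		({ static bool __warned; bool __c = (cond); \
		   if (__c && !__warned) { \
			__warned = true; \
			fprintf(stderr, __VA_ARGS__); \
		   } \
		   __c; })

	int main(void)
	{
		for (int i = 0; i < 3; i++) {
			MY_WARN_ONCE(1, "once: hit %d\n", i);  /* prints only for i == 0 */
			MY_WARN(1, "every: hit %d\n", i);      /* prints on all three hits */
		}
		return 0;
	}

With WARN_ONCE(), only the first wrong-cache free would have been
reported; switching to WARN() reports every occurrence, which matters
when objects are being freed incorrectly to more than one cache.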