Subject: Re: [PATCH] mm/slub: make add_full() condition more explicit
To: wuyun.wu@huawei.com, Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Andrew Morton
Cc: liu.xiang6@zte.com.cn, "open list:SLAB ALLOCATOR", open list
From: Vlastimil Babka
Message-ID: <3ef24214-38c7-1238-8296-88caf7f48ab6@suse.cz>
Date: Fri, 16 Oct 2020 18:58:30 +0200
In-Reply-To: <20200811020240.1231-1-wuyun.wu@huawei.com>

On 8/11/20 4:02 AM, wuyun.wu@huawei.com wrote:
> From: Abel Wu
>
> The commit below is incomplete, as it didn't handle the add_full() part:
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()") > > This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(), > since that should be the only context in which we need the list_lock for > add_full(). > > Signed-off-by: Abel Wu > --- > mm/slub.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/mm/slub.c b/mm/slub.c > index f226d66408ee..df93a5a0e9a4 100644 > --- a/mm/slub.c > +++ b/mm/slub.c > @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, > } > } else { > m = M_FULL; > - if (kmem_cache_debug(s) && !lock) { > +#ifdef CONFIG_SLUB_DEBUG > + if ((s->flags & SLAB_STORE_USER) && !lock) { > lock = 1; > /* > * This also ensures that the scanning of full > @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, > */ > spin_lock(&n->list_lock); > } > +#endif > } > > if (l != m) { > Hm I missed this, otherwise I would have suggested the following -----8<----- From 0b43c7e20c81241f4b74cdb366795fc0b94a25c9 Mon Sep 17 00:00:00 2001 From: Vlastimil Babka Date: Fri, 16 Oct 2020 18:46:06 +0200 Subject: [PATCH] mm, slub: use kmem_cache_debug_flags() in deactivate_slab() Commit 9cf7a1118365 ("mm/slub: make add_full() condition more explicit") replaced an unnecessarily generic kmem_cache_debug(s) check with an explicit check of SLAB_STORE_USER and #ifdef CONFIG_SLUB_DEBUG. We can achieve the same specific check with the recently added kmem_cache_debug_flags() which removes the #ifdef and restores the no-branch-overhead benefit of static key check when slub debugging is not enabled. Signed-off-by: Vlastimil Babka --- mm/slub.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 61d0d2968413..28d78238f31e 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2245,8 +2245,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, } } else { m = M_FULL; -#ifdef CONFIG_SLUB_DEBUG - if ((s->flags & SLAB_STORE_USER) && !lock) { + if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) { lock = 1; /* * This also ensures that the scanning of full @@ -2255,7 +2254,6 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, */ spin_lock(&n->list_lock); } -#endif } if (l != m) { -- 2.28.0