Subject: Re: [PATCH] mm/slub: make add_full() condition more explicit
From: Abel Wu
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton
CC: linux-mm@kvack.org, "open list:SLAB ALLOCATOR", open list
Date: Mon, 17 Aug 2020 17:19:54 +0800
Message-ID: <40c24455-02fd-4b4c-7740-bb7d2af0f5c7@huawei.com>
In-Reply-To: <20200811020240.1231-1-wuyun.wu@huawei.com>
References: <20200811020240.1231-1-wuyun.wu@huawei.com>

ping :)

On 2020/8/11 10:02, wuyun.wu@huawei.com wrote:
> From: Abel Wu
>
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>
> This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(),
> since that should be the only context in which we need the list_lock for
> add_full().
>
> Signed-off-by: Abel Wu
> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f226d66408ee..df93a5a0e9a4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  		}
>  	} else {
>  		m = M_FULL;
> -		if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> +		if ((s->flags & SLAB_STORE_USER) && !lock) {
>  			lock = 1;
>  			/*
>  			 * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  			 */
>  			spin_lock(&n->list_lock);
>  		}
> +#endif
>  	}
>
>  	if (l != m) {