Subject: Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu
To: David Rientjes
CC: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
 "open list:SLAB ALLOCATOR", open list
Date: Tue, 11 Aug 2020 09:50:10 +0800
References: <20200810080758.940-1-wuyun.wu@huawei.com>
 <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>
In-Reply-To: <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>

On 2020/8/11 9:29, Abel Wu wrote:
>
>
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>>
>>> From: Abel Wu
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu
>>> ---
>>>  mm/slub.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  		}
>>>  	} else {
>>>  		m = M_FULL;
>>> -		if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> +		if (!lock) {
>>>  			lock = 1;
>>>  			/*
>>>  			 * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  			 */
>>>  			spin_lock(&n->list_lock);
>>>  		}
>>> +#endif
>>>  	}
>>>
>>>  	if (l != m) {
>>
>> This should be functionally safe. I wonder if it would make sense to
>> only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
>> however, since that should be the only context in which we need the
>> list_lock for add_full()? It seems more explicit.
>>
> Yes, checking for SLAB_STORE_USER here can also get rid of the noisy
> macros. I will resend the patch later.
>
> Thanks,
> Abel
>
Wait... It still needs CONFIG_SLUB_DEBUG to wrap around, but it can avoid
the locking overhead when SLAB_STORE_USER is not set (as you said). I will
keep the CONFIG_SLUB_DEBUG guard in my new patch.
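
For the record, a minimal sketch of how the reworked hunk could look
(an illustration of the idea above, not the actual resent patch): take
the list_lock only when SLAB_STORE_USER is set on the cache, and keep
the whole branch under CONFIG_SLUB_DEBUG so it compiles away on
non-debug builds:

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * Take the list_lock so that the scanning of
			 * full slabs by the diagnostic code stays
			 * consistent with the add_full() call below.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}

With CONFIG_SLUB_DEBUG enabled but SLAB_STORE_USER clear, the lock is
skipped entirely, which is exactly the overhead discussed above.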