Subject: Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu
To: David Rientjes
CC: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton, "open list:SLAB ALLOCATOR", open list
References: <20200810080758.940-1-wuyun.wu@huawei.com>
Message-ID: <63ee904c-f6b7-3a00-c51d-3ff0feabc9d6@huawei.com>
Date: Tue, 11 Aug 2020 09:29:38 +0800

On 2020/8/11 3:44, David Rientjes wrote:
> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>
>> From: Abel Wu
>>
>> The commit below is incomplete, as it didn't handle the add_full() part.
>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>
>> Signed-off-by: Abel Wu
>> ---
>>  mm/slub.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index fe81773..0b021b7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  		}
>>  	} else {
>>  		m = M_FULL;
>> -		if (kmem_cache_debug(s) && !lock) {
>> +#ifdef CONFIG_SLUB_DEBUG
>> +		if (!lock) {
>>  			lock = 1;
>>  			/*
>>  			 * This also ensures that the scanning of full
>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  			 */
>>  			spin_lock(&n->list_lock);
>>  		}
>> +#endif
>>  	}
>>
>>  	if (l != m) {
>
> This should be functionally safe. I wonder if it would make sense to
> only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
> however, since that should be the only context in which we need the
> list_lock for add_full()? It seems more explicit.
>

Yes, checking for SLAB_STORE_USER here can also get rid of the noisy
macros. I will resend the patch later.

Thanks,
	Abel
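
For illustration, a minimal sketch of the check David suggests, applied to
the same deactivate_slab() hunk. This is an illustration only, not the v2
patch that was actually sent; it assumes SLAB_STORE_USER can never be set
when CONFIG_SLUB_DEBUG is off (debug flags are masked off at cache
creation), so the flag test needs no #ifdef:

	} else {
		m = M_FULL;
		/*
		 * add_full() is a no-op unless SLAB_STORE_USER is set, so
		 * this is the only case where the list_lock is needed to
		 * keep full-list scans by the debug code consistent.
		 */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	}

This keeps deactivate_slab() free of CONFIG_SLUB_DEBUG conditionals while
only taking the lock when the full list is actually maintained.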