Subject: Re: [PATCH v2] slub: Add back check for free nonslab objects
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Wed, 29 Sep 2021 09:59:02 +0800
To: Shakeel Butt
CC: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Linux MM, LKML, Matthew Wilcox
Message-ID: <285af4a7-0053-b3e3-1cf0-2ed481210271@huawei.com>
References: <20210927122646.91934-1-wangkefeng.wang@huawei.com>

On 2021/9/28 23:09, Shakeel Butt wrote:
> On Mon, Sep 27, 2021 at 5:24 AM Kefeng Wang wrote:
>>
>> After commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
>> free"), the check for freeing a nonslab page was replaced by
>> VM_BUG_ON_PAGE, which only checks when CONFIG_DEBUG_VM is enabled;
>> since that config may impact performance, it is only meant for
>> debugging.
>>
>> Commit 0937502af7c9 ("slub: Add check for kfree() of non slab
>> objects.") added this check, which should be enabled in all configs to
>> catch invalid frees; these can even be symptoms of serious issues,
>> e.g. memory corruption, use-after-free and double-free. So replace
>> VM_BUG_ON_PAGE with WARN_ON, and add dump_page() and object address
>> printing to help debug such issues.
>>
>> Signed-off-by: Kefeng Wang
>> ---
>> v2: Add object address printing suggested by Matthew Wilcox
>>
>>  mm/slub.c | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 3095b889fab4..157973e22faf 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3522,7 +3522,11 @@ static inline void free_nonslab_page(struct page *page, void *object)
>>  {
>>  	unsigned int order = compound_order(page);
>>
>> -	VM_BUG_ON_PAGE(!PageCompound(page), page);
>> +	if (WARN_ON(!PageCompound(page))) {
>
> If there is a problem then this would be too noisy. Why not WARN_ON_ONCE()?

If lots of abnormal/illegal pages are freed to the freelist, the system
could crash much more easily; with that in mind, I think that is why the
original logic used BUG_ON(). ksize() uses WARN_ON, and it looks like no
one has reported getting too much log output from it.

If we don't want that much dumping, I will change it in v3.

>
>> +		dump_page(page, "invalid free nonslab page");
>> +		pr_warn("object pointer: 0x%p\n", object);
>
> Actually why not add 'once' semantics for the whole if-block?
>
>> +	}
>> +
>>  	kfree_hook(object);
>>  	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
>>  	__free_pages(page, order);
>> --
>> 2.26.2
>>
> .
>