From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Date: Fri, 15 Jul 2022 16:05:29 +0800
Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
To: Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: David Rientjes, songmuchun@bytedance.com, akpm@linux-foundation.org,
 roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com, penberg@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>
 <9794df4f-3ffe-4e99-0810-a1346b139ce8@linux.alibaba.com>
 <29723aaa-5e28-51d3-7f87-9edf0f7b9c33@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 6/17/22 5:40 PM, Vlastimil Babka wrote:
> On 6/8/22 14:23, Christoph Lameter wrote:
>> On Wed, 8 Jun 2022, Rongwei Wang wrote:
>>
>>> If acceptable, I think documenting the issue and warning about this
>>> incorrect behavior is OK. But it still prints a large amount of
>>> confusing messages, and disturbs us?
>>
>> Correct. It would be great if you could fix this in a way that does not
>> impact performance.
>>
>>>> are current operations on the slab being validated.
>>> And I am trying to fix it in the following way. In short, these changes
>>> only work under slub debug mode, and do not affect normal mode (I'm not
>>> sure). It does not look elegant enough, but if everyone approves of this
>>> way, I can submit the next version.
>>>
>>> Anyway, thanks for your time :).
>>> -wrw
>>>
>>> @@ -3304,7 +3300,7 @@ static void __slab_free(struct kmem_cache *s,
>>> struct slab *slab,
>>>  {
>>>  	void *prior;
>>> -	int was_frozen;
>>> +	int was_frozen, to_take_off = 0;
>>>  	struct slab new;
>>
>> to_take_off has the role of !n ? Why is that needed?
>>
>>> -	do {
>>> -		if (unlikely(n)) {
>>> +	spin_lock_irqsave(&n->list_lock, flags);
>>> +	ret = free_debug_processing(s, slab, head, tail, cnt, addr);
>>
>> Ok so the idea is to take the lock only if kmem_cache_debug. That looks
>> ok. But it still adds a number of new branches etc to the free loop.
Hi Vlastimil, sorry for missing your message for so long.
> It also further complicates the already tricky code. I wonder if we should
> make more benefit from the fact that for kmem_cache_debug() caches we don't
> leave any slabs on percpu or percpu partial lists, and also in
> free_debug_processing() we already take both list_lock and slab_lock. If we
> just did the freeing immediately there under those locks, we would be
> protected against other freeing cpus by that list_lock and don't need the
> double cmpxchg tricks.
Hmm, I'm not sure I completely understand what you mean by "don't need the
double cmpxchg tricks". Do you mean replacing cmpxchg_double_slab() here
with the following code when kmem_cache_debug(s)?

__slab_lock(slab);
if (slab->freelist == freelist_old && slab->counters == counters_old) {
	slab->freelist = freelist_new;
	slab->counters = counters_new;
	__slab_unlock(slab);
	local_irq_restore(flags);
	return true;
}
__slab_unlock(slab);

If I have misunderstood your words, please let me know. Thanks!
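If so, I guess for a debug cache the whole free could even happen under
n->list_lock with no cmpxchg at all. Just to check my understanding, a rough
sketch (untested; free_debug_checks() is a made-up name for the consistency
checks currently done in free_debug_processing()):

/*
 * Sketch only: free objects to a debug cache entirely under
 * n->list_lock, so that no cmpxchg_double_slab() is needed and
 * validate_slab() (which also takes list_lock) cannot race with us.
 */
static noinline void slab_free_debug(struct kmem_cache *s, struct slab *slab,
				     void *head, void *tail, int cnt,
				     unsigned long addr)
{
	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	if (free_debug_checks(s, slab, head, tail, cnt, addr)) {
		bool was_full = !slab->freelist;

		/* Link the freed objects onto the slab's freelist. */
		set_freepointer(s, tail, slab->freelist);
		slab->freelist = head;
		slab->inuse -= cnt;

		if (was_full) {
			/* The slab gained a free object: back to partial. */
			remove_full(s, n, slab);
			add_partial(n, slab, DEACTIVATE_TO_TAIL);
		}
		/* Discarding now-empty slabs is omitted in this sketch. */
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
}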
> What about against allocating cpus? More tricky as those will currently end
> up privatizing the freelist via get_partial(), only to deactivate it again,
> so our list_lock+slab_lock in freeing path would not protect in the
> meanwhile. But the allocation is currently very inefficient for debug
> caches, as in get_partial() it will take the list_lock to take the slab
> from the partial list and then in most cases again in deactivate_slab() to
> return it.
It seems that I need to spend some time digesting these words. Anyway,
thanks.
>
> If instead the allocation path for kmem_cache_debug() cache would take a
> single object from the partial list (not whole freelist) under list_lock, it
> would be ultimately more efficient, and protect against freeing using
> list_lock. Sounds like an idea worth trying to me?
Hyeonggon had similar advice: split the allocation and free paths for debug
caches away from the normal ones, like below:

__slab_alloc() {
	if (kmem_cache_debug(s))
		slab_alloc_debug()
	else
		___slab_alloc()
}

I guess the above code aims to solve the problem (idea) you mentioned?

slab_free() {
	if (kmem_cache_debug(s))
		slab_free_debug()
	else
		__do_slab_free()
}

Currently, I only modify the code that frees slabs, to fix the confusing
messages from "slabinfo -v". If you agree, I can try to implement the
slab_alloc_debug() code mentioned above. Maybe it's also a challenge for me.
Thanks for your time.
> And of course we would stop creating the 'validate' sysfs files for
> non-debug caches.
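For the allocation side, if I understand your single-object idea correctly,
slab_alloc_debug() might look roughly like this (again only a sketch,
untested; alloc_debug_checks() is a made-up name for the checks now done in
alloc_debug_processing()):

/*
 * Sketch only: take exactly one object from a partial slab under
 * n->list_lock, instead of privatizing the whole freelist, so that
 * allocation is also serialized against freeing and validation.
 */
static void *slab_alloc_debug_node(struct kmem_cache *s, int node)
{
	struct kmem_cache_node *n = get_node(s, node);
	struct slab *slab;
	unsigned long flags;
	void *object = NULL;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(slab, &n->partial, slab_list) {
		if (!slab->freelist ||
		    !alloc_debug_checks(s, slab, slab->freelist))
			continue;

		/* Take one object, leave the rest of the freelist. */
		object = slab->freelist;
		slab->freelist = get_freepointer(s, object);
		slab->inuse++;

		if (!slab->freelist) {
			/* The slab became full: off the partial list. */
			remove_partial(n, slab);
			add_full(s, n, slab);
		}
		break;
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
	return object;
}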