Subject: Re: [PATCH] mm: slub: remove preemption disabling from put_cpu_partial
From: Vlastimil Babka <vbabka@suse.cz>
To: Muchun Song <songmuchun@bytedance.com>, cl@linux.com, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 11 Aug 2021 14:40:36 +0200
Message-ID: <0d6c3e3b-c270-bb7d-c038-64ee3f0257cd@suse.cz>
In-Reply-To: <20210811111921.85999-1-songmuchun@bytedance.com>
References: <20210811111921.85999-1-songmuchun@bytedance.com>

On 8/11/21 1:19 PM, Muchun Song wrote:
> Commit d6e0b7fa1186 ("slub: make dead caches discard free slabs
> immediately") introduced this logic to speed up the destruction of
> per-memcg kmem caches, because at that time kmem caches created for a
> memory cgroup were only destroyed after the last page charged to the
> cgroup had been freed. Since commit 9855609bde03 ("mm: memcg/slab:
> use a single set of kmem_caches for all accounted allocations"), we
> no longer have per-memcg kmem caches. Is this code pointless now?
> No: kmem_cache->cpu_partial can still be set to zero via
> 'echo 0 > /sys/kernel/slab/*/cpu_partial'. In that case the slab page
> is first put onto the cpu partial list and then immediately moved to
> the node list (because slub_cpu_partial() returns zero). However, we
> can skip the cpu partial list and move the slab page to the node list
> directly. Change the condition from kmem_cache_has_cpu_partial() to
> slub_cpu_partial() in __slab_free() and remove this code from
> put_cpu_partial() to simplify it.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Please check against the current mmotm/next whether this still applies;
I think it shouldn't anymore. Thanks.
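For anyone skimming the thread: the argument above hinges on the difference
between kmem_cache_has_cpu_partial(), which is effectively a build-time
predicate (true whenever CONFIG_SLUB_CPU_PARTIAL is enabled), and
slub_cpu_partial(), which reads the runtime threshold that
'echo 0 > /sys/kernel/slab/*/cpu_partial' can zero. Below is a minimal
stand-alone sketch of that distinction only; it is a user-space mock for
illustration, not code from any tree, and all names in it are made up:

/* Illustrative user-space mock, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define MOCK_CONFIG_SLUB_CPU_PARTIAL 1	/* pretend the option is built in */

struct mock_cache {
	unsigned int cpu_partial;	/* runtime knob, settable via sysfs */
};

/* True whenever the feature is compiled in, regardless of the runtime knob. */
static bool mock_has_cpu_partial(void)
{
#if MOCK_CONFIG_SLUB_CPU_PARTIAL
	return true;
#else
	return false;
#endif
}

/* The runtime threshold itself; zero means "keep no per-CPU partial slabs". */
static unsigned int mock_cpu_partial(const struct mock_cache *s)
{
	return s->cpu_partial;
}

int main(void)
{
	struct mock_cache s = { .cpu_partial = 0 };	/* echo 0 > .../cpu_partial */

	printf("compiled in: %d, runtime threshold: %u\n",
	       mock_has_cpu_partial(), mock_cpu_partial(&s));
	return 0;
}

With cpu_partial forced to zero, the build-time predicate still says "yes",
which is why (per the commit message) the pre-patch __slab_free() freezes the
slab only for put_cpu_partial() to unfreeze it to the node list right away.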
> ---
>  mm/slub.c | 23 +++--------------------
>  1 file changed, 3 insertions(+), 20 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index b6c5205252eb..69c8ada322a0 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2438,7 +2438,6 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  	int pages;
>  	int pobjects;
>
> -	preempt_disable();
>  	do {
>  		pages = 0;
>  		pobjects = 0;
> @@ -2470,16 +2469,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  		page->pobjects = pobjects;
>  		page->next = oldpage;
>
> -	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
> -							!= oldpage);
> -	if (unlikely(!slub_cpu_partial(s))) {
> -		unsigned long flags;
> -
> -		local_irq_save(flags);
> -		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> -		local_irq_restore(flags);
> -	}
> -	preempt_enable();
> +	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
>  #endif	/* CONFIG_SLUB_CPU_PARTIAL */
>  }
>
> @@ -3059,9 +3049,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  		was_frozen = new.frozen;
>  		new.inuse -= cnt;
>  		if ((!new.inuse || !prior) && !was_frozen) {
> -
> -			if (kmem_cache_has_cpu_partial(s) && !prior) {
> -
> +			if (slub_cpu_partial(s) && !prior) {
>  				/*
>  				 * Slab was on no list before and will be
>  				 * partially empty
> @@ -3069,9 +3057,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  				 * freeze it.
>  				 */
>  				new.frozen = 1;
> -
>  			} else { /* Needs to be taken off a list */
> -
>  				n = get_node(s, page_to_nid(page));
>  				/*
>  				 * Speculatively acquire the list_lock.
> @@ -3082,17 +3068,14 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  				 * other processors updating the list of slabs.
>  				 */
>  				spin_lock_irqsave(&n->list_lock, flags);
> -
>  			}
>  		}
> -
>  	} while (!cmpxchg_double_slab(s, page,
>  		prior, counters,
>  		head, new.counters,
>  		"__slab_free"));
>
>  	if (likely(!n)) {
> -
>  		if (likely(was_frozen)) {
>  			/*
>  			 * The list lock was not taken therefore no list
> @@ -3118,7 +3101,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  	 * Objects left in the slab. If it was not on the partial list before
>  	 * then add it.
>  	 */
> -	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
> +	if (unlikely(!prior)) {
>  		remove_full(s, n, page);
>  		add_partial(n, page, DEACTIVATE_TO_TAIL);
>  		stat(s, FREE_ADD_PARTIAL);
>
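As a side note for reviewers trying to picture the intended effect of the
hunk in __slab_free(): after the change, the freeze-vs-node-list decision
follows the runtime threshold. A tiny stand-alone model of just that
decision; again a mock for illustration, nothing below is kernel code:

/* User-space mock of the post-patch branch in __slab_free(): freeze the
 * slab for the per-CPU partial list only when the cache keeps per-CPU
 * partial slabs and the slab was on no list before. */
#include <stdbool.h>
#include <stdio.h>

struct mock_cache {
	unsigned int cpu_partial;
};

static const char *free_path(const struct mock_cache *s, bool was_on_partial_list)
{
	if (s->cpu_partial && !was_on_partial_list)
		return "freeze slab, defer to the per-CPU partial list";
	return "take the node list_lock, add to the node partial list";
}

int main(void)
{
	struct mock_cache zeroed = { .cpu_partial = 0 };	/* echo 0 > .../cpu_partial */
	struct mock_cache tuned  = { .cpu_partial = 30 };	/* 30 is only an example value */

	printf("cpu_partial=0 : %s\n", free_path(&zeroed, false));
	printf("cpu_partial=30: %s\n", free_path(&tuned, false));
	return 0;
}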