From mboxrd@z Thu Jan  1 00:00:00 1970
From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 12 Jan 2026 16:17:06 +0100
Subject: [PATCH RFC v2 12/20] slab: remove defer_deactivate_slab()
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260112-sheaves-for-all-v2-12-98225cfb50cf@suse.cz>
References: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
In-Reply-To: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes,
 Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R. Howlett",
 Suren Baghdasaryan, Sebastian Andrzej Siewior, Alexei Starovoitov,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.14.3

There are no more cpu slabs, so we don't need their deferred deactivation.
The function is now only used from places where we allocate a new slab but
then can't spin on the node's list_lock to put it on the partial list.
Instead of the deferred action we can free the slab directly via
__free_slab(); we just need to tell it to use _nolock() freeing of the
underlying pages and to take care of the accounting.

Since a free_frozen_pages_nolock() variant does not yet exist for code
outside of the page allocator, create it as a trivial wrapper around
__free_frozen_pages(..., FPI_TRYLOCK).

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/internal.h   |  1 +
 mm/page_alloc.c |  5 +++++
 mm/slab.h       |  8 +-------
 mm/slub.c       | 51 ++++++++++++++++-----------------------------------
 4 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..1f44ccb4badf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -846,6 +846,7 @@ static inline struct page *alloc_frozen_pages_noprof(gfp_t gfp, unsigned int ord
 struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned int order);
 #define alloc_frozen_pages_nolock(...) \
 	alloc_hooks(alloc_frozen_pages_nolock_noprof(__VA_ARGS__))
+void free_frozen_pages_nolock(struct page *page, unsigned int order);
 
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 822e05f1a964..8a288ecfdd93 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2981,6 +2981,11 @@ void free_frozen_pages(struct page *page, unsigned int order)
 	__free_frozen_pages(page, order, FPI_NONE);
 }
 
+void free_frozen_pages_nolock(struct page *page, unsigned int order)
+{
+	__free_frozen_pages(page, order, FPI_TRYLOCK);
+}
+
 /*
  * Free a batch of folios
  */
diff --git a/mm/slab.h b/mm/slab.h
index e77260720994..4efec41b6445 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -71,13 +71,7 @@ struct slab {
 	struct kmem_cache *slab_cache;
 	union {
 		struct {
-			union {
-				struct list_head slab_list;
-				struct { /* For deferred deactivate_slab() */
-					struct llist_node llnode;
-					void *flush_freelist;
-				};
-			};
+			struct list_head slab_list;
 			/* Double-word boundary */
 			struct freelist_counters;
 		};
diff --git a/mm/slub.c b/mm/slub.c
index 522a7e671a26..0effeb3b9552 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3248,7 +3248,7 @@ static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 			flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
 }
 
-static void __free_slab(struct kmem_cache *s, struct slab *slab)
+static void __free_slab(struct kmem_cache *s, struct slab *slab, bool allow_spin)
 {
 	struct page *page = slab_page(slab);
 	int order = compound_order(page);
@@ -3262,11 +3262,20 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	free_frozen_pages(page, order);
 }
 
+static void free_new_slab_nolock(struct kmem_cache *s, struct slab *slab)
+{
+	/*
+	 * Since it was just allocated, we can skip the actions in
+	 * discard_slab() and free_slab().
+	 */
+	__free_slab(s, slab, false);
+}
+
 static void rcu_free_slab(struct rcu_head *h)
 {
 	struct slab *slab = container_of(h, struct slab, rcu_head);
 
-	__free_slab(slab->slab_cache, slab);
+	__free_slab(slab->slab_cache, slab, true);
 }
 
 static void free_slab(struct kmem_cache *s, struct slab *slab)
@@ -3282,7 +3291,7 @@ static void free_slab(struct kmem_cache *s, struct slab *slab)
 	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU))
 		call_rcu(&slab->rcu_head, rcu_free_slab);
 	else
-		__free_slab(s, slab);
+		__free_slab(s, slab, true);
 }
 
 static void discard_slab(struct kmem_cache *s, struct slab *slab)
@@ -3375,8 +3384,6 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
 	return object;
 }
 
-static void defer_deactivate_slab(struct slab *slab, void *flush_freelist);
-
 /*
  * Called only for kmem_cache_debug() caches to allocate from a freshly
  * allocated slab. Allocate a single object instead of whole freelist
@@ -3392,8 +3399,8 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
 	void *object;
 
 	if (!allow_spin && !spin_trylock_irqsave(&n->list_lock, flags)) {
-		/* Unlucky, discard newly allocated slab */
-		defer_deactivate_slab(slab, NULL);
+		/* Unlucky, discard newly allocated slab. */
+		free_new_slab_nolock(s, slab);
 		return NULL;
 	}
 
@@ -4262,7 +4269,7 @@ static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
 		if (!spin_trylock_irqsave(&n->list_lock, flags)) {
 			/* Unlucky, discard newly allocated slab */
-			defer_deactivate_slab(slab, NULL);
+			free_new_slab_nolock(s, slab);
 			return 0;
 		}
 	}
@@ -6031,7 +6038,6 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 
 struct defer_free {
 	struct llist_head objects;
-	struct llist_head slabs;
 	struct irq_work work;
 };
 
@@ -6039,7 +6045,6 @@ static void free_deferred_objects(struct irq_work *work);
 
 static DEFINE_PER_CPU(struct defer_free, defer_free_objects) = {
 	.objects = LLIST_HEAD_INIT(objects),
-	.slabs = LLIST_HEAD_INIT(slabs),
 	.work = IRQ_WORK_INIT(free_deferred_objects),
 };
 
@@ -6052,10 +6057,9 @@ static void free_deferred_objects(struct irq_work *work)
 {
 	struct defer_free *df = container_of(work, struct defer_free, work);
 	struct llist_head *objs = &df->objects;
-	struct llist_head *slabs = &df->slabs;
 	struct llist_node *llnode, *pos, *t;
 
-	if (llist_empty(objs) && llist_empty(slabs))
+	if (llist_empty(objs))
 		return;
 
 	llnode = llist_del_all(objs);
@@ -6079,16 +6083,6 @@ static void free_deferred_objects(struct irq_work *work)
 			__slab_free(s, slab, x, x, 1, _THIS_IP_);
 		}
 	}
-
-	llnode = llist_del_all(slabs);
-	llist_for_each_safe(pos, t, llnode) {
-		struct slab *slab = container_of(pos, struct slab, llnode);
-
-		if (slab->frozen)
-			deactivate_slab(slab->slab_cache, slab, slab->flush_freelist);
-		else
-			free_slab(slab->slab_cache, slab);
-	}
 }
 
 static void defer_free(struct kmem_cache *s, void *head)
@@ -6102,19 +6096,6 @@ static void defer_free(struct kmem_cache *s, void *head)
 	irq_work_queue(&df->work);
 }
 
-static void defer_deactivate_slab(struct slab *slab, void *flush_freelist)
-{
-	struct defer_free *df;
-
-	slab->flush_freelist = flush_freelist;
-
-	guard(preempt)();
-
-	df = this_cpu_ptr(&defer_free_objects);
-	if (llist_add(&slab->llnode, &df->slabs))
-		irq_work_queue(&df->work);
-}
-
 void defer_free_barrier(void)
 {
 	int cpu;
-- 
2.52.0