From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vlastimil Babka
Date: Fri, 16 Jan 2026 15:40:31 +0100
Subject: [PATCH v3 11/21] slab: remove SLUB_CPU_PARTIAL
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260116-sheaves-for-all-v3-11-5595cb000772@suse.cz>
References: <20260116-sheaves-for-all-v3-0-5595cb000772@suse.cz>
In-Reply-To: <20260116-sheaves-for-all-v3-0-5595cb000772@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes,
 Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R.
 Howlett", Suren Baghdasaryan, Sebastian Andrzej Siewior,
 Alexei Starovoitov, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.14.3

We have removed the partial slab usage from allocation paths. Now remove
the whole config option and associated code.

Reviewed-by: Suren Baghdasaryan
Signed-off-by: Vlastimil Babka
---
 mm/Kconfig |  11 ---
 mm/slab.h  |  29 ------
 mm/slub.c  | 321 ++++---------------------------------------------------------
 3 files changed, 19 insertions(+), 342 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index bd0ea5454af8..08593674cd20 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -247,17 +247,6 @@ config SLUB_STATS
 	  out which slabs are relevant to a particular load. Try running:
 	  slabinfo -DA
 
-config SLUB_CPU_PARTIAL
-	default y
-	depends on SMP && !SLUB_TINY
-	bool "Enable per cpu partial caches"
-	help
-	  Per cpu partial caches accelerate objects allocation and freeing
-	  that is local to a processor at the price of more indeterminism
-	  in the latency of the free. On overflow these caches will be cleared
-	  which requires the taking of locks that may cause latency spikes.
-	  Typically one would choose no for a realtime system.
-
 config RANDOM_KMALLOC_CACHES
 	default n
 	depends on !SLUB_TINY
diff --git a/mm/slab.h b/mm/slab.h
index cb48ce5014ba..e77260720994 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -77,12 +77,6 @@ struct slab {
 			struct llist_node llnode;
 			void *flush_freelist;
 		};
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		struct {
-			struct slab *next;
-			int slabs;	/* Nr of slabs left */
-		};
-#endif
 	};
 	/* Double-word boundary */
 	struct freelist_counters;
@@ -188,23 +182,6 @@ static inline size_t slab_size(const struct slab *slab)
 	return PAGE_SIZE << slab_order(slab);
 }
 
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-#define slub_percpu_partial(c)			((c)->partial)
-
-#define slub_set_percpu_partial(c, p)		\
-({						\
-	slub_percpu_partial(c) = (p)->next;	\
-})
-
-#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
-#else
-#define slub_percpu_partial(c)			NULL
-
-#define slub_set_percpu_partial(c, p)
-
-#define slub_percpu_partial_read_once(c)	NULL
-#endif // CONFIG_SLUB_CPU_PARTIAL
-
 /*
  * Word size structure that can be atomically updated or read and that
  * contains both the order and the number of objects that a slab of the
@@ -228,12 +205,6 @@ struct kmem_cache {
 	unsigned int object_size;	/* Object size without metadata */
 	struct reciprocal_value reciprocal_size;
 	unsigned int offset;		/* Free pointer offset */
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	/* Number of per cpu partial objects to keep around */
-	unsigned int cpu_partial;
-	/* Number of per cpu partial slabs to keep around */
-	unsigned int cpu_partial_slabs;
-#endif
 	unsigned int sheaf_capacity;
 
 	struct kmem_cache_order_objects oo;
diff --git a/mm/slub.c b/mm/slub.c
index 698c0d940f06..6b1280f7900a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -263,15 +263,6 @@ void *fixup_red_left(struct kmem_cache *s, void *p)
 	return p;
 }
 
-static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
-{
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	return !kmem_cache_debug(s);
-#else
-	return false;
-#endif
-}
-
 /*
  * Issues still to be resolved:
  *
@@ -426,9 +417,6 @@ struct freelist_tid {
 struct kmem_cache_cpu {
 	struct freelist_tid;
 	struct slab *slab;	/* The slab from which we are allocating */
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	struct slab *partial;	/* Partially allocated slabs */
-#endif
 	local_trylock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
 	unsigned int stat[NR_SLUB_STAT_ITEMS];
@@ -673,29 +661,6 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
 	return x.x & OO_MASK;
 }
 
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
-{
-	unsigned int nr_slabs;
-
-	s->cpu_partial = nr_objects;
-
-	/*
-	 * We take the number of objects but actually limit the number of
-	 * slabs on the per cpu partial list, in order to limit excessive
-	 * growth of the list. For simplicity we assume that the slabs will
-	 * be half-full.
-	 */
-	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
-	s->cpu_partial_slabs = nr_slabs;
-}
-#elif defined(SLAB_SUPPORTS_SYSFS)
-static inline void
-slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
-{
-}
-#endif /* CONFIG_SLUB_CPU_PARTIAL */
-
 /*
  * If network-based swap is enabled, slub must keep track of whether memory
  * were allocated from pfmemalloc reserves.
@@ -3474,12 +3439,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
 	return object;
 }
 
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
-#else
-static inline void put_cpu_partial(struct kmem_cache *s, struct slab *slab,
-				   int drain) { }
-#endif
 static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags);
 
 static bool get_partial_node_bulk(struct kmem_cache *s,
@@ -3898,131 +3857,6 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 #define local_unlock_cpu_slab(s, flags)		\
 	local_unlock_irqrestore(&(s)->cpu_slab->lock, flags)
 
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-static void __put_partials(struct kmem_cache *s, struct slab *partial_slab)
-{
-	struct kmem_cache_node *n = NULL, *n2 = NULL;
-	struct slab *slab, *slab_to_discard = NULL;
-	unsigned long flags = 0;
-
-	while (partial_slab) {
-		slab = partial_slab;
-		partial_slab = slab->next;
-
-		n2 = get_node(s, slab_nid(slab));
-		if (n != n2) {
-			if (n)
-				spin_unlock_irqrestore(&n->list_lock, flags);
-
-			n = n2;
-			spin_lock_irqsave(&n->list_lock, flags);
-		}
-
-		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
-			slab->next = slab_to_discard;
-			slab_to_discard = slab;
-		} else {
-			add_partial(n, slab, DEACTIVATE_TO_TAIL);
-			stat(s, FREE_ADD_PARTIAL);
-		}
-	}
-
-	if (n)
-		spin_unlock_irqrestore(&n->list_lock, flags);
-
-	while (slab_to_discard) {
-		slab = slab_to_discard;
-		slab_to_discard = slab_to_discard->next;
-
-		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
-		stat(s, FREE_SLAB);
-	}
-}
-
-/*
- * Put all the cpu partial slabs to the node partial list.
- */
-static void put_partials(struct kmem_cache *s)
-{
-	struct slab *partial_slab;
-	unsigned long flags;
-
-	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	partial_slab = this_cpu_read(s->cpu_slab->partial);
-	this_cpu_write(s->cpu_slab->partial, NULL);
-	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-
-	if (partial_slab)
-		__put_partials(s, partial_slab);
-}
-
-static void put_partials_cpu(struct kmem_cache *s,
-			     struct kmem_cache_cpu *c)
-{
-	struct slab *partial_slab;
-
-	partial_slab = slub_percpu_partial(c);
-	c->partial = NULL;
-
-	if (partial_slab)
-		__put_partials(s, partial_slab);
-}
-
-/*
- * Put a slab into a partial slab slot if available.
- *
- * If we did not find a slot then simply move all the partials to the
- * per node partial list.
- */
-static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
-{
-	struct slab *oldslab;
-	struct slab *slab_to_put = NULL;
-	unsigned long flags;
-	int slabs = 0;
-
-	local_lock_cpu_slab(s, flags);
-
-	oldslab = this_cpu_read(s->cpu_slab->partial);
-
-	if (oldslab) {
-		if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
-			/*
-			 * Partial array is full. Move the existing set to the
-			 * per node partial list. Postpone the actual unfreezing
-			 * outside of the critical section.
-			 */
-			slab_to_put = oldslab;
-			oldslab = NULL;
-		} else {
-			slabs = oldslab->slabs;
-		}
-	}
-
-	slabs++;
-
-	slab->slabs = slabs;
-	slab->next = oldslab;
-
-	this_cpu_write(s->cpu_slab->partial, slab);
-
-	local_unlock_cpu_slab(s, flags);
-
-	if (slab_to_put) {
-		__put_partials(s, slab_to_put);
-		stat(s, CPU_PARTIAL_DRAIN);
-	}
-}
-
-#else	/* CONFIG_SLUB_CPU_PARTIAL */
-
-static inline void put_partials(struct kmem_cache *s) { }
-static inline void put_partials_cpu(struct kmem_cache *s,
-				    struct kmem_cache_cpu *c) { }
-
-#endif	/* CONFIG_SLUB_CPU_PARTIAL */
-
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 {
 	unsigned long flags;
@@ -4060,8 +3894,6 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 		deactivate_slab(s, slab, freelist);
 		stat(s, CPUSLAB_FLUSH);
 	}
-
-	put_partials_cpu(s, c);
 }
 
 static inline void flush_this_cpu_slab(struct kmem_cache *s)
@@ -4070,15 +3902,13 @@ static inline void flush_this_cpu_slab(struct kmem_cache *s)
 
 	if (c->slab)
 		flush_slab(s, c);
-
-	put_partials(s);
 }
 
 static bool has_cpu_slab(int cpu, struct kmem_cache *s)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
 
-	return c->slab || slub_percpu_partial(c);
+	return c->slab;
 }
 
 static bool has_pcs_used(int cpu, struct kmem_cache *s)
@@ -5646,13 +5476,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}
 
-	/*
-	 * It is enough to test IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL) below
-	 * instead of kmem_cache_has_cpu_partial(s), because kmem_cache_debug(s)
-	 * is the only other reason it can be false, and it is already handled
-	 * above.
-	 */
-
 	do {
 		if (unlikely(n)) {
 			spin_unlock_irqrestore(&n->list_lock, flags);
@@ -5677,26 +5500,19 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * Unless it's frozen.
 		 */
 		if ((!new.inuse || was_full) && !was_frozen) {
+
+			n = get_node(s, slab_nid(slab));
 			/*
-			 * If slab becomes non-full and we have cpu partial
-			 * lists, we put it there unconditionally to avoid
-			 * taking the list_lock. Otherwise we need it.
+			 * Speculatively acquire the list_lock.
+			 * If the cmpxchg does not succeed then we may
+			 * drop the list_lock without any processing.
+			 *
+			 * Otherwise the list_lock will synchronize with
+			 * other processors updating the list of slabs.
 			 */
-			if (!(IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL) && was_full)) {
-
-				n = get_node(s, slab_nid(slab));
-				/*
-				 * Speculatively acquire the list_lock.
-				 * If the cmpxchg does not succeed then we may
-				 * drop the list_lock without any processing.
-				 *
-				 * Otherwise the list_lock will synchronize with
-				 * other processors updating the list of slabs.
-				 */
-				spin_lock_irqsave(&n->list_lock, flags);
-
-				on_node_partial = slab_test_node_partial(slab);
-			}
+			spin_lock_irqsave(&n->list_lock, flags);
+
+			on_node_partial = slab_test_node_partial(slab);
 		}
 	} while (!slab_update_freelist(s, slab, &old, &new, "__slab_free"));
 
@@ -5709,13 +5525,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL) && was_full) {
-		/*
-		 * If we started with a full slab then put it onto the
-		 * per cpu partial list.
-		 */
-		put_cpu_partial(s, slab, 1);
-		stat(s, CPU_PARTIAL_FREE);
 	}
 
 	/*
@@ -5744,10 +5553,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	/*
 	 * Objects left in the slab. If it was not on the partial list before
-	 * then add it. This can only happen when cache has no per cpu partial
-	 * list otherwise we would have put it there.
+	 * then add it.
 	 */
-	if (!IS_ENABLED(CONFIG_SLUB_CPU_PARTIAL) && unlikely(was_full)) {
+	if (unlikely(was_full)) {
 		add_partial(n, slab, DEACTIVATE_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
 	}
 
@@ -6396,8 +6204,8 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	if (unlikely(!allow_spin)) {
 		/*
 		 * __slab_free() can locklessly cmpxchg16 into a slab,
-		 * but then it might need to take spin_lock or local_lock
-		 * in put_cpu_partial() for further processing.
+		 * but then it might need to take spin_lock
+		 * for further processing.
 		 * Avoid the complexity and simply add to a deferred list.
 		 */
 		defer_free(s, head);
@@ -7707,39 +7515,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 	return 1;
 }
 
-static void set_cpu_partial(struct kmem_cache *s)
-{
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	unsigned int nr_objects;
-
-	/*
-	 * cpu_partial determined the maximum number of objects kept in the
-	 * per cpu partial lists of a processor.
-	 *
-	 * Per cpu partial lists mainly contain slabs that just have one
-	 * object freed. If they are used for allocation then they can be
-	 * filled up again with minimal effort. The slab will never hit the
-	 * per node partial lists and therefore no locking will be required.
-	 *
-	 * For backwards compatibility reasons, this is determined as number
-	 * of objects, even though we now limit maximum number of pages, see
-	 * slub_set_cpu_partial()
-	 */
-	if (!kmem_cache_has_cpu_partial(s))
-		nr_objects = 0;
-	else if (s->size >= PAGE_SIZE)
-		nr_objects = 6;
-	else if (s->size >= 1024)
-		nr_objects = 24;
-	else if (s->size >= 256)
-		nr_objects = 52;
-	else
-		nr_objects = 120;
-
-	slub_set_cpu_partial(s, nr_objects);
-#endif
-}
-
 static unsigned int
 calculate_sheaf_capacity(struct kmem_cache *s,
 			 struct kmem_cache_args *args)
@@ -8595,8 +8370,6 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
 	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
 	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
 
-	set_cpu_partial(s);
-
 	s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves);
 	if (!s->cpu_sheaves) {
 		err = -ENOMEM;
@@ -8960,20 +8733,6 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 			total += x;
 			nodes[node] += x;
 
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-			slab = slub_percpu_partial_read_once(c);
-			if (slab) {
-				node = slab_nid(slab);
-				if (flags & SO_TOTAL)
-					WARN_ON_ONCE(1);
-				else if (flags & SO_OBJECTS)
-					WARN_ON_ONCE(1);
-				else
-					x = data_race(slab->slabs);
-				total += x;
-				nodes[node] += x;
-			}
-#endif
 		}
 	}
 
@@ -9108,12 +8867,7 @@ SLAB_ATTR(min_partial);
 
 static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 {
-	unsigned int nr_partial = 0;
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	nr_partial = s->cpu_partial;
-#endif
-
-	return sysfs_emit(buf, "%u\n", nr_partial);
+	return sysfs_emit(buf, "0\n");
 }
 
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
@@ -9125,11 +8879,9 @@ static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
-	if (objects && !kmem_cache_has_cpu_partial(s))
+	if (objects)
 		return -EINVAL;
 
-	slub_set_cpu_partial(s, objects);
-	flush_all(s);
 	return length;
 }
 SLAB_ATTR(cpu_partial);
@@ -9168,42 +8920,7 @@ SLAB_ATTR_RO(objects_partial);
 
 static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 {
-	int objects = 0;
-	int slabs = 0;
-	int cpu __maybe_unused;
-	int len = 0;
-
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	for_each_online_cpu(cpu) {
-		struct slab *slab;
-
-		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
-
-		if (slab)
-			slabs += data_race(slab->slabs);
-	}
-#endif
-
-	/* Approximate half-full slabs, see slub_set_cpu_partial() */
-	objects = (slabs * oo_objects(s->oo)) / 2;
-	len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
-
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	for_each_online_cpu(cpu) {
-		struct slab *slab;
-
-		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
-		if (slab) {
-			slabs = data_race(slab->slabs);
-			objects = (slabs * oo_objects(s->oo)) / 2;
-			len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
-					     cpu, objects, slabs);
-		}
-	}
-#endif
-	len += sysfs_emit_at(buf, len, "\n");
-
-	return len;
+	return sysfs_emit(buf, "0(0)\n");
 }
 SLAB_ATTR_RO(slabs_cpu_partial);
 
-- 
2.52.0