From: Vlastimil Babka <vbabka@suse.cz>
Date: Fri, 23 Jan 2026 07:52:59 +0100
Subject: [PATCH v4 21/22] mm/slub: remove DEACTIVATE_TO_* stat items
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260123-sheaves-for-all-v4-21-041323d506f7@suse.cz>
References: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
In-Reply-To: <20260123-sheaves-for-all-v4-0-041323d506f7@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes, Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R. Howlett", Suren Baghdasaryan,
 Sebastian Andrzej Siewior, Alexei Starovoitov, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.14.3

The cpu slabs and their deactivation were removed, so remove the now
unused DEACTIVATE_TO_HEAD and DEACTIVATE_TO_TAIL stat items.

Weirdly enough, the values were also used to control whether
__add_partial() adds a slab to the head or the tail of the partial
list, so replace that with a new enum add_mode, which is cleaner.
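For quick reference, a condensed sketch of the helper as it looks after
this patch is included below (the tail of __add_partial(), i.e. the
per-node partial bookkeeping, is elided here; the hunks after the
diffstat are authoritative):

enum add_mode {
	ADD_TO_HEAD,
	ADD_TO_TAIL,
};

static inline void
__add_partial(struct kmem_cache_node *n, struct slab *slab, enum add_mode mode)
{
	n->nr_partial++;
	if (mode == ADD_TO_TAIL)
		list_add_tail(&slab->slab_list, &n->partial);
	else
		list_add(&slab->slab_list, &n->partial);
	/* the rest of the function is unchanged by this patch */
}

Callers that used to pass DEACTIVATE_TO_HEAD or DEACTIVATE_TO_TAIL now
pass ADD_TO_HEAD or ADD_TO_TAIL, so the placement intent is expressed
directly instead of reusing stat item values.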
Reviewed-by: Suren Baghdasaryan
Reviewed-by: Hao Li
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3009eb7bd8d2..369fb9bbdb75 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -329,6 +329,11 @@ static void debugfs_slab_add(struct kmem_cache *);
 static inline void debugfs_slab_add(struct kmem_cache *s) { }
 #endif
 
+enum add_mode {
+	ADD_TO_HEAD,
+	ADD_TO_TAIL,
+};
+
 enum stat_item {
 	ALLOC_PCS,		/* Allocation from percpu sheaf */
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
@@ -348,8 +353,6 @@ enum stat_item {
 	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
 	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
 	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
-	DEACTIVATE_TO_HEAD,	/* Cpu slab was moved to the head of partials */
-	DEACTIVATE_TO_TAIL,	/* Cpu slab was moved to the tail of partials */
 	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
 	DEACTIVATE_BYPASS,	/* Implicit deactivation */
 	ORDER_FALLBACK,		/* Number of times fallback was necessary */
@@ -3270,10 +3273,10 @@ static inline void slab_clear_node_partial(struct slab *slab)
  * Management of partially allocated slabs.
  */
 static inline void
-__add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
+__add_partial(struct kmem_cache_node *n, struct slab *slab, enum add_mode mode)
 {
 	n->nr_partial++;
-	if (tail == DEACTIVATE_TO_TAIL)
+	if (mode == ADD_TO_TAIL)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
@@ -3281,10 +3284,10 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
-				struct slab *slab, int tail)
+				struct slab *slab, enum add_mode mode)
 {
 	lockdep_assert_held(&n->list_lock);
-	__add_partial(n, slab, tail);
+	__add_partial(n, slab, mode);
 }
 
 static inline void remove_partial(struct kmem_cache_node *n,
@@ -3377,7 +3380,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
 	if (slab->inuse == slab->objects)
 		add_full(s, n, slab);
 	else
-		add_partial(n, slab, DEACTIVATE_TO_HEAD);
+		add_partial(n, slab, ADD_TO_HEAD);
 
 	inc_slabs_node(s, nid, slab->objects);
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -3999,7 +4002,7 @@ static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
 		n = get_node(s, slab_nid(slab));
 		spin_lock_irqsave(&n->list_lock, flags);
 	}
-	add_partial(n, slab, DEACTIVATE_TO_HEAD);
+	add_partial(n, slab, ADD_TO_HEAD);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 }
 
@@ -5070,7 +5073,7 @@ static noinline void free_to_partial_list(
 			/* was on full list */
 			remove_full(s, n, slab);
 			if (!slab_free) {
-				add_partial(n, slab, DEACTIVATE_TO_TAIL);
+				add_partial(n, slab, ADD_TO_TAIL);
 				stat(s, FREE_ADD_PARTIAL);
 			}
 		} else if (slab_free) {
@@ -5190,7 +5193,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	 * then add it.
 	 */
 	if (unlikely(was_full)) {
-		add_partial(n, slab, DEACTIVATE_TO_TAIL);
+		add_partial(n, slab, ADD_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -6592,7 +6595,7 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
 			continue;
 
 		list_del(&slab->slab_list);
-		add_partial(n, slab, DEACTIVATE_TO_HEAD);
+		add_partial(n, slab, ADD_TO_HEAD);
 	}
 
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -7059,7 +7062,7 @@ static void early_kmem_cache_node_alloc(int node)
 	 * No locks need to be taken here as it has just been
 	 * initialized and there is no concurrent access.
 	 */
-	__add_partial(n, slab, DEACTIVATE_TO_HEAD);
+	__add_partial(n, slab, ADD_TO_HEAD);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)
@@ -8751,8 +8754,6 @@ STAT_ATTR(FREE_SLAB, free_slab);
 STAT_ATTR(CPUSLAB_FLUSH, cpuslab_flush);
 STAT_ATTR(DEACTIVATE_FULL, deactivate_full);
 STAT_ATTR(DEACTIVATE_EMPTY, deactivate_empty);
-STAT_ATTR(DEACTIVATE_TO_HEAD, deactivate_to_head);
-STAT_ATTR(DEACTIVATE_TO_TAIL, deactivate_to_tail);
 STAT_ATTR(DEACTIVATE_REMOTE_FREES, deactivate_remote_frees);
 STAT_ATTR(DEACTIVATE_BYPASS, deactivate_bypass);
 STAT_ATTR(ORDER_FALLBACK, order_fallback);
@@ -8855,8 +8856,6 @@ static struct attribute *slab_attrs[] = {
 	&cpuslab_flush_attr.attr,
 	&deactivate_full_attr.attr,
 	&deactivate_empty_attr.attr,
-	&deactivate_to_head_attr.attr,
-	&deactivate_to_tail_attr.attr,
 	&deactivate_remote_frees_attr.attr,
 	&deactivate_bypass_attr.attr,
 	&order_fallback_attr.attr,
-- 
2.52.0