From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 12 Jan 2026 16:17:14 +0100
Subject: [PATCH RFC v2 20/20] mm/slub: cleanup and repurpose some stat items
Message-Id: <20260112-sheaves-for-all-v2-20-98225cfb50cf@suse.cz>
In-Reply-To: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
References: <20260112-sheaves-for-all-v2-0-98225cfb50cf@suse.cz>
To: Harry Yoo, Petr Tesarik, Christoph Lameter, David Rientjes, Roman Gushchin
Cc: Hao Li, Andrew Morton, Uladzislau Rezki, "Liam R. Howlett", Suren Baghdasaryan, Sebastian Andrzej Siewior, Alexei Starovoitov, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: b4 0.14.3
A number of stat items related to cpu slabs became unused; remove them.

Two of those were ALLOC_FASTPATH and FREE_FASTPATH. Instead of removing
those, use them in place of ALLOC_PCS and FREE_PCS, since sheaves are the
new (and only) fastpaths. Remove the recently added _PCS variants instead.

Change where FREE_SLOWPATH is counted so that it only counts freeing of
objects by slab users that (for whatever reason) do not go to a percpu
sheaf, and not all (including internal) callers of __slab_free(). Thus
flushing sheaves (counted by SHEAF_FLUSH) no longer also increments
FREE_SLOWPATH. This matches how ALLOC_SLOWPATH doesn't count sheaf
refills (counted by SHEAF_REFILL).

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 77 +++++++++++++++++----------------------------------------------
 1 file changed, 21 insertions(+), 56 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index a473fa29a905..70314c72773e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -330,33 +330,19 @@ enum add_mode {
 };
 
 enum stat_item {
-	ALLOC_PCS,		/* Allocation from percpu sheaf */
-	ALLOC_FASTPATH,		/* Allocation from cpu slab */
-	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
-	FREE_PCS,		/* Free to percpu sheaf */
+	ALLOC_FASTPATH,		/* Allocation from percpu sheaves */
+	ALLOC_SLOWPATH,		/* Allocation from partial or new slab */
 	FREE_RCU_SHEAF,		/* Free to rcu_free sheaf */
 	FREE_RCU_SHEAF_FAIL,	/* Failed to free to a rcu_free sheaf */
-	FREE_FASTPATH,		/* Free to cpu slab */
-	FREE_SLOWPATH,		/* Freeing not to cpu slab */
+	FREE_FASTPATH,		/* Free to percpu sheaves */
+	FREE_SLOWPATH,		/* Free to a slab */
 	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
 	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
-	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
-	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
-	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
+	ALLOC_SLAB,		/* New slab acquired from page allocator */
+	ALLOC_NODE_MISMATCH,	/* Requested node different from cpu sheaf */
 	FREE_SLAB,		/* Slab freed to the page allocator */
-	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
-	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
-	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
-	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
-	DEACTIVATE_BYPASS,	/* Implicit deactivation */
 	ORDER_FALLBACK,		/* Number of times fallback was necessary */
-	CMPXCHG_DOUBLE_CPU_FAIL,/* Failures of this_cpu_cmpxchg_double */
 	CMPXCHG_DOUBLE_FAIL,	/* Failures of slab freelist update */
-	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
-	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
-	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	SHEAF_FLUSH,		/* Objects flushed from a sheaf */
 	SHEAF_REFILL,		/* Objects refilled to a sheaf */
 	SHEAF_ALLOC,		/* Allocation of an empty sheaf */
@@ -4330,8 +4316,10 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
 	 * We assume the percpu sheaves contain only local objects although it's
 	 * not completely guaranteed, so we verify later.
 	 */
-	if (unlikely(node_requested && node != numa_mem_id()))
+	if (unlikely(node_requested && node != numa_mem_id())) {
+		stat(s, ALLOC_NODE_MISMATCH);
 		return NULL;
+	}
 
 	if (!local_trylock(&s->cpu_sheaves->lock))
 		return NULL;
@@ -4354,6 +4342,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
 		 */
 		if (page_to_nid(virt_to_page(object)) != node) {
 			local_unlock(&s->cpu_sheaves->lock);
+			stat(s, ALLOC_NODE_MISMATCH);
 			return NULL;
 		}
 	}
@@ -4362,7 +4351,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
 
 	local_unlock(&s->cpu_sheaves->lock);
 
-	stat(s, ALLOC_PCS);
+	stat(s, ALLOC_FASTPATH);
 
 	return object;
 }
@@ -4434,7 +4423,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, gfp_t gfp, size_t size,
 
 	local_unlock(&s->cpu_sheaves->lock);
 
-	stat_add(s, ALLOC_PCS, batch);
+	stat_add(s, ALLOC_FASTPATH, batch);
 
 	allocated += batch;
 
@@ -5101,8 +5090,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long flags;
 	bool on_node_partial;
 
-	stat(s, FREE_SLOWPATH);
-
 	if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
 		free_to_partial_list(s, slab, head, tail, cnt, addr);
 		return;
@@ -5408,7 +5395,7 @@ bool free_to_pcs(struct kmem_cache *s, void *object, bool allow_spin)
 
 	local_unlock(&s->cpu_sheaves->lock);
 
-	stat(s, FREE_PCS);
+	stat(s, FREE_FASTPATH);
 
 	return true;
 }
@@ -5659,7 +5646,7 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 
 	local_unlock(&s->cpu_sheaves->lock);
 
-	stat_add(s, FREE_PCS, batch);
+	stat_add(s, FREE_FASTPATH, batch);
 
 	if (batch < size) {
 		p += batch;
@@ -5681,10 +5668,12 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 	 */
 fallback:
 	__kmem_cache_free_bulk(s, size, p);
+	stat_add(s, FREE_SLOWPATH, size);
 
 flush_remote:
 	if (remote_nr) {
 		__kmem_cache_free_bulk(s, remote_nr, &remote_objects[0]);
+		stat_add(s, FREE_SLOWPATH, remote_nr);
 		if (i < size) {
 			remote_nr = 0;
 			goto next_remote_batch;
@@ -5777,6 +5766,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	}
 
 	__slab_free(s, slab, object, object, 1, addr);
+	stat(s, FREE_SLOWPATH);
 }
 
 #ifdef CONFIG_MEMCG
@@ -5799,8 +5789,10 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
 	 */
-	if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt)))
+	if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt))) {
 		__slab_free(s, slab, head, tail, cnt, addr);
+		stat_add(s, FREE_SLOWPATH, cnt);
+	}
 }
 
 #ifdef CONFIG_SLUB_RCU_DEBUG
@@ -6699,6 +6691,7 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 		i = refill_objects(s, p, flags, size, size);
 		if (i < size)
 			goto error;
+		stat_add(s, ALLOC_SLOWPATH, i);
 	}
 
 	return i;
@@ -8698,33 +8691,19 @@ static ssize_t text##_store(struct kmem_cache *s,		\
 }								\
 SLAB_ATTR(text);						\
 
-STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf);
 STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
 STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
-STAT_ATTR(FREE_PCS, free_cpu_sheaf);
 STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf);
 STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail);
 STAT_ATTR(FREE_FASTPATH, free_fastpath);
 STAT_ATTR(FREE_SLOWPATH, free_slowpath);
 STAT_ATTR(FREE_ADD_PARTIAL, free_add_partial);
 STAT_ATTR(FREE_REMOVE_PARTIAL, free_remove_partial);
-STAT_ATTR(ALLOC_FROM_PARTIAL, alloc_from_partial);
 STAT_ATTR(ALLOC_SLAB, alloc_slab);
-STAT_ATTR(ALLOC_REFILL, alloc_refill);
 STAT_ATTR(ALLOC_NODE_MISMATCH, alloc_node_mismatch);
 STAT_ATTR(FREE_SLAB, free_slab);
-STAT_ATTR(CPUSLAB_FLUSH, cpuslab_flush);
-STAT_ATTR(DEACTIVATE_FULL, deactivate_full);
-STAT_ATTR(DEACTIVATE_EMPTY, deactivate_empty);
-STAT_ATTR(DEACTIVATE_REMOTE_FREES, deactivate_remote_frees);
-STAT_ATTR(DEACTIVATE_BYPASS, deactivate_bypass);
 STAT_ATTR(ORDER_FALLBACK, order_fallback);
-STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
-STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
-STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
-STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
-STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
 STAT_ATTR(SHEAF_FLUSH, sheaf_flush);
 STAT_ATTR(SHEAF_REFILL, sheaf_refill);
 STAT_ATTR(SHEAF_ALLOC, sheaf_alloc);
@@ -8800,33 +8779,19 @@ static struct attribute *slab_attrs[] = {
 	&remote_node_defrag_ratio_attr.attr,
 #endif
 #ifdef CONFIG_SLUB_STATS
-	&alloc_cpu_sheaf_attr.attr,
 	&alloc_fastpath_attr.attr,
 	&alloc_slowpath_attr.attr,
-	&free_cpu_sheaf_attr.attr,
 	&free_rcu_sheaf_attr.attr,
 	&free_rcu_sheaf_fail_attr.attr,
 	&free_fastpath_attr.attr,
 	&free_slowpath_attr.attr,
 	&free_add_partial_attr.attr,
 	&free_remove_partial_attr.attr,
-	&alloc_from_partial_attr.attr,
 	&alloc_slab_attr.attr,
-	&alloc_refill_attr.attr,
 	&alloc_node_mismatch_attr.attr,
 	&free_slab_attr.attr,
-	&cpuslab_flush_attr.attr,
-	&deactivate_full_attr.attr,
-	&deactivate_empty_attr.attr,
-	&deactivate_remote_frees_attr.attr,
-	&deactivate_bypass_attr.attr,
 	&order_fallback_attr.attr,
 	&cmpxchg_double_fail_attr.attr,
-	&cmpxchg_double_cpu_fail_attr.attr,
-	&cpu_partial_alloc_attr.attr,
-	&cpu_partial_free_attr.attr,
-	&cpu_partial_node_attr.attr,
-	&cpu_partial_drain_attr.attr,
 	&sheaf_flush_attr.attr,
 	&sheaf_refill_attr.attr,
 	&sheaf_alloc_attr.attr,

-- 
2.52.0
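[Editor's note, not part of the patch] The STAT_ATTR entries kept by this patch remain readable as files under /sys/kernel/slab/<cache>/ when the kernel is built with CONFIG_SLUB_STATS=y. SLUB's stat files print the total first, followed by nonzero per-CPU counts in the form C<cpu>=<count>. A minimal sketch of parsing one such line, using a hard-coded sample value in place of an actual `cat /sys/kernel/slab/<cache>/alloc_fastpath`:

```shell
# Sample line in the shape SLUB's show_stat() emits:
# "<total> C<cpu>=<count> ..." (per-CPU entries only when nonzero).
line="1024 C0=300 C1=724"   # stand-in for reading the sysfs file

# Split the total from the per-CPU breakdown using parameter expansion.
total=${line%% *}    # everything before the first space
percpu=${line#* }    # everything after the first space

echo "total=$total"
echo "percpu=$percpu"
```

This prints `total=1024` and `percpu=C0=300 C1=724`. After this patch, `alloc_fastpath`/`free_fastpath` report sheaf hits, while `alloc_slowpath`/`free_slowpath` report allocations and frees that bypassed the percpu sheaves.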