From: Suren Baghdasaryan <surenb@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Harry Yoo <harry.yoo@oracle.com>,
Petr Tesarik <ptesarik@suse.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Hao Li <hao.li@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com
Subject: Re: [PATCH v3 21/21] mm/slub: cleanup and repurpose some stat items
Date: Wed, 21 Jan 2026 18:35:51 -0800 [thread overview]
Message-ID: <CAJuCfpHg9YfkVwtfCUvLH_0HNWzUgx1ekQ-QMyYBW_Qeqt=WjA@mail.gmail.com> (raw)
In-Reply-To: <20260116-sheaves-for-all-v3-21-5595cb000772@suse.cz>
On Fri, Jan 16, 2026 at 6:41 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> A number of stat items related to cpu slabs became unused, remove them.
>
> Two of those were ALLOC_FASTPATH and FREE_FASTPATH. But instead of
> removing those, use them instead of ALLOC_PCS and FREE_PCS, since
> sheaves are the new (and only) fastpaths, Remove the recently added
> _PCS variants instead.
>
> Change where FREE_SLOWPATH is counted so that it only counts freeing of
> objects by slab users that (for whatever reason) do not go to a percpu
> sheaf, and not all (including internal) callers of __slab_free(). Thus
> flushing sheaves (counted by SHEAF_FLUSH) no longer also increments
> FREE_SLOWPATH.
nit: I think I understand what you mean, but "no longer also
increments" sounds wrong. Maybe rephrase as "Thus sheaf flushing
(already counted by SHEAF_FLUSH) does not affect FREE_SLOWPATH
anymore."?
> This matches how ALLOC_SLOWPATH doesn't count sheaf
> refills (counted by SHEAF_REFILL).
>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 77 +++++++++++++++++----------------------------------------------
> 1 file changed, 21 insertions(+), 56 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c12e90cb2fca..d73ad44fa046 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -330,33 +330,19 @@ enum add_mode {
> };
>
> enum stat_item {
> - ALLOC_PCS, /* Allocation from percpu sheaf */
> - ALLOC_FASTPATH, /* Allocation from cpu slab */
> - ALLOC_SLOWPATH, /* Allocation by getting a new cpu slab */
> - FREE_PCS, /* Free to percpu sheaf */
> + ALLOC_FASTPATH, /* Allocation from percpu sheaves */
> + ALLOC_SLOWPATH, /* Allocation from partial or new slab */
> FREE_RCU_SHEAF, /* Free to rcu_free sheaf */
> FREE_RCU_SHEAF_FAIL, /* Failed to free to a rcu_free sheaf */
> - FREE_FASTPATH, /* Free to cpu slab */
> - FREE_SLOWPATH, /* Freeing not to cpu slab */
> + FREE_FASTPATH, /* Free to percpu sheaves */
> + FREE_SLOWPATH, /* Free to a slab */
> FREE_ADD_PARTIAL, /* Freeing moves slab to partial list */
> FREE_REMOVE_PARTIAL, /* Freeing removes last object */
> - ALLOC_FROM_PARTIAL, /* Cpu slab acquired from node partial list */
> - ALLOC_SLAB, /* Cpu slab acquired from page allocator */
> - ALLOC_REFILL, /* Refill cpu slab from slab freelist */
> - ALLOC_NODE_MISMATCH, /* Switching cpu slab */
> + ALLOC_SLAB, /* New slab acquired from page allocator */
> + ALLOC_NODE_MISMATCH, /* Requested node different from cpu sheaf */
> FREE_SLAB, /* Slab freed to the page allocator */
> - CPUSLAB_FLUSH, /* Abandoning of the cpu slab */
> - DEACTIVATE_FULL, /* Cpu slab was full when deactivated */
> - DEACTIVATE_EMPTY, /* Cpu slab was empty when deactivated */
> - DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
> - DEACTIVATE_BYPASS, /* Implicit deactivation */
> ORDER_FALLBACK, /* Number of times fallback was necessary */
> - CMPXCHG_DOUBLE_CPU_FAIL,/* Failures of this_cpu_cmpxchg_double */
> CMPXCHG_DOUBLE_FAIL, /* Failures of slab freelist update */
> - CPU_PARTIAL_ALLOC, /* Used cpu partial on alloc */
> - CPU_PARTIAL_FREE, /* Refill cpu partial on free */
> - CPU_PARTIAL_NODE, /* Refill cpu partial from node partial */
> - CPU_PARTIAL_DRAIN, /* Drain cpu partial to node partial */
> SHEAF_FLUSH, /* Objects flushed from a sheaf */
> SHEAF_REFILL, /* Objects refilled to a sheaf */
> SHEAF_ALLOC, /* Allocation of an empty sheaf */
> @@ -4347,8 +4333,10 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
> * We assume the percpu sheaves contain only local objects although it's
> * not completely guaranteed, so we verify later.
> */
> - if (unlikely(node_requested && node != numa_mem_id()))
> + if (unlikely(node_requested && node != numa_mem_id())) {
> + stat(s, ALLOC_NODE_MISMATCH);
> return NULL;
> + }
>
> if (!local_trylock(&s->cpu_sheaves->lock))
> return NULL;
> @@ -4371,6 +4359,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
> */
> if (page_to_nid(virt_to_page(object)) != node) {
> local_unlock(&s->cpu_sheaves->lock);
> + stat(s, ALLOC_NODE_MISMATCH);
> return NULL;
> }
> }
> @@ -4379,7 +4368,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp, int node)
>
> local_unlock(&s->cpu_sheaves->lock);
>
> - stat(s, ALLOC_PCS);
> + stat(s, ALLOC_FASTPATH);
>
> return object;
> }
> @@ -4451,7 +4440,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, gfp_t gfp, size_t size,
>
> local_unlock(&s->cpu_sheaves->lock);
>
> - stat_add(s, ALLOC_PCS, batch);
> + stat_add(s, ALLOC_FASTPATH, batch);
>
> allocated += batch;
>
> @@ -5111,8 +5100,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> unsigned long flags;
> bool on_node_partial;
>
> - stat(s, FREE_SLOWPATH);
After moving the above accounting to the callers, I think there are
several callers that won't account it anymore:
- free_deferred_objects
- memcg_alloc_abort_single
- slab_free_after_rcu_debug
- ___cache_free
Am I missing something, or is that intentional?
> -
> if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
> free_to_partial_list(s, slab, head, tail, cnt, addr);
> return;
> @@ -5416,7 +5403,7 @@ bool free_to_pcs(struct kmem_cache *s, void *object, bool allow_spin)
>
> local_unlock(&s->cpu_sheaves->lock);
>
> - stat(s, FREE_PCS);
> + stat(s, FREE_FASTPATH);
>
> return true;
> }
> @@ -5664,7 +5651,7 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>
> local_unlock(&s->cpu_sheaves->lock);
>
> - stat_add(s, FREE_PCS, batch);
> + stat_add(s, FREE_FASTPATH, batch);
>
> if (batch < size) {
> p += batch;
> @@ -5686,10 +5673,12 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
> */
> fallback:
> __kmem_cache_free_bulk(s, size, p);
> + stat_add(s, FREE_SLOWPATH, size);
>
> flush_remote:
> if (remote_nr) {
> __kmem_cache_free_bulk(s, remote_nr, &remote_objects[0]);
> + stat_add(s, FREE_SLOWPATH, remote_nr);
> if (i < size) {
> remote_nr = 0;
> goto next_remote_batch;
> @@ -5784,6 +5773,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
> }
>
> __slab_free(s, slab, object, object, 1, addr);
> + stat(s, FREE_SLOWPATH);
> }
>
> #ifdef CONFIG_MEMCG
> @@ -5806,8 +5796,10 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
> * With KASAN enabled slab_free_freelist_hook modifies the freelist
> * to remove objects, whose reuse must be delayed.
> */
> - if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt)))
> + if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt))) {
> __slab_free(s, slab, head, tail, cnt, addr);
> + stat_add(s, FREE_SLOWPATH, cnt);
> + }
> }
>
> #ifdef CONFIG_SLUB_RCU_DEBUG
> @@ -6705,6 +6697,7 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> i = refill_objects(s, p, flags, size, size);
> if (i < size)
> goto error;
> + stat_add(s, ALLOC_SLOWPATH, i);
> }
>
> return i;
> @@ -8704,33 +8697,19 @@ static ssize_t text##_store(struct kmem_cache *s, \
> } \
> SLAB_ATTR(text); \
>
> -STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf);
> STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
> STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
> -STAT_ATTR(FREE_PCS, free_cpu_sheaf);
> STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf);
> STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail);
> STAT_ATTR(FREE_FASTPATH, free_fastpath);
> STAT_ATTR(FREE_SLOWPATH, free_slowpath);
> STAT_ATTR(FREE_ADD_PARTIAL, free_add_partial);
> STAT_ATTR(FREE_REMOVE_PARTIAL, free_remove_partial);
> -STAT_ATTR(ALLOC_FROM_PARTIAL, alloc_from_partial);
> STAT_ATTR(ALLOC_SLAB, alloc_slab);
> -STAT_ATTR(ALLOC_REFILL, alloc_refill);
> STAT_ATTR(ALLOC_NODE_MISMATCH, alloc_node_mismatch);
> STAT_ATTR(FREE_SLAB, free_slab);
> -STAT_ATTR(CPUSLAB_FLUSH, cpuslab_flush);
> -STAT_ATTR(DEACTIVATE_FULL, deactivate_full);
> -STAT_ATTR(DEACTIVATE_EMPTY, deactivate_empty);
> -STAT_ATTR(DEACTIVATE_REMOTE_FREES, deactivate_remote_frees);
> -STAT_ATTR(DEACTIVATE_BYPASS, deactivate_bypass);
> STAT_ATTR(ORDER_FALLBACK, order_fallback);
> -STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
> STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
> -STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
> -STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
> -STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
> -STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
> STAT_ATTR(SHEAF_FLUSH, sheaf_flush);
> STAT_ATTR(SHEAF_REFILL, sheaf_refill);
> STAT_ATTR(SHEAF_ALLOC, sheaf_alloc);
> @@ -8806,33 +8785,19 @@ static struct attribute *slab_attrs[] = {
> &remote_node_defrag_ratio_attr.attr,
> #endif
> #ifdef CONFIG_SLUB_STATS
> - &alloc_cpu_sheaf_attr.attr,
> &alloc_fastpath_attr.attr,
> &alloc_slowpath_attr.attr,
> - &free_cpu_sheaf_attr.attr,
> &free_rcu_sheaf_attr.attr,
> &free_rcu_sheaf_fail_attr.attr,
> &free_fastpath_attr.attr,
> &free_slowpath_attr.attr,
> &free_add_partial_attr.attr,
> &free_remove_partial_attr.attr,
> - &alloc_from_partial_attr.attr,
> &alloc_slab_attr.attr,
> - &alloc_refill_attr.attr,
> &alloc_node_mismatch_attr.attr,
> &free_slab_attr.attr,
> - &cpuslab_flush_attr.attr,
> - &deactivate_full_attr.attr,
> - &deactivate_empty_attr.attr,
> - &deactivate_remote_frees_attr.attr,
> - &deactivate_bypass_attr.attr,
> &order_fallback_attr.attr,
> &cmpxchg_double_fail_attr.attr,
> - &cmpxchg_double_cpu_fail_attr.attr,
> - &cpu_partial_alloc_attr.attr,
> - &cpu_partial_free_attr.attr,
> - &cpu_partial_node_attr.attr,
> - &cpu_partial_drain_attr.attr,
> &sheaf_flush_attr.attr,
> &sheaf_refill_attr.attr,
> &sheaf_alloc_attr.attr,
>
> --
> 2.52.0
>