From: Hao Li <hao.li@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Harry Yoo <harry.yoo@oracle.com>,
Petr Tesarik <ptesarik@suse.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Suren Baghdasaryan <surenb@google.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Alexei Starovoitov <ast@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev, bpf@vger.kernel.org,
kasan-dev@googlegroups.com,
kernel test robot <oliver.sang@intel.com>,
stable@vger.kernel.org, "Paul E. McKenney" <paulmck@kernel.org>
Subject: Re: [PATCH v4 00/22] slab: replace cpu (partial) slabs with sheaves
Date: Fri, 30 Jan 2026 14:17:32 +0800 [thread overview]
Message-ID: <oj5ossmsvybogs5fr2fjdmms66usoh7pdpkuxwlkagxniscrrb@vghtzkxauvix> (raw)
In-Reply-To: <pdmjsvpkl5nsntiwfwguplajq27ak3xpboq3ab77zrbu763pq7@la3hyiqigpir>
On Fri, Jan 30, 2026 at 12:50:25PM +0800, Hao Li wrote:
> On Thu, Jan 29, 2026 at 04:28:01PM +0100, Vlastimil Babka wrote:
> >
> > So previously those would become kind of double
> > cached by both sheaves and cpu (partial) slabs (and thus hopefully benefited
> > more than they should) since sheaves introduction in 6.18, and now they are
> > not double cached anymore?
> >
>
> I've conducted new tests, and here are the details of three scenarios:
>
> 1. Checked out commit 9d4e6ab865c4, which represents the state before the
> introduction of the sheaves mechanism.
> 2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
> "sheaves for all" patchset.
> 3. Applied the "sheaves for all" patchset and also included the "avoid
> list_lock contention" patch.
Here are my testing environment details and the raw test data.
Command:
cd will-it-scale/
python3 ./runtest.py mmap2 25 process 0 0 64 128 192
Env:
CPU(s): 192
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 2
NUMA node(s): 4
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
NUMA node2 CPU(s): 96-143
NUMA node3 CPU(s): 144-191
Memory: 1.5T
Raw data:
1. Checked out commit 9d4e6ab865c4, which represents the state before the
introduction of the sheaves mechanism.
{
"time.elapsed_time": 93.88,
"time.elapsed_time.max": 93.88,
"time.file_system_inputs": 2640,
"time.file_system_outputs": 128,
"time.involuntary_context_switches": 417738,
"time.major_page_faults": 54,
"time.maximum_resident_set_size": 90012,
"time.minor_page_faults": 80569,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5707,
"time.system_time": 5272.97,
"time.user_time": 85.59,
"time.voluntary_context_switches": 2436,
"will-it-scale.128.processes": 28445014,
"will-it-scale.128.processes_idle": 33.89,
"will-it-scale.192.processes": 39899678,
"will-it-scale.192.processes_idle": 1.29,
"will-it-scale.64.processes": 15645502,
"will-it-scale.64.processes_idle": 66.75,
"will-it-scale.per_process_ops": 224832,
"will-it-scale.time.elapsed_time": 93.88,
"will-it-scale.time.elapsed_time.max": 93.88,
"will-it-scale.time.file_system_inputs": 2640,
"will-it-scale.time.file_system_outputs": 128,
"will-it-scale.time.involuntary_context_switches": 417738,
"will-it-scale.time.major_page_faults": 54,
"will-it-scale.time.maximum_resident_set_size": 90012,
"will-it-scale.time.minor_page_faults": 80569,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5707,
"will-it-scale.time.system_time": 5272.97,
"will-it-scale.time.user_time": 85.59,
"will-it-scale.time.voluntary_context_switches": 2436,
"will-it-scale.workload": 83990194
}
2. Tested with 6.19-rc5, which includes sheaves but does not yet apply the
"sheaves for all" patchset.
{
"time.elapsed_time": 93.86000000000001,
"time.elapsed_time.max": 93.86000000000001,
"time.file_system_inputs": 1952,
"time.file_system_outputs": 160,
"time.involuntary_context_switches": 766225,
"time.major_page_faults": 50.666666666666664,
"time.maximum_resident_set_size": 90012,
"time.minor_page_faults": 80635,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5738,
"time.system_time": 5251.88,
"time.user_time": 134.57666666666665,
"time.voluntary_context_switches": 2539,
"will-it-scale.128.processes": 38223543.333333336,
"will-it-scale.128.processes_idle": 33.833333333333336,
"will-it-scale.192.processes": 54039039,
"will-it-scale.192.processes_idle": 1.26,
"will-it-scale.64.processes": 20579207.666666668,
"will-it-scale.64.processes_idle": 66.74333333333334,
"will-it-scale.per_process_ops": 300541,
"will-it-scale.time.elapsed_time": 93.86000000000001,
"will-it-scale.time.elapsed_time.max": 93.86000000000001,
"will-it-scale.time.file_system_inputs": 1952,
"will-it-scale.time.file_system_outputs": 160,
"will-it-scale.time.involuntary_context_switches": 766225,
"will-it-scale.time.major_page_faults": 50.666666666666664,
"will-it-scale.time.maximum_resident_set_size": 90012,
"will-it-scale.time.minor_page_faults": 80635,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5738,
"will-it-scale.time.system_time": 5251.88,
"will-it-scale.time.user_time": 134.57666666666665,
"will-it-scale.time.voluntary_context_switches": 2539,
"will-it-scale.workload": 112841790
}
3. Applied the "sheaves for all" patchset and also included the "avoid
list_lock contention" patch.
{
"time.elapsed_time": 93.86666666666667,
"time.elapsed_time.max": 93.86666666666667,
"time.file_system_inputs": 1800,
"time.file_system_outputs": 149.33333333333334,
"time.involuntary_context_switches": 421120,
"time.major_page_faults": 37,
"time.maximum_resident_set_size": 90016,
"time.minor_page_faults": 80645,
"time.page_size": 4096,
"time.percent_of_cpu_this_job_got": 5714.666666666667,
"time.system_time": 5256.176666666667,
"time.user_time": 108.88333333333333,
"time.voluntary_context_switches": 2513,
"will-it-scale.128.processes": 28067051.333333332,
"will-it-scale.128.processes_idle": 33.82,
"will-it-scale.192.processes": 38232965.666666664,
"will-it-scale.192.processes_idle": 1.2733333333333334,
"will-it-scale.64.processes": 15464041.333333334,
"will-it-scale.64.processes_idle": 66.76333333333334,
"will-it-scale.per_process_ops": 220009.33333333334,
"will-it-scale.time.elapsed_time": 93.86666666666667,
"will-it-scale.time.elapsed_time.max": 93.86666666666667,
"will-it-scale.time.file_system_inputs": 1800,
"will-it-scale.time.file_system_outputs": 149.33333333333334,
"will-it-scale.time.involuntary_context_switches": 421120,
"will-it-scale.time.major_page_faults": 37,
"will-it-scale.time.maximum_resident_set_size": 90016,
"will-it-scale.time.minor_page_faults": 80645,
"will-it-scale.time.page_size": 4096,
"will-it-scale.time.percent_of_cpu_this_job_got": 5714.666666666667,
"will-it-scale.time.system_time": 5256.176666666667,
"will-it-scale.time.user_time": 108.88333333333333,
"will-it-scale.time.voluntary_context_switches": 2513,
"will-it-scale.workload": 81764058.33333333
}
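For reference, the relative deltas quoted in the results below can be
reproduced from the raw throughput numbers above. A minimal Python sketch
(the values are copied from the three JSON blocks, so the last digit of a
rounded percentage may differ slightly from the figures quoted below):

```python
# Compute percentage change of each scenario relative to the pre-sheaves
# baseline, using the will-it-scale throughput numbers from the raw data.

baseline = {  # scenario 1: before sheaves (commit 9d4e6ab865c4)
    "64.processes": 15645502,
    "128.processes": 28445014,
    "192.processes": 39899678,
    "per_process_ops": 224832,
}
sheaves = {  # scenario 2: 6.19-rc5, sheaves without "sheaves for all"
    "64.processes": 20579207.666666668,
    "128.processes": 38223543.333333336,
    "192.processes": 54039039,
    "per_process_ops": 300541,
}
sheaves_for_all = {  # scenario 3: "sheaves for all" + list_lock patch
    "64.processes": 15464041.333333334,
    "128.processes": 28067051.333333332,
    "192.processes": 38232965.666666664,
    "per_process_ops": 220009.33333333334,
}

def delta(new, old):
    """Percentage change of new relative to old."""
    return (new / old - 1) * 100

for key in baseline:
    print(f"{key}: scenario2 {delta(sheaves[key], baseline[key]):+.1f}%, "
          f"scenario3 {delta(sheaves_for_all[key], baseline[key]):+.1f}%")
```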
>
>
> Results:
>
> For scenario 2 (with sheaves but without "sheaves for all"), there is a
> noticeable performance improvement compared to scenario 1:
>
> will-it-scale.128.processes +34.3%
> will-it-scale.192.processes +35.4%
> will-it-scale.64.processes +31.5%
> will-it-scale.per_process_ops +33.7%
>
> For scenario 3 (after applying "sheaves for all"), performance slightly
> regressed compared to scenario 1:
>
> will-it-scale.128.processes -1.3%
> will-it-scale.192.processes -4.2%
> will-it-scale.64.processes -1.2%
> will-it-scale.per_process_ops -2.1%
>
> Analysis:
>
> With the default sheaf size of 32 for maple nodes, the performance of fully
> adopting the sheaves mechanism roughly matches that of the previous approach,
> which relied solely on the percpu slab partial list.
>
> The performance regression observed with the "sheaves for all" patchset can
> be explained as follows: moving from scenario 1 to scenario 2 adds an
> additional cache layer, which temporarily boosts performance; moving from
> scenario 2 to scenario 3 removes that layer again, so performance reverts
> to roughly its original level.
>
> So I think the performance of the percpu partial list and the sheaves mechanism
> is roughly the same, which is consistent with our expectations.
>
> --
> Thanks,
> Hao