From: "D, Suneeth" <Suneeth.D@amd.com>
To: Vlastimil Babka <vbabka@suse.cz>,
	Suren Baghdasaryan <surenb@google.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Christoph Lameter <cl@gentwo.org>,
	David Rientjes <rientjes@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
	Harry Yoo <harry.yoo@oracle.com>,
	Uladzislau Rezki <urezki@gmail.com>,
	Sidhartha Kumar <sidhartha.kumar@oracle.com>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<rcu@vger.kernel.org>, <maple-tree@lists.infradead.org>
Subject: Re: [PATCH v8 15/23] maple_tree: use percpu sheaves for maple_node_cache
Date: Fri, 17 Oct 2025 23:56:24 +0530	[thread overview]
Message-ID: <156a1377-167a-4455-8a9f-6ad98094a7f5@amd.com> (raw)
In-Reply-To: <ad9864db-a297-44d9-ab1a-61e0285eac5f@suse.cz>

Hi Vlastimil Babka,

On 10/16/2025 9:45 PM, Vlastimil Babka wrote:
> On 10/16/25 17:16, D, Suneeth wrote:
>> Hi Vlastimil Babka,
>>
>> On 9/10/2025 1:31 PM, Vlastimil Babka wrote:
>>> Set up the maple_node_cache with percpu sheaves of size 32 to hopefully
>>> improve its performance. Note this will not immediately take advantage
>>> of sheaf batching of kfree_rcu() operations due to the maple tree using
>>> call_rcu with custom callbacks. The followup changes to maple tree will
>>> change that and also make use of the prefilled sheaves functionality.
>>>
>>
>>
>> We run the will-it-scale-process-mmap2 micro-benchmark as part of our weekly
>> CI for kernel performance regression testing between a stable and an rc
>> kernel. In this week's run we observed a severe regression on AMD platforms
>> (Turin and Bergamo) when running the micro-benchmark between the kernels
>> v6.17 and v6.18-rc1, in the range of 12-13% (Turin) and 22-26% (Bergamo).
>> Bisecting further landed me on this commit
>> (59faa4da7cd4565cbce25358495556b75bb37022) as the first bad commit. The
>> following were the machines' configurations and test parameters used:
>>
>> Model name:           AMD EPYC 128-Core Processor [Bergamo]
>> Thread(s) per core:   2
>> Core(s) per socket:   128
>> Socket(s):            1
>> Total online memory:  258G
>>
>> Model name:           AMD EPYC 64-Core Processor [Turin]
>> Thread(s) per core:   2
>> Core(s) per socket:   64
>> Socket(s):            1
>> Total online memory:  258G
>>
>> Test params:
>>
>>       nr_task: [1 8 64 128 192 256]
>>       mode: process
>>       test: mmap2
>>       kpi: per_process_ops
>>       cpufreq_governor: performance
>>
>> The following are the stats after bisection:
>> (the KPI used here is per_process_ops)
>>
>> kernel_versions                                           per_process_ops
>> ---------------                                           ---------------
>> v6.17.0                                                    258291
>> v6.18.0-rc1                                                225839
>> v6.17.0-rc3-59faa4da7                                      212152
>> v6.17.0-rc3-3accabda4da1 (one commit before bad commit)    265054
> 
> Thanks for the info. Is there any difference if you increase the
> sheaf_capacity in the commit from 32 to a higher value? For example, 120 to
> match what the automatically calculated cpu partial slabs target would be.
> I think there's lock contention on the barn lock causing the regression.
> By matching the cpu partial slabs value we should have the same batching
> factor for the barn lock as there would be on the node list_lock before sheaves.
> 
> Thanks.
> 

I tried changing the sheaf_capacity from 32 to 120 and tested it. The
numbers improve by around 28% w.r.t. the baseline (v6.17) with the
will-it-scale-mmap2-process test case.

v6.17.0 (w/o sheaf)  %diff  v6.18-rc1 (sheaf=32)  %diff  v6.18-rc1 (sheaf=120)
-------------------  -----  --------------------  -----  ---------------------
260222                -13   225839                 +28   334079
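
For clarity, the only change tested here is the sheaf_capacity value in the
kmem_cache_args that maple_tree_init() passes to kmem_cache_create() in the
hunk quoted further below. A minimal sketch of the modified function, assuming
the v8 patch is applied (not a formal patch):

void __init maple_tree_init(void)
{
	/*
	 * Same as the quoted v8 hunk, except sheaf_capacity is raised from
	 * 32 to 120 so the sheaf batching factor roughly matches the
	 * automatically calculated cpu partial slabs target mentioned above.
	 */
	struct kmem_cache_args args = {
		.align		= sizeof(struct maple_node),
		.sheaf_capacity	= 120,
	};

	maple_node_cache = kmem_cache_create("maple_node",
			sizeof(struct maple_node), &args,
			SLAB_PANIC);
}

Nothing else in the series was changed for this run.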

Thanks.

>> Recreation steps:
>>
>> 1) git clone https://github.com/antonblanchard/will-it-scale.git
>> 2) git clone https://github.com/intel/lkp-tests.git
>> 3) cd will-it-scale && git apply
>> lkp-tests/programs/will-it-scale/pkg/will-it-scale.patch
>> 4) make
>> 5) python3 runtest.py mmap2 25 process 0 0 1 8 64 128 192 256
>>
>> NOTE: step [5] is specific to the machine's architecture. The arguments
>> starting from 1 are the array of task counts you wish to run the test case
>> with, which here are the number of cores per CCX, per NUMA node, per socket,
>> and nr_threads.
>>
>> I also ran the micro-benchmark with tools/testing/perf record, and the
>> following is the collected data:
>>
>> # perf diff perf.data.old perf.data
>> No kallsyms or vmlinux with build-id 0fc9c7b62ade1502af5d6a060914732523f367ef was found
>> Warning:
>> 43 out of order events recorded.
>> Warning:
>> 54 out of order events recorded.
>> # Event 'cycles:P'
>> #
>> # Baseline  Delta Abs  Shared Object           Symbol
>> # ........  .........  ......................  ..............................................
>> #
>>                 +51.51%  [kernel.kallsyms]       [k] native_queued_spin_lock_slowpath
>>                 +14.39%  [kernel.kallsyms]       [k] perf_iterate_ctx
>>                  +2.52%  [kernel.kallsyms]       [k] unmap_page_range
>>                  +1.75%  [kernel.kallsyms]       [k] mas_wr_node_store
>>                  +1.47%  [kernel.kallsyms]       [k] __pi_memset
>>                  +1.38%  [kernel.kallsyms]       [k] mt_free_rcu
>>                  +1.36%  [kernel.kallsyms]       [k] free_pgd_range
>>                  +1.10%  [kernel.kallsyms]       [k] __pi_memcpy
>>                  +0.96%  [kernel.kallsyms]       [k] __kmem_cache_alloc_bulk
>>                  +0.92%  [kernel.kallsyms]       [k] __mmap_region
>>                  +0.79%  [kernel.kallsyms]       [k] mas_empty_area_rev
>>                  +0.74%  [kernel.kallsyms]       [k] __cond_resched
>>                  +0.73%  [kernel.kallsyms]       [k] mas_walk
>>                  +0.59%  [kernel.kallsyms]       [k] mas_pop_node
>>                  +0.57%  [kernel.kallsyms]       [k] perf_event_mmap_output
>>                  +0.49%  [kernel.kallsyms]       [k] mas_find
>>                  +0.48%  [kernel.kallsyms]       [k] mas_next_slot
>>                  +0.46%  [kernel.kallsyms]       [k] kmem_cache_free
>>                  +0.42%  [kernel.kallsyms]       [k] mas_leaf_max_gap
>>                  +0.42%  [kernel.kallsyms]       [k] __call_rcu_common.constprop.0
>>                  +0.39%  [kernel.kallsyms]       [k] entry_SYSCALL_64
>>                  +0.38%  [kernel.kallsyms]       [k] mas_prev_slot
>>                  +0.38%  [kernel.kallsyms]       [k] kmem_cache_alloc_noprof
>>                  +0.37%  [kernel.kallsyms]       [k] mas_store_gfp
>>
>>
>>> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>>> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
>>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>>> ---
>>>    lib/maple_tree.c | 9 +++++++--
>>>    1 file changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
>>> index 4f0e30b57b0cef9e5cf791f3f64f5898752db402..d034f170ac897341b40cfd050b6aee86b6d2cf60 100644
>>> --- a/lib/maple_tree.c
>>> +++ b/lib/maple_tree.c
>>> @@ -6040,9 +6040,14 @@ bool mas_nomem(struct ma_state *mas, gfp_t gfp)
>>>    
>>>    void __init maple_tree_init(void)
>>>    {
>>> +	struct kmem_cache_args args = {
>>> +		.align  = sizeof(struct maple_node),
>>> +		.sheaf_capacity = 32,
>>> +	};
>>> +
>>>    	maple_node_cache = kmem_cache_create("maple_node",
>>> -			sizeof(struct maple_node), sizeof(struct maple_node),
>>> -			SLAB_PANIC, NULL);
>>> +			sizeof(struct maple_node), &args,
>>> +			SLAB_PANIC);
>>>    }
>>>    
>>>    /**
>>>
>>
>> ---
>> Thanks and Regards
>> Suneeth D
>>
> 



Thread overview: 95+ messages
2025-09-10  8:01 [PATCH v8 00/23] SLUB percpu sheaves Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 01/23] locking/local_lock: Expose dep_map in local_trylock_t Vlastimil Babka
2025-09-24 16:49   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 02/23] slab: simplify init_kmem_cache_nodes() error handling Vlastimil Babka
2025-09-24 16:52   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 03/23] slab: add opt-in caching layer of percpu sheaves Vlastimil Babka
2025-12-02  8:48   ` [PATCH] slub: add barn_get_full_sheaf() and refine empty-main sheaf Hao Li
2025-12-02  8:55     ` Hao Li
2025-12-02  9:00   ` slub: add barn_get_full_sheaf() and refine empty-main sheaf replacement Hao Li
2025-12-03  5:46     ` Harry Yoo
2025-12-03 11:15       ` Hao Li
2025-09-10  8:01 ` [PATCH v8 04/23] slab: add sheaf support for batching kfree_rcu() operations Vlastimil Babka
2025-09-12  0:38   ` Sergey Senozhatsky
2025-09-12  7:03     ` Vlastimil Babka
2025-09-17  8:30   ` Harry Yoo
2025-09-17  9:55     ` Vlastimil Babka
2025-09-17 11:32       ` Harry Yoo
2025-09-17 12:05         ` Vlastimil Babka
2025-09-17 13:07           ` Harry Yoo
2025-09-17 13:21             ` Vlastimil Babka
2025-09-17 13:34               ` Harry Yoo
2025-09-17 14:14                 ` Vlastimil Babka
2025-09-18  8:09                   ` Vlastimil Babka
2025-09-19  6:47                     ` Harry Yoo
2025-09-19  7:02                       ` Vlastimil Babka
2025-09-19  8:59                         ` Harry Yoo
2025-09-25  4:35                     ` Suren Baghdasaryan
2025-09-25  8:52                       ` Harry Yoo
2025-09-25 13:38                         ` Suren Baghdasaryan
2025-09-26 10:08                       ` Vlastimil Babka
2025-09-26 15:41                         ` Suren Baghdasaryan
2025-09-17 11:36       ` Paul E. McKenney
2025-09-17 12:13         ` Vlastimil Babka
2025-10-31 21:32   ` Daniel Gomez
2025-11-03  3:17     ` Harry Yoo
2025-11-05 11:25       ` Vlastimil Babka
2025-11-27 14:00         ` Daniel Gomez
2025-11-27 19:29           ` Suren Baghdasaryan
2025-11-28 11:37             ` [PATCH V1] mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction Harry Yoo
2025-11-28 12:22               ` Harry Yoo
2025-11-28 12:38               ` Daniel Gomez
2025-12-02  9:29               ` Jon Hunter
2025-12-02 10:18                 ` Harry Yoo
2025-11-27 11:38     ` [PATCH v8 04/23] slab: add sheaf support for batching kfree_rcu() operations Jon Hunter
2025-11-27 11:50       ` Jon Hunter
2025-11-27 12:33       ` Harry Yoo
2025-11-27 12:48         ` Harry Yoo
2025-11-28  8:57           ` Jon Hunter
2025-12-01  6:55             ` Harry Yoo
2025-11-27 13:18       ` Vlastimil Babka
2025-11-28  8:59         ` Jon Hunter
2025-09-10  8:01 ` [PATCH v8 05/23] slab: sheaf prefilling for guaranteed allocations Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 06/23] slab: determine barn status racily outside of lock Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 07/23] slab: skip percpu sheaves for remote object freeing Vlastimil Babka
2025-09-25 16:14   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 08/23] slab: allow NUMA restricted allocations to use percpu sheaves Vlastimil Babka
2025-09-25 16:27   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 09/23] maple_tree: remove redundant __GFP_NOWARN Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 10/23] tools/testing/vma: clean up stubs in vma_internal.h Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 11/23] maple_tree: Drop bulk insert support Vlastimil Babka
2025-09-25 16:38   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 12/23] tools/testing/vma: Implement vm_refcnt reset Vlastimil Babka
2025-09-25 16:38   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 13/23] tools/testing: Add support for changes to slab for sheaves Vlastimil Babka
2025-09-26 23:28   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 14/23] mm, vma: use percpu sheaves for vm_area_struct cache Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 15/23] maple_tree: use percpu sheaves for maple_node_cache Vlastimil Babka
2025-09-12  2:20   ` Liam R. Howlett
2025-10-16 15:16   ` D, Suneeth
2025-10-16 16:15     ` Vlastimil Babka
2025-10-17 18:26       ` D, Suneeth [this message]
2025-09-10  8:01 ` [PATCH v8 16/23] tools/testing: include maple-shim.c in maple.c Vlastimil Babka
2025-09-26 23:45   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 17/23] testing/radix-tree/maple: Hack around kfree_rcu not existing Vlastimil Babka
2025-09-26 23:53   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 18/23] maple_tree: Use kfree_rcu in ma_free_rcu Vlastimil Babka
2025-09-17 11:46   ` Harry Yoo
2025-09-27  0:05     ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 19/23] maple_tree: Replace mt_free_one() with kfree() Vlastimil Babka
2025-09-27  0:06   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 20/23] tools/testing: Add support for prefilled slab sheafs Vlastimil Babka
2025-09-27  0:28   ` Suren Baghdasaryan
2025-09-10  8:01 ` [PATCH v8 21/23] maple_tree: Prefilled sheaf conversion and testing Vlastimil Babka
2025-09-27  1:08   ` Suren Baghdasaryan
2025-09-29  7:30     ` Vlastimil Babka
2025-09-29 16:51       ` Liam R. Howlett
2025-09-10  8:01 ` [PATCH v8 22/23] maple_tree: Add single node allocation support to maple state Vlastimil Babka
2025-09-27  1:17   ` Suren Baghdasaryan
2025-09-29  7:39     ` Vlastimil Babka
2025-09-10  8:01 ` [PATCH v8 23/23] maple_tree: Convert forking to use the sheaf interface Vlastimil Babka
2025-10-07  6:34 ` [PATCH v8 00/23] SLUB percpu sheaves Christoph Hellwig
2025-10-07  8:03   ` Vlastimil Babka
2025-10-08  6:04     ` Christoph Hellwig
2025-10-15  8:32       ` Vlastimil Babka
2025-10-22  6:47         ` Christoph Hellwig
