From: Chengming Zhou <chengming.zhou@linux.dev>
To: Vlastimil Babka <vbabka@suse.cz>,
	cl@linux.com, penberg@kernel.org, willy@infradead.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com,
	akpm@linux-foundation.org, roman.gushchin@linux.dev,
	42.hyeyoo@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Chengming Zhou <zhouchengming@bytedance.com>
Subject: Re: [RFC PATCH v4 0/9] slub: Delay freezing of CPU partial slabs
Date: Thu, 2 Nov 2023 10:19:18 +0800
Message-ID: <f38c7dd0-deec-4e55-9216-3c39925edef5@linux.dev>
In-Reply-To: <029f5042-e41d-5079-fdba-fbe3d4e60dcf@suse.cz>

On 2023/11/1 21:59, Vlastimil Babka wrote:
>> 3. Testing
>> ==========
>> We just did some simple testing on a server with 128 CPUs (2 nodes) to
>> compare performance for now.
>>
>>  - perf bench sched messaging -g 5 -t -l 100000
>>    baseline	RFC
>>    7.042s	6.966s
>>    7.022s	7.045s
>>    7.054s	6.985s
>>
>>  - stress-ng --rawpkt 128 --rawpkt-ops 100000000
>>    baseline	RFC
>>    2.42s	2.15s
>>    2.45s	2.16s
>>    2.44s	2.17s
> 
> Looks like these numbers are carried over from the first RFC. Could you
> please retest with v4 as there were some bigger changes (i.e. getting
> rid of acquire_slab()).
> 
> Otherwise I think v5 can drop "RFC", and I will add it to the slab tree
> after the merge window and 6.7-rc1. Thanks!

Ah, yes, I will retest with v5 and update the numbers today.

Thanks!

> 
>> As shown above, there is about a 10% improvement on the stress-ng rawpkt
>> testcase, although not much improvement on the perf sched bench testcase.
>>
>> Thanks for any comment and code review!
>>
>> Chengming Zhou (9):
>>   slub: Reflow ___slab_alloc()
>>   slub: Change get_partial() interfaces to return slab
>>   slub: Keep track of whether slub is on the per-node partial list
>>   slub: Prepare __slab_free() for unfrozen partial slab out of node
>>     partial list
>>   slub: Introduce freeze_slab()
>>   slub: Delay freezing of partial slabs
>>   slub: Optimize deactivate_slab()
>>   slub: Rename all *unfreeze_partials* functions to *put_partials*
>>   slub: Update frozen slabs documentations in the source
>>
>>  mm/slub.c | 381 ++++++++++++++++++++++++++----------------------------
>>  1 file changed, 180 insertions(+), 201 deletions(-)
>>

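For anyone skimming the series: the core idea is that slabs parked on the
per-CPU or per-node partial lists are no longer frozen when they are put
there; the frozen bit is only set once a slab is actually taken as the
CPU's active slab, via the new freeze_slab() helper. Below is a minimal
user-space C model of that cmpxchg-style freeze, just to illustrate the
concept: the struct, the bit packing and the *_model names are made up
for this sketch and are not the actual mm/slub.c code.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Pack inuse (bits 0-15), objects (bits 16-31) and frozen (bit 32) into
 * one word so they can be updated by a single compare-and-swap, loosely
 * mirroring how slab->counters overlays inuse/objects/frozen. */
#define PACK(inuse, objects, frozen) \
	((uint64_t)(inuse) | ((uint64_t)(objects) << 16) | ((uint64_t)(frozen) << 32))
#define INUSE(c)   ((unsigned)((c) & 0xffff))
#define OBJECTS(c) ((unsigned)(((c) >> 16) & 0xffff))
#define FROZEN(c)  ((unsigned)(((c) >> 32) & 1))

struct slab_model {
	_Atomic uint64_t counters;
};

/* The freeze_slab() idea: atomically set frozen and claim all objects
 * (inuse = objects) for the local CPU, retrying if a concurrent free
 * changed the counters in between, like the kernel's cmpxchg loop. */
static void freeze_slab_model(struct slab_model *s)
{
	uint64_t old = atomic_load(&s->counters);

	while (!atomic_compare_exchange_weak(&s->counters, &old,
			PACK(OBJECTS(old), OBJECTS(old), 1)))
		;	/* 'old' was reloaded with the current value, retry */
}

int main(void)
{
	/* A slab on a partial list: 3 of 8 objects in use, NOT frozen. */
	struct slab_model s = { PACK(3, 8, 0) };

	uint64_t c = atomic_load(&s.counters);
	printf("on partial list: inuse=%u objects=%u frozen=%u\n",
	       INUSE(c), OBJECTS(c), FROZEN(c));

	/* Only when picked as the CPU's active slab does it get frozen. */
	freeze_slab_model(&s);
	c = atomic_load(&s.counters);
	printf("as active slab:  inuse=%u objects=%u frozen=%u\n",
	       INUSE(c), OBJECTS(c), FROZEN(c));
	return 0;
}

Because partial-list slabs now stay unfrozen, __slab_free() has to cope
with an unfrozen slab that sits on a partial list (patch 4), and
get_partial() can take slabs off the node list without the old
acquire_slab() cmpxchg.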
