From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: Christoph Lameter <cl@gentwo.de>
Cc: David Rientjes <rientjes@google.com>,
	songmuchun@bytedance.com, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	akpm@linux-foundation.org, vbabka@suse.cz,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	penberg@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
Date: Sat, 18 Jun 2022 10:33:51 +0800
Message-ID: <1b434d4c-2a19-9ac1-b2b9-b767b642ec0c@linux.alibaba.com>
In-Reply-To: <alpine.DEB.2.22.394.2206171617560.638056@gentwo.de>



On 6/17/22 10:19 PM, Christoph Lameter wrote:
> On Fri, 17 Jun 2022, Rongwei Wang wrote:
> 
>> Christoph, I used [1] to collect the data below. The slub_test case is the
>> same as the one you provided. Here are its results (the baseline column is
>> the upstream kernel; the fix column is the patched kernel).
> 
> Ah good.
>> Single thread testing
>>
>> 1. Kmalloc: Repeatedly allocate then free test
>>
>>                     before (baseline)        fix
>>                     kmalloc      kfree       kmalloc      kfree
>> 10000 times 8      7 cycles     8 cycles    5 cycles     7 cycles
>> 10000 times 16     4 cycles     8 cycles    3 cycles     6 cycles
>> 10000 times 32     4 cycles     8 cycles    3 cycles     6 cycles
> 
> Well, the cycle reduction is strange. Were the tests not done in the same
> environment? It may be good to avoid NUMA effects, or to bind to the same CPU.
It's the same environment, I'm sure of that. There are four NUMA nodes (32G 
and 8 cores per node) in my test environment. Should I rerun the test bound 
to a single node? If so, I can try; something like the sketch below is what 
I have in mind.
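
To keep both kernels on the same core, one option is to pin the process that
triggers the benchmark before loading the test module, e.g. with
"taskset -c 0" or "numactl --cpunodebind=0 --membind=0". Below is a minimal
userspace sketch of the same pinning; CPU 0 is an arbitrary choice for
illustration, not part of the slub test itself:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* measure every run on the same CPU */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... trigger the slub benchmark from here ... */
	return 0;
}
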
> 
>> 10000 times 64     3 cycles     8 cycles    3 cycles     6 cycles
>> 10000 times 128    3 cycles     8 cycles    3 cycles     6 cycles
>> 10000 times 256    12 cycles    8 cycles    11 cycles    7 cycles
>> 10000 times 512    27 cycles    10 cycles   23 cycles    11 cycles
>> 10000 times 1024   18 cycles    9 cycles    20 cycles    10 cycles
>> 10000 times 2048   54 cycles    12 cycles   54 cycles    12 cycles
>> 10000 times 4096   105 cycles   20 cycles   105 cycles   25 cycles
>> 10000 times 8192   210 cycles   35 cycles   212 cycles   39 cycles
>> 10000 times 16384  133 cycles   45 cycles   119 cycles   46 cycles
> 
> 
> Seems to be different environments.
> 
>> According to the above data, there seems to be no significant performance
>> degradation in the patched kernel. Plus, in the concurrent allocation tests,
>> such as Kmalloc N*alloc N*free(1024), the 'fix' column is better than the
>> baseline (lower is better, as I understand it; please correct me if I am
>> wrong). If you have other suggestions, I can test more data.
> 
> Well, can you explain the cycle reduction?
Maybe it is because my system has four nodes, or because each node has only 
8 cores (quite small)? Thanks, you have reminded me that I should increase 
the number of cores per node, or change the number of nodes, and then 
compare the results.
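
For context on what the table reports: each number is an average over the
10000 iterations. The sketch below is not the actual slub_test source (which
is not quoted here); it only illustrates, with an assumed module name and the
size-8 row as an example, how such per-operation cycle counts are typically
derived in a test module:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/timex.h>	/* get_cycles() */
#include <linux/math64.h>	/* div_u64() */

static int __init kmalloc_cycles_init(void)
{
	cycles_t t0, t1;
	int i;

	t0 = get_cycles();
	for (i = 0; i < 10000; i++)
		kfree(kmalloc(8, GFP_KERNEL));	/* size 8, as in the first row */
	t1 = get_cycles();

	/* average cost of one kmalloc+kfree pair */
	pr_info("avg %llu cycles per kmalloc+kfree pair (size 8)\n",
		div_u64(t1 - t0, 10000));
	return 0;
}

static void __exit kmalloc_cycles_exit(void)
{
}

module_init(kmalloc_cycles_init);
module_exit(kmalloc_cycles_exit);
MODULE_LICENSE("GPL");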

Thanks!




Thread overview: 30+ messages
2022-05-29  8:15 [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free Rongwei Wang
2022-05-29  8:15 ` [PATCH 2/3] mm/slub: improve consistency of nr_slabs count Rongwei Wang
2022-05-29 12:26   ` Hyeonggon Yoo
2022-05-29  8:15 ` [PATCH 3/3] mm/slub: add nr_full count for debugging slub Rongwei Wang
2022-05-29 11:37 ` [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free Hyeonggon Yoo
2022-05-30 21:14   ` David Rientjes
2022-06-02 15:14     ` Christoph Lameter
2022-06-03  3:35       ` Rongwei Wang
2022-06-07 12:14         ` Christoph Lameter
2022-06-08  3:04           ` Rongwei Wang
2022-06-08 12:23             ` Christoph Lameter
2022-06-11  4:04               ` Rongwei Wang
2022-06-13 13:50                 ` Christoph Lameter
2022-06-14  2:38                   ` Rongwei Wang
2022-06-17  7:55                   ` Rongwei Wang
2022-06-17 14:19                     ` Christoph Lameter
2022-06-18  2:33                       ` Rongwei Wang [this message]
2022-06-20 11:57                         ` Christoph Lameter
2022-06-26 16:48                           ` Rongwei Wang
2022-06-17  9:40               ` Vlastimil Babka
2022-07-15  8:05                 ` Rongwei Wang
2022-07-15 10:33                   ` Vlastimil Babka
2022-07-15 10:51                     ` Rongwei Wang
2022-05-31  3:47   ` Muchun Song
2022-06-04 11:05     ` Hyeonggon Yoo
2022-05-31  8:50   ` Rongwei Wang
2022-07-18 11:09 ` Vlastimil Babka
2022-07-19 14:15   ` Rongwei Wang
2022-07-19 14:21     ` Vlastimil Babka
2022-07-19 14:43       ` Rongwei Wang
