From: "Sun, Jiebin" <jiebin.sun@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: vasily.averin@linux.dev, shakeelb@google.com, dennis@kernel.org,
tj@kernel.org, cl@linux.com, ebiederm@xmission.com,
legion@kernel.org, manfred@colorfullife.com,
alexander.mikhalitsyn@virtuozzo.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, tim.c.chen@intel.com,
feng.tang@intel.com, ying.huang@intel.com, tianyou.li@intel.com,
wangyang.guo@intel.com
Subject: Re: [PATCH] ipc/msg.c: mitigate the lock contention with percpu counter
Date: Mon, 5 Sep 2022 19:54:35 +0800 [thread overview]
Message-ID: <da91f763-b74b-68d9-312b-1bc86179273f@intel.com> (raw)
In-Reply-To: <20220902090659.28829853543cac3f3f725df5@linux-foundation.org>
On 9/3/2022 12:06 AM, Andrew Morton wrote:
> On Fri, 2 Sep 2022 23:22:43 +0800 Jiebin Sun <jiebin.sun@intel.com> wrote:
>
>> The msg_bytes and msg_hdrs atomic counters are frequently
>> updated when the IPC msg queue is in heavy use, causing heavy
>> cache bouncing and overhead. Changing them to percpu_counters
>> greatly improves performance. Since there is only one unique
>> ipc namespace, the additional memory cost is minimal. Reading
>> of the count is done in the msgctl call, which is infrequent,
>> so summing the per-CPU counts is rarely needed.
>>
>> Applying the patch and running pts/stress-ng-1.4.0
>> -- system v message passing (160 threads) gives:
>>
>> Score gain: 3.38x
> So this test became 3x faster?
Yes. The result is from the Phoronix Test Suite stress-ng-1.4.0 -- system v
message passing benchmark on a dual-socket ICX server. In this benchmark,
there are 160 pairs of threads doing msgsnd and msgrcv. The benefit of the
patch grows as the number of workload threads increases.
>
>> CPU: ICX 8380 x 2 sockets
>> Core number: 40 x 2 physical cores
>> Benchmark: pts/stress-ng-1.4.0
>> -- system v message passing (160 threads)
>>
>> ...
>>
>> @@ -138,6 +139,14 @@ percpu_counter_add(struct percpu_counter *fbc, s64 amount)
>> preempt_enable();
>> }
>>
>> +static inline void
>> +percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
>> +{
>> + preempt_disable();
>> + fbc->count += amount;
>> + preempt_enable();
>> +}
> What's this and why is it added?
>
> It would be best to propose this as a separate preparatory patch.
> Fully changelogged and perhaps even with a code comment explaining why
> and when it should be used.
>
> Thanks.
As msgctl_info always does a full sum of the percpu counter, there is no
need to use percpu_counter_add_batch, which updates the global count
whenever a local counter reaches the batch size. So we add
percpu_counter_add_local for both SMP and non-SMP, which only does a local
add to the percpu counter.
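Roughly, the intended usage pattern is as in the sketch below (the field
names are illustrative, not the exact hunks from the patch): the hot
msgsnd()/msgrcv() path only updates the local per-CPU count, and only the
infrequent msgctl() read pays the cost of summing over all CPUs.

/* hot path (msgsnd/msgrcv): local-only update, no shared atomic */
percpu_counter_add_local(&ns->percpu_msg_bytes, msgsz);
percpu_counter_add_local(&ns->percpu_msg_hdrs, 1);

/* slow path (msgctl_info): fold all per-CPU deltas into one value */
msginfo->msgtql = percpu_counter_sum(&ns->percpu_msg_bytes);
msginfo->msgmap = percpu_counter_sum(&ns->percpu_msg_hdrs);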
I have separated the original patch into two patches.
Thanks.
Thread overview: 40+ messages
2022-09-02 15:22 Jiebin Sun
2022-09-02 16:06 ` Andrew Morton
2022-09-05 11:54 ` Sun, Jiebin [this message]
2022-09-02 16:27 ` Shakeel Butt
2022-09-05 12:02 ` Sun, Jiebin
2022-09-06 18:44 ` Tim Chen
2022-09-07 9:39 ` Sun, Jiebin
2022-09-07 20:43 ` Andrew Morton
2022-09-07 17:25 ` [PATCH v4] ipc/msg: " Jiebin Sun
2022-09-07 16:01 ` Tim Chen
2022-09-07 21:34 ` Andrew Morton
2022-09-07 22:10 ` Tim Chen
2022-09-08 8:25 ` Sun, Jiebin
2022-09-08 15:38 ` Andrew Morton
2022-09-08 16:15 ` Dennis Zhou
2022-09-03 19:35 ` [PATCH] ipc/msg.c: " Manfred Spraul
2022-09-05 12:12 ` Sun, Jiebin
[not found] ` <20220905193516.846647-1-jiebin.sun@intel.com>
[not found] ` <20220905193516.846647-3-jiebin.sun@intel.com>
2022-09-05 19:31 ` [PATCH v2 1/2] percpu: Add percpu_counter_add_local Shakeel Butt
2022-09-06 8:41 ` Sun, Jiebin
2022-09-05 19:35 ` [PATCH v2 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-06 16:54 ` [PATCH v3 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-06 16:54 ` [PATCH v3 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-09 20:36 ` [PATCH v5 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-09 20:36 ` [PATCH v5 1/2] percpu: Add percpu_counter_add_local and percpu_counter_sub_local Jiebin Sun
2022-09-09 16:37 ` Tim Chen
2022-09-10 1:37 ` kernel test robot
2022-09-10 8:15 ` kernel test robot
2022-09-10 8:26 ` kernel test robot
2022-09-09 20:36 ` [PATCH v5 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-09 16:11 ` Tim Chen
2022-09-13 19:25 ` [PATCH v6 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-13 19:25 ` [PATCH v6 1/2] percpu: Add percpu_counter_add_local and percpu_counter_sub_local Jiebin Sun
2022-09-18 11:08 ` Manfred Spraul
2022-09-20 6:01 ` Sun, Jiebin
2022-09-13 19:25 ` [PATCH v6 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-18 12:53 ` Manfred Spraul
2022-09-20 2:36 ` Sun, Jiebin
2022-09-20 4:53 ` Manfred Spraul
2022-09-20 5:50 ` Sun, Jiebin
2022-09-20 15:08 ` [PATCH] ipc/msg: avoid negative value by overflow in msginfo Jiebin Sun