From: Shakeel Butt <shakeelb@google.com>
To: Feng Tang <feng.tang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <guro@fb.com>, Linux MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] mm: page_counter: relayout structure to reduce false sharing
Date: Tue, 19 Jan 2021 08:39:28 -0800 [thread overview]
Message-ID: <CALvZod4BjRx1PRhkR0_MCF6Hnixie2t=8RHx+5s1DikSw66Dfw@mail.gmail.com> (raw)
In-Reply-To: <1611040814-33449-1-git-send-email-feng.tang@intel.com>
On Mon, Jan 18, 2021 at 11:20 PM Feng Tang <feng.tang@intel.com> wrote:
>
> While investigating a memory cgroup related performance regression [1],
> the perf c2c profiling data showed heavy false sharing between accesses
> to 'usage' and 'parent'.
>
> On 64-bit systems, 'usage' and 'parent' sit close to each other and
> easily end up in the same cacheline (for cacheline sizes of 64 bytes
> or more). 'usage' is mostly written, while 'parent' is mostly read,
> due to the cgroup's hierarchical counting nature.
>
> So move 'parent' to the end of the structure to make sure the two
> fields land in different cachelines.
>
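To illustrate the idea, here is a minimal userspace C sketch (the field
names and sizes are simplified stand-ins, not the exact struct
page_counter layout): it prints the field offsets to show that moving
'parent' past the first 64-byte cacheline separates it from the
frequently written 'usage'.

/*
 * Minimal sketch of the relayout idea, assuming 64-byte cachelines.
 * The structs below are simplified stand-ins, not the real
 * struct page_counter.
 */
#include <stddef.h>
#include <stdio.h>

/* Before: 'parent' sits a few words after 'usage', in the same line. */
struct counter_old {
	long usage;			/* hot: written on every (un)charge */
	long min, low, high, max;
	struct counter_old *parent;	/* read-mostly: hierarchical walks */
};

/* After: 'parent' is pushed to the end, past the line 'usage' lives in. */
struct counter_new {
	long usage;
	long min, low, high, max;
	long other_state[8];		/* stand-in for the remaining fields */
	struct counter_new *parent;
};

int main(void)
{
	printf("old: usage@%zu parent@%zu, same cacheline: %d\n",
	       offsetof(struct counter_old, usage),
	       offsetof(struct counter_old, parent),
	       offsetof(struct counter_old, parent) / 64 ==
	       offsetof(struct counter_old, usage) / 64);
	printf("new: usage@%zu parent@%zu, same cacheline: %d\n",
	       offsetof(struct counter_new, usage),
	       offsetof(struct counter_new, parent),
	       offsetof(struct counter_new, parent) / 64 ==
	       offsetof(struct counter_new, usage) / 64);
	return 0;
}

With the old layout, every charge/uncharge dirties the cacheline that
'parent' readers need during hierarchical walks, which is exactly the
read/write contention pattern perf c2c reports.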
> Following is some performance data with the patch, against v5.11-rc1.
> [ In the data, A is a platform with 2 sockets 48C/96T, B is a platform
> with 4 sockets 72C/144T, a %stddev is only shown when it is bigger
> than 2%, and P100/P50 means the number of test tasks equals 100%/50%
> of nr_cpu. ]
>
> will-it-scale/malloc1
> ---------------------
> v5.11-rc1 v5.11-rc1+patch
>
> A-P100 15782 ± 2% -0.1% 15765 ± 3% will-it-scale.per_process_ops
> A-P50 21511 +8.9% 23432 will-it-scale.per_process_ops
> B-P100 9155 +2.2% 9357 will-it-scale.per_process_ops
> B-P50 10967 +7.1% 11751 ± 2% will-it-scale.per_process_ops
>
> will-it-scale/pagefault2
> ------------------------
> v5.11-rc1 v5.11-rc1+patch
>
> A-P100 79028 +3.0% 81411 will-it-scale.per_process_ops
> A-P50 183960 ± 2% +4.4% 192078 ± 2% will-it-scale.per_process_ops
> B-P100 85966 +9.9% 94467 ± 3% will-it-scale.per_process_ops
> B-P50 198195 +9.8% 217526 will-it-scale.per_process_ops
>
> fio (4k/1M are the block sizes)
> -------------------------
> v5.11-rc1 v5.11-rc1+patch
>
> A-P50-r-4k 16881 ± 2% +1.2% 17081 ± 2% fio.read_bw_MBps
> A-P50-w-4k 3931 +4.5% 4111 ± 2% fio.write_bw_MBps
> A-P50-r-1M 15178 -0.2% 15154 fio.read_bw_MBps
> A-P50-w-1M 3924 +0.1% 3929 fio.write_bw_MBps
>
> [1] https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> Reviewed-by: Roman Gushchin <guro@fb.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Thread overview: 4+ messages
2021-01-19 7:20 Feng Tang
2021-01-19 16:39 ` Shakeel Butt [this message]
2021-01-19 17:00 ` Johannes Weiner
2021-01-20 7:56 ` Michal Hocko