From: Tim Chen <tim.c.chen@linux.intel.com>
To: Michal Hocko <mhocko@kernel.org>, Dave Hansen <dave.hansen@intel.com>
Cc: Honglei Wang <honglei.wang@oracle.com>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Subject: Re: memcgroup lruvec_lru_size scaling issue
Date: Mon, 14 Oct 2019 11:06:24 -0700
Message-ID: <40748407-eafc-e08b-5777-1cbf892fcc52@linux.intel.com>
In-Reply-To: <20191014175918.GN317@dhcp22.suse.cz>
On 10/14/19 10:59 AM, Michal Hocko wrote:
> On Mon 14-10-19 10:49:49, Dave Hansen wrote:
>> On 10/14/19 10:37 AM, Michal Hocko wrote:
>>>>     for_each_possible_cpu(cpu)
>>>>             x += per_cpu(pn->lruvec_stat_local->count[idx], cpu);
>>>>
>>>> It is costly to loop through all the CPUs just to get the LRU vec size
>>>> info, and doing this on our workload with 96 CPU threads and 500 mem
>>>> cgroups makes things much worse. We might end up with 96 CPUs * 500
>>>> cgroups * 2 (main) LRU pagevecs, which is a lot of data structures to be
>>>> running through all the time.
>>> Why does the number of cgroups matter?
>>
>> I was thinking purely of the cache footprint. Reading
>> pn->lruvec_stat_local->count[idx] touches three separate cachelines, so
>> that's 192 bytes of cache * 96 CPUs = ~18k of data per cgroup, mostly
>> read-only. With 1 cgroup that's 18k of data for the whole system; the
>> caching would be pretty efficient and all 18k would probably survive a
>> tight page fault loop in the L1. With 500 cgroups it's ~90k of data per
>> CPU thread, which doesn't fit in the L1 and probably wouldn't survive a
>> tight page fault loop if both logical threads were banging on different
>> cgroups.
>>
>> It's just a theory, but it's why I noted the number of cgroups when I
>> initially saw this show up in profiles.
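
To make that arithmetic concrete, a tiny userspace back-of-the-envelope
calculation (the 64-byte cacheline and 3-lines-per-read figures are taken
from Dave's description above, not measured):

    #include <stdio.h>

    int main(void)
    {
            /* Assumptions from the discussion above, not measurements. */
            const unsigned long cacheline = 64;     /* bytes per cacheline */
            const unsigned long lines_per_read = 3; /* lines touched per count[idx] read */
            const unsigned long cpus = 96;
            const unsigned long cgroups = 500;

            /* One summation walk for one cgroup touches counters homed on every CPU. */
            unsigned long per_walk = lines_per_read * cacheline * cpus;

            /* The counters a single CPU "owns" across all cgroups. */
            unsigned long per_cpu_all = lines_per_read * cacheline * cgroups;

            printf("one walk, one cgroup:     %lu bytes (~%lu KiB)\n",
                   per_walk, per_walk >> 10);
            printf("one CPU, all 500 cgroups: %lu bytes (~%lu KiB)\n",
                   per_cpu_all, per_cpu_all >> 10);
            return 0;
    }

That prints ~18 KiB for the single-cgroup case and ~93 KiB per CPU for
500 cgroups, matching the 18k / ~90k figures above.
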
>
> Yes, the cache traffic might be really high, but I still find it a bit
> surprising that it makes such a large footprint, because this should
> mostly be called from slow paths (reclaim), where the real work done
> should be larger still - at least that's my intuition, which might be
> quite off here. How much is that 25% of system time as a fraction of
> total time, btw?
>
About 7% of total CPU cycles.
Tim