From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: stable@vger.kernel.org, yosryahmed@google.com, tj@kernel.org,
hannes@cmpxchg.org, lizefan.x@bytedance.com,
cgroups@vger.kernel.org, longman@redhat.com, linux-mm@kvack.org,
kernel-team@cloudflare.com
Subject: Re: [PATCH 6.6.y] mm: ratelimit stat flush from workingset shrinker
Date: Fri, 7 Jun 2024 19:26:47 +0200
Message-ID: <6ee2518b-81dd-4082-bdf5-322883895ffc@kernel.org>
In-Reply-To: <tge6txvuepcu3iy7nz3cuafbd5x2hmeprbaz3d3fzawvvzg3xr@f4utxxs2egxl>

On 07/06/2024 16.32, Shakeel Butt wrote:
> On Fri, Jun 07, 2024 at 03:48:06PM GMT, Jesper Dangaard Brouer wrote:
>> From: Shakeel Butt <shakeelb@google.com>
>>
>> commit d4a5b369ad6d8aae552752ff438dddde653a72ec upstream.
>>
>> One of our workloads (Postgres 14 + sysbench OLTP) regressed on a newer
>> upstream kernel, and on further investigation the cause appears to be
>> the always-synchronous rstat flush in count_shadow_nodes() added by
>> commit f82e6bf9bb9b ("mm: memcg: use rstat for non-hierarchical
>> stats"). On further inspection, it seems we don't really need accurate
>> stats in this function, as it was already approximating the number of
>> shadow entries to keep for maintaining the refault information. Since
>> there is already a 2-second periodic rstat flush, we don't need exact
>> stats here. Let's ratelimit the rstat flush in this code path.
>>
>> Link: https://lkml.kernel.org/r/20231228073055.4046430-1-shakeelb@google.com
>> Fixes: f82e6bf9bb9b ("mm: memcg: use rstat for non-hierarchical stats")
>> Signed-off-by: Shakeel Butt <shakeelb@google.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Yosry Ahmed <yosryahmed@google.com>
>> Cc: Yu Zhao <yuzhao@google.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Roman Gushchin <roman.gushchin@linux.dev>
>> Cc: Muchun Song <songmuchun@bytedance.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
>>
>> ---
>> On production with kernel v6.6 we are observing issues with excessive
>> cgroup rstat flushing, due to the extra call to mem_cgroup_flush_stats()
>> in count_shadow_nodes() introduced by commit f82e6bf9bb9b ("mm: memcg:
>> use rstat for non-hierarchical stats"), which is part of v6.6.
>> We request a backport of commit d4a5b369ad6d ("mm: ratelimit stat flush
>> from workingset shrinker"), as it carries a Fixes tag for that commit.
>>
>> IMHO it is worth explaining the call path through which
>> count_shadow_nodes() causes the excessive cgroup rstat flushing.
>> shrink_node() first calls mem_cgroup_flush_stats() on its own, and
>> then invokes shrink_node_memcgs(). shrink_node_memcgs() iterates over
>> the cgroups via mem_cgroup_iter(), calling shrink_slab() for each one.
>> shrink_slab() calls do_shrink_slab(), which via shrinker->count_objects()
>> invokes count_shadow_nodes(), and count_shadow_nodes() then does its
>> own mem_cgroup_flush_stats() call, which seems unnecessary.
>>
>
> Actually, in Meta production we have also replaced
> mem_cgroup_flush_stats() in shrink_node() with
> mem_cgroup_flush_stats_ratelimited(), as it was causing too much
> flushing. We have not observed any issues after the change. I will
> propose that patch upstream as well.
(Please Cc me, as I'm not subscribed to cgroups@vger.kernel.org)

Yes, we also see mem_cgroup_flush_stats() in shrink_node() causing
issues, so I can confirm the problem. What we see is that it originates
from kswapd, which has one kthread per NUMA node, all running
concurrently; we measure cgroup rstat lock contention happening due to
the call in shrink_node().
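To make the call path quoted above concrete, here is a heavily
condensed sketch of the two flush sites as I read the v6.6 sources
(mm/vmscan.c and mm/workingset.c); the bodies are abbreviated down to
the flush-relevant lines, so treat it as an illustration, not the
literal code:

  /* Condensed sketch of the v6.6 call path; not the literal source. */

  static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
  {
  	/* ... */
  	mem_cgroup_flush_stats();	/* flush site #1: hit concurrently
  					 * by each per-NUMA-node kswapd
  					 * thread; this is what the stacks
  					 * below show */
  	/* ... */
  	shrink_node_memcgs(pgdat, sc);	/* iterates all memcgs via
  					 * mem_cgroup_iter(), calling
  					 * shrink_slab() for each one */
  }

  static unsigned long count_shadow_nodes(struct shrinker *shrinker,
  					struct shrink_control *sc)
  {
  	unsigned long nodes = 0;

  	/* reached via shrink_slab() -> do_shrink_slab()
  	 *               -> shrinker->count_objects() */
  	/* ... */
  	mem_cgroup_flush_stats();	/* flush site #2: once per memcg
  					 * per node; the call that commit
  					 * d4a5b369ad6d turns into
  					 * mem_cgroup_flush_stats_ratelimited()
  					 */
  	/* ... */
  	return nodes;
  }

The upshot is that the count_shadow_nodes() flush multiplies with the
number of memcgs times the number of NUMA nodes, on top of the per-node
flush already done directly in shrink_node().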
See the call stacks I captured with my bpftrace script [1]:
@stack_wait[695, kswapd0, 1]:
        __cgroup_rstat_lock+107
        __cgroup_rstat_lock+107
        cgroup_rstat_flush_locked+851
        cgroup_rstat_flush+35
        shrink_node+226
        balance_pgdat+807
        kswapd+521
        kthread+228
        ret_from_fork+48
        ret_from_fork_asm+27

The remaining per-node kswapd threads (kswapd1 through kswapd7,
@stack_wait[696..702]) all show the identical stack.
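For completeness, the functional change we are asking to have
backported is a one-line substitution in count_shadow_nodes(); this is
the hunk as I read upstream commit d4a5b369ad6d (line offsets in the
6.6 backport may differ):

  --- a/mm/workingset.c
  +++ b/mm/workingset.c
  @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
  -		mem_cgroup_flush_stats();
  +		mem_cgroup_flush_stats_ratelimited();

The follow-up Shakeel mentions above for shrink_node() would be the
analogous substitution at that call site.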
--Jesper
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/latency/cgroup_rstat_latency.bt