From: JP Kobryn <inwardvessel@gmail.com>
To: "Shakeel Butt" <shakeel.butt@linux.dev>,
"Michal Koutný" <mkoutny@suse.com>
Cc: hannes@cmpxchg.org, yosryahmed@google.com,
akpm@linux-foundation.org, linux-mm@kvack.org,
cgroups@vger.kernel.org, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH 0/9 RFC] cgroup: separate rstat trees
Date: Tue, 14 Jan 2025 17:33:17 -0800 [thread overview]
Message-ID: <3348742b-4e49-44c1-b447-b21553ff704a@gmail.com> (raw)
In-Reply-To: <3wew3ngaqq7cjqphpqltbq77de5rmqviolyqphneer4pfzu5h5@4ucytmd6rpfa>
Hi Michal,
On 1/13/25 10:25 AM, Shakeel Butt wrote:
> On Wed, Jan 08, 2025 at 07:16:47PM +0100, Michal Koutný wrote:
>> Hello JP.
>>
>> On Mon, Dec 23, 2024 at 05:13:53PM -0800, JP Kobryn <inwardvessel@gmail.com> wrote:
>>> I've been experimenting with these changes to allow for separate
>>> updating/flushing of cgroup stats per-subsystem.
>>
>> Nice.
>>
>>> I reached a point where this started to feel stable in my local testing, so I
>>> wanted to share and get feedback on this approach.
>>
>> The split is not straightforwardly an improvement --
>
> The major improvement in my opinion is the performance isolation for
> stats readers, i.e. cpu stats readers do not need to flush memory stats.
>
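
[For a concrete picture of the reader side, here is a minimal userspace
sketch; it assumes a cgroup2 hierarchy mounted at /sys/fs/cgroup, and the
comments describe the intended behavior under this series:]

#include <stdio.h>

int main(void)
{
	char line[256];
	/* Reading cpu.stat triggers an rstat flush on the kernel side.
	 * With the single shared rstat tree, that flush also walks any
	 * pending memory and io updates; with per-subsystem trees, it
	 * only has to flush the base cpu stats. */
	FILE *f = fopen("/sys/fs/cgroup/cpu.stat", "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}
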
>> there's at least
>> higher memory footprint
>
> Yes, this is indeed the case. JP, can you please give a ballpark on
> the memory overhead?
Yes, the trade-off is using more memory to allow for separate trees.
With these patches, the changes in allocated memory for the
cgroup_rstat_cpu instances and their associated locks are:

  static:  reduced by 58%
  dynamic: increased by 344%

The increase on the dynamic side comes from now having 3 rstat trees
per cgroup (1 for base stats, 1 for memory, 1 for io) instead of the
original single tree. That number will change if more subsystems start
or stop using rstat in the future. Feel free to let me know if you
would like to see the detailed breakdown of these values.
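
For a rough feel of where the dynamic number comes from (the sizes below
are hypothetical; only the tree count is from this series), the per-cpu
footprint scales as

	nr_cgroups * nr_possible_cpus * sizeof(struct cgroup_rstat_cpu) * nr_trees

so e.g. 1000 cgroups * 64 CPUs * ~128 bytes goes from roughly 8 MiB with
the single shared tree to roughly 24 MiB with the three trees above.
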
>
>> and flushing effectiveness depends on how
>> individual readers are correlated,
>
> Sorry, I am confused by the above statement; can you please expand on
> what you meant by it?
>
>> OTOH writer correlation affects
>> updaters when extending the update tree.
>
> Here I am confused about the difference between writer and updater.
>
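
[For reference, a toy userspace model of the updater path; the names are
made up and this is not the kernel code, but it mirrors how an updater
only pays to extend the per-cpu updated tree the first time a node is
dirtied after a flush, and how a shared tree makes one subsystem's flush
walk another subsystem's pending updates:]

#include <stdbool.h>
#include <stdio.h>

#define NNODES 4

static bool on_tree[NNODES];	/* is this node linked into the updated tree? */
static int tree[NNODES];	/* linked nodes, in link order */
static int nlinked;

/* Writer-side hook, analogous to cgroup_rstat_updated(): cheap when the
 * node is already pending; extends the tree only on first dirtying. */
static void updated(int node)
{
	if (on_tree[node])
		return;
	on_tree[node] = true;
	tree[nlinked++] = node;
}

/* Reader-side flush: walks and empties the whole tree. */
static void flush(void)
{
	for (int i = 0; i < nlinked; i++) {
		printf("flushing node %d\n", tree[i]);
		on_tree[tree[i]] = false;
	}
	nlinked = 0;
}

int main(void)
{
	updated(0);	/* e.g. a memory stat update */
	updated(2);	/* e.g. an io stat update: extends the same shared tree */
	flush();	/* a memory-only reader still walks node 2 */
	return 0;
}
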
>> So a workload dependent effect
>> can go (in my theory) both sides.
>> There are also in-kernel consumers of stats, namely memory controller
>> that's been optimized over the years to balance the tradeoff between
>> precision and latency.
>
> In-kernel memcg stats readers will be unaffected most of the time by
> this change. The only difference is that when they flush, they will
> flush only memcg stats.
>
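
[A sketch of that difference; these are not the literal kernel calls, but
patch 1 of the series moves the rstat flush API from taking a struct
cgroup to a struct cgroup_subsys_state, so the shape is roughly:]

	/* before: one shared tree, so flushing memcg stats flushes everything */
	cgroup_rstat_flush(memcg->css.cgroup);

	/* after: the css handle selects the memory-only tree */
	cgroup_rstat_flush(&memcg->css);
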
>>
>> So do you have any measurements (or expectations) that show how readers
>> or writers are affected?
>>
>
> Here I am assuming you meant measurements in terms of CPU cost, or do
> you have something else in mind?
>
>
> Thanks a lot Michal for taking a look.
Thread overview: 21+ messages
2024-12-24  1:13 [PATCH 0/9 RFC] cgroup: separate rstat trees JP Kobryn
2024-12-24 1:13 ` [PATCH 1/9 RFC] change cgroup to css in rstat updated and flush api JP Kobryn
2024-12-24 1:13 ` [PATCH 2/9 RFC] cgroup: change cgroup to css in rstat internal flush and lock funcs JP Kobryn
2024-12-24 1:13 ` [PATCH 3/9 RFC] cgroup: change cgroup to css in rstat init and exit api JP Kobryn
2024-12-24 1:13 ` [PATCH 4/9 RFC] cgroup: split rstat from cgroup into separate css JP Kobryn
2024-12-24 1:13 ` [PATCH 5/9 RFC] cgroup: separate locking between base css and others JP Kobryn
2024-12-24 1:13 ` [PATCH 6/9 RFC] cgroup: isolate base stat flush JP Kobryn
2024-12-24 1:14 ` [PATCH 7/9 RFC] cgroup: remove unneeded rcu list JP Kobryn
2024-12-24 1:14 ` [PATCH 8/9 RFC] cgroup: remove bpf rstat flush from css generic flush JP Kobryn
2024-12-24 1:14 ` [PATCH 9/9 RFC] cgroup: avoid allocating rstat when flush func not present JP Kobryn
2024-12-24 4:57 ` [PATCH 0/9 RFC] cgroup: separate rstat trees Shakeel Butt
2025-01-08 18:16 ` Michal Koutný
2025-01-13 18:25 ` Shakeel Butt
2025-01-15 1:33 ` JP Kobryn [this message]
2025-01-15 1:39 ` Yosry Ahmed
2025-01-15 19:38 ` JP Kobryn
2025-01-15 21:36 ` Yosry Ahmed
2025-01-16 18:20 ` JP Kobryn
2025-01-16 15:19 ` Michal Koutný
2025-01-16 15:35 ` Yosry Ahmed
2025-01-16 19:03 ` Shakeel Butt