From: Yosry Ahmed <yosryahmed@google.com>
To: JP Kobryn <inwardvessel@gmail.com>
Cc: "Shakeel Butt" <shakeel.butt@linux.dev>,
	"Michal Koutný" <mkoutny@suse.com>,
	hannes@cmpxchg.org, akpm@linux-foundation.org,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	"Tejun Heo" <tj@kernel.org>
Subject: Re: [PATCH 0/9 RFC] cgroup: separate rstat trees
Date: Wed, 15 Jan 2025 13:36:28 -0800
Message-ID: <CAJD7tkZAc_ZBpUL2+X6zjBCQxU+EHjQy+jZMDg5C8XTT5vXm=w@mail.gmail.com>
In-Reply-To: <babbf756-48ec-47c7-91fc-895e44fb18cc@gmail.com>

On Wed, Jan 15, 2025 at 11:39 AM JP Kobryn <inwardvessel@gmail.com> wrote:
>
> Hi Yosry,
>
> On 1/14/25 5:39 PM, Yosry Ahmed wrote:
> > On Tue, Jan 14, 2025 at 5:33 PM JP Kobryn <inwardvessel@gmail.com> wrote:
> >>
> >> Hi Michal,
> >>
> >> On 1/13/25 10:25 AM, Shakeel Butt wrote:
> >>> On Wed, Jan 08, 2025 at 07:16:47PM +0100, Michal Koutný wrote:
> >>>> Hello JP.
> >>>>
> >>>> On Mon, Dec 23, 2024 at 05:13:53PM -0800, JP Kobryn <inwardvessel@gmail.com> wrote:
> >>>>> I've been experimenting with these changes to allow for separate
> >>>>> updating/flushing of cgroup stats per-subsystem.
> >>>>
> >>>> Nice.
> >>>>
> >>>>> I reached a point where this started to feel stable in my local testing, so I
> >>>>> wanted to share and get feedback on this approach.
> >>>>
> >>>> The split is not straightforwardly an improvement --
> >>>
> >>> The major improvement in my opinion is the performance isolation for
> >>> stats readers i.e. cpu stats readers do not need to flush memory stats.
> >>>
> >>>> there's at least
> >>>> higher memory footprint
> >>>
> >>> Yes, this is indeed the case. JP, can you please give a ballpark on
> >>> the memory overhead?
> >>
> >> Yes, the trade-off is using more memory to allow for separate trees.
> >> With these patches, the changes in allocated memory for the
> >> cgroup_rstat_cpu instances and their associated locks are:
> >> static: reduced by 58%
> >> dynamic: increased by 344%
> >>
> >> The threefold increase on the dynamic side is attributed to now having 3
> >> rstat trees per cgroup (1 for base stats, 1 for memory, 1 for io)
> >> instead of just 1. The number will change if more subsystems
> >> start or stop using rstat in the future. Feel free to let me know if you
> >> would like to see the detailed breakdown of these values.
> >
> > What is the absolute per-CPU memory usage?
>
> This is what I calculate as the combined per-cpu usage.
> before:
>         one cgroup_rstat_cpu instance for every cgroup:
>         sizeof(cgroup_rstat_cpu) * nr_cgroups
> after:
>         three cgroup_rstat_cpu instances for every cgroup, plus an updater
>         lock for every subsystem and one more for the base stats:
>         sizeof(cgroup_rstat_cpu) * 3 * nr_cgroups +
>                 sizeof(spinlock_t) * (CGROUP_SUBSYS_COUNT + 1)
>
> Note that "every cgroup" includes the root cgroup. Also, 3 represents
> the number of current rstat clients: base stats, memory, and io
> (assuming all enabled).

On a config I have at hand, sizeof(cgroup_rstat_cpu) is 160 bytes.
Ignoring the spinlock for a second because it doesn't scale with
cgroups, that'd be an extra 320 * nr_cgroups * nr_cpus bytes. On a
moderately large machine with 256 CPUs and 100 cgroups, for example,
that's ~8MB.
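
As a quick back-of-the-envelope sketch of that math (the struct size,
CPU count, and cgroup count below are the assumed figures from this
thread, not values measured on any particular kernel):

  #include <stdio.h>

  int main(void)
  {
          /* Assumed figures from this thread, not measured values. */
          const unsigned long rstat_cpu_size = 160; /* sizeof(struct cgroup_rstat_cpu) */
          const unsigned long nr_cgroups = 100;
          const unsigned long nr_cpus = 256;
          const unsigned long nr_clients = 3; /* base stats, memory, io */

          /* Spinlocks are ignored here since they don't scale with cgroups. */
          unsigned long before = rstat_cpu_size * nr_cgroups * nr_cpus;
          unsigned long after = rstat_cpu_size * nr_clients * nr_cgroups * nr_cpus;

          printf("extra: %lu bytes (~%.1f MiB)\n", after - before,
                 (after - before) / (1024.0 * 1024.0));
          return 0;
  }

This prints "extra: 8192000 bytes (~7.8 MiB)", consistent with the ~8MB
estimate above.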

>
> As I'm writing this, I realize I might need to include the bpf cgroups
> as a fourth client and cover that case in my testing.



Thread overview: 21+ messages
2024-12-24  1:13 [PATCH 0/9 RFC] cgroup: separate rstat trees JP Kobryn
2024-12-24  1:13 ` [PATCH 1/9 RFC] change cgroup to css in rstat updated and flush api JP Kobryn
2024-12-24  1:13 ` [PATCH 2/9 RFC] cgroup: change cgroup to css in rstat internal flush and lock funcs JP Kobryn
2024-12-24  1:13 ` [PATCH 3/9 RFC] cgroup: change cgroup to css in rstat init and exit api JP Kobryn
2024-12-24  1:13 ` [PATCH 4/9 RFC] cgroup: split rstat from cgroup into separate css JP Kobryn
2024-12-24  1:13 ` [PATCH 5/9 RFC] cgroup: separate locking between base css and others JP Kobryn
2024-12-24  1:13 ` [PATCH 6/9 RFC] cgroup: isolate base stat flush JP Kobryn
2024-12-24  1:14 ` [PATCH 7/9 RFC] cgroup: remove unneeded rcu list JP Kobryn
2024-12-24  1:14 ` [PATCH 8/9 RFC] cgroup: remove bpf rstat flush from css generic flush JP Kobryn
2024-12-24  1:14 ` [PATCH 9/9 RFC] cgroup: avoid allocating rstat when flush func not present JP Kobryn
2024-12-24  4:57 ` [PATCH 0/9 RFC] cgroup: separate rstat trees Shakeel Butt
2025-01-08 18:16 ` Michal Koutný
2025-01-13 18:25   ` Shakeel Butt
2025-01-15  1:33     ` JP Kobryn
2025-01-15  1:39       ` Yosry Ahmed
2025-01-15 19:38         ` JP Kobryn
2025-01-15 21:36           ` Yosry Ahmed [this message]
2025-01-16 18:20             ` JP Kobryn
2025-01-16 15:19     ` Michal Koutný
2025-01-16 15:35       ` Yosry Ahmed
2025-01-16 19:03       ` Shakeel Butt
