From: JP Kobryn <inwardvessel@gmail.com>
To: tj@kernel.org, shakeel.butt@linux.dev, yosryahmed@google.com,
mkoutny@suse.com, hannes@cmpxchg.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH v5 0/5] cgroup: separate rstat trees
Date: Fri, 2 May 2025 17:12:17 -0700
Message-ID: <20250503001222.146355-1-inwardvessel@gmail.com>
The current design of rstat takes the approach that if one subsystem is to
be flushed, all other subsystems with pending updates should also be
flushed. A flush may be initiated by reading a specific stat file (like
cpu.stat), and every other subsystem with pending updates is flushed
alongside it. The complexity of flushing some subsystems has grown to the
extent that the overhead of these side flushes causes noticeable delays in
reading the desired stats.
One big area where the issue comes up is system telemetry, where programs
periodically sample cpu stats while the memory controller is enabled.
Programs sampling cpu.stat would benefit if the overhead of also flushing
memory (and io) stats were eliminated. It would save cpu cycles for
existing stat reader programs and improve scalability in terms of sampling
frequency and the number of hosts covered.
This series changes the approach of "flush all subsystems" to "flush only
the requested subsystem". The core design change is moving from a unified
model where rstat trees are shared by subsystems to having separate trees
for each subsystem. On a per-cpu basis, there will be separate trees for
each enabled subsystem that implements css_rstat_flush plus one tree
dedicated to the base stats. In order to do this, the rstat list pointers
were moved off of the cgroup and onto the css. In the transition, these
pointers were changed from struct cgroup to struct cgroup_subsys_state.
Finally, the updated/flush API was changed to accept a reference to a css
instead of a cgroup, which associates a specific subsystem with each update
or flush. The result is that rstat trees are now made up of css nodes, and
a given tree only contains nodes associated with a single subsystem.
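As a rough sketch of the resulting API shape (prototypes simplified; the
exact signatures and renames are in the patches):

	/* before: updates and flushes are keyed by cgroup (unified tree) */
	void cgroup_rstat_updated(struct cgroup *cgrp, int cpu);
	void cgroup_rstat_flush(struct cgroup *cgrp);

	/* after: keyed by css, so each subsystem has its own per-cpu tree */
	void css_rstat_updated(struct cgroup_subsys_state *css, int cpu);
	void css_rstat_flush(struct cgroup_subsys_state *css);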
Since separate trees are now in use, the locking scheme was adjusted. The
global locks were split so that there are separate locks for the base stats
and for each subsystem (memory, io, etc). This allows different subsystems
(and the base stats) to use rstat in parallel without contention.
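A simplified sketch of the per-subsystem locks and the selection performed
in the lock helpers (the helper and base-lock names below are illustrative;
the cgroup_subsys field names follow the patches):

	/* each rstat-aware subsystem carries its own locks */
	struct cgroup_subsys {
		/* ... */
		spinlock_t rstat_ss_lock;
		raw_spinlock_t __percpu *rstat_ss_cpu_lock;
	};

	static spinlock_t *ss_rstat_lock(struct cgroup_subsys_state *css)
	{
		if (css_is_cgroup(css))
			return &rstat_base_lock;	/* base stats (name assumed) */

		return &css->ss->rstat_ss_lock;		/* subsystem stats */
	}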
Breaking up the unified tree into separate trees eliminates the overhead
and scalability issues explained in the first section, but comes at the
cost of additional memory. Originally, each cgroup contained an instance of
the cgroup_rstat_cpu. The design change of moving to css-based trees calls
for each css to have the rstat per-cpu objects instead. Moving these
objects to every css is where the overhead is created. To minimize it, the
cgroup_rstat_cpu struct was split into two separate structs. One is the
cgroup_rstat_base_cpu struct, which only contains the per-cpu base stat
objects used in rstat. The other is the css_rstat_cpu struct, which
contains the minimum set of pointers needed for a css to participate in
rstat. Since only the cgroup::self css is associated with the base stats,
an instance of the cgroup_rstat_base_cpu struct is placed on the cgroup,
while an instance of the css_rstat_cpu is placed on the
cgroup_subsys_state. This allows all css's to participate in rstat while
avoiding the unnecessary inclusion of the base stats. The base stat objects
will only exist once per cgroup regardless of how many subsystems are
enabled. With this division of rstat list pointers and base
stats, the change in memory overhead on a per-cpu basis before/after is
shown below.
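Before the numbers, a condensed sketch of the two structs (field lists are
abbreviated and illustrative; the full definitions are in the patches):

	/* per-cpu rstat linkage, embedded in every cgroup_subsys_state;
	 * two pointers, i.e. 16 bytes on a 64-bit kernel
	 */
	struct css_rstat_cpu {
		struct cgroup_subsys_state *updated_children;
		struct cgroup_subsys_state *updated_next;
	};

	/* per-cpu base stat bookkeeping, embedded only in struct cgroup,
	 * so it exists once per cgroup regardless of enabled subsystems
	 */
	struct cgroup_rstat_base_cpu {
		struct u64_stats_sync bsync;
		struct cgroup_base_stat bstat;
		struct cgroup_base_stat last_bstat;
		/* ... remaining base stat fields ... */
	};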
memory overhead before:

	nr_cgroups * sizeof(struct cgroup_rstat_cpu)

	where
		sizeof(struct cgroup_rstat_cpu) = 144 bytes /* config-dependent */

	resulting in
		nr_cgroups * 144 bytes

memory overhead after:

	nr_cgroups * (
		sizeof(struct cgroup_rstat_base_cpu) +
		sizeof(struct css_rstat_cpu) * (1 + nr_rstat_controllers)
	)

	where
		sizeof(struct cgroup_rstat_base_cpu) = 128 bytes
		sizeof(struct css_rstat_cpu) = 16 bytes
		the constant "1" accounts for the cgroup::self css
		nr_rstat_controllers = number of controllers defining css_rstat_flush

	when both memory and io are enabled
		nr_rstat_controllers = 2

	resulting in
		nr_cgroups * (128 + 16 * (1 + 2))
		nr_cgroups * 176 bytes

This leaves us with an increase in memory overhead of:

	32 bytes per cgroup per cpu
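As an illustrative example (the cgroup count below is assumed, not
measured): on the 52-cpu test machine with 1000 cgroups, the added overhead
would be roughly 1000 * 52 * 32 bytes, i.e. about 1.6 MiB across all cpus.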
Validation was performed by reading some *.stat files of a target parent
cgroup while the system was under different workloads. A test program was
written to loop 1M times, reading cgroup.stat, cpu.stat, io.stat, and
memory.stat of the parent cgroup on each iteration. Using an unpatched
kernel as the control and this series as the experiment, the results show
performance gains when reading stats.
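A minimal sketch of the reader loop (the parent cgroup path is a
placeholder; the actual test program may differ in details):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		static const char *files[] = {
			"/sys/fs/cgroup/parent/cgroup.stat",
			"/sys/fs/cgroup/parent/cpu.stat",
			"/sys/fs/cgroup/parent/io.stat",
			"/sys/fs/cgroup/parent/memory.stat",
		};
		char buf[64 * 1024];

		for (int i = 0; i < 1000000; i++) {
			for (unsigned int j = 0; j < 4; j++) {
				int fd = open(files[j], O_RDONLY);

				if (fd < 0)
					return 1;
				/* reading the file is what triggers the flush */
				while (read(fd, buf, sizeof(buf)) > 0)
					;
				close(fd);
			}
		}
		return 0;
	}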
The first experiment consisted of a parent cgroup with memory.swap.max=0
and memory.max=1G. On a 52-cpu machine, 26 child cgroups were created and
within each child cgroup a process was spawned to frequently update the
memory cgroup stats by creating and then reading a file of size 1T
(encouraging reclaim). The test program was run alongside these 26 tasks in
parallel. The results showed time and perf gains for the reader test
program.
test program elapsed time
	control:
		real	1m29.929s
		user	0m0.933s
		sys	1m28.525s
	experiment:
		real	1m3.604s
		user	0m0.828s
		sys	1m2.497s

test program perf
	control:
		29.47%	mem_cgroup_css_rstat_flush
		 5.09%	__blkcg_rstat_flush
		 0.07%	cpu_stat_show
	experiment:
		 6.89%	mem_cgroup_css_rstat_flush
		 0.31%	blkcg_print_stat
		 0.07%	cpu_stat_show
It's worth noting that memcg uses heuristics to optimize flushing.
Depending on the state of updated stats at a given time, a memcg flush may
be considered unnecessary and skipped as a result. This opportunity to skip
a flush is bypassed when memcg is flushed as a consequence of sharing the
tree with another controller.
A second experiment was set up on the same host using a parent cgroup with
two child cgroups. In the two child cgroups, kernel builds were done in
parallel, each using "-j 20". The perf comparison is shown below.
test program elapsed time
	control:
		real	1m59.647s
		user	0m1.263s
		sys	1m57.511s
	experiment:
		real	1m0.328s
		user	0m1.077s
		sys	0m58.834s

test program perf
	control:
		35.69%	mem_cgroup_css_rstat_flush
		 4.49%	__blkcg_rstat_flush
		 0.07%	cpu_stat_show
		 0.05%	cgroup_base_stat_cputime_show
	experiment:
		 2.04%	mem_cgroup_css_rstat_flush
		 0.18%	blkcg_print_stat
		 0.09%	cpu_stat_show
		 0.09%	cgroup_base_stat_cputime_show
The final experiment differs from the previous two in that it measures
performance from the stat updater's perspective. A kernel build was run in
a child cgroup with -j 20 on the same host and cgroup setup. A baseline was
established by having the build run while no stats were read. The builds
were then repeated while stats were constantly being read. In all cases,
perf appeared similar in cycles spent on cgroup_rstat_updated()
(insignificant compared to the other recorded events). As for the elapsed
build times, the results for the different scenarios are shown below and
reveal no significant drawback to the split-tree approach.
control with no readers
	real	5m11.548s
	user	84m45.072s
	sys	3m52.069s

control with constant readers of {memory,io,cpu,cgroup}.stat
	real	5m13.619s
	user	85m1.847s
	sys	4m5.379s

experiment with no readers
	real	5m12.557s
	user	84m54.966s
	sys	3m53.383s

experiment with constant readers of {memory,io,cpu,cgroup}.stat
	real	5m12.548s
	user	84m56.313s
	sys	3m54.955s
changelog
v5:
new patch for using css_is_cgroup() in more places
new patch adding is_css_rstat() helper
new patch documenting circumstances behind where css_rstat_init occurs
check if css is cgroup early in css_rstat_flush()
remove ss->css_rstat_flush check in flush loop
fix css_rstat_flush where "pos" should be used instead of "css"
change lockdep text in __css_rstat_lock/unlock()
remove unnecessary base lock init in ss_rstat_init()
guard against invalid css in css_rstat_updated/flush()
guard against invalid css in css_rstat_init/exit()
call css_rstat_updated/flush and css_rstat_init/exit unconditionally
consolidate calls to css_rstat_exit() into one (aside from error cases)
eliminate call to css_rstat_init() in cgroup_init() for ss->early_init
move comment changes to matching commits where applicable
fix comment with mention of stale function css_rstat_flush_locked()
fix comment referring to "cgroup" where "css" should be used
v4:
drop bpf api patch
drop cgroup_rstat_cpu split and union patch,
replace with patch for moving base stats into new struct
new patch for renaming rstat APIs from cgroup_* to css_*
new patch for adding css_is_cgroup() helper
rename ss->lock and ss->cpu_lock to ss->rstat_ss_lock and
ss->rstat_ss_cpu_lock respectively
rename root_self_stat_cpu to root_base_rstat_cpu
rename cgroup_rstat_push_children to css_rstat_push_children
format comments for consistency in wings and capitalization
update comments in bpf selftests
v3:
new bpf kfunc api for updated/flush
rename cgroup_rstat_{updated,flush} and related to "css_rstat_*"
check for ss->css_rstat_flush existence where applicable
rename locks for base stats
move subsystem locks to cgroup_subsys struct
change cgroup_rstat_boot() to ss_rstat_init(ss) and init locks within
change lock helpers to accept css and perform lock selection within
fix comments that had outdated lock names
add open css_is_cgroup() helper
rename rstatc to rstatbc to reflect base stats in use
rename cgroup_dfl_root_rstat_cpu to root_self_rstat_cpu
add comments in early init code to explain deferred allocation
misc formatting fixes
v2:
drop the patch creating a new cgroup_rstat struct and related code
drop bpf-specific patches. instead just use cgroup::self in bpf progs
drop the cpu lock patches. instead select cpu lock in updated_list func
relocate the cgroup_rstat_init() call to inside css_create()
relocate the cgroup_rstat_exit() cleanup from apply_control_enable()
to css_free_rwork_fn()
v1:
https://lore.kernel.org/all/20250218031448.46951-1-inwardvessel@gmail.com/
JP Kobryn (5):
cgroup: use helper for distinguishing css in callbacks
cgroup: use separate rstat trees for each subsystem
cgroup: use subsystem-specific rstat locks to avoid contention
cgroup: helper for checking rstat participation of css
cgroup: document the rstat per-cpu initialization
block/blk-cgroup.c | 2 +-
include/linux/cgroup-defs.h | 78 +++--
include/trace/events/cgroup.h | 12 +-
kernel/cgroup/cgroup-internal.h | 2 +-
kernel/cgroup/cgroup.c | 41 +--
kernel/cgroup/rstat.c | 310 +++++++++++-------
.../selftests/bpf/progs/btf_type_tag_percpu.c | 18 +-
7 files changed, 289 insertions(+), 174 deletions(-)
--
2.47.1