From: JP Kobryn <inwardvessel@gmail.com>
To: Tejun Heo <tj@kernel.org>
Cc: shakeel.butt@linux.dev, mhocko@kernel.org, hannes@cmpxchg.org,
	yosryahmed@google.com, akpm@linux-foundation.org,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	kernel-team@meta.com
Subject: Re: [PATCH 00/11] cgroup: separate rstat trees
Date: Thu, 27 Feb 2025 15:44:12 -0800
Message-ID: <bbb633a4-7007-4444-a391-3305a9fc8ffa@gmail.com>
In-Reply-To: <Z7dPZ9dNcaYuT6SA@slm.duckdns.org>

On 2/20/25 7:51 AM, Tejun Heo wrote:
> Hello,
> 
> On Mon, Feb 17, 2025 at 07:14:37PM -0800, JP Kobryn wrote:
> ...
>> The first experiment consisted of a parent cgroup with memory.swap.max=0
>> and memory.max=1G. On a 52-cpu machine, 26 child cgroups were created and
>> within each child cgroup a process was spawned to drive memory cgroup stat
>> updates by creating and then reading a 1T file (encouraging reclaim). These
>> 26 tasks were run in parallel. While this was going on, a custom program was
>> used to open the cpu.stat file of the parent cgroup, read the entire file 1M
>> times, then close it. The perf report for
>> the task performing the reading showed that most of the cycles (42%) were
>> spent in the function mem_cgroup_css_rstat_flush() on the control side. It
>> also showed a smaller but significant number of cycles spent in
>> __blkcg_rstat_flush(). The perf report for the patched kernel differed in
>> that no cycles were spent in these functions; instead, most cycles were
>> spent in cgroup_base_stat_flush(). Aside from the perf reports, the time
>> spent running the program reading cpu.stat improved on the experimental
>> kernel relative to the control; the reduction was in kernel mode.
>>
>> before:
>> real    0m18.449s
>> user    0m0.209s
>> sys     0m18.165s
>>
>> after:
>> real    0m6.080s
>> user    0m0.170s
>> sys     0m5.890s
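
A minimal sketch of such a reader in C, assuming a hypothetical cgroup
path and a single open with a rewind before each pass (the actual
program may differ):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    /* hypothetical path; substitute the parent cgroup's cpu.stat */
    int fd = open("/sys/fs/cgroup/parent/cpu.stat", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (int i = 0; i < 1000000; i++) {
        /* rewind so each pass re-reads, and therefore re-flushes,
         * the whole file */
        if (lseek(fd, 0, SEEK_SET) < 0) {
            perror("lseek");
            return 1;
        }
        while (read(fd, buf, sizeof(buf)) > 0)
            ;
    }
    close(fd);
    return 0;
}

Keeping the fd open across iterations isolates the per-read flush cost
from open/close overhead; timing the run with time(1) yields figures
like those above.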
>>
>> Another experiment on the same host was set up using a parent cgroup with
>> two child cgroups, with the same swap and memory limits as in the previous
>> experiment. In the two child cgroups, kernel builds were done in parallel,
>> each using "-j 20". The program from the previous experiment was used to
>> perform 1M reads of the parent cpu.stat file. The perf comparison showed
>> results similar to the previous experiment: on the control side, a majority
>> of cycles (42%) were spent in mem_cgroup_css_rstat_flush(), with significant
>> cycles in __blkcg_rstat_flush(); on the experimental side, most cycles were
>> spent in cgroup_base_stat_flush() and none were spent flushing memory or io.
>> As for the time taken by the program reading cpu.stat, measurements are
>> shown below.
>>
>> before:
>> real    0m17.223s
>> user    0m0.259s
>> sys     0m16.871s
>>
>> after:
>> real    0m6.498s
>> user    0m0.237s
>> sys     0m6.220s
>>
>> For the final experiment, perf events were recorded during a kernel build
>> with the same host and cgroup setup. The builds took place in the child
>> cgroups. Control and experimental sides showed similar cycle counts for
>> cgroup_rstat_updated(), and in both cases these were insignificant compared
>> to the other events recorded during the workload.
> 
> One of the reasons why the original design used one rstat tree is that
> readers, in addition to writers, can often be correlated too - e.g. you'd
> often have periodic monitoring tools which poll all the major stat files
> periodically. Splitting the trees will likely make those at least a bit
> worse. Can you test how much worse that'd be? I.e., repeat the above tests
> but read all the major stat files - cgroup.stat, cpu.stat, memory.stat and
> io.stat.

Sure. I changed the experiment to read all of these files. It still
showed an improvement in performance. You can see the details in
v2 [0], which I sent out earlier today.

[0] https://lore.kernel.org/all/20250227215543.49928-1-inwardvessel@gmail.com/
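
A minimal sketch of that multi-file variant, again with a hypothetical
cgroup path (see [0] for the actual details):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* the file list follows the suggestion above; the parent cgroup
 * path is an assumption */
static const char *files[] = {
    "/sys/fs/cgroup/parent/cgroup.stat",
    "/sys/fs/cgroup/parent/cpu.stat",
    "/sys/fs/cgroup/parent/memory.stat",
    "/sys/fs/cgroup/parent/io.stat",
};

int main(void)
{
    char buf[4096];

    for (int i = 0; i < 1000000; i++) {
        for (size_t j = 0; j < sizeof(files) / sizeof(files[0]); j++) {
            int fd = open(files[j], O_RDONLY);

            if (fd < 0) {
                perror(files[j]);
                return 1;
            }
            while (read(fd, buf, sizeof(buf)) > 0)
                ;
            close(fd);
        }
    }
    return 0;
}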
> 
> Thanks.
> 


