From: Yosry Ahmed <yosryahmed@google.com>
To: Yonghong Song <yhs@fb.com>
Cc: Tejun Heo <tj@kernel.org>, Alexei Starovoitov <ast@kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Johannes Weiner <hannes@cmpxchg.org>, Hao Luo <haoluo@google.com>,
Shakeel Butt <shakeelb@google.com>,
Stanislav Fomichev <sdf@google.com>,
David Rientjes <rientjes@google.com>, bpf <bpf@vger.kernel.org>,
KP Singh <kpsingh@kernel.org>,
cgroups@vger.kernel.org, Linux-MM <linux-mm@kvack.org>
Subject: Re: [RFC bpf-next] Hierarchical Cgroup Stats Collection Using BPF
Date: Mon, 28 Mar 2022 02:22:06 -0700 [thread overview]
Message-ID: <CAJD7tkYK7A1Vn+LRo9xZA+K7BuRmWeUyLX6XE-g-MBf8myLn6Q@mail.gmail.com> (raw)
In-Reply-To: <f049c2f6-499b-ff7a-3910-38487878606a@fb.com>
On Tue, Mar 22, 2022 at 11:09 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 3/16/22 9:35 AM, Yosry Ahmed wrote:
> > Hi Tejun,
> >
> > Thanks for taking the time to read my proposal! Sorry for the late
> > reply. This email skipped my inbox for some reason.
> >
> > On Sun, Mar 13, 2022 at 10:35 PM Tejun Heo <tj@kernel.org> wrote:
> >>
> >> Hello,
> >>
> >> On Wed, Mar 09, 2022 at 12:27:15PM -0800, Yosry Ahmed wrote:
> >> ...
> >>> These problems are already addressed by the rstat aggregation
> >>> mechanism in the kernel, which is primarily used for memcg stats. We
> >>
> >> Not that it matters all that much but I don't think the above statement is
> >> true given that sched stats are an integrated part of the rstat
> >> implementation and io was converted before memcg.
> >>
> >
> > Excuse my ignorance, I am new to kernel development. I only saw calls
> > to cgroup_rstat_updated() in memcg and io and assumed they were the
> > only users. Now I found cpu_account_cputime() :)
> >
> >>> - For every cgroup, we will either use flags to distinguish BPF stats
> >>> updates from normal stats updates, or flush both anyway (memcg stats
> >>> are periodically flushed anyway).
> >>
> >> I'd just keep them together. Usually most activities tend to happen
> >> together, so it's cheaper to aggregate all of them in one go in most cases.
> >
> > This makes sense to me, thanks.
> >
> >>
> >>> - Provide flags to enable/disable using per-cpu arrays (for stats that
> >>> are not updated frequently), and to enable/disable hierarchical
> >>> aggregation (non-hierarchical stats can still benefit from the
> >>> automatic entry creation & deletion).
> >>> - Provide different hierarchical aggregation operations : SUM, MAX, MIN, etc.
> >>> - Instead of an array as the map value, use a struct, and let the user
> >>> provide an aggregator function in the form of a BPF program.
> >>
> >> I'm more partial to the last option. It does make the usage a bit more
> >> complicated, but hopefully it shouldn't be too bad with good examples.
> >>
> >> I don't have strong opinions on the bpf side of things but it'd be great to
> >> be able to use rstat from bpf.
> >
> > It indeed gives more flexibility but is more complicated. Also, I am
> > not sure about the overhead of calling BPF programs in every
> > aggregation step. Looking forward to getting feedback on the bpf side
> > of things.
>
> Hi, Yosry, I heard this was discussed in the bpf office hour, which I
> didn't attend. Could you summarize the conclusions and the next steps?
> We also have an internal tool that collects cgroup stats, and this
> might help us as well. Thanks!
>
> >
> >>
> >> Thanks.
> >>
> >> --
> >> tejun
Hi Yonghong,
Hao has already done an excellent job summarizing the outcome of the meeting.
The idea I have is basically to introduce "rstat flushing" BPF
programs. BPF programs that collect and display stats would use
helpers to call cgroup_rstat_flush() and cgroup_rstat_updated() (or
similar). rstat would then invoke the "rstat flushing" BPF programs
during flushes, similar to how it calls css_rstat_flush().
I will work on RFC patches for this soon. Let me know if you have
any comments/suggestions/feedback.
Thanks!
Thread overview: 13+ messages
2022-03-09 20:27 Yosry Ahmed
2022-03-14 5:35 ` Tejun Heo
2022-03-16 16:35 ` Yosry Ahmed
2022-03-22 18:09 ` Yonghong Song
2022-03-22 21:37 ` Hao Luo
2022-03-22 22:06 ` Yonghong Song
2022-03-28 9:22 ` Yosry Ahmed [this message]
2022-03-16 6:04 ` Song Liu
2022-03-16 16:11 ` Yosry Ahmed
2022-03-16 16:13 ` Yosry Ahmed
2022-03-16 16:31 ` Tejun Heo
2022-03-18 19:59 ` Song Liu
2022-03-28 9:16 ` Yosry Ahmed