From: Michal Hocko <mhocko@suse.com>
To: Tejun Heo <tj@kernel.org>
Cc: Yafang Shao <laoar.shao@gmail.com>,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Shakeel Butt <shakeelb@google.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Hellwig <hch@infradead.org>,
"David S. Miller" <davem@davemloft.net>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <kafai@fb.com>, bpf <bpf@vger.kernel.org>,
Kernel Team <kernel-team@fb.com>, linux-mm <linux-mm@kvack.org>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH bpf-next 0/5] bpf: BPF specific memory allocator.
Date: Mon, 18 Jul 2022 16:13:31 +0200
Message-ID: <YtVqi3IqnYpE3R46@dhcp22.suse.cz>
In-Reply-To: <Ys2gNCAyYGX3XVMm@slm.duckdns.org>

On Tue 12-07-22 06:24:20, Tejun Heo wrote:
> Hello, Michal.
>
> On Tue, Jul 12, 2022 at 11:52:11AM +0200, Michal Hocko wrote:
> > > Agreed. That's why I don't like reparenting.
> > > Reparenting just reparents the charged pages and redirects new
> > > charges, but it can't reparent the 'limit' of the original memcg.
> > > So it is a risk if the original memcg is still being charged; we
> > > have to forbid the destruction of the original memcg.
> >
> > Yes, I was toying with an idea like that. I guess we really want a
> > measure to keep cgroups around if they are bound to a resource which is
> > itself sticky. I am not sure how many other BPF-like (i.e. module-like)
> > resources we already charge to memcg, but considering the potential
> > memory consumption, I am afraid just reparenting will not help in the
> > general case.
>
> I think the solution here is an extra cgroup layering to represent
> persistent resource tracking. In systemd-speak, a service should have a
> cgroup representing a persistent service type and a cgroup representing the
> current running instance. This way, the user (or system agent) can clearly
> distinguish all resources that have ever been attributed to the service and
> the resources that are accounted to the current instance while also giving
> visibility into residual resources for services that are no longer running.

Just to make sure I follow what you mean here: you are suggesting that
the cgroup manager would ensure that there is a parent cgroup responsible
for the resource, and that this cgroup will not go away as long as the
resources exist (well, modulo admin interference, which is not really an
interesting use case).
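
To make the layering concrete, the sketch below is how I read your
proposal: a persistent cgroup per service that outlives any instance,
with a short-lived child per running instance. The paths and names are
made up purely for illustration and this is not meant to describe what
systemd actually does today:

/*
 * Illustrative layout only: a persistent per-service cgroup plus a
 * short-lived per-instance child under cgroup v2.  Paths and names are
 * hypothetical; error handling is minimal.
 */
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

#define PERSISTENT "/sys/fs/cgroup/example.service"        /* sticky charges accumulate here */
#define INSTANCE   "/sys/fs/cgroup/example.service/run-1"  /* current instance, removed on stop */

int main(void)
{
	/* The parent lives for as long as the service definition exists. */
	if (mkdir(PERSISTENT, 0755) && errno != EEXIST) {
		perror("mkdir persistent");
		return 1;
	}
	/* The child holds the processes of the current run. */
	if (mkdir(INSTANCE, 0755) && errno != EEXIST) {
		perror("mkdir instance");
		return 1;
	}

	/* ... the service manager would move the instance's processes
	 * into INSTANCE/cgroup.procs and run them here ... */

	/*
	 * On shutdown only the instance cgroup is removed.  Whatever the
	 * instance left pinned beyond its processes' lifetime stays
	 * attributed under the persistent parent, which the admin can
	 * still inspect.
	 */
	if (rmdir(INSTANCE))
		perror("rmdir instance");
	return 0;
}

With a layout like this, anything sticky ends up attributed to the
persistent parent once the instance cgroup goes away, so there is always
a userland-visible entity to look at.
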
I do agree that this approach would be easier from the kernel
perspective. It is also more error prone, though, because some resources
might be so runtime specific that it is hard to predict and configure for
them in advance. My thinking about "sticky" cgroups was based on a
reference-counting approach: when the kernel knows that a resource
requires some sort of teardown which is not scoped to a process lifetime,
it would take a reference on the cgroup to keep it alive. I can see a
concern that this can get quite subtle in many cases though.
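
To illustrate the lifecycle I have in mind, here is a toy userspace model
of that reference counting. The names are hypothetical and this is
deliberately not the kernel's css_get/css_put machinery; it only shows
who would pin what and for how long:

/*
 * Toy userspace model of a "sticky" cgroup reference.  Hypothetical
 * types and helpers; not the kernel API.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_cgroup {
	const char *name;
	int refcount;			/* one reference per user, including sticky resources */
};

static void cgroup_get(struct toy_cgroup *cg)
{
	cg->refcount++;
}

static void cgroup_put(struct toy_cgroup *cg)
{
	if (--cg->refcount == 0) {
		printf("cgroup %s finally released\n", cg->name);
		free(cg);
	}
}

/* A resource whose teardown is not tied to any process lifetime. */
struct sticky_resource {
	struct toy_cgroup *owner;	/* pins the cgroup it is charged to */
};

static struct sticky_resource *sticky_create(struct toy_cgroup *cg)
{
	struct sticky_resource *res = malloc(sizeof(*res));

	cgroup_get(cg);			/* keep the cgroup alive beyond its removal */
	res->owner = cg;
	return res;
}

static void sticky_destroy(struct sticky_resource *res)
{
	cgroup_put(res->owner);		/* the pin is dropped only at real teardown */
	free(res);
}

int main(void)
{
	struct toy_cgroup *cg = malloc(sizeof(*cg));

	cg->name = "example";
	cg->refcount = 1;		/* the "directory" reference */

	struct sticky_resource *res = sticky_create(cg);

	cgroup_put(cg);			/* admin removes the cgroup directory */
	sticky_destroy(res);		/* last reference gone, cgroup goes away */
	return 0;
}

The point is that the teardown of the resource, not the exit of any
process, is what drops the last reference.
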
> This gives userspace control over what to track and for how long, and
> also fits what the kernel can do in terms of resource tracking. If we
> try to do something smart from the kernel side, there are cases which
> are inherently unsolvable. E.g. if a service instance creates tmpfs /
> shmem / whatever, leaves it pinned one way or another and then exits,
> and no one actively accesses it afterwards, there is no userland-visible
> entity we can reasonably attribute that memory to other than the parent
> cgroup.

Yeah, tmpfs would be another example, and an even more complex one,
because a single file's pages can "belong" to different memcgs.
--
Michal Hocko
SUSE Labs