From: Yosry Ahmed <yosryahmed@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
Yafang Shao <laoar.shao@gmail.com>,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Shakeel Butt <shakeelb@google.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Hellwig <hch@infradead.org>,
"David S. Miller" <davem@davemloft.net>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>, Tejun Heo <tj@kernel.org>,
Martin KaFai Lau <kafai@fb.com>, bpf <bpf@vger.kernel.org>,
Kernel Team <kernel-team@fb.com>, linux-mm <linux-mm@kvack.org>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
Mina Almasry <almasrymina@google.com>
Subject: Re: cgroup specific sticky resources (was: Re: [PATCH bpf-next 0/5] bpf: BPF specific memory allocator.)
Date: Tue, 19 Jul 2022 11:01:40 -0700 [thread overview]
Message-ID: <CAJD7tkYnYbnrCNo_4NU1MPDTU3Bg5bEP6KMR=ha0EMG6Y5Y+Cw@mail.gmail.com> (raw)
In-Reply-To: <CAJD7tkbieq_vDxwnkk_jTYz9Fe1t5AMY6b3Q=8O-ag9YLo9uZg@mail.gmail.com>
+Mina Almasry
On Tue, Jul 19, 2022 at 11:00 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Jul 19, 2022 at 4:30 AM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 18-07-22 10:55:59, Yosry Ahmed wrote:
> > > On Tue, Jul 12, 2022 at 7:39 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > On Tue, Jul 12, 2022 at 11:52:11AM +0200, Michal Hocko wrote:
> > > > > On Tue 12-07-22 16:39:48, Yafang Shao wrote:
> > > > > > On Tue, Jul 12, 2022 at 3:40 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > [...]
> > > > > > > > Roman already sent reparenting fix:
> > > > > > > > https://patchwork.kernel.org/project/netdevbpf/patch/20220711162827.184743-1-roman.gushchin@linux.dev/
> > > > > > >
> > > > > > > Reparenting is nice but not a silver bullet. Consider a shallow
> > > > > > > hierarchy where the charging happens in the first level under the root
> > > > > > > memcg. Reparenting to the root is just pushing everything under the
> > > > > > > system resources category.
> > > > > > >
> > > > > >
> > > > > > Agreed. That's why I don't like reparenting.
> > > > > > Reparenting just reparents the already charged pages and redirects
> > > > > > new charges, but it can't reparent the 'limit' of the original
> > > > > > memcg. So there is a risk if the original memcg is still being
> > > > > > charged. We have to forbid the destruction of the original memcg.
> > > >
> > > > I agree, I also don't like reparenting for the !kmem case. For kmem
> > > > (and *maybe* bpf maps are an exception), I don't think there is a
> > > > better choice.
> > > >
> > > > > yes, I was toying with an idea like that. I guess we really want a
> > > > > measure to keep cgroups around if they are bound to a resource which
> > > > > is sticky itself. I am not sure how many other BPF-like (i.e.
> > > > > module-like) resources we already charge to memcg, but considering
> > > > > the potential memory consumption, just reparenting will not help in
> > > > > the general case I am afraid.
> > > >
> > > > Well, then we have to make these objects first-class citizens in the
> > > > cgroup API, like processes. E.g. introduce cgroup.bpf.maps,
> > > > cgroup.mounts.tmpfs etc. I can easily see some value here, but it's a
> > > > big API change.
> > > >
> > > > With the current approach, where a bpf map pins the memory cgroup of
> > > > the creator process (which I think is completely transparent for most
> > > > bpf users), I don't think preventing the deletion of such a cgroup is
> > > > possible. It will break too many things.
> > > >
> > > > But honestly I don't see why userspace can't handle it. If there is a
> > > > cgroup which contains shared bpf maps, why would userspace delete it?
> > > > It's a weird use case, and I don't think we have to optimize for it.
> > > > Also, we do a ton of optimizations for live cgroups (e.g. css
> > > > refcounting being percpu) which do not work for a deleted cgroup, so
> > > > no one should really expect any properties from dying cgroups.
> > > >
> > >
> > > Just a random thought here, and I can easily be wrong (and this can
> > > easily be the wrong thread for this), but what if we introduce a more
> > > generic concept to explicitly tie a resource to a cgroup (tmpfs, bpf
> > > maps, etc.) using cgroupfs interfaces, and then prevent the cgroup
> > > from being deleted unless the resource is freed or moved to a
> > > different cgroup?
> >
> > My understanding is that Tejun would prefer a userspace-defined policy
> > via proper layering. And I would tend to agree that this is less prone
> > to corner cases.
> >
> > Anyway, how would you envision such an interface?
>
> I imagine something like cgroup.sticky.[bpf/tmpfs/..] (I suck at naming things).
>
> The file can contain a list of bpf map ids or tmpfs inode ids or mount
> paths. You can write a new id/path to the file to add one, or read the
> file to see a list of them. Basically very similar semantics to
> cgroup.procs. Instead of processes, we have different types of sticky
> resources here that usually outlive processes. The charging of such
> resources would be deterministically attributed to this cgroup
> (instead of the cgroup of the process that created the map or touched
> the tmpfs file first).
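>
> To make that concrete, here is a minimal userspace sketch. It is
> purely illustrative: the cgroup.sticky.bpf file and the helper name
> are hypothetical and only exist in this proposal; the rest is plain
> libc. Writing a map id would work much like writing a pid into
> cgroup.procs:
>
> #include <fcntl.h>
> #include <stdio.h>
> #include <unistd.h>
>
> /* Pin a bpf map (by id) to a cgroup by writing the id into the
>  * hypothetical cgroup.sticky.bpf file of that cgroup. */
> static int pin_map_to_cgroup(const char *cgroup_path, unsigned int map_id)
> {
>         char path[256], buf[32];
>         int fd, len, ret = 0;
>
>         snprintf(path, sizeof(path), "%s/cgroup.sticky.bpf", cgroup_path);
>         fd = open(path, O_WRONLY);
>         if (fd < 0)
>                 return -1;
>
>         len = snprintf(buf, sizeof(buf), "%u\n", map_id);
>         if (write(fd, buf, len) != len)
>                 ret = -1;
>         close(fd);
>         return ret;
> }
>
> e.g. pin_map_to_cgroup("/sys/fs/cgroup/workload", 42); where the
> cgroup path and map id are of course made up for the example.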
>
> Since the system admin has explicitly attributed these resources to
> this cgroup, it makes sense that the cgroup cannot be deleted as long
> as these resources exist and are charged to it, similar to
> cgroup.procs. This also addresses some of the zombie cgroup problems.
>
> The obvious question is: to maintain the current behavior, bpf maps
> and tmpfs mounts will by default initially be charged the same way as
> today: bpf maps to the cgroup of the process that created them, and
> tmpfs pages on a first-touch basis. How do we move their charging to a
> cgroup.sticky file after they have already been created and charged
> the current way?
>
> For things like bpf maps, it should be relatively simple to move
> charges the same way we do when we move processes, because a bpf map
> is by default charged to one memcg. For tmpfs, it would be challenging
> to move the charges because pages could already be individually
> attributed to different cgroups. For this, we can impose an (imo
> reasonable) restriction for tmpfs (and similar resources that are
> currently not attributed to a single cgroup): a tmpfs mount can only
> be added to cgroup.sticky.tmpfs for the first time if no pages have
> been charged yet (no files created). Once the tmpfs mount lives in any
> cgroup.sticky.tmpfs file, we know that it is charged to one cgroup,
> and we can move the charges more easily.
>
> The system admin would basically mount the tmpfs directory and then
> directly write it to a cgroup.sticky.tmpfs file. From that point
> onwards, all tmpfs pages will be charged to that cgroup, and they need
> to be freed or moved before the cgroup can be removed.
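>
> As a rough illustration (a sketch only: the mount point, the cgroup
> path, and the cgroup.sticky.tmpfs file are all hypothetical, and error
> handling is trimmed), that admin-side sequence could look like:
>
> #include <fcntl.h>
> #include <string.h>
> #include <sys/mount.h>
> #include <sys/stat.h>
> #include <unistd.h>
>
> int main(void)
> {
>         const char *mnt = "/mnt/shared-tmpfs";
>         int fd;
>
>         /* 1) Mount the tmpfs before any file is created in it, so no
>          *    pages have been charged anywhere yet. */
>         mkdir(mnt, 0755);
>         if (mount("tmpfs", mnt, "tmpfs", 0, "size=1G"))
>                 return 1;
>
>         /* 2) Register the mount with the target cgroup; from this
>          *    point on its pages would be charged there. */
>         fd = open("/sys/fs/cgroup/workload/cgroup.sticky.tmpfs", O_WRONLY);
>         if (fd < 0)
>                 return 1;
>         if (write(fd, mnt, strlen(mnt)) < 0)
>                 return 1;
>         return close(fd) ? 1 : 0;
> }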
>
> I could be missing something here, but I imagine such an interface
> would address both the bpf progs/maps charging concerns and our
> previous memcg= mount attempts. Please correct me if I am wrong.
>
> >
> > > This would be optional, so the current status quo can be maintained,
> > > but it also gives admins the flexibility to assign resources to
> > > cgroups to make sure nothing is unaccounted, accounted to a zombie
> > > memcg, or reparented to an unrelated parent. This might be too
> > > fine-grained to be practical, but I just thought it might be useful.
> > > We will also need to define an OOM behavior for such resources.
> > > Things like bpf maps will be unreclaimable, but tmpfs memory can be
> > > swapped out.
> >
> > Keep in mind that swap is a shared resource in itself. So tmpfs is
> > essentially a sticky resource as well. A tmpfs file is not bound to
> > any process lifetime the same way a BPF program is. You might need
> > fewer privileges to remove a file, but in principle such files consume
> > resources without any explicit owner.
>
> I fully agree, which is exactly why I am suggesting having a defined
> way to handle such "sticky" resources that outlive processes.
> Currently cgroups operate in terms of processes and this can be a
> limitation for such resources.
>
> > --
> > Michal Hocko
> > SUSE Labs