From: Shakeel Butt
Date: Mon, 22 Aug 2022 09:12:25 -0700
Subject: Re: [RFD RESEND] cgroup: Persistent memory usage tracking
To: Tejun Heo, Mina Almasry
Cc: Yafang Shao, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin Lau, Song Liu, Yonghong Song, john fastabend, KP Singh,
    Stanislav Fomichev, Hao Luo, jolsa@kernel.org, Johannes Weiner,
    Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, Zefan Li,
    Cgroups, netdev, bpf, Linux MM,
    Yosry Ahmed, Dan Schatzberg, Lennart Poettering

Ccing Mina.

On Mon, Aug 22, 2022 at 4:29 AM Tejun Heo wrote:
>
> (Sorry, this is a resend. I messed up the header in the first posting.)
>
> Hello,
>
> This thread started on a bpf-specific memory tracking change proposal
> and went south, but a lot of people who would be interested are already
> cc'd, so I'm hijacking it to discuss what to do w/ persistent memory
> usage tracking.
>
> Cc'ing Mina and Yosry who were involved in the discussions on the
> similar problem re. tmpfs, Dan Schatzberg who has a lot more prod
> knowledge and experience than me, and Lennart for his thoughts from the
> systemd side.
>
> The root problem is that there are resources (almost solely memory
> currently) that outlive a given instance of a, to use systemd lingo,
> service. Page cache is the most common case.
>
> Let's say there's system.slice/hello.service. When it runs for the
> first time, page cache backing its binary will be charged to
> hello.service. However, when it restarts after e.g. a config change and
> the initial hello.service cgroup gets destroyed, we reparent the page
> cache charges to the parent system.slice, and when the second instance
> starts, its binary will stay charged to system.slice. Over time, some
> of it may get reclaimed and refaulted into the new hello.service, but
> that's not guaranteed and most hot pages likely won't.
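
As a rough illustration (not from the original mail): assuming cgroup2 is
mounted at /sys/fs/cgroup and the system.slice/hello.service layout from
the example above, a sketch like the following could be run before and
after a restart to watch the page cache stay charged to the slice. The
counters are hierarchical, so the telling part is how small the new
instance's own "file" count is afterwards.

  #!/usr/bin/env python3
  # Sketch: print the cgroup v2 page cache ("file") counter for the
  # slice and the service.  Paths assume cgroup2 at /sys/fs/cgroup and
  # the hello.service example above; adjust as needed.

  def file_bytes(cgroup):
      # memory.stat is "<key> <value>" per line; "file" is page cache.
      with open(f"/sys/fs/cgroup/{cgroup}/memory.stat") as f:
          for line in f:
              key, value = line.split()
              if key == "file":
                  return int(value)
      return 0

  for cg in ("system.slice", "system.slice/hello.service"):
      try:
          print(f"{cg}: {file_bytes(cg)} bytes of page cache charged")
      except FileNotFoundError:
          print(f"{cg}: cgroup not present")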
> The same problem exists for any memory which is not freed synchronously
> when the current instance exits. While this isn't a problem in many
> cases, it's not difficult to imagine situations where the amount of
> memory which ends up getting pushed to the parent is significant, even a
> clear majority, with a big page cache footprint, persistent tmpfs
> instances and so on, creating issues with accounting accuracy and thus
> control.
>
> I think there are two broad issues to discuss here:
>
> [1] Can this be solved by layering the instance cgroups under a
>     persistent entity cgroup?
>
> So, instead of system.slice/hello.service, the application runs inside
> something like system.slice/hello.service/hello.service.instance and the
> service-level cgroup hello.service is not destroyed as long as it is
> something worth tracking on the system.
>
> The benefits are:
>
> a. While it requires changing how userland organizes the cgroup
>    hierarchy, it is a straightforward extension of the current
>    architecture and doesn't require any conceptual or structural
>    changes. All the accounting and control schemes work exactly the
>    same as before. The only difference is that we now have a persistent
>    entity representing each service, as we want to track their
>    persistent resource usages.
>
> b. Per-instance tracking and control is optional. To me, it seems that
>    the persistent resource usages would be more meaningful than the
>    per-instance ones, and tracking only down to the persistent level
>    shouldn't add noticeable runtime overhead, while keeping the
>    per-instance process management niceties and allowing use cases to
>    opt in to per-instance resource tracking and control as needed.
>
> The complications are:
>
> a. It requires changing the cgroup hierarchy in a very visible way.
>
> b. What should the lifetime rules for persistent cgroups be? Do we keep
>    them around forever, or maybe they can be created on first use and
>    kept around until the service is removed from the system? When the
>    persistent cgroup is removed, do we need to make sure that the
>    remaining resource usages are low enough? Note that this problem
>    exists for any approach that tries to track persistent usages, no
>    matter how it's done.
>
> c. Do we need to worry about nesting overhead? Given that there's no
>    reason to enable controllers without persistent state at the
>    instance level and the nesting overhead is pretty low for memcg,
>    this doesn't seem like a problem to me. If this becomes a problem,
>    we just need to fix it.
>
> A couple of alternatives discussed are:
>
> a. Userspace keeps reusing the same cgroup for different instances of
>    the same service. This simplifies some aspects while making others
>    more complicated. e.g. Determining the current instance's CPU or IO
>    usage now requires the monitoring software to remember what the
>    counters were when this instance started and to calculate the
>    deltas. Also, if some use cases want to distinguish persistent vs.
>    instance usages (more on this later), this isn't gonna work. That
>    said, this definitely is attractive in that it minimizes overt
>    user-visible changes.
>
> b. Memory is disassociated rather than just reparented on cgroup
>    destruction and gets re-charged to the next first user. This is
>    attractive in that it doesn't require any userspace changes;
>    however, I'm not sure how this would work for non-pageable memory
>    usages such as bpf maps. How would we detect the next first usage?
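
To make the nesting described under [1] concrete, here is a rough sketch
(not from the original mail) of what the layered layout could look like
on cgroup2. The mount point, the delegation of the memory controller,
the instance name and the /usr/bin/hello binary are all assumptions made
for illustration:

  #!/usr/bin/env python3
  # Sketch of the layering idea in [1]: a persistent per-service cgroup
  # that survives restarts, with a short-lived instance cgroup nested
  # under it.  Needs privilege; paths and names are illustrative only.
  import os

  SERVICE = "/sys/fs/cgroup/system.slice/hello.service"         # persistent entity
  INSTANCE = os.path.join(SERVICE, "hello.service.instance-1")  # per-instance

  os.makedirs(INSTANCE, exist_ok=True)

  # Optionally delegate the memory controller to the instance level so
  # per-instance tracking and control stay possible (benefit [1].b).
  with open(os.path.join(SERVICE, "cgroup.subtree_control"), "w") as f:
      f.write("+memory")

  # Run the service payload inside the instance cgroup.
  pid = os.fork()
  if pid == 0:
      with open(os.path.join(INSTANCE, "cgroup.procs"), "w") as f:
          f.write(str(os.getpid()))
      os.execvp("/usr/bin/hello", ["hello"])  # hypothetical service binary
  os.waitpid(pid, 0)

  # On a restart, only INSTANCE is removed and recreated; SERVICE stays
  # around, so lingering charges reparent to hello.service rather than
  # to system.slice.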
>
> [2] Whether and how to solve first and second+ instance charge
>     differences.
>
> If we take the layering approach, the first instance will get charged
> for all memory that it uses while the second+ instances likely won't
> get charged for a lot of the persistent usages. I don't think there is
> a consensus on whether this needs to be solved, and I don't have enough
> context to form a strong opinion. memcg folks are a lot better equipped
> to make this decision.
>
> Assuming this needs to be solved, here's a braindump to be taken with a
> big pinch of salt:
>
> I have a bit of a difficult time imagining a perfect solution given
> that whether a given page cache page is persistent or not would be
> really difficult to know (or maybe all page cache is persistent by
> default while anon is not). However, the problem still seems worthwhile
> to consider for big-ticket items such as persistent tmpfs mounts and
> huge bpf maps, as they can easily make the differences really big.
>
> If we want to solve this problem, here are the options that I can think
> of:
>
> a. Let userspace put the charges where they belong using the current
>    mechanisms, i.e. create persistent entities in the persistent parent
>    cgroup while there's no current instance.
>
>    Pro: It won't require any major kernel or interface changes. There
>    still needs to be some tweaking, such as allowing tmpfs pages to
>    always be charged to the cgroup which created the instance (maybe as
>    long as it's an ancestor of the faulting cgroup?), but nothing too
>    invasive.
>
>    Con: It may not be flexible enough.
>
> b. Let userspace specify which cgroup to charge for some constructs
>    like tmpfs and bpf maps. The key problems with this approach are:
>
>    1. How to grant/deny what can be charged where. We must ensure that
>       a descendant can't move charges up or across the tree without the
>       ancestors allowing it.
>
>    2. How to specify the cgroup to charge. While specifying the target
>       cgroup directly might seem like an obvious solution, it has a
>       couple of rather serious problems. First, if the descendant is
>       inside a cgroup namespace, it might not be able to see the target
>       cgroup at all. Second, it's an interface which is likely to cause
>       misunderstandings about how it can be used. It's too broad an
>       interface.
>
> One solution that I can think of is leveraging the resource domain
> concept which is currently only used for threaded cgroups. All memory
> usages of threaded cgroups are charged to their resource domain cgroup,
> which hosts the processes for those threads. The persistent usages have
> a similar pattern, so maybe the service-level cgroup can declare that
> it's the encompassing resource domain, and the instance cgroup can say
> whether it's gonna charge e.g. the tmpfs instance to its own or to the
> encompassing resource domain.
>
> This has the benefit that the user only needs to indicate their
> intention without worrying about how cgroups are composed and what
> their IDs are. It just indicates whether the given resource is
> persistent; if the cgroup hierarchy is set up for that, it gets charged
> that way, and if not it can just be charged to the cgroup itself.
>
> This is a shower thought, but if we allow nesting such domains (and
> maybe naming them), we can use it for shared resources too, so that
> co-services are put inside a shared slice and shared resources are
> pushed to the slice level.
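
For reference, the existing resource-domain behavior of threaded cgroups
that the paragraphs above build on can be poked at roughly as follows (a
sketch; the svc and worker-threads names and the cgroup2 mount point are
assumptions). The proposal would add an analogous, opt-in "charge this
to the encompassing domain" declaration for persistent resources such as
tmpfs instances and bpf maps:

  #!/usr/bin/env python3
  # Sketch of today's threaded-cgroup resource domains, for comparison.
  # Once a child is switched to "threaded", memory used by its processes
  # is charged to the nearest non-threaded ancestor (the resource
  # domain), which is the charging pattern the persistent-usage idea
  # wants to opt into.
  import os

  DOMAIN = "/sys/fs/cgroup/svc"                      # resource domain
  THREADED = os.path.join(DOMAIN, "worker-threads")  # threaded child

  os.makedirs(THREADED, exist_ok=True)

  # Declare the child threaded.  DOMAIN then serves as the resource
  # domain (its own cgroup.type reads "domain threaded") and memory
  # charges from THREADED land in DOMAIN.
  with open(os.path.join(THREADED, "cgroup.type"), "w") as f:
      f.write("threaded")

  with open(os.path.join(THREADED, "cgroup.type")) as f:
      print("worker-threads is now:", f.read().strip())  # -> "threaded"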
> This became pretty long. I obviously have a pretty strong bias towards
> solving this within the current basic architecture, but other than that
> most of these decisions are best made by memcg folks. We can hopefully
> build some consensus on the issue.
>
> Thanks.
>
> --
> tejun