From: Yafang Shao <laoar.shao@gmail.com>
Date: Sat, 20 Aug 2022 10:25:59 +0800
Subject: Re: [PATCH bpf-next v2 00/12] bpf: Introduce selectable memcg for bpf map
References: <20220818143118.17733-1-laoar.shao@gmail.com>
To: Tejun Heo
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin Lau, Song Liu,
 Yonghong Song, john fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
 jolsa@kernel.org, Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Andrew Morton, Zefan Li, Cgroups, netdev, bpf, Linux MM

On Sat, Aug 20, 2022 at 1:06 AM Tejun Heo wrote:
>
> Hello,
>
> On Fri, Aug 19, 2022 at 09:09:25AM +0800, Yafang Shao wrote:
> > On Fri, Aug 19, 2022 at 6:33 AM Tejun Heo wrote:
> > >
> > > On Thu, Aug 18, 2022 at 12:20:33PM -1000, Tejun Heo wrote:
> > > > We have the exact same problem for any resources which span multiple
> > > > instances of a service, including page cache, tmpfs instances and any
> > > > other thing which can persist longer than process lifetime. My current
> > > > opinion is
> > >
> > > To expand a bit more on this point, once we start including page cache
> > > and tmpfs, we get entangled with memory reclaim, which then brings in IO
> > > and not-yet-but-eventually CPU usage.
> >
> > Introduce-a-new-layer vs. introduce-a-new-cgroup: which one has more overhead?
>
> Introducing a new layer in cgroup2 doesn't mean that any specific resource
> controller is enabled, so there is no runtime overhead difference. In terms
> of logical complexity, introducing a localized layer seems a lot more
> straightforward than building a whole separate tree.
>
> Note that the same applies to cgroup1, where a collapsed controller tree is
> represented by simply not creating those layers in that particular
> controller tree.
>

No, we have observed in our production environment that a multi-layer
cpuacct hierarchy causes a noticeable performance hit due to cache misses.

> No matter how we cut the problem here, if we want to track these persistent
> resources, we have to create a cgroup to host them somewhere. The discussion
> we're having is mostly around where to put them. With your proposal, it can
> be anywhere, and you draw out an example where the persistent cgroups form
> their own separate tree. What I'm saying is that the logical place to put it
> is where the current resource consumption is, and we just need to put the
> persistent entity as the parent of the instances.
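
For concreteness, here is a minimal sketch of that layering, with purely
hypothetical cgroup paths: a persistent parent cgroup sits directly above the
per-generation instances instead of living in a separate tree.

/*
 * Illustrative sketch only; paths and names are hypothetical.
 * A persistent parent cgroup hosts long-lived resources, while each
 * service generation gets its own child cgroup underneath it.
 */
#include <sys/stat.h>

int main(void)
{
	/* persistent layer that outlives individual service instances */
	mkdir("/sys/fs/cgroup/myservice", 0755);

	/* per-generation instances (e.g. k8s pod restarts) nested below it */
	mkdir("/sys/fs/cgroup/myservice/instance-gen1", 0755);
	mkdir("/sys/fs/cgroup/myservice/instance-gen2", 0755);

	return 0;
}

In such a layout, long-lived charges (bpf maps, tmpfs, shared page cache)
would be accounted to "myservice", while per-instance charges land in the
children.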
> Flexibility, just like anything else, isn't free. Here, if we extrapolate
> this approach, the cost is evidently hefty in that it doesn't generically
> work with the basic resource control structure.
>
> > > Once you start splitting the tree like you're suggesting here, all those
> > > will break down and now we have to worry about how to split resource
> > > accounting and control for the same entities across two split branches
> > > of the tree, which doesn't really make any sense.
> >
> > k8s has already been broken by the memcg accounting of bpf memory.
> > In case you missed it, I paste it below.
> > [0] "1. The memory usage is not consistent between the first generation
> > and new generations."
> >
> > This issue will persist even if you introduce a new layer.
>
> Please watch your tone.
>

Hm? I apologize if my words offended you.
But could you please take a serious look at the patchset before giving a
NACK? You didn't even want to know the background before you sent your NACK.

> Again, this isn't a problem specific to k8s. We have the same problem with
> e.g. persistent tmpfs. One idea which I'm not against is allowing specific
> resources to be charged to an ancestor. We gotta think carefully about how
> such charges should be granted / denied, but an approach like that jives
> well with the existing hierarchical control structure, and because
> introducing a persistent layer does too, the combination of the two works
> well.
>
> > > So, we *really* don't wanna paint ourselves into that kind of a corner.
> > > This is a dead end. Please ditch it.
> >
> > It makes no sense to ditch it.
> > The hierarchy I described in the commit log is *one* use case of the
> > selectable memcg, but not *the only* use case of it. If you dislike that
> > hierarchy, I will remove it to avoid misleading you.
>
> But if you drop that, what'd be the rationale for adding what you're
> proposing? Why would we want bpf memory charges to be attached to any part
> of the hierarchy?
>

I have explained it to you, but unfortunately you ignored it again.
I don't mind explaining it once more.

    Parent-memcg
         \
      Child-memcg (k8s pod)

The user can charge the memory to the parent directly without charging it
into the k8s pod. Then memory.stat is consistent between different
generations.

> > Even if you introduce a new layer, you still need the selectable memcg.
> > For example, to avoid the issue I described in [0], you still need to
> > charge to the parent cgroup instead of the current cgroup.
>
> As I wrote above, we've been discussing that. Again, I'd be a lot more
> amenable to such an approach because it fits with how everything is
> structured.
>
> > That's why I described in the commit log that the selectable memcg is
> > flexible.
>
> Hopefully, my point on this is clear by now.
>

Unfortunately, you didn't want to get my point.

-- 
Regards
Yafang
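
To illustrate the charging scheme described above, here is a minimal sketch of
how a map could be charged to an ancestor memcg at creation time. The memcg_fd
attribute (left commented out) and the cgroup path are assumptions made for
this sketch, not necessarily the interface the patchset actually proposes.

/*
 * Hypothetical sketch of "selectable memcg" at BPF_MAP_CREATE time.
 * The memcg_fd attribute is an assumption for illustration and the
 * cgroup path is an example; neither is confirmed from the patchset.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int create_map_charged_to_parent(void)
{
	/* open the persistent parent memcg instead of the pod's own cgroup */
	int memcg_fd = open("/sys/fs/cgroup/myservice", O_RDONLY | O_DIRECTORY);

	if (memcg_fd < 0)
		return -1;

	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_HASH;
	attr.key_size = sizeof(int);
	attr.value_size = sizeof(long);
	attr.max_entries = 1024;
	/* attr.memcg_fd = memcg_fd;   hypothetical selectable-memcg field */

	int map_fd = syscall(SYS_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));

	close(memcg_fd);
	return map_fd;
}

int main(void)
{
	return create_map_charged_to_parent() < 0;
}

With a map created this way, its memory would show up in the parent's
memory.stat, so a restarted pod (a new child cgroup) would report the same
baseline as the first generation.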