From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yafang Shao <laoar.shao@gmail.com>
Date: Tue, 20 Sep 2022 20:42:36 +0800
Subject: Re: [PATCH bpf-next v3 00/13] bpf: Introduce selectable memcg for bpf map
To: Roman Gushchin
Cc: Tejun Heo, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin Lau, Song Liu, Yonghong Song, john fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Andrew Morton, Zefan Li, Cgroups, netdev, bpf, Linux MM
References: <20220902023003.47124-1-laoar.shao@gmail.com>
Content-Type: text/plain; charset="UTF-8"
On Tue, Sep 20, 2022 at 10:40 AM Roman Gushchin wrote:
>
> On Sun, Sep 18, 2022 at 11:44:48AM +0800, Yafang Shao wrote:
> > On Sat, Sep 17, 2022 at 12:53 AM Roman Gushchin
> > wrote:
> > >
> > > On Tue, Sep 13, 2022 at 02:15:20PM +0800, Yafang Shao wrote:
> > > > On Fri, Sep 9, 2022 at 12:13 AM Roman Gushchin wrote:
> > > > >
> > > > > On Thu, Sep 08, 2022 at 10:37:02AM +0800, Yafang Shao wrote:
> > > > > > On Thu, Sep 8, 2022 at 6:29 AM Roman Gushchin wrote:
> > > > > > >
> > > > > > > On Wed, Sep 07, 2022 at 05:43:31AM -1000, Tejun Heo wrote:
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > On Fri, Sep 02, 2022 at 02:29:50AM +0000, Yafang Shao wrote:
> > > > > > > > ...
> > > > > > > > > This patchset tries to resolve the above two issues by introducing a
> > > > > > > > > selectable memcg to limit the bpf memory. Currently we only allow to
> > > > > > > > > select its ancestor to avoid breaking the memcg hierarchy further.
> > > > > > > > > Possible use cases of the selectable memcg as follows,
> > > > > > > >
> > > > > > > > As discussed in the following thread, there are clear downsides to an
> > > > > > > > interface which requires the users to specify the cgroups directly.
> > > > > > > >
> > > > > > > > https://lkml.kernel.org/r/YwNold0GMOappUxc@slm.duckdns.org
> > > > > > > >
> > > > > > > > So, I don't really think this is an interface we wanna go for. I was hoping
> > > > > > > > to hear more from memcg folks in the above thread. Maybe ping them in that
> > > > > > > > thread and continue there?
> > > > > > >
> > > > > > Hi Roman,
> > > > > >
> > > > > > > As I said previously, I don't like it, because it's an attempt to solve a non
> > > > > > > bpf-specific problem in a bpf-specific way.
> > > > > > >
> > > > > > Why do you still insist that bpf_map->memcg is not a bpf-specific
> > > > > > issue after so many discussions?
> > > > > > Do you charge the bpf-map's memory the same way as you charge the page
> > > > > > caches or slabs?
> > > > > > No, you don't. You charge it in a bpf-specific way.
> > > > >
> > > > Hi Roman,
> > > >
> > > > Sorry for the late response.
> > > > I've been on vacation in the past few days.
> > > >
> > > > > The only difference is that we charge the cgroup of the processes who
> > > > > created a map, not a process who is doing a specific allocation.
> > > >
> > > > This means the bpf-map can be independent of the process; IOW, the memcg of
> > > > the bpf-map can be independent of the memcg of the processes.
> > > > This is the fundamental difference between bpf-map and page caches, then...
> > > >
> > > > > Your patchset doesn't change this.
> > > >
> > > > We can make this behavior reasonable by introducing an independent
> > > > memcg, as what I did in the previous version.
> > > >
> > > > > There are pros and cons with this approach, we've discussed it back
> > > > > to the times when bpf memcg accounting was developed. If you want
> > > > > to revisit this, it's maybe possible (given that a really strong and likely
> > > > > new motivation appears), but I haven't seen any complaints yet except from you.
> > > >
> > > > memcg-based bpf accounting is a new feature, which may not be used widely.
> > > >
> > > > > > > Yes, memory cgroups are not great for accounting of shared resources, it's well
> > > > > > > known. This patchset looks like an attempt to "fix" it specifically for bpf maps
> > > > > > > in a particular cgroup setup. Honestly, I don't think it's worth the added
> > > > > > > complexity. Especially because a similar behaviour can be achieved simply
> > > > > > > by placing the task which creates the map into the desired cgroup.
> > > > > >
> > > > > > Are you serious?
> > > > > > Have you ever read the cgroup doc, which clearly describes the "No
> > > > > > Internal Process Constraint"? [1]
> > > > > > Obviously you can't place the task in the desired cgroup, i.e. the parent memcg.
> > > > >
> > > > > But you can place it into another leaf cgroup. You can delete this leaf cgroup
> > > > > and your memcg will get reparented. You can attach this process and create
> > > > > a bpf map to the parent cgroup before it gets child cgroups.
> > > >
> > > > If the process doesn't exit after it created the bpf-map, we have to
> > > > migrate it around memcgs....
> > > > The complexity in deployment can introduce unexpected issues easily.
> > > >
> > > > > You can revisit the idea of shared bpf maps which outlive specific cgroups.
> > > > > Lots of options.
> > > > >
> > > > > >
> > > > > > [1] https://www.kernel.org/doc/Documentation/cgroup-v2.txt
> > > > > >
> > > > > > > Beautiful? Not. Neither is the proposed solution.
> > > > > > >
> > > > > > Is it really hard to admit a fault?
> > > > >
> > > > > Yafang, you posted several versions and so far I haven't seen much support
> > > > > or excitement from anyone (please, fix me if I'm wrong). It's not like I'm
> > > > > nacking a patchset with many acks, reviews and supporters.
> > > > >
> > > > > Still think you're solving an important problem in a reasonable way?
> > > > > It seems like not many are convinced yet. I'd recommend to focus on this instead
> > > > > of blaming me.
> > > >
> > > > The best way so far is to introduce a specific memcg for specific resources.
> > > > Because not only does the process own its memcg, but specific
> > > > resources also own their memcgs, for example the bpf-map, or the socket.
> > > >
> > > > struct bpf_map {                  <<<< memcg owner
> > > >     struct mem_cgroup *memcg;
> > > > };
> > > >
> > > > struct sock {                     <<<< memcg owner
> > > >     struct mem_cgroup *sk_memcg;
> > > > };
> > > >
> > > > These resources already have their own memcgs, so we should make this
> > > > behavior formal.
> > > >
> > > > The selectable memcg is just a variant of 'echo ${proc} > cgroup.procs'.
> > >
> > > This is a fundamental change: cgroups were always hierarchical groups
> > > of processes/threads. You're basically suggesting to extend it to
> > > hierarchical groups of processes and some other objects (what's a good
> > > definition?).
> >
> > Kind of, but not exactly.
> > We can do it without breaking the cgroup hierarchy. Under the current
> > cgroup hierarchy, the user can only echo processes/threads into a
> > cgroup; that won't be changed in the future. The specific resources
> > are not exposed to the user; the user can only control these specific
> > resources by controlling their associated processes/threads.
> > For example,
> >
> > Memcg-A
> > |---- Memcg-A1
> > |---- Memcg-A2
> >
> > We can introduce a new file, memory.owner, into each memcg. Each bit of
> > memory.owner represents a specific resource,
> >
> > memory.owner: | bit31 | bitN | ... | bit1 | bit0 |
> >     bit0: bpf memory
> >     bit1: socket memory
> >     bitN: a specific resource
> >
> > There won't be too many specific resources which have to own their
> > memcgs, so I think 32 bits is enough.
> >
> > Memcg-A : memory.owner == 0x1
> > |---- Memcg-A1 : memory.owner == 0
> > |---- Memcg-A2 : memory.owner == 0x1
> >
> > Then the bpf created by processes in Memcg-A1 will be charged into
> > Memcg-A directly without charging into Memcg-A1.
> > But the bpf created by processes in Memcg-A2 will be charged into
> > Memcg-A2 as its memory.owner is 0x1.
> > That said, these specific resources are not fully independent of the
> > process; they are still associated with the processes which
> > create them.
> > Luckily memory.move_charge_at_immigrate is disabled in cgroup2, so we
> > don't need to care about the possible migration issue.
> >
> > I think we may also apply it to shared page caches. For example,
> >
> > struct inode {
> >     struct mem_cgroup *memcg;     <<<< add a new member
> > };
> >
> > We define struct inode as a memcg owner, and use a scope-based charge to
> > charge its pages into inode->memcg.
> > And then put all memcgs which share these resources under the same
> > parent. The page caches of this inode will be charged into the parent
> > directly.
>
> Ok, so it's something like premature selective reparenting.
>

Right. I think it may be a good way to handle resources which may outlive the process.

> > The shared page cache is more complicated than bpf memory, so I'm not
> > quite sure if it can apply to the shared page cache, but it can work well
> > for bpf memory.
>
> Yeah, this is the problem. It feels like it's a problem very specific
> to bpf maps and the exact way you use them. I don't think you can successfully
> advocate for changes of this calibre without a more generic problem. I might
> be wrong.
>

What is your concern about this method? Are there any potential issues?

> > Regarding the observability, we can introduce a specific item into
> > memory.stat for this specific memory, for example a new item 'bpf' for
> > bpf memory.
> > That can be accounted/unaccounted for in the same way as we do in
> > set_active_memcg(). For example,
> >
> > struct task_struct {
> >     struct mem_cgroup *active_memcg;
> >     int active_memcg_item;        <<<< introduce a new member
> > };
> >
> > bpf_memory_alloc()
> > {
> >     old_memcg = set_active_memcg(memcg);
> >     old_item = set_active_memcg_item(MEMCG_BPF);
>
> I thought about something like this but for a different purpose:
> to track the amount of memory consumed by bpf.
>

Right, we can use it to track bpf memory consumption.

> >     alloc();
> >     set_active_memcg_item(old_item);
> >     set_active_memcg(old_memcg);
> > }
> >
> > bpf_memory_free()
> > {
> >     old = set_active_memcg_item(MEMCG_BPF);
> >     free();
> >     set_active_memcg_item(old);
> > }
>
> But the problem is that we should very carefully mark all allocations and
> releases, which is very error-prone. Interfaces which don't require annotating
> releases are generally better, but require additional memory.
>

If we don't annotate the releases, we have to add something into struct page,
which may not be worth it.
It is clear how the bpf memory is allocated and freed, so I think we can start
with bpf memory. If in the future we can figure out a lightweight way to avoid
annotating the releases, then we can remove the annotations in the bpf memory
releases.

-- 
Regards
Yafang