Date: Tue, 12 Jul 2022 19:27:28 -0700
From: Roman Gushchin
To: Michal Hocko
Cc: Alexei Starovoitov, Shakeel Butt, Matthew Wilcox, Christoph Hellwig,
 "David S. Miller", Daniel Borkmann, Andrii Nakryiko, Tejun Heo,
 Martin KaFai Lau, bpf, Kernel Team, linux-mm, Christoph Lameter,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka
Subject: Re: [PATCH bpf-next 0/5] bpf: BPF specific memory allocator.
References: <20220706180525.ozkxnbifgd4vzxym@MacBook-Pro-3.local.dhcp.thefacebook.com>
 <20220708174858.6gl2ag3asmoimpoe@macbook-pro-3.dhcp.thefacebook.com>
 <20220708215536.pqclxdqvtrfll2y4@google.com>
 <20220710073213.bkkdweiqrlnr35sv@google.com>
 <20220712043914.pxmbm7vockuvpmmh@macbook-pro-3.dhcp.thefacebook.com>
On Tue, Jul 12, 2022 at 09:40:13AM +0200, Michal Hocko wrote:
> On Mon 11-07-22 21:39:14, Alexei Starovoitov wrote:
> > On Mon, Jul 11, 2022 at 02:15:07PM +0200, Michal Hocko wrote:
> > > On Sun 10-07-22 07:32:13, Shakeel Butt wrote:
> > > > On Sat, Jul 09, 2022 at 10:26:23PM -0700, Alexei Starovoitov wrote:
> > > > > On Fri, Jul 8, 2022 at 2:55 PM Shakeel Butt wrote:
> > > > [...]
> > > > > >
> > > > > > Most probably Michal's comment was on free objects sitting in the caches
> > > > > > (also pointed out by Yosry). Should we drain them on memory pressure /
> > > > > > OOM, or should we ignore them as the amount of memory is not significant?
> > > > >
> > > > > Are you suggesting to design a shrinker for 0.01% of the memory
> > > > > consumed by bpf?
> > > >
> > > > No, just claim that the memory sitting on such caches is insignificant.
> > >
> > > Yes, that is not really clear from the patch description. Earlier you
> > > said that the memory consumed might go into GBs. If that is memory
> > > which is actively used and not really reclaimable, then bad luck.
> > > There are other users like that in the kernel, and this is not a new
> > > problem. I think it would really help to add a counter describing both
> > > the overall memory claimed by the bpf allocator and the actively used
> > > portion of it.
> > > If you use our standard vmstat infrastructure, then we can
> > > easily show that information in the OOM report.
> >
> > The OOM report can potentially be extended with info about bpf-consumed
> > memory, but it's not clear whether that would help OOM analysis.
>
> If GBs of memory can be sitting there, then it is surely interesting
> information to have when seeing an OOM. One of the big shortcomings of
> OOM analysis is unaccounted memory.
>
> > bpftool map show
> > prints all map data already.
> > Some devs use bpf to inspect bpf maps for finer details at run time.
> > drgn scripts pull that data from crash dumps.
> > There is no need for new counters.
> > The idea of bpf-specific counters/limits was rejected by memcg folks.
>
> I would argue that integration into vmstat is useful not only for OOM
> analysis but also for regular health-check scripts watching /proc/vmstat
> content. I do not think most of those generic tools are BPF-aware. So
> unless there is a good reason not to account this memory there, I would
> vote for adding the counters. They are cheap and easy to integrate.
>
> > > OK, thanks for the clarification. There is still one thing that is
> > > not really clear to me. Without a proper ownership bound to any
> > > process, why is it desired/helpful to account the memory to a memcg?
> >
> > The first step is to have a limit. memcg provides it.
>
> I am sorry, but this doesn't really explain it. Could you elaborate,
> please? Is the limit supposed to protect against adversaries? Or is it
> just to prevent accidental runaways? Is it purely for accounting
> purposes?
>
> > > We have discussed something similar in a different email thread, and
> > > I still didn't manage to find time to put all the parts together. But
> > > if the initiator (or however you call the process which loads the
> > > program) exits, then this might be the last process in the specific
> > > cgroup, and so it can be offlined and mostly invisible to an admin.
> >
> > Roman already sent reparenting fix:
> > https://patchwork.kernel.org/project/netdevbpf/patch/20220711162827.184743-1-roman.gushchin@linux.dev/

Just to be clear: for the actual memory which is backing bpf maps (slabs,
percpu allocations or vmallocs), reparenting was implemented several years
ago; nothing is changing there. This patch only adds reparenting of the
map->memcg pointer (by replacing it with an objcg), which affects *new*
allocations happening after the deletion of the cgroup. This should help
to reduce the number of dying cgroups, though likely not significantly,
which is why it wasn't implemented from the beginning.

Thanks!