From: Yafang Shao <laoar.shao@gmail.com>
Date: Wed, 14 Dec 2022 18:46:52 +0800
Subject: Re: [RFC PATCH bpf-next 0/9] mm, bpf: Add BPF into /proc/meminfo
To: paulmck@kernel.org
Cc: Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com, songliubraving@fb.com,
 yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
 haoluo@google.com, jolsa@kernel.org, tj@kernel.org, dennis@kernel.org,
 cl@linux.com, akpm@linux-foundation.org, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, roman.gushchin@linux.dev,
 linux-mm@kvack.org, bpf@vger.kernel.org, rcu@vger.kernel.org, Matthew Wilcox
In-Reply-To: <20221213192156.GS4001@paulmck-ThinkPad-P17-Gen-1>
References: <20221212003711.24977-1-laoar.shao@gmail.com>
 <6f9bb391-580e-cfc2-e039-25f47d162d17@suse.cz>
 <20221213192156.GS4001@paulmck-ThinkPad-P17-Gen-1>

On Wed, Dec 14, 2022 at 3:22 AM Paul E. McKenney wrote:
>
> On Tue, Dec 13, 2022 at 04:52:09PM +0100, Vlastimil Babka wrote:
> > On 12/13/22 15:56, Hyeonggon Yoo wrote:
> > > On Tue, Dec 13, 2022 at 07:52:42PM +0800, Yafang Shao wrote:
> > >> On Tue, Dec 13, 2022 at 1:54 AM Vlastimil Babka wrote:
> > >> >
> > >> > On 12/12/22 01:37, Yafang Shao wrote:
> > >> > > Currently there's no way to get the BPF memory usage; we can only
> > >> > > estimate it via bpftool or memcg, neither of which is reliable.
> > >> > >
> > >> > > - bpftool
> > >> > >   `bpftool {map,prog} show` can show us the memlock of each map and
> > >> > >   prog, but the memlock differs from the real memory size. The
> > >> > >   memlock of a bpf object is approximately
> > >> > >   `round_up(key_size + value_size, 8) * max_entries`,
> > >> > >   so 1) it can't apply to non-preallocated bpf maps, which may
> > >> > >   increase or decrease their real memory size dynamically; 2) the
> > >> > >   element size of some bpf maps is not `key_size + value_size`; for
> > >> > >   example, the element size of htab is
> > >> > >   `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`.
> > >> > >   That said, the difference between these two values may be very
> > >> > >   large when key_size and value_size are small. For example, in my
> > >> > >   verification, the memlock and the real memory size of a
> > >> > >   preallocated hash map are:
> > >> > >
> > >> > >   $ grep BPF /proc/meminfo
> > >> > >   BPF:          1026048 B   <<< the size of the preallocated memalloc pool
> > >> > >
> > >> > >   (create hash map)
> > >> > >
> > >> > >   $ bpftool map show
> > >> > >   3: hash  name count_map  flags 0x0
> > >> > >           key 4B  value 4B  max_entries 1048576  memlock 8388608B
> > >> > >
> > >> > >   $ grep BPF /proc/meminfo
> > >> > >   BPF:         84919344 B
> > >> > >
> > >> > >   So the real memory size is $((84919344 - 1026048)) = 83893296
> > >> > >   bytes, while the memlock is only 8388608 bytes.
> > >> > >
> > >> > > - memcg
> > >> > >   With memcg we only know that the BPF memory usage is less than
> > >> > >   memory.usage_in_bytes (or memory.current in v2). Furthermore, we
> > >> > >   only know that the BPF memory usage is less than $MemTotal if the
> > >> > >   BPF object is charged into the root memcg :)
> > >> > >
> > >> > > So we need a way to get the BPF memory usage, especially as more and
> > >> > > more bpf programs are running in production environments. The memory
> > >> > > usage of BPF is not trivial, and it deserves a new item in
> > >> > > /proc/meminfo.
> > >> > >
> > >> > > This patchset introduces a solution to calculate the BPF memory
> > >> > > usage. The solution is similar to how memory is charged into memcg,
> > >> > > so it is easy to understand. It counts three types of memory usage:
> > >> > > - page
> > >> > >   via kmalloc, vmalloc, kmem_cache_alloc, or allocating pages
> > >> > >   directly, and their families.
> > >> > >   When a page is allocated, we count its size and mark the head
> > >> > >   page, and then check the head page at page freeing.
> > >> > > - slab
> > >> > >   via kmalloc, kmem_cache_alloc, and their families.
> > >> > >   When a slab object is allocated, we mark this object in its slab
> > >> > >   and check it at slab object freeing. That said, we need extra
> > >> > >   memory to store the information of each object in a slab.
> > >> > > - percpu
> > >> > >   via alloc_percpu and its family.
> > >> > >   When a percpu area is allocated, we mark this area in its percpu
> > >> > >   chunk and check it at percpu area freeing. That said, we need
> > >> > >   extra memory to store the information of each area in a percpu
> > >> > >   chunk.
> > >> > >
> > >> > > So we only need to annotate the allocation to add the BPF memory
> > >> > > size; the subtraction of the BPF memory size is handled
> > >> > > automatically at freeing. We can annotate it in irq, softirq, or
> > >> > > process context. To avoid counting nested allocations, for example
> > >> > > the percpu backing allocator, we reuse __GFP_ACCOUNT to filter them
> > >> > > out. __GFP_ACCOUNT also makes the count consistent with memcg
> > >> > > accounting.
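To make the annotation pattern concrete, the allocation side ends up
looking roughly like the sketch below. This is a sketch only: the htab
allocation is just an illustrative callsite, and the exact helper names
may differ in the final series.

    /*
     * Mark the current context, so that any allocation carrying
     * __GFP_ACCOUNT inside the window is counted as BPF memory and
     * its page/slab object is marked.  The matching free is then
     * subtracted automatically; no annotation is needed at the
     * freeing site.
     */
    old_item = active_vm_item_set(ACTIVE_VM_BPF);
    htab = kzalloc(sizeof(*htab), GFP_USER | __GFP_ACCOUNT);
    active_vm_item_set(old_item);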
> > >> >
> > >> > So you can't easily annotate the freeing places as well, to avoid
> > >> > the whole tracking infrastructure?
> > >>
> > >> The trouble is kfree_rcu(). For example,
> > >>     old_item = active_vm_item_set(ACTIVE_VM_BPF);
> > >>     kfree_rcu();
> > >>     active_vm_item_set(old_item);
> > >> If we want to pass ACTIVE_VM_BPF into the deferred rcu context, we
> > >> would have to change lots of code in the RCU subsystem. I'm not sure
> > >> if it is worth it.
> > >
> > > (+Cc rcu folks)
> > >
> > > IMO adding a new kfree_rcu() variant for BPF that accounts BPF memory
> > > usage would be much less churn :)
> >
> > Alternatively, just account the bpf memory as freed already when
> > calling kfree_rcu()? I think the amount of memory "in flight" to be
> > freed by rcu is a separate issue (if it's actually an issue) and not
> > something each kfree_rcu() user should think about separately?
>
> If the in-flight memory really does need to be accounted for, then one
> straightforward approach is to use call_rcu() and do the first part of
> the needed accounting at the call_rcu() callsite and the rest of the
> accounting when the callback is invoked. Or, if memory must be freed
> quickly even on ChromeOS and Android, use call_rcu_hurry() instead of
> call_rcu().
>

Right, call_rcu() can make it work. But I'm not sure whether all
kfree_rcu() calls in kernel/bpf can be replaced by call_rcu(). Alexei,
any comment on it?
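The conversion itself would be mechanical, something like the sketch
below, where bpf_mem_unaccount() is only a placeholder for whatever the
accounting hook ends up being (this assumes the object, e.g. struct
lpm_trie_node, already embeds an rcu head, as the kfree_rcu() users do
today):

    /* the actual free is deferred to the callback ... */
    static void lpm_trie_node_free_rcu(struct rcu_head *rcu)
    {
            struct lpm_trie_node *node =
                    container_of(rcu, struct lpm_trie_node, rcu);

            kfree(node);
    }

            /* ... while the first part of the accounting happens
             * synchronously at the callsite, so nothing has to be
             * passed into the deferred RCU context */
            bpf_mem_unaccount(node);        /* placeholder helper */
            call_rcu(&node->rcu, lpm_trie_node_free_rcu);

The kfree_rcu() callers in kernel/bpf that would need such a conversion
are: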
$ grep -r "kfree_rcu" kernel/bpf/
kernel/bpf/local_storage.c:      kfree_rcu(new, rcu);
kernel/bpf/lpm_trie.c:           kfree_rcu(node, rcu);
kernel/bpf/lpm_trie.c:           kfree_rcu(parent, rcu);
kernel/bpf/lpm_trie.c:           kfree_rcu(node, rcu);
kernel/bpf/lpm_trie.c:           kfree_rcu(node, rcu);
kernel/bpf/bpf_inode_storage.c:  kfree_rcu(local_storage, rcu);
kernel/bpf/bpf_task_storage.c:   kfree_rcu(local_storage, rcu);
kernel/bpf/trampoline.c:         kfree_rcu(im, rcu);
kernel/bpf/core.c:               kfree_rcu(progs, rcu);
kernel/bpf/core.c:               * no need to call kfree_rcu(), just call kfree() directly.
kernel/bpf/core.c:               kfree_rcu(progs, rcu);
kernel/bpf/bpf_local_storage.c:  * kfree(), else do kfree_rcu().
kernel/bpf/bpf_local_storage.c:  kfree_rcu(local_storage, rcu);
kernel/bpf/bpf_local_storage.c:  kfree_rcu(selem, rcu);
kernel/bpf/bpf_local_storage.c:  kfree_rcu(selem, rcu);
kernel/bpf/bpf_local_storage.c:  kfree_rcu(local_storage, rcu);

--
Regards
Yafang