Date: Tue, 13 Dec 2022 11:21:56 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Vlastimil Babka
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Yafang Shao, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com,
	songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com,
	kpsingh@kernel.org, sdf@google.com, haoluo@google.com,
	jolsa@kernel.org, tj@kernel.org, dennis@kernel.org, cl@linux.com,
	akpm@linux-foundation.org, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, roman.gushchin@linux.dev,
	linux-mm@kvack.org, bpf@vger.kernel.org, rcu@vger.kernel.org,
	Matthew Wilcox
Subject: Re: [RFC PATCH bpf-next 0/9] mm, bpf: Add BPF into /proc/meminfo
Message-ID: <20221213192156.GS4001@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20221212003711.24977-1-laoar.shao@gmail.com>
 <6f9bb391-580e-cfc2-e039-25f47d162d17@suse.cz>
In-Reply-To: <6f9bb391-580e-cfc2-e039-25f47d162d17@suse.cz>

On Tue, Dec 13, 2022 at 04:52:09PM +0100, Vlastimil Babka wrote:
> On 12/13/22 15:56, Hyeonggon Yoo wrote:
> > On Tue, Dec 13, 2022 at 07:52:42PM +0800, Yafang Shao wrote:
> >> On Tue, Dec 13, 2022 at 1:54 AM Vlastimil Babka wrote:
> >> >
> >> > On 12/12/22 01:37, Yafang Shao wrote:
> >> > > Currently there's no way to get BPF memory usage; we can only
> >> > > estimate it via bpftool or memcg, neither of which is reliable.
> >> > >
> >> > > - bpftool
> >> > >   `bpftool {map,prog} show` can show us the memlock of each map
> >> > >   and prog, but the memlock value differs from the real memory
> >> > >   size. The memlock of a bpf object is approximately
> >> > >   `round_up(key_size + value_size, 8) * max_entries`,
> >> > >   so 1) it doesn't apply to non-preallocated bpf maps, whose real
> >> > >   memory size may grow or shrink dynamically, and 2) the element
> >> > >   size of some bpf maps is not `key_size + value_size`; for
> >> > >   example, the element size of an htab is
> >> > >   `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`.
> >> > >   This means the difference between these two values can be very
> >> > >   large when key_size and value_size are small. For example, in
> >> > >   my verification, the memlock and real memory sizes of a
> >> > >   preallocated hash map are:
> >> > >
> >> > >   $ grep BPF /proc/meminfo
> >> > >   BPF:             1026048 B  <<< size of the preallocated memalloc pool
> >> > >
> >> > >   (create hash map)
> >> > >
> >> > >   $ bpftool map show
> >> > >   3: hash  name count_map  flags 0x0
> >> > >           key 4B  value 4B  max_entries 1048576  memlock 8388608B
> >> > >
> >> > >   $ grep BPF /proc/meminfo
> >> > >   BPF:            84919344 B
> >> > >
> >> > >   So the real memory size is $((84919344 - 1026048)), i.e.
> >> > >   83893296 bytes, while the memlock is only 8388608 bytes.
> >> > >
> >> > > - memcg
> >> > >   With memcg we only know that the BPF memory usage is less than
> >> > >   memory.usage_in_bytes (or memory.current in v2). Furthermore,
> >> > >   we only know that the BPF memory usage is less than $MemTotal
> >> > >   if the BPF object is charged into the root memcg :)
> >> > >
> >> > > So we need a way to get the BPF memory usage, especially as more
> >> > > and more BPF programs run in production environments. BPF memory
> >> > > usage is not trivial, and it deserves a new item in /proc/meminfo.
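[ Aside: a quick userspace check of the arithmetic above; the 48-byte
  sizeof(struct htab_elem) is an assumed x86_64 value, not taken from
  this thread.

	#include <stdio.h>

	#define ROUND_UP(x, a)	(((x) + (a) - 1) / (a) * (a))

	int main(void)
	{
		unsigned long key = 4, val = 4, entries = 1048576;

		/* What bpftool reports as memlock. */
		unsigned long memlock = ROUND_UP(key + val, 8) * entries;

		/* Per-element htab cost with the assumed header size. */
		unsigned long real = (48 + ROUND_UP(key, 8)
				      + ROUND_UP(val, 8)) * entries;

		printf("memlock:  %lu\n", memlock);	/* 8388608 */
		printf("elements: %lu\n", real);	/* 67108864 */
		return 0;
	}

  The element estimate alone is ~8x the memlock figure, consistent
  with the measured 83893296 bytes once the bucket array and other
  overhead are added. ]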
> >> > >
> >> > > This patchset introduces a solution to calculate BPF memory
> >> > > usage. The solution is similar to how memory is charged into
> >> > > memcg, so it is easy to understand. It counts three types of
> >> > > memory usage:
> >> > > - page
> >> > >   via kmalloc, vmalloc, kmem_cache_alloc, direct page
> >> > >   allocation, and their families.
> >> > >   When a page is allocated, we count its size and mark the head
> >> > >   page, and then check the head page at page freeing.
> >> > > - slab
> >> > >   via kmalloc, kmem_cache_alloc, and their families.
> >> > >   When a slab object is allocated, we mark this object in its
> >> > >   slab and check it at slab object freeing. This means we need
> >> > >   extra memory to store the information of each object in a
> >> > >   slab.
> >> > > - percpu
> >> > >   via alloc_percpu and its family.
> >> > >   When a percpu area is allocated, we mark this area in its
> >> > >   percpu chunk and check it at percpu area freeing. This means
> >> > >   we need extra memory to store the information of each area in
> >> > >   a percpu chunk.
> >> > >
> >> > > So we only need to annotate the allocation to add to the BPF
> >> > > memory size; the subtraction is handled automatically at
> >> > > freeing. We can annotate it in irq, softirq, or process context.
> >> > > To avoid counting nested allocations, for example the percpu
> >> > > backing allocator, we reuse __GFP_ACCOUNT to filter them out.
> >> > > __GFP_ACCOUNT also makes the count consistent with memcg
> >> > > accounting.
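[ Aside: a minimal sketch of the __GFP_ACCOUNT filtering described
  above; the active_vm_account_page() name and body are illustrative
  guesses, only the filtering rule itself comes from the cover letter.

	static atomic_long_t active_vm_bytes;

	/* Called from the page allocation path when BPF accounting is
	 * active.  Nested internal allocations, such as those of the
	 * percpu backing allocator, do not carry __GFP_ACCOUNT and are
	 * therefore skipped, matching memcg's kmem accounting. */
	static void active_vm_account_page(struct page *page,
					   unsigned int order, gfp_t gfp)
	{
		if (!(gfp & __GFP_ACCOUNT))
			return;

		atomic_long_add(PAGE_SIZE << order, &active_vm_bytes);
		/* The head page would also be marked here so that the
		 * freeing path can subtract the same amount. */
	}
]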
> >> >
> >> > So you can't easily annotate the freeing places as well, to avoid
> >> > the whole tracking infrastructure?
> >>
> >> The trouble is kfree_rcu(). For example:
> >>
> >>     old_item = active_vm_item_set(ACTIVE_VM_BPF);
> >>     kfree_rcu();
> >>     active_vm_item_set(old_item);
> >>
> >> If we want to pass ACTIVE_VM_BPF into the deferred RCU context, we
> >> will have to change lots of code in the RCU subsystem. I'm not sure
> >> it is worth it.
> >
> > (+Cc rcu folks)
> >
> > IMO adding a new kfree_rcu() variant for BPF that accounts BPF
> > memory usage would be much less churn :)
>
> Alternatively, just account the bpf memory as freed already when
> calling kfree_rcu()? I think the amount of memory "in flight" to be
> freed by rcu is a separate issue (if it's actually an issue) and not
> something each kfree_rcu() user should think about separately?

If the in-flight memory really does need to be accounted for, then one
straightforward approach is to use call_rcu() and do the first part of
the needed accounting at the call_rcu() callsite and the rest of the
accounting when the callback is invoked.

Or, if memory must be freed quickly even on ChromeOS and Android, use
call_rcu_hurry() instead of call_rcu().

Or is there some accounting requirement that I am missing?

							Thanx, Paul
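[ Aside: a minimal sketch of the call_rcu() split described above; the
  bpf_obj structure and bpf_mem_pending counter are hypothetical, only
  the call_rcu()/callback shape is the kernel's actual API.

	static atomic_long_t bpf_mem_pending;	/* bytes awaiting free */

	struct bpf_obj {
		struct rcu_head rcu;
		size_t size;
	};

	/* Second half of the accounting: runs after a grace period. */
	static void bpf_obj_free_cb(struct rcu_head *rcu)
	{
		struct bpf_obj *obj = container_of(rcu, struct bpf_obj, rcu);

		atomic_long_sub(obj->size, &bpf_mem_pending);
		kfree(obj);
	}

	/* First half: account at the callsite, then defer the free. */
	static void bpf_obj_free_deferred(struct bpf_obj *obj)
	{
		atomic_long_add(obj->size, &bpf_mem_pending);
		call_rcu(&obj->rcu, bpf_obj_free_cb);
	}
]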
> >> > I thought there was a patchset for a whole bpf-specific memory
> >> > allocator, where accounting would be implemented naturally, I
> >> > would imagine.
> >> >
> >>
> >> I posted a patchset[1] that annotates both allocating and freeing
> >> several months ago. But unfortunately, after more investigation and
> >> verification, I found that the deferred freeing context is a
> >> problem that can't be resolved easily. That's why I finally decided
> >> to annotate allocating only.
> >>
> >> [1]. https://lore.kernel.org/linux-mm/20220921170002.29557-1-laoar.shao@gmail.com/
> >>
> >> > > To store the information of a slab or a page, we need a new
> >> > > member in struct page, but we can do it in a page extension,
> >> > > which avoids changing the size of struct page. So a new page
> >> > > extension, active_vm, is introduced. Each page and each slab
> >> > > object allocated as BPF memory will have a struct active_vm.
> >> > > It is named active_vm because we can easily extend it to other
> >> > > areas; for example, in the future we may use it to count other
> >> > > memory usage.
> >> > >
> >> > > The new page extension active_vm can be disabled via
> >> > > CONFIG_ACTIVE_VM at compile time or the kernel parameter
> >> > > `active_vm=` at runtime.
> >> >
> >> > The issue with page_ext is the extra memory usage, so it was
> >> > rather intended for debugging features that can be always
> >> > compiled in, but only enabled at runtime when debugging is
> >> > needed. The overhead is only paid when enabled. That's at least
> >> > the case for page_owner and page_table_check. The 32-bit
> >> > page_idle is rather an oddity that could have instead stayed
> >> > 64-bit only.
> >>
> >> Right, it seems page_ext is currently for debugging purposes only.
> >>
> >> > But this proposes page_ext functionality that is supposed to be
> >> > enabled at all times in production, with the goal of improved
> >> > accounting, not on-demand debugging. I'm afraid the costs will
> >> > outweigh the benefits.
> >>
> >> The memory overhead of this new page extension is 8/4096, which is
> >> about 0.2% of total memory. Not too big to be unacceptable.
> >
> > It's generally unacceptable to increase sizeof(struct page) (or to
> > enable page_ext by default; that's why page_ext is for debugging
> > purposes only).
> >
> >> If the user really thinks this overhead is unacceptable, he can set
> >> "active_vm=off" to disable it.
> >
> > I'd say many people won't welcome adding 0.2% of total memory by
> > default to get BPF memory usage.
>
> Agreed.
>
> >> To reduce the memory overhead further, I have a bold idea.
> >> Actually, we don't need to allocate a page extension for every
> >> page; we only need to allocate it if the user needs to access it.
> >> That suggests we could allocate page extensions dynamically rather
> >> than preallocating them at boot, but I haven't investigated deeply
> >> enough to check whether it can work. What do you think?
>
> There are lots of benefits (simplicity) to page_ext being allocated
> as it is today. What you're suggesting will be better solved (in a
> few years :) by Matthew's bold ideas about shrinking the current
> struct page and allocating usecase-specific descriptors.
>
> >> > Just a quick thought, in case the bpf accounting really can't be
> >> > handled without marking pages and slab objects - since memcg
> >> > already has hooks there without need of page_ext, couldn't it be
> >> > done by extending the memcg infra instead?
> >>
> >> We need to make sure the accounting of BPF memory usage still works
> >> even without memcg; see also the previous discussion[2].
> >>
> >> [2]. https://lore.kernel.org/linux-mm/Yy53cgcwx+hTll4R@slm.duckdns.org/