From: Yafang Shao <laoar.shao@gmail.com>
Date: Wed, 18 Jan 2023 11:07:48 +0800
Subject: Re: [RFC PATCH bpf-next v2 00/11] mm, bpf: Add BPF into /proc/meminfo
To: Alexei Starovoitov
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Vlastimil Babka, Alexei Starovoitov,
 Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Tejun Heo,
 dennis@kernel.org, Chris Lameter, Andrew Morton, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Roman Gushchin, linux-mm, bpf
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 18, 2023 at 1:25 AM Alexei Starovoitov wrote:
>
> On Fri, Jan 13, 2023 at 3:53 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > On Fri, Jan 13, 2023 at 5:05 AM Alexei Starovoitov wrote:
> > >
> > > On Thu, Jan 12, 2023 at 7:53 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > > >
> > > > Currently there's no way to get BPF memory usage; we can only
> > > > estimate it with bpftool or memcg, both of which are unreliable.
> > > >
> > > > - bpftool
> > > >   `bpftool {map,prog} show` can show us the memlock of each map and
> > > >   prog, but the memlock varies from the real memory size. The memlock
> > > >   of a bpf object is approximately
> > > >   `round_up(key_size + value_size, 8) * max_entries`,
> > > >   so 1) it can't apply to non-preallocated bpf maps, which may
> > > >   increase or decrease their real memory size dynamically; 2) the element
> > > >   size of some bpf maps is not `key_size + value_size`, for example the
> > > >   element size of htab is
> > > >   `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`.
> > > >   That said, the difference between these two values can be very large
> > > >   when key_size and value_size are small. For example, in my verification,
> > > >   the memlock and the real memory size of a preallocated hash map are:
> > > >
> > > >   $ grep BPF /proc/meminfo
> > > >   BPF:                350 kB   <<< the size of the preallocated memalloc pool
> > > >
> > > >   (create hash map)
> > > >
> > > >   $ bpftool map show
> > > >   41549: hash  name count_map  flags 0x0
> > > >           key 4B  value 4B  max_entries 1048576  memlock 8388608B
> > > >
> > > >   $ grep BPF /proc/meminfo
> > > >   BPF:              82284 kB
> > > >
> > > >   So the real memory size is $((82284 - 350)), which is 81934 kB,
> > > >   while the memlock is only 8192 kB.
> > >
> > > hashmap with key 4b and value 4b looks artificial to me,
> > > but since you're concerned with accuracy of bpftool reporting,
> > > please fix the estimation in bpf_map_memory_footprint().
> >
> > I thought bpf_map_memory_footprint() was deprecated, so I didn't try
> > to fix it before.
>
> It's not deprecated. It's trying to be accurate.
> See bpf_map_value_size().
> In the past we had to be precise when we calculated the required memory
> before we allocated, and that was causing ongoing maintenance issues.
> Now bpf_map_memory_footprint() is an estimate for show_fdinfo.
> It can be made more accurate for this map with corner case key/value sizes.
>

Thanks for the clarification.

> > You're correct that:
> >
> > > > size of some bpf maps is not `key_size + value_size`, for example the
> > > > element size of htab is
> > > > `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`
> > >
> > > So just teach bpf_map_memory_footprint() to do this more accurately.
> > > Add bucket size to it as well.
> > > Make it even more accurate with prealloc vs not.
> > > Much simpler change than adding run-time overhead to every alloc/free
> > > on bpf side.
> >
> > It seems that we'd better introduce ->memory_footprint for some
> > specific bpf maps. I will think about it.
>
> No. Don't build it into a replica of what we had before.
> Make the existing bpf_map_memory_footprint() more accurate.
>

I just don't want to add many if-elses or switch-cases into
bpf_map_memory_footprint(), because I think it is a little ugly.
Introducing a new map ops callback could make it cleaner. For example,

static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
{
        unsigned long size;

        if (map->ops->map_mem_footprint)
                return map->ops->map_mem_footprint(map);

        size = round_up(map->key_size + bpf_map_value_size(map), 8);
        return round_up(map->max_entries * size, PAGE_SIZE);
}
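
For the hash map specifically, such a callback could fold in the bucket array
and the prealloc vs non-prealloc cases roughly as below. This is only an
untested sketch of the idea: it assumes the new ->map_mem_footprint hook from
the snippet above and reuses bookkeeping that kernel/bpf/hashtab.c already
keeps (n_buckets, elem_size, count, htab_is_prealloc()), so the exact names
may need adjusting.

/* Untested sketch, not actual kernel code: relies on the hypothetical
 * ->map_mem_footprint callback above and on struct bpf_htab bookkeeping
 * (n_buckets, elem_size, count, htab_is_prealloc()).
 */
static unsigned long htab_map_mem_footprint(const struct bpf_map *map)
{
        struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
        unsigned long size = sizeof(*htab);

        /* The bucket array is always allocated at map creation time. */
        size += (unsigned long)htab->n_buckets * sizeof(struct bucket);

        if (htab_is_prealloc(htab))
                /* Preallocated maps reserve every element up front. */
                size += (unsigned long)map->max_entries * htab->elem_size;
        else
                /* Non-prealloc maps grow and shrink with the live element count. */
                size += (unsigned long)atomic_read(&htab->count) * htab->elem_size;

        return size;
}

With elem_size covering sizeof(struct htab_elem) plus the rounded key and
value sizes, the estimate for the 4B/4B map above should land much closer to
the ~80 MB actually consumed than the 8 MB that memlock reports today.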

> > > bpf side tracks all of its allocations. There is no need to do that
> > > on the generic mm side.
> > > Exposing an aggregated single number in /proc/meminfo also looks wrong.
> >
> > Do you mean that we shouldn't expose it in /proc/meminfo?
>
> We should not because it helps one particular use case only.
> Somebody else might want map mem info per container,
> then somebody would need it per user, etc.

It seems we should show memcg info and user info in bpftool map show.

> bpftool map show | awk
> solves all those cases without adding new uapi-s.

Makes sense to me.

--
Regards
Yafang