From: Song Liu <song@kernel.org>
Date: Mon, 21 Nov 2022 19:28:36 -0700
Subject: Re: [PATCH bpf-next v4 0/6] execmem_alloc for BPF programs
To: Luis Chamberlain
Cc: bpf@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
 x86@kernel.org, peterz@infradead.org, hch@lst.de,
 rick.p.edgecombe@intel.com, rppt@kernel.org, willy@infradead.org,
 dave@stgolabs.net, a.manzanares@samsung.com
References: <20221117202322.944661-1-song@kernel.org>
On Mon, Nov 21, 2022 at 1:12 PM Luis Chamberlain wrote:
>
> On Thu, Nov 17, 2022 at 12:23:16PM -0800, Song Liu wrote:
> > This patchset tries to address the following issues:
> >
> > 1. Direct map fragmentation
> >
> > On x86, STRICT_*_RWX requires the direct map of any RO+X memory to
> > also be RO+X. These set_memory_* calls cause 1GB page table entries
> > to be split into 2MB and 4kB ones. This fragmentation of the direct
> > map results in bigger and slower page tables, and pressure on both
> > the instruction and data TLB.
> >
> > Our previous work in bpf_prog_pack tries to address this issue from
> > the BPF program side. Based on the experiments by Aaron Lu [4],
> > bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> > programs.
>
> This value is clear, but I'd like to see at least one more new user,
> and the respective commit log should show the gains, as Aaron Lu showed.
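For anyone following along: the allocate-then-protect pattern described
under #1 looks roughly like the sketch below. This is a simplified
illustration, not code from this series; alloc_rox_text() is just a
made-up name, while module_alloc(), set_vm_flush_reset_perms(), and the
set_memory_*() helpers are the existing kernel APIs the cover letter
refers to.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/moduleloader.h>
#include <linux/set_memory.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/*
 * Sketch of the classic pattern: allocate 4kB-mapped memory, write the
 * code while it is still writable, then flip it to RO+X.  On x86 with
 * STRICT_*_RWX, each set_memory_*() call below must also update the
 * direct-map alias of these pages, and that is what splits 1GB/2MB
 * direct-map entries into 4kB ones.
 */
static void *alloc_rox_text(const void *image, size_t len)
{
        unsigned int npages = DIV_ROUND_UP(len, PAGE_SIZE);
        void *buf = module_alloc(len);          /* RW mapping, 4kB pages */

        if (!buf)
                return NULL;

        memcpy(buf, image, len);                /* write while still writable */

        set_vm_flush_reset_perms(buf);          /* so vfree() can reset perms */
        set_memory_ro((unsigned long)buf, npages);
        set_memory_x((unsigned long)buf, npages);
        return buf;                             /* released later with vfree() */
}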
> > 2. iTLB pressure from BPF programs
> >
> > Dynamic kernel text such as modules and BPF programs (even with the
> > current bpf_prog_pack) uses 4kB pages on x86. When the total size of
> > modules and BPF programs is large, we see a visible performance drop
> > caused by a high iTLB miss rate.
>
> As suggested by Mike Rapoport, "benchmarking iTLB performance on an idle
> system is not very representative. TLB is a scarce resource, so it'd be
> interesting to see this benchmark on a loaded system."
>
> This would also help pave the way to measure this for more possible
> future callers like modules. Therein lies the true value of this
> consideration.
>
> Also, you mention your perf stats are run on a VM. I am curious what
> you need to get TLB events properly measured on a VM, and whether that
> data is really reliable vs. bare metal. I haven't yet been successful
> in getting perf stat for TLB to work on a VM, and based on what I've
> read I have been cautious about the results.

To make these perf counters work on a VM, we need a newer host kernel
(my system is running a 5.6-based kernel, but I am not sure what the
minimum required version is). Then we need to run qemu with the
"-cpu host" option (both host and guest are x86_64).

> So curious if you'd see something different on bare metal.

Once the above is all worked out, the VM behaves the same as bare metal
from the perf counters' point of view.

> [0] https://lkml.kernel.org/r/Y3YA2mRZDJkB4lmP@kernel.org
>
> > 3. TLB shootdown for short-lived BPF programs
> >
> > Before bpf_prog_pack, loading and unloading BPF programs required a
> > global TLB shootdown. This patchset (and bpf_prog_pack) replaces it
> > with a local TLB flush.
>
> If this is all done on the bpf code replacement, then the commit log
> should clarify that, so future users are not surprised if they don't
> see these gains; this is specific to the way the bpf code used
> bpf_prog_pack. Also, you can measure the shootdowns and show the
> differences with perf stat -e tlb:tlb_flush.
>
> > 4. Reduce memory usage by BPF programs (in some cases)
> >
> > Most BPF programs and various trampolines are small, and they often
> > occupy a whole page. From a random server in our fleet, 50% of the
> > loaded BPF programs are less than 500 bytes in size, and 75% of them
> > are less than 2kB. Allowing these BPF programs to share 2MB pages
> > would yield some memory savings for systems with many BPF programs.
> > For systems with only a small number of BPF programs, this patch may
> > waste a little memory by allocating one 2MB page but using only part
> > of it.
> >
> > 5. Introduce a unified API to allocate memory with special permissions.
> >
> > This will help get rid of set_vm_flush_reset_perms calls from users of
> > vmalloc, module_alloc, etc.
>
> And *this* is one of the reasons I'm so eager to see a proper solution
> drawn up. This would be a huge win for modules; however, since some of
> the complexity of special permissions for modules lies in all the
> cross-architecture hanky-panky, I'd prefer to see this merged *iff* we
> have modules converted as well, as that would give us a clearer picture
> of whether the solution covers the bases. And we'd get proper testing
> on this, rather than it being a special thing for BPF.
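For concreteness, the kind of caller the unified API is meant to serve
would look roughly like the sketch below. The names follow the series
(execmem_alloc/execmem_fill/execmem_free), but the signatures here are
paraphrased and load_generated_text() and the alignment value are made
up, so treat this as illustrative rather than the final API.

/*
 * Sketch of a caller of the proposed API.  The allocation comes from a
 * shared, 2MB-backed RO+X pool, so the caller never touches
 * set_memory_*() or set_vm_flush_reset_perms().
 */
static void *load_generated_text(void *image, size_t len)
{
        void *text = execmem_alloc(len, 64);    /* size, alignment */

        if (!text)
                return NULL;

        /*
         * Copy into the RO+X region through the kernel's text-poking
         * path rather than a plain memcpy(); this is what lets the
         * series replace the global TLB shootdown with a local flush.
         */
        execmem_fill(text, image, len);
        return text;                            /* freed with execmem_free() */
}

The point of the unified API is that permissions and the underlying 2MB
pages are managed once, centrally, so callers stop sprinkling
set_memory_*() and set_vm_flush_reset_perms() calls around.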
> > Based on our experiments [5], we measured a ~0.6% performance
> > improvement from bpf_prog_pack. This patchset further boosts the
> > improvement to ~0.8%.
>
> I'd prefer we leave out arbitrary performance data, as it does not help much.

This really bothers me.

With real workloads, we are talking about a performance difference of
~1%. I don't think there is any open source benchmark that can show
this level of performance difference. In our case, we used an A/B test
with 80 hosts (40 vs. 40), running for many hours, to confidently show
a 1% performance difference.

This exact benchmark has a very good record of reporting smallish
performance regressions. For example, commit 7af0145067bc ("x86/mm/cpa:
Avoid the 4k pages check completely") fixed a bug that split the page
table (from 2MB to 4kB) for the WHOLE kernel text. The bug stayed in
the kernel for almost a year, and none of the available open source
benchmarks caught it before this specific benchmark did. We have used
this benchmark to demonstrate the performance benefits of many
optimizations. I don't understand why it has suddenly become "arbitrary
performance data".

Song

>
> > The difference is because bpf_prog_pack uses 512x 4kB pages instead
> > of 1x 2MB page; bpf_prog_pack as-is doesn't resolve #2 above.
> >
> > This patchset replaces bpf_prog_pack with a better API and makes it
> > available for other dynamic kernel text, such as modules, ftrace,
> > and kprobes.
>
> Let's see that through; then I think the series builds confidence in
> the implementation.
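For what it's worth, on the allocation side a module conversion could in
principle look as small as the hypothetical sketch below; MODULE_ALIGN is
the existing moduleloader.h macro, and this override is not part of this
series, purely an illustration of the direction.

/* Hypothetical: back module text with the shared RO+X pool. */
void *module_alloc(unsigned long size)
{
        return execmem_alloc(size, MODULE_ALIGN);
}

This of course glosses over the real difficulty: module text still has to
be written and relocated before it becomes RO+X, which today happens via
direct writes on an RW mapping, and that is exactly where the
per-architecture care comes in.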