From: Song Liu <song@kernel.org>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: bpf@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
	x86@kernel.org, peterz@infradead.org, hch@lst.de,
	rick.p.edgecombe@intel.com, rppt@kernel.org, willy@infradead.org,
	dave@stgolabs.net, a.manzanares@samsung.com
Subject: Re: [PATCH bpf-next v4 0/6] execmem_alloc for BPF programs
Date: Mon, 21 Nov 2022 19:28:36 -0700
Message-ID: <CAPhsuW7AfwpV6G8U7VRXMcjBEUf7OCOY5eR7eagEoXVK-AmBRg@mail.gmail.com>
In-Reply-To: <Y3vbwMptiNP6aJDh@bombadil.infradead.org>

On Mon, Nov 21, 2022 at 1:12 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Thu, Nov 17, 2022 at 12:23:16PM -0800, Song Liu wrote:
> > This patchset tries to address the following issues:
> >
> > 1. Direct map fragmentation
> >
> > On x86, STRICT_*_RWX requires the direct map of any RO+X memory to be
> > RO+X as well. These set_memory_* calls cause 1GB page table entries to
> > be split into 2MB and 4kB ones. This fragmentation of the direct map
> > results in bigger and slower page tables, and in pressure on both the
> > instruction and data TLBs.
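
For illustration, here is a minimal sketch of the allocation pattern
being described, assuming a module_alloc()-style user on x86. This is
not code from the series, and the helper name alloc_rox() is made up:

  #include <linux/kernel.h>
  #include <linux/moduleloader.h>
  #include <linux/set_memory.h>
  #include <linux/string.h>
  #include <linux/vmalloc.h>

  /* Allocate a buffer, copy code into it, then flip it to RO+X.
   * Each set_memory_* call below may split a 1GB or 2MB direct map
   * entry covering the underlying pages.
   */
  static void *alloc_rox(const void *image, size_t size)
  {
          int npages = DIV_ROUND_UP(size, PAGE_SIZE);
          void *buf = module_alloc(size);        /* RW, in vmalloc space */

          if (!buf)
                  return NULL;
          set_vm_flush_reset_perms(buf);         /* reset perms on vfree() */
          memcpy(buf, image, size);
          set_memory_ro((unsigned long)buf, npages);
          set_memory_x((unsigned long)buf, npages);
          return buf;
  }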
> >
> > Our previous work on bpf_prog_pack tries to address this issue from
> > the BPF program side. Based on the experiments by Aaron Lu [4],
> > bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> > programs.
>
> This value is clear, but I'd like to see at least one other new user,
> and the respective commit log should show the gains the way Aaron Lu
> did.
>
> > 2. iTLB pressure from BPF program
> >
> > Dynamic kernel text, such as modules and BPF programs (even with the
> > current bpf_prog_pack), uses 4kB pages on x86. When the total size of
> > modules and BPF programs is large, we see a visible performance drop
> > caused by a high iTLB miss rate.
>
> As suggested by Mike Rapoport, "benchmarking iTLB performance on an idle
> system is not very representative. TLB is a scarce resource, so it'd be
> interesting to see this benchmark on a loaded system."
>
> This would also help pave the way to measuring this for more possible
> future callers like modules. Therein lies the true value of this
> consideration.
>
> Also, you mention your perf stats are run on a VM. I am curious what
> things you need to get TLB events properly measured on a VM, and
> whether the data is really as reliable as bare metal. I haven't yet
> been successful in getting perf stat for TLB events to work on a VM,
> and based on what I've read I have been cautious about the results.

To make these perf counters work in a VM, we need a newer host kernel
(my system is running a 5.6 based kernel, but I am not sure what the
minimum required version is). Then we need to run qemu with the
"-cpu host" option, e.g. "qemu-system-x86_64 -cpu host ..." (both host
and guest are x86_64 here).

>
> So curious if you'd see something different on bare metal.

Once all of the above is worked out, a VM runs the same as bare metal
from the perf counters' point of view.

>
> [0] https://lkml.kernel.org/r/Y3YA2mRZDJkB4lmP@kernel.org
>
> > 3. TLB shootdown for short-living BPF programs
> >
> > Before bpf_prog_pack, loading and unloading BPF programs required a
> > global TLB shootdown. This patchset (like bpf_prog_pack before it)
> > replaces that with a local TLB flush.
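
For illustration, a conceptual sketch of the difference; this is not
code from the series, and the use of flush_tlb_kernel_range() and
text_poke_set() here is simplified:

  /* Freeing a whole executable vmalloc area resets its permissions,
   * which ends in a kernel-range TLB flush on every CPU:
   */
  set_vm_flush_reset_perms(buf);
  ...
  vfree(buf);     /* eventually flush_tlb_kernel_range() on all CPUs */

  /* Returning a range inside a long-lived 2MB allocation instead
   * pokes that range through a temporary mapping, which only needs a
   * local flush (the bpf_prog_pack behavior described above):
   */
  text_poke_set(prog_addr, 0xcc, prog_size);   /* x86: fill with int3 */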
>
> If this is all done in the bpf code replacement, then the commit log
> should clarify that, as it allows future users not to be surprised if
> they don't see these gains; this is specific to the way bpf code used
> bpf_prog_pack. Also, you can measure the shootdowns and show the
> difference with perf stat on the tlb:tlb_flush tracepoint.
>
> > 4. Reduce memory usage by BPF programs (in some cases)
> >
> > Most BPF programs and various trampolines are small, yet each often
> > occupies a whole page. On a random server in our fleet, 50% of the
> > loaded BPF programs are less than 500 bytes in size, and 75% of them
> > are less than 2kB. Allowing these BPF programs to share 2MB pages
> > would yield some memory savings for systems with many BPF programs.
> > (For example, 1024 programs of 2kB each fit in a single 2MB page,
> > but would occupy 4MB in separate 4kB pages.) For systems with only a
> > small number of BPF programs, this patch may waste a little memory
> > by allocating one 2MB page but using only part of it.
> >
> > 5. Introduce a unified API to allocate memory with special permissions.
> >
> > This will help get rid of set_vm_flush_reset_perms calls from users of
> > vmalloc, module_alloc, etc.
>
> And *this* is one of the reasons I'm so eager to see a proper solution
> drawn up. This would be a huge win for modules. However, since some of
> the complexity of special permissions with modules lies in all the
> cross-architecture hanky panky, I'd prefer to see this merged *iff* we
> have modules converted as well, as that would give us a clearer
> picture of whether the solution covers all the bases. And we'd get
> proper testing on this, rather than it being a special thing for BPF.
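
For illustration, a sketch of what a converted user might look like.
The signatures below are assumptions based on the summary of patch 1
("vmalloc: introduce execmem_alloc, execmem_free, and execmem_fill")
and are not verified against the series:

  /* The allocation comes back from a shared huge-page region that is
   * already mapped RO+X, so the caller makes no set_memory_* or
   * set_vm_flush_reset_perms() calls at all.
   */
  void *buf = execmem_alloc(size, align);      /* assumed signature */

  if (!buf)
          return -ENOMEM;
  execmem_fill(buf, image, size);              /* text_poke-style copy */
  /* ... use buf as executable text ... */
  execmem_free(buf);                           /* local TLB flush only */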
>
> > Based on our experiments [5], we measured a ~0.6% performance
> > improvement from bpf_prog_pack. This patchset further boosts the
> > improvement to ~0.8%.
>
> I'd prefer we leave out arbitrary performance data, as it does not help much.

This really bothers me. With real workloads, we are talking about a
performance difference of ~1%. I don't think there is any open source
benchmark that can show this level of difference. In our case, we used
an A/B test with 80 hosts (40 vs. 40), running for many hours, to show
a 1% performance difference with confidence.

This exact benchmark has a very good record of catching smallish
performance regressions. For example, this commit

  commit 7af0145067bc ("x86/mm/cpa: Avoid the 4k pages check completely")

fixes a bug that split the page table (from 2MB to 4kB) for the WHOLE
kernel text. The bug stayed in the kernel for almost a year, and none
of the available open source benchmarks caught it before this specific
benchmark did.

We have used this benchmark to demonstrate the performance benefits of
many optimizations. I don't understand why it has suddenly become
"arbitrary performance data".

Song

>
> > The difference is because bpf_prog_pack uses 512 4kB pages instead
> > of one 2MB page, so bpf_prog_pack as-is doesn't resolve #2 above.
> >
> > This patchset replaces bpf_prog_pack with a better API and makes it
> > available to other dynamic kernel text, such as modules, ftrace, and
> > kprobes.
>
> Let's see that through; then I think the series builds confidence in
> the implementation.

