From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Song Liu <song@kernel.org>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"x86@kernel.org" <x86@kernel.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"hch@lst.de" <hch@lst.de>,
"rick.p.edgecombe@intel.com" <rick.p.edgecombe@intel.com>,
"aaron.lu@intel.com" <aaron.lu@intel.com>,
"rppt@kernel.org" <rppt@kernel.org>,
"mcgrof@kernel.org" <mcgrof@kernel.org>
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
Date: Tue, 8 Nov 2022 11:44:48 +0000 [thread overview]
Message-ID: <83277694-6cb3-3fc7-b300-d39f82ac5b04@csgroup.eu> (raw)
In-Reply-To: <20221107223921.3451913-1-song@kernel.org>
On 07/11/2022 at 23:39, Song Liu wrote:
> This patchset tries to address the following issues:
>
> 1. Direct map fragmentation
>
> On x86, STRICT_*_RWX requires the direct map of any RO+X memory to
> also be RO+X. These set_memory_* calls cause 1GB page table entries to
> be split into 2MB and 4kB ones. This fragmentation of the direct map
> results in bigger and slower page tables and puts pressure on both the
> instruction and data TLBs.
>
> Our previous work, bpf_prog_pack, tries to address this issue from
> the BPF program side. Based on the experiments by Aaron Lu [4],
> bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> programs.
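
To make the direct map angle concrete, here is my sketch (not code from
the patchset) of the pre-bpf_prog_pack allocation pattern; the
set_memory_* calls at the end are what split the huge direct map
entries:

	void *alloc_rox_legacy(size_t size)	/* size assumed page-aligned */
	{
		void *p = module_alloc(size);	/* 4kB vmalloc pages */

		if (!p)
			return NULL;

		/* Reset permissions on vfree() so the pages can be reused. */
		set_vm_flush_reset_perms(p);

		/* ... the JIT writes the program text into p here ... */

		/*
		 * Permission changes are applied to the direct map alias
		 * of every backing page as well, splitting 1GB/2MB direct
		 * map entries down to 4kB.
		 */
		set_memory_ro((unsigned long)p, size >> PAGE_SHIFT);
		set_memory_x((unsigned long)p, size >> PAGE_SHIFT);

		return p;
	}
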
>
> 2. iTLB pressure from BPF programs
>
> Dynamic kernel text, such as modules and BPF programs (even with the
> current bpf_prog_pack), uses 4kB pages on x86. When the total size of
> modules and BPF programs is large, we can see a visible performance
> drop caused by a high iTLB miss rate.
>
> 3. TLB shootdown for short-lived BPF programs
>
> Before bpf_prog_pack, loading and unloading BPF programs required a
> global TLB shootdown. This patchset (and bpf_prog_pack) replaces it
> with a local TLB flush.
>
> 4. Reduce memory usage by BPF programs (in some cases)
>
> Most BPF programs and various trampolines are small, yet each of them
> often occupies a whole page. On a random server in our fleet, 50% of
> the loaded BPF programs are less than 500 bytes in size, and 75% of
> them are less than 2kB. Allowing these BPF programs to share 2MB pages
> would yield some memory savings for systems with many BPF programs.
> For systems with only a small number of BPF programs, this patch may
> waste a little memory by allocating one 2MB page but using only part
> of it.
>
>
> Based on our experiments [5], we measured a 0.5% performance
> improvement from bpf_prog_pack. This patchset further boosts the
> improvement to 0.7%. The difference comes from the fact that
> bpf_prog_pack uses 512 4kB pages instead of one 2MB page, so
> bpf_prog_pack as-is doesn't resolve #2 above.
>
> This patchset replaces bpf_prog_pack with a better API and makes it
> available for other dynamic kernel text, such as modules, ftrace, and
> kprobes.
>
>
> This set enables BPF programs and BPF dispatchers to share huge pages
> with a new API:
>    execmem_alloc()
>    execmem_free()
>    execmem_fill()
>
> The idea is similar to Peter's suggestion in [1].
>
> execmem_alloc() manages a set of PMD_SIZE RO+X memory regions and
> allocates chunks of them to its users. execmem_free() is used to free
> memory allocated by execmem_alloc(). execmem_fill() is used to update
> memory allocated by execmem_alloc().
>
> Memory allocated by execmem_alloc() is RO+X, so this does not violate
> W^X. The caller has to update the content with a text_poke()-like
> mechanism. Specifically, execmem_fill() is provided to update memory
> allocated by execmem_alloc(). execmem_fill() also makes sure the
> update stays within the boundary of one chunk allocated by
> execmem_alloc(). Please refer to patch 1/5 for more details.
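
To check my understanding of the intended usage, a minimal sketch from
my reading of patch 1/5 (error handling trimmed):

	/* Carve a chunk out of a shared PMD_SIZE RO+X region. */
	void *image = execmem_alloc(prog_size);

	if (!image)
		return -ENOMEM;

	/*
	 * The chunk is mapped RO+X and cannot be written through this
	 * mapping; execmem_fill() copies the JITed text in with a
	 * text_poke()-like mechanism and checks that the update stays
	 * within this single chunk.
	 */
	execmem_fill(image, jited_text, prog_size);

	/* ... when the program is unloaded ... */
	execmem_free(image);
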
>
> Patch 3/5 uses these new APIs in BPF programs and the BPF dispatcher.
>
> Patches 4/5 and 5/5 allow static kernel text (_stext to _etext) to
> share PMD_SIZE pages with dynamic kernel text on x86_64. This is
> achieved by allocating PMD_SIZE pages up to roundup(_etext, PMD_SIZE),
> and then using the range from _etext to roundup(_etext, PMD_SIZE) for
> dynamic kernel text.
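
If I read patches 4/5 and 5/5 correctly, the x86_64 side then boils
down to something like this at boot (a sketch of the idea, not the
actual code):

	/*
	 * Static kernel text is mapped with PMD_SIZE pages, so the RO+X
	 * mapping extends past _etext to the next PMD boundary. Donate
	 * that already-mapped tail to the execmem allocator.
	 */
	unsigned long start = (unsigned long)_etext;
	unsigned long end = roundup(start, PMD_SIZE);

	register_text_tail_vm(start, end);
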
Would it be possible to have something more generic than being tied to
PMD_SIZE? On powerpc 8xx, PMD_SIZE is 4MB while hugepages are 512kB and
8MB.
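
To illustrate (entirely hypothetical, no such hook exists today), the
pool size could be an arch override instead of a hard-coded PMD_SIZE:

	/* Arch hook: the huge page size backing an execmem pool. */
	#ifndef arch_execmem_pool_size
	#define arch_execmem_pool_size()	PMD_SIZE
	#endif

8xx could then return 512kB or 8MB depending on the configuration, and
other architectures would keep the PMD_SIZE default.
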
Christophe