From: Song Liu <songliubraving@meta.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"song@kernel.org" <song@kernel.org>, "hch@lst.de" <hch@lst.de>,
Kernel Team <Kernel-team@fb.com>,
"peterz@infradead.org" <peterz@infradead.org>,
"urezki@gmail.com" <urezki@gmail.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"x86@kernel.org" <x86@kernel.org>,
"Hansen, Dave" <dave.hansen@intel.com>
Subject: Re: [RFC v2 3/4] modules, x86: use vmalloc_exec for module core
Date: Fri, 14 Oct 2022 18:26:04 +0000
Message-ID: <7112B8B4-B593-45AA-A9AD-2ABEEE96223E@fb.com>
In-Reply-To: <fb7a38faa52ce0f35061473c9c8b56394a726e59.camel@intel.com>
> On Oct 14, 2022, at 8:42 AM, Edgecombe, Rick P <rick.p.edgecombe@intel.com> wrote:
>
> On Fri, 2022-10-07 at 16:43 -0700, Song Liu wrote:
>> diff --git a/kernel/module/main.c b/kernel/module/main.c
>> index a4e4d84b6f4e..b44806e31a56 100644
>> --- a/kernel/module/main.c
>> +++ b/kernel/module/main.c
>> @@ -53,6 +53,7 @@
>> #include <linux/bsearch.h>
>> #include <linux/dynamic_debug.h>
>> #include <linux/audit.h>
>> +#include <linux/bpf.h>
>> #include <uapi/linux/module.h>
>> #include "internal.h"
>>
>> @@ -1203,7 +1204,7 @@ static void free_module(struct module *mod)
>> lockdep_free_key_range(mod->data_layout.base, mod->data_layout.size);
>>
>> /* Finally, free the core (containing the module structure) */
>> - module_memfree(mod->core_layout.base);
>> + vfree_exec(mod->core_layout.base);
>> #ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
>> vfree(mod->data_layout.base);
>> #endif
>> @@ -1321,7 +1322,8 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
>> ksym = resolve_symbol_wait(mod, info, name);
>> /* Ok if resolved. */
>> if (ksym && !IS_ERR(ksym)) {
>> - sym[i].st_value = kernel_symbol_value(ksym);
>> + unsigned long val = kernel_symbol_value(ksym);
>> + bpf_arch_text_copy(&sym[i].st_value, &val, sizeof(val));
>
> Why bpf_arch_text_copy()? This of course won't work for other
> architectures, so there needs to be a fallback method. That RFC broke
> the operation into two stages: loading and finalized. When loading, on
> non-x86 the writes would simply go to the allocation mapped as writable.
> When it was finalized, it changed the allocation to its final permission
> (RO, etc). Then for x86 it does text_poke() for the writes and has the
> allocation RO from the beginning.

Yeah, this one (3/4) is really a prototype to show that vmalloc_exec could
work for modules (with a lot more work, of course). Something to replace
bpf_arch_text_copy() is one of the issues we need to address in the future.
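
For illustration, one possible direction (not part of this RFC; the helper
name and the config symbol below are made up) would be a generic text-copy
helper that architectures with a text-poking primitive back with
text_poke_copy(), while everyone else falls back to a plain memcpy() into
the still-writable allocation:

/*
 * Hypothetical sketch only: module_text_copy() and
 * CONFIG_ARCH_HAS_TEXT_POKE_COPY are invented names.
 */
#include <linux/string.h>
#include <asm/text-patching.h>

static void *module_text_copy(void *dst, const void *src, size_t len)
{
#ifdef CONFIG_ARCH_HAS_TEXT_POKE_COPY
	/* x86-style: the mapping stays RO+X; patch through a temporary alias. */
	return text_poke_copy(dst, src, len);
#else
	/* Fallback: the allocation is still writable during module load. */
	return memcpy(dst, src, len);
#endif
}

With something along those lines, simplify_symbols() above would call
module_text_copy(&sym[i].st_value, &val, sizeof(val)) instead of calling
bpf_arch_text_copy() directly.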
>
> I ended up needing a staging buffer for modules too, so that the code
> could operate on it directly. I can't remember why that was; it might
> be unneeded now since you moved data out of the core allocation.

Both the BPF JIT and the BPF dispatcher use a staging buffer with
bpf_prog_pack. The benefit of this approach is that it minimizes the
number of text_poke_copy() calls. OTOH, it is quite a pain to get all
the relative calls right, as the staging buffer has a different address
from the final allocation.
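
As a made-up illustration (not code from the BPF JIT) of why the address
difference hurts: an x86 CALL stores a rel32 displacement, and that
displacement has to be computed against the instruction's final address
even though the bytes are emitted into the staging buffer:

#include <stdint.h>

#define CALL_INSN_SIZE 5	/* 0xE8 opcode + 4-byte rel32 */

static int32_t call_rel32(void *staging_insn, void *staging_base,
			  void *final_base, void *target)
{
	/* Where this instruction will live once copied to the final area. */
	uintptr_t final_insn = (uintptr_t)final_base +
			((uintptr_t)staging_insn - (uintptr_t)staging_base);

	/* rel32 is relative to the address of the *next* instruction. */
	return (int32_t)((uintptr_t)target - (final_insn + CALL_INSN_SIZE));
}

Every emitted call or jump needs this kind of translation, which is the
bookkeeping that makes the staging buffer approach fiddly.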

I think we may not need the staging buffer for modules, as modules are
loaded/unloaded much less often than BPF programs are JITed (so it is OK
for module loading to be slightly slower).

BTW, I cannot take credit for splitting module data out of the core
allocation; Christophe Leroy did that work. :)
Thanks,
Song