From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Barret Rhoden <brho@google.com>
Cc: bpf <bpf@vger.kernel.org>, Daniel Borkmann <daniel@iogearbox.net>,
	 Andrii Nakryiko <andrii@kernel.org>,
	Martin KaFai Lau <martin.lau@kernel.org>,
	 Kumar Kartikeya Dwivedi <memxor@gmail.com>,
	Eddy Z <eddyz87@gmail.com>, Tejun Heo <tj@kernel.org>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	linux-mm <linux-mm@kvack.org>,  Kernel Team <kernel-team@fb.com>
Subject: Re: [PATCH bpf-next 04/16] bpf: Introduce bpf_arena.
Date: Wed, 7 Feb 2024 12:55:14 -0800
Message-ID: <CAADnVQJJ7M+OHnygbuN4qapCS8_r-mimM6CLw5oee8ixvmqg4Q@mail.gmail.com>
In-Reply-To: <c9001d70-a6ae-46b1-b20e-1aaf4a06ffd1@google.com>

On Wed, Feb 7, 2024 at 10:40 AM Barret Rhoden <brho@google.com> wrote:
>
> On 2/6/24 17:04, Alexei Starovoitov wrote:
> > +
> > +static long compute_pgoff(struct bpf_arena *arena, long uaddr)
> > +{
> > +     return (u32)(uaddr - (u32)arena->user_vm_start) >> PAGE_SHIFT;
> > +}
> > +
> > +#define MT_ENTRY ((void *)&arena_map_ops) /* unused. has to be valid pointer */
> > +
> > +/*
> > + * Reserve a "zero page", so that bpf prog and user space never see
> > + * a pointer to arena with lower 32 bits being zero.
> > + * bpf_cast_user() promotes it to full 64-bit NULL.
> > + */
> > +static int reserve_zero_page(struct bpf_arena *arena)
> > +{
> > +     long pgoff = compute_pgoff(arena, 0);
> > +
> > +     return mtree_insert(&arena->mt, pgoff, MT_ENTRY, GFP_KERNEL);
> > +}
> > +
>
> this is pretty tricky, and i think i didn't understand it at first.
>
> you're punching a hole in the arena, such that BPF won't allocate it via
> arena_alloc_pages().  thus BPF won't 'produce' an object with a pointer
> ending in 0x00000000.
>
> depending on where userspace mmaps the arena, that hole may or may not
> be the first page in the array.  if userspace mmaps it to a 4GB aligned
> virtual address, it'll be page 0.  but it could be at some arbitrary
> offset within the 4GB arena.
>
> that arbitrariness makes it harder for a BPF program to do its own
> allocations within the arena.  i'm planning on carving up the 4GB arena
> for my own purposes, managed by BPF, with the expectation that i'll be
> able to allocate any 'virtual address' within the arena.  but there's a
> magic page that won't be usable.
>
> i can certainly live with this.  just mmap userspace to a 4GB aligned
> address + PGSIZE, so that the last page in the arena is page 0.  but
> it's a little weird.

Agree. I came to the same conclusion while adding global variables to the arena.
From the compiler's point of view all such global vars start at offset zero,
and there is no way to just "move them up by a page".
For example, in C code it will look like:
int __arena var1;
int __arena var2;

&var1 == user_vm_start
&var2 == user_vm_start + 4

If __ulong(map_extra, ...) was used, or mmap(addr, MAP_FIXED), and the address
was 4GB aligned, the lower 32 bits of &var1 will be zero and there is not much
we can do about it.
We could tell LLVM to emit an extra 8 bytes of padding in the arena section,
but it would be a useless pad if the arena is not aligned to 4GB.

Anyway, in the v2 I will remove this reserve_zero_page() logic.
It's causing more harm than good.

>
> though i think we'll have more serious issues if anyone accidentally
> tries to use that zero page.  BPF would get an EEXIST if they try to
> allocate it directly, but then page fault and die if they touched it,
> since there's no page.  i can live with that, if we force it to be the
> last page in the arena.
>
> however, i think you need to add something to the fault handler (below)
> in case userspace touches that page:
>
> [snip]
> > +static vm_fault_t arena_vm_fault(struct vm_fault *vmf)
> > +{
> > +     struct bpf_map *map = vmf->vma->vm_file->private_data;
> > +     struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> > +     struct page *page;
> > +     long kbase, kaddr;
> > +     int ret;
> > +
> > +     kbase = bpf_arena_get_kern_vm_start(arena);
> > +     kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
> > +
> > +     guard(mutex)(&arena->lock);
> > +     page = vmalloc_to_page((void *)kaddr);
> > +     if (page)
> > +             /* already have a page vmap-ed */
> > +             goto out;
> > +
> > +     if (arena->map.map_flags & BPF_F_SEGV_ON_FAULT)
> > +             /* User space requested to segfault when page is not allocated by bpf prog */
> > +             return VM_FAULT_SIGSEGV;
> > +
> > +     ret = mtree_insert(&arena->mt, vmf->pgoff, MT_ENTRY, GFP_KERNEL);
> > +     if (ret == -EEXIST)
> > +             return VM_FAULT_RETRY;
>
> say this was the zero page.  vmalloc_to_page() failed, so we tried to
> insert.  we get EEXIST, since the slot is reserved.  we retry, since we
> were expecting the case where "no page, yet slot reserved" meant that
> BPF was in the middle of filling this page.

Yes. Great catch! I hit that too while playing with global vars.

>
> though i think you can fix this by just treating this as a SIGSEGV
> instead of RETRY.

Agree.

> when i made the original suggestion of making this a
> retry (in an email off list), that was before you had the arena mutex.
> now that you have the mutex, you shouldn't have the scenario where two
> threads are concurrently trying to fill a page.  i.e. mtree_insert +
> page_alloc + vmap are all atomic w.r.t. the mutex.

yes. mutex part makes sense.
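
To be concrete, the tail of arena_vm_fault() would become something like
this (untested sketch; with reserve_zero_page() gone and the allocation
path serialized by arena->lock, hitting -EEXIST here is no longer a
transient state, so there is no point retrying):

	ret = mtree_insert(&arena->mt, vmf->pgoff, MT_ENTRY, GFP_KERNEL);
	if (ret == -EEXIST)
		/* slot is reserved, but no page was vmap-ed: fail hard */
		return VM_FAULT_SIGSEGV;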

> > +
> > +     if (!arena->user_vm_start) {
> > +             arena->user_vm_start = vma->vm_start;
> > +             err = reserve_zero_page(arena);
> > +             if (err)
> > +                     return err;
> > +     }
> > +     arena->user_vm_end = vma->vm_end;
> > +     /*
> > +      * bpf_map_mmap() checks that it's being mmaped as VM_SHARED and
> > +      * clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
> > +      * potential change of user_vm_start.
> > +      */
> > +     vm_flags_set(vma, VM_DONTEXPAND);
> > +     vma->vm_ops = &arena_vm_ops;
> > +     return 0;
> > +}
>
> i think this whole function needs to be protected by the mutex, or at
> least all the stuff related to user_vm_{start,end}.  if you have two
> threads mmapping the region for the first time, you'll race on the
> values of user_vm_*.

yes. will add a mutex guard.
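
Roughly like this (sketch of the v2 mmap callback body, with
reserve_zero_page() already dropped as mentioned above):

	guard(mutex)(&arena->lock);
	if (!arena->user_vm_start)
		arena->user_vm_start = vma->vm_start;
	arena->user_vm_end = vma->vm_end;
	/*
	 * bpf_map_mmap() checks that it's being mmaped as VM_SHARED and
	 * clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
	 * potential change of user_vm_start.
	 */
	vm_flags_set(vma, VM_DONTEXPAND);
	vma->vm_ops = &arena_vm_ops;
	return 0;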

>
> [snip]
>
> > +/*
> > + * Allocate pages and vmap them into kernel vmalloc area.
> > + * Later the pages will be mmaped into user space vma.
> > + */
> > +static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)
>
> instead of uaddr, can you change this to take an address relative to the
> arena ("arena virtual address"?)?  the caller of this is in BPF, and
> they don't easily know the user virtual address.  maybe even just pgoff
> directly.

I thought about it, but it doesn't quite make sense.
The bpf prog only sees user addresses.
All loads/stores return them. If it bpf_printk-s an address, it will be
a user address.
bpf_arena_alloc_pages() also returns a user address.

Kernel addresses are not seen by the bpf prog at all.
kern_vm_base is completely hidden; it's added to pointers only at JIT time.
So passing uaddr to arena_alloc_pages() matches the mmap style:

uaddr = bpf_arena_alloc_pages(... uaddr ...)
uaddr = mmap(uaddr, ...MAP_FIXED)

Passing pgoff would be weird.
Also note that there is no extra flag for bpf_arena_alloc_pages();
uaddr == a full 64 bits of zeros is not a valid address to use.

> additionally, you won't need to call compute_pgoff().  as it is now, i'm
> not sure what would happen if BPF did an arena_alloc with a uaddr and
> user_vm_start wasn't set yet.

That's impossible. A bpf prog won't load unless the arena map was either
created with a fixed map_extra == user_vm_start, or created with
map_extra == 0 and then mmaped.
Only then can a bpf prog that uses that arena be loaded.
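
For reference, the "fixed map_extra" case looks roughly like this on the
program side (sketch; __ulong() for 64-bit map attributes is what the
libbpf patch in this series adds, and I'm assuming max_entries is the
arena size in pages):

	struct {
		__uint(type, BPF_MAP_TYPE_ARENA);
		__uint(map_flags, BPF_F_MMAPABLE);
		__uint(max_entries, 1000);       /* arena size in pages */
		__ulong(map_extra, 1ull << 44);  /* arbitrary fixed user_vm_start */
	} arena SEC(".maps");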

>
> > +{
> > +     long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;
>
> any time you compute_pgoff() or look at user_vm_{start,end}, maybe
> either hold the mutex, or only do it from mmap faults (where we know
> user_vm_start is already set).  o/w there might be subtle races where
> some other thread is mmapping the arena for the first time.

That's unnecessary, since user_vm_start is fixed for the lifetime of the
bpf prog.
But you spotted a bug. I need to set user_vm_end in arena_map_alloc()
when map_extra is specified.
Otherwise, between arena creation and the mmap, the bpf prog will see a
different page_cnt_max. user_vm_start will be the same, of course.
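
Something along these lines in arena_map_alloc() (sketch; assuming the
arena size in pages comes from max_entries, which isn't visible in this
hunk):

	if (attr->map_extra) {
		arena->user_vm_start = attr->map_extra;
		arena->user_vm_end = arena->user_vm_start +
				     ((u64)attr->max_entries << PAGE_SHIFT);
	}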

> > +
> > +     kaddr = kern_vm_start + (u32)(arena->user_vm_start + pgoff * PAGE_SIZE);
>
> adding user_vm_start here is pretty subtle.
>
> so far i've been thinking that the mtree is the "address space" of the
> arena, in units of pages instead of bytes.  and pgoff is an address
> within the arena.  so mtree slot 0 is the 0th page of the 4GB region.
> and that "arena address space" is mapped at a kernel virtual address and
> a user virtual address (the same for all processes).
>
> to convert user addresses (uaddr et al.) to arena addresses, we subtract
> user_vm_start, which makes sense.  that's what compute_pgoff() does.
>
> i was expecting kaddr = kern_vm_start + pgoff * PGSIZE, essentially
> converting from arena address space to kernel virtual address.
>
> instead, by adding user_vm_start and casting to u32, you're converting
> or shifting arena addresses to *another* arena address (user address,
> truncated to 4GB to keep it in the arena), and that is the one that the
> kernel will use.
>
> is that correct?

Pretty much. Kernel and user have to see exactly the same lower 32 bits.
From the user's pov, allocation starts at pgoff=0, which is the first page
after user_vm_start. That is normal mmap behavior and the in-kernel
vma convention.
Hence arena_alloc_pages() does the same: pgoff=0 is the first
page from the user vma's pov.
The math to compute the kernel address gets complicated because two
bases need to be added. The kernel and user bases are different
64-bit addresses and may not be aligned to 4GB.
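
To make it concrete (illustrative numbers only):

	kern_vm_start = 0xffffc90000000000
	user_vm_start = 0x00007f2345601000	/* not 4GB aligned */
	pgoff = 3

	uaddr = user_vm_start + 3 * PAGE_SIZE	= 0x00007f2345604000
	kaddr = kern_vm_start + (u32)uaddr	= 0xffffc90045604000

Both addresses end in the same lower 32 bits (0x45604000), which is the
property the JIT relies on when it adds kern_vm_base to the 32-bit
arena pointers.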

> my one concern is that there's some subtle wrap-around going on, and due
> to the shifting, kaddr can be very close to the end of the arena and
> page_cnt can be big enough to go outside the 4GB range.  we'd want it to

page_cnt cannot go outside the 4GB range.
page_cnt is the number of pages in arena->user_vm_end - arena->user_vm_start,
and during mmap we check that it's <= 4GB.

> wrap around.  e.g.
>
> user_start_va = 0x1,fffff000
> user_end_va =   0x2,fffff000
> page_cnt_max = 0x100000 or whatever.  full 4GB range.
>
> say we want to alloc at pgoff=0 (uaddr 0x1,fffff000), page_cnt = X.  you
> can get this pgoff either by doing mtree_insert_range or
> mtree_alloc_range on an arena with no allocations.
>
> kaddr = kern_vm_start + 0xfffff000
>
> the size of the vm area is 4GB + guard stuff, and we're right up against
> the end of it.
>
> if page_cnt > the guard size, vmap_pages_range() would be called on
> something outside the vm area we reserved, which seems bad.  and even if
> it wasn't, what we want is for later page maps to start at the beginning
> of kern_vm_start.
>
> the fix might be to just only map a page at a time - maybe a loop.  or
> detect when we're close to the edge and break it into two vmaps.  i feel
> like the loop would be easier to understand, but maybe less efficient.

Oops. You're correct. Great catch.
In earlier versions I had it as a loop, but then I decided that doing
all the mapping ops one page at a time is not efficient.
Oh well. Will fix.
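
The per-page loop would look roughly like this (sketch; the undo path
for a partially mapped range is not shown, and vmap_pages_range() is the
one exported earlier in this series):

	for (i = 0; i < page_cnt; i++) {
		long kaddr = kern_vm_start +
			     (u32)(arena->user_vm_start + (pgoff + i) * PAGE_SIZE);

		ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE, PAGE_KERNEL,
				       &pages[i], PAGE_SHIFT);
		if (ret)
			goto undo;
	}

The (u32) cast on every iteration makes the mapping wrap back to the
start of the kernel vm area instead of running past its end.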

Thanks a lot for the review!

