From: David Vernet <void@manifault.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf@vger.kernel.org, daniel@iogearbox.net, andrii@kernel.org,
memxor@gmail.com, eddyz87@gmail.com, tj@kernel.org,
brho@google.com, hannes@cmpxchg.org, lstoakes@gmail.com,
akpm@linux-foundation.org, urezki@gmail.com, hch@infradead.org,
linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH v2 bpf-next 05/20] bpf: Introduce bpf_arena.
Date: Fri, 9 Feb 2024 14:36:34 -0600 [thread overview]
Message-ID: <20240209203634.GA3615691@maniforge.lan> (raw)
In-Reply-To: <20240209040608.98927-6-alexei.starovoitov@gmail.com>
[-- Attachment #1: Type: text/plain, Size: 12246 bytes --]
On Thu, Feb 08, 2024 at 08:05:53PM -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Introduce bpf_arena, which is a sparse shared memory region between the bpf
> program and user space.
>
> Use cases:
> 1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous
> region, like memcached or any key/value storage. The bpf program implements an
> in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a
> value without going to user space.
> 2. The bpf program builds arbitrary data structures in bpf_arena (hash tables,
> rb-trees, sparse arrays), while user space consumes it.
> 3. bpf_arena is a "heap" of memory from the bpf program's point of view.
> The user space may mmap it, but bpf program will not convert pointers
> to user base at run-time to improve bpf program speed.
>
> Initially, the kernel vm_area and user vma are not populated. User space can
> fault in pages within the range. While servicing a page fault, bpf_arena logic
> will insert a new page into the kernel and user vmas. The bpf program can
> allocate pages from that region via bpf_arena_alloc_pages(). This kernel
> function will insert pages into the kernel vm_area. The subsequent fault-in
> from user space will populate that page into the user vma. The
> BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in
> from user space. In such a case, if a page is not allocated by the bpf program
> and not present in the kernel vm_area, the user process will segfault. This is
> useful for use cases 2 and 3 above.
>
> bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages
> either at a specific address within the arena or allocates a range with the
> maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages
> and removes the range from the kernel vm_area and from user process vmas.
>
> bpf_arena can be used as a bpf program "heap" of up to 4GB. The speed of bpf
> program is more important than ease of sharing with user space. This is use
> case 3. In such a case, the BPF_F_NO_USER_CONV flag is recommended. It will
> tell the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a
> 32-bit move wX = wY, which will improve bpf prog performance. Otherwise,
> bpf_arena_cast_user is translated by JIT to conditionally add the upper 32 bits
> of user vm_start (if the pointer is not NULL) to arena pointers before they are
> stored into memory. This way, user space sees them as valid 64-bit pointers.
>
> Diff https://github.com/llvm/llvm-project/pull/79902 taught LLVM BPF backend to
> generate the bpf_cast_kern() instruction before dereference of the arena
> pointer and the bpf_cast_user() instruction when the arena pointer is formed.
> In a typical bpf program there will be very few bpf_cast_user().
>
> From LLVM's point of view, arena pointers are tagged as
> __attribute__((address_space(1))). Hence, clang provides helpful diagnostics
> when pointers cross address space. Libbpf and the kernel support only
> address_space == 1. All other address space identifiers are reserved.
>
> rX = bpf_cast_kern(rY, addr_space) tells the verifier that
> rX->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have
> to be in the 32-bit domain. The verifier will mark load/store through
> PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as
> kern_vm_start + 32bit_addr memory accesses. The behavior is similar to
> copy_from_kernel_nofault() except that no address checks are necessary. The
> address is guaranteed to be in the 4GB range. If the page is not present, the
> destination register is zeroed on read, and the operation is ignored on write.
>
> rX = bpf_cast_user(rY, addr_space) tells the verifier that
> rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then
> the verifier converts cast_user to mov32. Otherwise, JIT will emit native code
> equivalent to:
> rX = (u32)rY;
> if (rY)
> rX |= clear_lo32_bits(arena->user_vm_start); /* replace hi32 bits in rX */
>
> After such conversion, the pointer becomes a valid user pointer within
> bpf_arena range. The user process can access data structures created in
> bpf_arena without any additional computations. For example, a linked list built
> by a bpf program can be walked natively by user space.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
> include/linux/bpf.h | 5 +-
> include/linux/bpf_types.h | 1 +
> include/uapi/linux/bpf.h | 7 +
> kernel/bpf/Makefile | 3 +
> kernel/bpf/arena.c | 557 +++++++++++++++++++++++++++++++++
> kernel/bpf/core.c | 11 +
> kernel/bpf/syscall.c | 3 +
> kernel/bpf/verifier.c | 1 +
> tools/include/uapi/linux/bpf.h | 7 +
> 9 files changed, 593 insertions(+), 2 deletions(-)
> create mode 100644 kernel/bpf/arena.c
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 8b0dcb66eb33..de557c6c42e0 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -37,6 +37,7 @@ struct perf_event;
> struct bpf_prog;
> struct bpf_prog_aux;
> struct bpf_map;
> +struct bpf_arena;
> struct sock;
> struct seq_file;
> struct btf;
> @@ -534,8 +535,8 @@ void bpf_list_head_free(const struct btf_field *field, void *list_head,
> struct bpf_spin_lock *spin_lock);
> void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
> struct bpf_spin_lock *spin_lock);
> -
> -
> +u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena);
> +u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena);
> int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size);
>
> struct bpf_offload_dev;
> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
> index 94baced5a1ad..9f2a6b83b49e 100644
> --- a/include/linux/bpf_types.h
> +++ b/include/linux/bpf_types.h
> @@ -132,6 +132,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
> BPF_MAP_TYPE(BPF_MAP_TYPE_RINGBUF, ringbuf_map_ops)
> BPF_MAP_TYPE(BPF_MAP_TYPE_BLOOM_FILTER, bloom_filter_map_ops)
> BPF_MAP_TYPE(BPF_MAP_TYPE_USER_RINGBUF, user_ringbuf_map_ops)
> +BPF_MAP_TYPE(BPF_MAP_TYPE_ARENA, arena_map_ops)
>
> BPF_LINK_TYPE(BPF_LINK_TYPE_RAW_TRACEPOINT, raw_tracepoint)
> BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index d96708380e52..f6648851eae6 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -983,6 +983,7 @@ enum bpf_map_type {
> BPF_MAP_TYPE_BLOOM_FILTER,
> BPF_MAP_TYPE_USER_RINGBUF,
> BPF_MAP_TYPE_CGRP_STORAGE,
> + BPF_MAP_TYPE_ARENA,
> __MAX_BPF_MAP_TYPE
> };
>
> @@ -1370,6 +1371,12 @@ enum {
>
> /* BPF token FD is passed in a corresponding command's token_fd field */
> BPF_F_TOKEN_FD = (1U << 16),
> +
> +/* When user space page faults in bpf_arena send SIGSEGV instead of inserting new page */
> + BPF_F_SEGV_ON_FAULT = (1U << 17),
> +
> +/* Do not translate kernel bpf_arena pointers to user pointers */
> + BPF_F_NO_USER_CONV = (1U << 18),
> };
>
> /* Flags for BPF_PROG_QUERY. */
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 4ce95acfcaa7..368c5d86b5b7 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -15,6 +15,9 @@ obj-${CONFIG_BPF_LSM} += bpf_inode_storage.o
> obj-$(CONFIG_BPF_SYSCALL) += disasm.o mprog.o
> obj-$(CONFIG_BPF_JIT) += trampoline.o
> obj-$(CONFIG_BPF_SYSCALL) += btf.o memalloc.o
> +ifeq ($(CONFIG_MMU)$(CONFIG_64BIT),yy)
> +obj-$(CONFIG_BPF_SYSCALL) += arena.o
> +endif
> obj-$(CONFIG_BPF_JIT) += dispatcher.o
> ifeq ($(CONFIG_NET),y)
> obj-$(CONFIG_BPF_SYSCALL) += devmap.o
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> new file mode 100644
> index 000000000000..5c1014471740
> --- /dev/null
> +++ b/kernel/bpf/arena.c
> @@ -0,0 +1,557 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <linux/err.h>
> +#include <linux/btf_ids.h>
> +#include <linux/vmalloc.h>
> +#include <linux/pagemap.h>
> +
> +/*
> + * bpf_arena is a sparsely populated shared memory region between bpf program and
> + * user space process.
> + *
> + * For example on x86-64 the values could be:
> + * user_vm_start 7f7d26200000 // picked by mmap()
> + * kern_vm_start ffffc90001e69000 // picked by get_vm_area()
> + * For user space all pointers within the arena are normal 8-byte addresses.
> + * In this example 7f7d26200000 is the address of the first page (pgoff=0).
> + * The bpf program will access it as: kern_vm_start + lower_32bit_of_user_ptr
> + * (u32)7f7d26200000 -> 26200000
> + * hence
> + * ffffc90001e69000 + 26200000 == ffffc90028069000 is "pgoff=0" within 4Gb
> + * kernel memory region.
> + *
> + * BPF JITs generate the following code to access arena:
> + * mov eax, eax // eax has lower 32-bit of user pointer
> + * mov word ptr [rax + r12 + off], bx
> + * where r12 == kern_vm_start and off is s16.
> + * Hence allocate 4Gb + GUARD_SZ/2 on each side.
> + *
> + * Initially kernel vm_area and user vma are not populated.
> + * User space can fault-in any address which will insert the page
> + * into kernel and user vma.
> + * bpf program can allocate a page via bpf_arena_alloc_pages() kfunc
> + * which will insert it into kernel vm_area.
> + * The later fault-in from user space will populate that page into user vma.
> + */
> +
> +/* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
> +#define GUARD_SZ (1ull << sizeof(((struct bpf_insn *)0)->off) * 8)
> +#define KERN_VM_SZ ((1ull << 32) + GUARD_SZ)
> +
> +struct bpf_arena {
> + struct bpf_map map;
> + u64 user_vm_start;
> + u64 user_vm_end;
> + struct vm_struct *kern_vm;
> + struct maple_tree mt;
> + struct list_head vma_list;
> + struct mutex lock;
> +};
> +
> +u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena)
> +{
> + return arena ? (u64) (long) arena->kern_vm->addr + GUARD_SZ / 2 : 0;
> +}
> +
> +u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena)
> +{
> + return arena ? arena->user_vm_start : 0;
> +}
> +
> +static long arena_map_peek_elem(struct bpf_map *map, void *value)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static long arena_map_push_elem(struct bpf_map *map, void *value, u64 flags)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static long arena_map_pop_elem(struct bpf_map *map, void *value)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static long arena_map_delete_elem(struct bpf_map *map, void *value)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static int arena_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static long compute_pgoff(struct bpf_arena *arena, long uaddr)
> +{
> + return (u32)(uaddr - (u32)arena->user_vm_start) >> PAGE_SHIFT;
> +}
> +
> +static struct bpf_map *arena_map_alloc(union bpf_attr *attr)
> +{
> + struct vm_struct *kern_vm;
> + int numa_node = bpf_map_attr_numa_node(attr);
> + struct bpf_arena *arena;
> + u64 vm_range;
> + int err = -ENOMEM;
> +
> + if (attr->key_size || attr->value_size || attr->max_entries == 0 ||
> + /* BPF_F_MMAPABLE must be set */
> + !(attr->map_flags & BPF_F_MMAPABLE) ||
> + /* No unsupported flags present */
> + (attr->map_flags & ~(BPF_F_SEGV_ON_FAULT | BPF_F_MMAPABLE | BPF_F_NO_USER_CONV)))
> + return ERR_PTR(-EINVAL);
> +
> + if (attr->map_extra & ~PAGE_MASK)
> + /* If non-zero the map_extra is an expected user VMA start address */
> + return ERR_PTR(-EINVAL);
So I haven't done a thorough review of this patch, beyond trying to
understand the semantics of bpf arenas. On that note, could you please
document the semantics of map_extra with arena maps where map_extra is
defined in [0]?
[0]: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/include/uapi/linux/bpf.h#n1439
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]
next prev parent reply other threads:[~2024-02-09 20:36 UTC|newest]
Thread overview: 93+ messages / expand[flat|nested] mbox.gz Atom feed top
2024-02-09 4:05 [PATCH v2 bpf-next 00/20] bpf: Introduce BPF arena Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 01/20] bpf: Allow kfuncs return 'void *' Alexei Starovoitov
2024-02-10 6:49 ` Kumar Kartikeya Dwivedi
2024-02-09 4:05 ` [PATCH v2 bpf-next 02/20] bpf: Recognize '__map' suffix in kfunc arguments Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 03/20] bpf: Plumb get_unmapped_area() callback into bpf_map_ops Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 04/20] mm: Expose vmap_pages_range() to the rest of the kernel Alexei Starovoitov
2024-02-14 8:36 ` Christoph Hellwig
2024-02-14 20:53 ` Alexei Starovoitov
2024-02-15 6:58 ` Christoph Hellwig
2024-02-15 20:50 ` Alexei Starovoitov
2024-02-15 21:26 ` Linus Torvalds
2024-02-16 9:31 ` Christoph Hellwig
2024-02-16 16:54 ` Alexei Starovoitov
2024-02-16 17:18 ` Uladzislau Rezki
2024-02-18 2:06 ` Alexei Starovoitov
2024-02-20 6:57 ` Christoph Hellwig
2024-02-09 4:05 ` [PATCH v2 bpf-next 05/20] bpf: Introduce bpf_arena Alexei Starovoitov
2024-02-09 20:36 ` David Vernet [this message]
2024-02-10 4:38 ` Alexei Starovoitov
2024-02-12 15:56 ` Barret Rhoden
2024-02-12 18:23 ` Alexei Starovoitov
[not found] ` <CAP01T75y-E8qjMpn_9E-k8H0QpPdjvYx9MMgx6cxGfmdVat+Xw@mail.gmail.com>
2024-02-12 18:21 ` Alexei Starovoitov
2024-02-13 23:14 ` Andrii Nakryiko
2024-02-13 23:29 ` Alexei Starovoitov
2024-02-14 0:03 ` Andrii Nakryiko
2024-02-14 0:14 ` Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 06/20] bpf: Disasm support for cast_kern/user instructions Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 07/20] bpf: Add x86-64 JIT support for PROBE_MEM32 pseudo instructions Alexei Starovoitov
2024-02-09 17:20 ` Eduard Zingerman
2024-02-13 22:20 ` Alexei Starovoitov
[not found] ` <CAP01T75sq=G5pfYvsYuxfdoFGOqSGrNcamCyA0posFA9pxNWRA@mail.gmail.com>
2024-02-13 22:00 ` Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 08/20] bpf: Add x86-64 JIT support for bpf_cast_user instruction Alexei Starovoitov
2024-02-10 1:15 ` Eduard Zingerman
[not found] ` <CAP01T76JMbnS3PSpontzWmtSZ9cs97yO772R8zpWH-eHXviLSA@mail.gmail.com>
2024-02-13 22:28 ` Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 09/20] bpf: Recognize cast_kern/user instructions in the verifier Alexei Starovoitov
2024-02-10 1:13 ` Eduard Zingerman
2024-02-13 2:58 ` Alexei Starovoitov
2024-02-13 12:01 ` Eduard Zingerman
2024-02-09 4:05 ` [PATCH v2 bpf-next 10/20] bpf: Recognize btf_decl_tag("arg:arena") as PTR_TO_ARENA Alexei Starovoitov
2024-02-13 23:14 ` Andrii Nakryiko
2024-02-14 0:26 ` Alexei Starovoitov
2024-02-09 4:05 ` [PATCH v2 bpf-next 11/20] libbpf: Add __arg_arena to bpf_helpers.h Alexei Starovoitov
2024-02-13 23:14 ` Andrii Nakryiko
2024-02-09 4:06 ` [PATCH v2 bpf-next 12/20] libbpf: Add support for bpf_arena Alexei Starovoitov
2024-02-12 18:12 ` Eduard Zingerman
2024-02-12 20:14 ` Alexei Starovoitov
2024-02-12 20:21 ` Eduard Zingerman
[not found] ` <CAP01T761B1+paMwrQesjX+zqFwQp8iUzLORueTjTLSHPbJ+0fQ@mail.gmail.com>
2024-02-12 19:11 ` Andrii Nakryiko
2024-02-13 23:15 ` Andrii Nakryiko
2024-02-14 0:32 ` Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 13/20] libbpf: Allow specifying 64-bit integers in map BTF Alexei Starovoitov
2024-02-12 18:58 ` Eduard Zingerman
2024-02-13 23:15 ` Andrii Nakryiko
2024-02-14 0:47 ` Alexei Starovoitov
2024-02-14 0:51 ` Andrii Nakryiko
2024-02-09 4:06 ` [PATCH v2 bpf-next 14/20] libbpf: Recognize __arena global varaibles Alexei Starovoitov
2024-02-13 0:34 ` Eduard Zingerman
2024-02-13 0:44 ` Alexei Starovoitov
2024-02-13 0:49 ` Eduard Zingerman
2024-02-13 2:08 ` Alexei Starovoitov
2024-02-13 12:48 ` Eduard Zingerman
2024-02-13 23:11 ` Eduard Zingerman
2024-02-13 23:17 ` Andrii Nakryiko
2024-02-13 23:36 ` Eduard Zingerman
2024-02-14 0:09 ` Andrii Nakryiko
2024-02-14 0:16 ` Eduard Zingerman
2024-02-14 0:29 ` Andrii Nakryiko
2024-02-14 1:24 ` Alexei Starovoitov
2024-02-14 17:24 ` Andrii Nakryiko
2024-02-15 23:22 ` Andrii Nakryiko
2024-02-16 2:45 ` Alexei Starovoitov
2024-02-16 4:51 ` Andrii Nakryiko
2024-02-14 1:02 ` Alexei Starovoitov
2024-02-14 15:10 ` Eduard Zingerman
2024-02-13 23:15 ` Andrii Nakryiko
2024-02-09 4:06 ` [PATCH v2 bpf-next 15/20] bpf: Tell bpf programs kernel's PAGE_SIZE Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 16/20] bpf: Add helper macro bpf_arena_cast() Alexei Starovoitov
[not found] ` <CAP01T743Mzfi9+2yMjB5+m2jpBLvij_tLyLFptkOpCekUn=soA@mail.gmail.com>
2024-02-13 22:35 ` Alexei Starovoitov
2024-02-14 16:47 ` Eduard Zingerman
2024-02-14 17:45 ` Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 17/20] selftests/bpf: Add unit tests for bpf_arena_alloc/free_pages Alexei Starovoitov
2024-02-09 23:14 ` David Vernet
2024-02-10 4:35 ` Alexei Starovoitov
2024-02-12 16:48 ` David Vernet
[not found] ` <CAP01T75qCUabu4-18nYwRDnSyTTgeAgNN3kePY5PXdnoTKt+Cg@mail.gmail.com>
2024-02-13 23:19 ` Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 18/20] selftests/bpf: Add bpf_arena_list test Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 19/20] selftests/bpf: Add bpf_arena_htab test Alexei Starovoitov
2024-02-09 4:06 ` [PATCH v2 bpf-next 20/20] selftests/bpf: Convert simple page_frag allocator to per-cpu Alexei Starovoitov
[not found] ` <CAP01T74x-N71rbS+jZ2z+3MPMe5WDeWKV_gWJmDCikV0YOpPFQ@mail.gmail.com>
2024-02-14 1:37 ` Alexei Starovoitov
2024-02-12 14:14 ` [PATCH v2 bpf-next 00/20] bpf: Introduce BPF arena David Hildenbrand
2024-02-12 18:14 ` Alexei Starovoitov
2024-02-13 10:35 ` David Hildenbrand
2024-02-12 17:36 ` Barret Rhoden
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20240209203634.GA3615691@maniforge.lan \
--to=void@manifault.com \
--cc=akpm@linux-foundation.org \
--cc=alexei.starovoitov@gmail.com \
--cc=andrii@kernel.org \
--cc=bpf@vger.kernel.org \
--cc=brho@google.com \
--cc=daniel@iogearbox.net \
--cc=eddyz87@gmail.com \
--cc=hannes@cmpxchg.org \
--cc=hch@infradead.org \
--cc=kernel-team@fb.com \
--cc=linux-mm@kvack.org \
--cc=lstoakes@gmail.com \
--cc=memxor@gmail.com \
--cc=tj@kernel.org \
--cc=urezki@gmail.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox