* [PATCH v2 bpf-next 0/3] mm: Cleanup and identify various users of kernel virtual address space
@ 2024-02-23 23:57 Alexei Starovoitov
2024-02-23 23:57 ` [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range Alexei Starovoitov
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2024-02-23 23:57 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, torvalds, brho, hannes, lstoakes, akpm, urezki,
hch, boris.ostrovsky, sstabellini, jgross, linux-mm, xen-devel,
kernel-team
From: Alexei Starovoitov <ast@kernel.org>
There are various users of kernel virtual address space: vmalloc, vmap, ioremap, xen.
- the vmalloc use case dominates the usage. Such vm areas have the VM_ALLOC flag
and are treated differently by KASAN.
- the areas created by the vmap() function should be tagged with VM_MAP
(as the majority of users do).
- ioremap areas are tagged with VM_IOREMAP and, unlike vmalloc/vmap, the vm area
start is aligned to the size of the area.
- there is also xen usage that is marked as VM_IOREMAP but, unlike all other
VM_IOREMAP users, never calls ioremap_page_range().
To clean this up:
1. Enforce that ioremap_page_range() checks the range and VM_IOREMAP flag.
2. Introduce a VM_XEN flag to separate xen use cases from ioremap.
In addition, BPF would like to reserve regions of kernel virtual address
space and populate them lazily, similar to the xen use cases.
For that reason, introduce a VM_SPARSE flag and vm_area_[un]map_pages() helpers
to populate such sparse areas.
In the end, /proc/vmallocinfo will show
"vmalloc"
"vmap"
"ioremap"
"xen"
"sparse"
categories for different kinds of address regions.
ioremap, xen, and sparse regions will read as zero when dumped through /proc/kcore.
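Illustratively, the per-category counts could be pulled out of such a dump with a one-liner (the sample lines below are fabricated; on a real system read /proc/vmallocinfo as root):

```shell
# Tally vm areas per category from a vmallocinfo-style dump.
# Sample data is made up; the category is the last field of each line.
cat > /tmp/vmallocinfo.sample <<'EOF'
0xffffc90000000000-0xffffc90000005000   20480 irq_init_percpu_irqstack+0x176/0x1c0 vmap
0xffffc90000007000-0xffffc9000000c000   20480 acpi_os_map_iomem+0x2b1/0x2e0 phys=0x1ffe18000 ioremap
0xffffc9000000c000-0xffffc9000000e000    8192 gen_pool_add_owner+0x49/0x130 pages=1 vmalloc
0xffffc9000000e000-0xffffc90000010000    8192 gen_pool_add_owner+0x49/0x130 pages=1 vmalloc
EOF
awk '{ print $NF }' /tmp/vmallocinfo.sample | sort | uniq -c | sort -rn
```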
Alexei Starovoitov (3):
mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.
mm, xen: Separate xen use cases from ioremap.
mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
arch/x86/xen/grant-table.c | 2 +-
drivers/xen/xenbus/xenbus_client.c | 2 +-
include/linux/vmalloc.h | 5 +++
mm/vmalloc.c | 71 +++++++++++++++++++++++++++++-
4 files changed, 76 insertions(+), 4 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.
2024-02-23 23:57 [PATCH v2 bpf-next 0/3] mm: Cleanup and identify various users of kernel virtual address space Alexei Starovoitov
@ 2024-02-23 23:57 ` Alexei Starovoitov
2024-02-26 10:50 ` Christoph Hellwig
2024-02-23 23:57 ` [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap Alexei Starovoitov
2024-02-23 23:57 ` [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages() Alexei Starovoitov
2 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2024-02-23 23:57 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, torvalds, brho, hannes, lstoakes, akpm, urezki,
hch, boris.ostrovsky, sstabellini, jgross, linux-mm, xen-devel,
kernel-team
From: Alexei Starovoitov <ast@kernel.org>
There are various users of the get_vm_area() + ioremap_page_range() APIs.
Enforce that get_vm_area() was requested with the VM_IOREMAP flag and that the range
passed to ioremap_page_range() matches the created vm_area, to avoid accidentally
ioremapping the wrong address range.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
mm/vmalloc.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..f42f98a127d5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -307,8 +307,21 @@ static int vmap_range_noflush(unsigned long addr, unsigned long end,
int ioremap_page_range(unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot)
{
+ struct vm_struct *area;
int err;
+ area = find_vm_area((void *)addr);
+ if (!area || !(area->flags & VM_IOREMAP)) {
+ WARN_ONCE(1, "vm_area at addr %lx is not marked as VM_IOREMAP\n", addr);
+ return -EINVAL;
+ }
+ if (addr != (unsigned long)area->addr ||
+ (void *)end != area->addr + get_vm_area_size(area)) {
+ WARN_ONCE(1, "ioremap request [%lx,%lx) doesn't match vm_area [%lx, %lx)\n",
+ addr, end, (long)area->addr,
+ (long)area->addr + get_vm_area_size(area));
+ return -ERANGE;
+ }
err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
ioremap_max_page_shift);
flush_cache_vmap(addr, end);
--
2.34.1
* [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap.
2024-02-23 23:57 [PATCH v2 bpf-next 0/3] mm: Cleanup and identify various users of kernel virtual address space Alexei Starovoitov
2024-02-23 23:57 ` [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range Alexei Starovoitov
@ 2024-02-23 23:57 ` Alexei Starovoitov
2024-02-26 10:51 ` Christoph Hellwig
2024-03-04 7:54 ` Mike Rapoport
2024-02-23 23:57 ` [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages() Alexei Starovoitov
2 siblings, 2 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2024-02-23 23:57 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, torvalds, brho, hannes, lstoakes, akpm, urezki,
hch, boris.ostrovsky, sstabellini, jgross, linux-mm, xen-devel,
kernel-team
From: Alexei Starovoitov <ast@kernel.org>
The xen grant table and xenbus ring are not ioremap the way arch-specific code uses it,
so let's add a VM_XEN flag to separate them from VM_IOREMAP users.
xen will not and should not be calling ioremap_page_range() on that range.
/proc/vmallocinfo will also print such regions as "xen" instead of "ioremap".
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
arch/x86/xen/grant-table.c | 2 +-
drivers/xen/xenbus/xenbus_client.c | 2 +-
include/linux/vmalloc.h | 1 +
mm/vmalloc.c | 7 +++++--
4 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 1e681bf62561..b816db0349c4 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -104,7 +104,7 @@ static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
if (area->ptes == NULL)
return -ENOMEM;
- area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
+ area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_XEN);
if (!area->area)
goto out_free_ptes;
if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 32835b4b9bc5..b9c81a2d578b 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -758,7 +758,7 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
bool leaked = false;
int err = -ENOMEM;
- area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_IOREMAP);
+ area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_XEN);
if (!area)
return -ENOMEM;
if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..223e51c243bc 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -28,6 +28,7 @@ struct iov_iter; /* in uio.h */
#define VM_FLUSH_RESET_PERMS 0x00000100 /* reset direct map and flush TLB on unmap, can't be freed in atomic context */
#define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
#define VM_ALLOW_HUGE_VMAP 0x00000400 /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
+#define VM_XEN 0x00000800 /* xen use cases */
#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
!defined(CONFIG_KASAN_VMALLOC)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f42f98a127d5..d769a65bddad 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3822,9 +3822,9 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
if (flags & VMAP_RAM)
copied = vmap_ram_vread_iter(iter, addr, n, flags);
- else if (!(vm && (vm->flags & VM_IOREMAP)))
+ else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN))))
copied = aligned_vread_iter(iter, addr, n);
- else /* IOREMAP area is treated as memory hole */
+ else /* IOREMAP|XEN area is treated as memory hole */
copied = zero_iter(iter, n);
addr += copied;
@@ -4415,6 +4415,9 @@ static int s_show(struct seq_file *m, void *p)
if (v->flags & VM_IOREMAP)
seq_puts(m, " ioremap");
+ if (v->flags & VM_XEN)
+ seq_puts(m, " xen");
+
if (v->flags & VM_ALLOC)
seq_puts(m, " vmalloc");
--
2.34.1
* [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
2024-02-23 23:57 [PATCH v2 bpf-next 0/3] mm: Cleanup and identify various users of kernel virtual address space Alexei Starovoitov
2024-02-23 23:57 ` [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range Alexei Starovoitov
2024-02-23 23:57 ` [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap Alexei Starovoitov
@ 2024-02-23 23:57 ` Alexei Starovoitov
2024-02-27 17:59 ` Christoph Hellwig
2 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2024-02-23 23:57 UTC (permalink / raw)
To: bpf
Cc: daniel, andrii, torvalds, brho, hannes, lstoakes, akpm, urezki,
hch, boris.ostrovsky, sstabellini, jgross, linux-mm, xen-devel,
kernel-team
From: Alexei Starovoitov <ast@kernel.org>
The vmap/vmalloc APIs are used to map a set of pages into contiguous kernel virtual space.
get_vm_area() with an appropriate flag is used to request an area of the kernel address range.
It's used for the vmalloc, vmap, ioremap, and xen use cases.
- vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag.
- the areas created by vmap() function should be tagged with VM_MAP.
- ioremap areas are tagged with VM_IOREMAP.
- xen use cases are VM_XEN.
BPF would like to extend the vmap API to implement a lazily populated,
sparse, yet contiguous region of kernel virtual space.
Introduce VM_SPARSE vm_area flag and
vm_area_map_pages(area, start_addr, count, pages) API to map a set
of pages within a given area.
It has the same sanity checks as vmap() does.
It also checks that the area was created by get_vm_area() with the VM_SPARSE flag,
which identifies such areas in /proc/vmallocinfo
and makes them read as zero pages through /proc/kcore.
The next commits will introduce bpf_arena, a sparsely populated shared
memory region between a bpf program and a user space process. It will map
privately-managed pages into a sparse vm area with the following steps:
area = get_vm_area(area_size, VM_SPARSE); // at bpf prog verification time
vm_area_map_pages(area, kaddr, 1, page); // on demand
// it will return an error if kaddr is out of range
vm_area_unmap_pages(area, kaddr, 1);
free_vm_area(area); // after bpf prog is unloaded
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
include/linux/vmalloc.h | 4 +++
mm/vmalloc.c | 55 +++++++++++++++++++++++++++++++++++++++--
2 files changed, 57 insertions(+), 2 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 223e51c243bc..416bc7b0b4db 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -29,6 +29,7 @@ struct iov_iter; /* in uio.h */
#define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
#define VM_ALLOW_HUGE_VMAP 0x00000400 /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
#define VM_XEN 0x00000800 /* xen use cases */
+#define VM_SPARSE 0x00001000 /* sparse vm_area. not all pages are present. */
#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
!defined(CONFIG_KASAN_VMALLOC)
@@ -233,6 +234,9 @@ static inline bool is_vm_area_hugepages(const void *addr)
}
#ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+ struct page **pages);
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count);
void vunmap_range(unsigned long addr, unsigned long end);
static inline void set_vm_flush_reset_perms(void *addr)
{
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d769a65bddad..a05dfbbacb78 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -648,6 +648,54 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
return err;
}
+/**
+ * vm_area_map_pages - map pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+ struct page **pages)
+{
+ unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+ unsigned long end = addr + size;
+
+ might_sleep();
+ if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+ return -EINVAL;
+ if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+ return -EINVAL;
+ if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+ return -EINVAL;
+ if (count > totalram_pages())
+ return -E2BIG;
+ if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+ return -ERANGE;
+
+ return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages to unmap
+ */
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
+{
+ unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+ unsigned long end = addr + size;
+
+ if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+ return -EINVAL;
+ if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+ return -ERANGE;
+
+ vunmap_range(addr, end);
+ return 0;
+}
+
int is_vmalloc_or_module_addr(const void *x)
{
/*
@@ -3822,9 +3870,9 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
if (flags & VMAP_RAM)
copied = vmap_ram_vread_iter(iter, addr, n, flags);
- else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN))))
+ else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN | VM_SPARSE))))
copied = aligned_vread_iter(iter, addr, n);
- else /* IOREMAP|XEN area is treated as memory hole */
+ else /* IOREMAP|XEN|SPARSE area is treated as memory hole */
copied = zero_iter(iter, n);
addr += copied;
@@ -4418,6 +4466,9 @@ static int s_show(struct seq_file *m, void *p)
if (v->flags & VM_XEN)
seq_puts(m, " xen");
+ if (v->flags & VM_SPARSE)
+ seq_puts(m, " sparse");
+
if (v->flags & VM_ALLOC)
seq_puts(m, " vmalloc");
--
2.34.1
* Re: [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.
2024-02-23 23:57 ` [PATCH v2 bpf-next 1/3] mm: Enforce VM_IOREMAP flag and range in ioremap_page_range Alexei Starovoitov
@ 2024-02-26 10:50 ` Christoph Hellwig
0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2024-02-26 10:50 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, daniel, andrii, torvalds, brho, hannes, lstoakes, akpm,
urezki, hch, boris.ostrovsky, sstabellini, jgross, linux-mm,
xen-devel, kernel-team
On Fri, Feb 23, 2024 at 03:57:26PM -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> There are various users of the get_vm_area() + ioremap_page_range() APIs.
> Enforce that get_vm_area() was requested with the VM_IOREMAP flag and that the range
> passed to ioremap_page_range() matches the created vm_area, to avoid accidentally
> ioremapping the wrong address range.
Nit: overly long lines in the commit message here.
Otherwise looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap.
2024-02-23 23:57 ` [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap Alexei Starovoitov
@ 2024-02-26 10:51 ` Christoph Hellwig
2024-03-04 7:54 ` Mike Rapoport
1 sibling, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2024-02-26 10:51 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, daniel, andrii, torvalds, brho, hannes, lstoakes, akpm,
urezki, hch, boris.ostrovsky, sstabellini, jgross, linux-mm,
xen-devel, kernel-team
On Fri, Feb 23, 2024 at 03:57:27PM -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> The xen grant table and xenbus ring are not ioremap the way arch-specific code uses it,
> so let's add a VM_XEN flag to separate them from VM_IOREMAP users.
> xen will not and should not be calling ioremap_page_range() on that range.
> /proc/vmallocinfo will also print such regions as "xen" instead of "ioremap".
Splitting this out is a good idea, but VM_XEN seems a bit too
generic a name. Probably GRANT_TABLE or XEN_GRANT_TABLE, if that isn't
too long, would be better. Maybe the Xen maintainers have an idea.
Also more overlong commit message lines here; I'm not going to complain
about the third patch if they show up again :)
* Re: [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
2024-02-23 23:57 ` [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages() Alexei Starovoitov
@ 2024-02-27 17:59 ` Christoph Hellwig
2024-02-28 1:31 ` Alexei Starovoitov
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2024-02-27 17:59 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, daniel, andrii, torvalds, brho, hannes, lstoakes, akpm,
urezki, hch, boris.ostrovsky, sstabellini, jgross, linux-mm,
xen-devel, kernel-team
> privately-managed pages into a sparse vm area with the following steps:
>
> area = get_vm_area(area_size, VM_SPARSE); // at bpf prog verification time
> vm_area_map_pages(area, kaddr, 1, page); // on demand
> // it will return an error if kaddr is out of range
> vm_area_unmap_pages(area, kaddr, 1);
> free_vm_area(area); // after bpf prog is unloaded
I'm still wondering if this should just use an opaque cookie instead
of exposing the vm_area. But otherwise this mostly looks fine to me.
> + if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> + return -ERANGE;
This check is duplicated so many times that it really begs for a helper.
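In user space, such a helper might look like the sketch below (the function name is hypothetical, and struct vm_struct is reduced to the three fields the check touches so the snippet compiles on its own):

```c
#include <assert.h>
#include <errno.h>

/* Stub mirroring the fields the check uses; the real struct lives in
 * include/linux/vmalloc.h. */
struct vm_struct {
	void *addr;
	unsigned long size;
	unsigned long flags;
};

#define VM_SPARSE 0x1000UL

/* Hypothetical helper consolidating the repeated flag/range check. */
static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
				unsigned long end)
{
	if (!(area->flags & VM_SPARSE))
		return -EINVAL;
	if (end <= start)
		return -EINVAL;
	if (start < (unsigned long)area->addr ||
	    end > (unsigned long)area->addr + area->size)
		return -ERANGE;
	return 0;
}
```

Both vm_area_map_pages() and vm_area_unmap_pages() could then open with a single call to it.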
> +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> +{
> + unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> + unsigned long end = addr + size;
> +
> + if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> + return -EINVAL;
> + if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> + return -ERANGE;
> +
> + vunmap_range(addr, end);
> + return 0;
Does it make much sense to have an error return here vs just debug
checks? It's not like the caller can do much if it violates these
basic invariants.
* Re: [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
2024-02-27 17:59 ` Christoph Hellwig
@ 2024-02-28 1:31 ` Alexei Starovoitov
2024-02-29 15:56 ` Christoph Hellwig
0 siblings, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2024-02-28 1:31 UTC (permalink / raw)
To: Christoph Hellwig
Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
Uladzislau Rezki, Boris Ostrovsky, sstabellini, Juergen Gross,
linux-mm, xen-devel, Kernel Team
On Tue, Feb 27, 2024 at 9:59 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > privately-managed pages into a sparse vm area with the following steps:
> >
> > area = get_vm_area(area_size, VM_SPARSE); // at bpf prog verification time
> > vm_area_map_pages(area, kaddr, 1, page); // on demand
> > // it will return an error if kaddr is out of range
> > vm_area_unmap_pages(area, kaddr, 1);
> > free_vm_area(area); // after bpf prog is unloaded
>
> I'm still wondering if this should just use an opaque cookie instead
> of exposing the vm_area. But otherwise this mostly looks fine to me.
What would it look like with a cookie?
A static inline wrapper around get_vm_area() that returns area->addr?
And the start address of the vmap range will be such a cookie?
Then vm_area_map_pages() will be doing find_vm_area() for kaddr
to check that vm_area->flags & VM_SPARSE?
That's fine,
but what would be the equivalent of void free_vm_area(struct vm_struct *area)?
Another static inline wrapper similar to remove_vm_area()
that also does kfree(area)?
Fine by me, but the API isn't user friendly with such obfuscation.
I guess I don't understand the motivation to hide 'struct vm_struct *'.
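For what it's worth, a user-space sketch of the cookie variant being discussed could look like this (all names are hypothetical; allocation and lookup are stubbed with malloc() and a one-slot table purely so the sketch runs — real code would sit on the vmalloc internals):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

#define VM_SPARSE 0x1000UL

struct vm_struct { void *addr; unsigned long size; unsigned long flags; };

/* One-slot stand-in for the vmalloc-internal area lookup. */
static struct vm_struct *the_area;

static struct vm_struct *find_vm_area(const void *addr)
{
	return (the_area && addr == the_area->addr) ? the_area : NULL;
}

/* Cookie-returning wrapper: callers only ever see the start address. */
static void *get_sparse_vm_area(unsigned long size)
{
	struct vm_struct *a = calloc(1, sizeof(*a));

	if (!a)
		return NULL;
	a->addr = malloc(size);	/* real code: reserve VA, map nothing yet */
	a->size = size;
	a->flags = VM_SPARSE;
	the_area = a;
	return a->addr;
}

/* Counterpart of free_vm_area(), keyed by the cookie. */
static int free_sparse_vm_area(void *cookie)
{
	struct vm_struct *a = find_vm_area(cookie);

	if (!a || !(a->flags & VM_SPARSE))
		return -EINVAL;
	free(a->addr);
	free(a);
	the_area = NULL;
	return 0;
}
```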
> > + if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > + return -ERANGE;
>
> This check is duplicated so many times that it really begs for a helper.
ok. will do.
> > +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> > +{
> > + unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> > + unsigned long end = addr + size;
> > +
> > + if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> > + return -EINVAL;
> > + if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > + return -ERANGE;
> > +
> > + vunmap_range(addr, end);
> > + return 0;
>
> Does it make much sense to have an error return here vs just debug
> checks? It's not like the caller can do much if it violates these
> basic invariants.
Ok. Will switch to a void return.
Will reduce commit log lines to 75 chars in all patches as suggested.
re: the VM_GRANT_TABLE or VM_XEN_GRANT_TABLE suggestion for patch 2.
I'm not sure it fits, since only one of the get_vm_area() calls in xen code
is grant table related. The other one is for xenbus, which
creates a shared memory ring between domains.
So I'm planning to keep it as VM_XEN in the next revision unless
folks come up with a better name.
Thanks for the reviews.
* Re: [PATCH v2 bpf-next 3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
2024-02-28 1:31 ` Alexei Starovoitov
@ 2024-02-29 15:56 ` Christoph Hellwig
0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2024-02-29 15:56 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Christoph Hellwig, bpf, Daniel Borkmann, Andrii Nakryiko,
Linus Torvalds, Barret Rhoden, Johannes Weiner, Lorenzo Stoakes,
Andrew Morton, Uladzislau Rezki, Boris Ostrovsky, sstabellini,
Juergen Gross, linux-mm, xen-devel, Kernel Team
On Tue, Feb 27, 2024 at 05:31:28PM -0800, Alexei Starovoitov wrote:
> What would it look like with a cookie?
> A static inline wrapper around get_vm_area() that returns area->addr ?
> And the start address of vmap range will be such a cookie?
Hmm, just making the kernel virtual address the cookie actually
sounds pretty neat indeed even if I did not have that in mind.
> I guess I don't understand the motivation to hide 'struct vm_struct *'.
The prime reason is that then people will start adding random APIs that
work on it. But let's give it a try without the wrappers and see how
things go.
* Re: [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap.
2024-02-23 23:57 ` [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap Alexei Starovoitov
2024-02-26 10:51 ` Christoph Hellwig
@ 2024-03-04 7:54 ` Mike Rapoport
2024-03-05 0:38 ` Alexei Starovoitov
1 sibling, 1 reply; 11+ messages in thread
From: Mike Rapoport @ 2024-03-04 7:54 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, daniel, andrii, torvalds, brho, hannes, lstoakes, akpm,
urezki, hch, boris.ostrovsky, sstabellini, jgross, linux-mm,
xen-devel, kernel-team
On Fri, Feb 23, 2024 at 03:57:27PM -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> The xen grant table and xenbus ring are not ioremap the way arch-specific code uses it,
> so let's add a VM_XEN flag to separate them from VM_IOREMAP users.
> xen will not and should not be calling ioremap_page_range() on that range.
> /proc/vmallocinfo will also print such regions as "xen" instead of "ioremap".
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
> arch/x86/xen/grant-table.c | 2 +-
> drivers/xen/xenbus/xenbus_client.c | 2 +-
> include/linux/vmalloc.h | 1 +
> mm/vmalloc.c | 7 +++++--
> 4 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> index 1e681bf62561..b816db0349c4 100644
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
> @@ -104,7 +104,7 @@ static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
> area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
> if (area->ptes == NULL)
> return -ENOMEM;
> - area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
> + area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_XEN);
> if (!area->area)
> goto out_free_ptes;
> if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 32835b4b9bc5..b9c81a2d578b 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -758,7 +758,7 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
> bool leaked = false;
> int err = -ENOMEM;
>
> - area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_IOREMAP);
> + area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_XEN);
> if (!area)
> return -ENOMEM;
> if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index c720be70c8dd..223e51c243bc 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -28,6 +28,7 @@ struct iov_iter; /* in uio.h */
> #define VM_FLUSH_RESET_PERMS 0x00000100 /* reset direct map and flush TLB on unmap, can't be freed in atomic context */
> #define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
> #define VM_ALLOW_HUGE_VMAP 0x00000400 /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
> +#define VM_XEN 0x00000800 /* xen use cases */
>
> #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
> !defined(CONFIG_KASAN_VMALLOC)
There's also VM_DEFER_KMEMLEAK a line below:
#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
!defined(CONFIG_KASAN_VMALLOC)
#define VM_DEFER_KMEMLEAK 0x00000800 /* defer kmemleak object creation */
#else
#define VM_DEFER_KMEMLEAK 0
#endif
It should be adjusted as well.
I think it makes sense to use an enumeration for vm_flags, just as
Suren did for GFP
(https://lore.kernel.org/linux-mm/20240224015800.2569851-1-surenb@google.com/)
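As a sketch, that enumeration style applied to the flags touched by this series might look like the following (bit positions taken from the 0x400/0x800/0x1000 values above; purely illustrative, with BIT() stubbed for user space):

```c
#include <assert.h>

#define BIT(nr) (1UL << (nr))

/* Bit numbers for the vm_struct flags this series touches. */
enum {
	VM_ALLOW_HUGE_VMAP_BIT	= 10,
	VM_XEN_BIT		= 11,	/* added in patch 2 */
	VM_SPARSE_BIT		= 12,	/* added in patch 3 */
};

#define VM_ALLOW_HUGE_VMAP	BIT(VM_ALLOW_HUGE_VMAP_BIT)
#define VM_XEN			BIT(VM_XEN_BIT)
#define VM_SPARSE		BIT(VM_SPARSE_BIT)
```

One advantage is that a collision like the VM_XEN/VM_DEFER_KMEMLEAK one noted above becomes impossible to miss, since every flag's bit number is listed in one place.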
--
Sincerely yours,
Mike.
* Re: [PATCH v2 bpf-next 2/3] mm, xen: Separate xen use cases from ioremap.
2024-03-04 7:54 ` Mike Rapoport
@ 2024-03-05 0:38 ` Alexei Starovoitov
0 siblings, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2024-03-05 0:38 UTC (permalink / raw)
To: Mike Rapoport
Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
Uladzislau Rezki, Christoph Hellwig, Boris Ostrovsky,
sstabellini, Juergen Gross, linux-mm, xen-devel, Kernel Team
On Sun, Mar 3, 2024 at 11:55 PM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Fri, Feb 23, 2024 at 03:57:27PM -0800, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > The xen grant table and xenbus ring are not ioremap the way arch-specific code uses it,
> > so let's add a VM_XEN flag to separate them from VM_IOREMAP users.
> > xen will not and should not be calling ioremap_page_range() on that range.
> > /proc/vmallocinfo will also print such regions as "xen" instead of "ioremap".
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > ---
> > arch/x86/xen/grant-table.c | 2 +-
> > drivers/xen/xenbus/xenbus_client.c | 2 +-
> > include/linux/vmalloc.h | 1 +
> > mm/vmalloc.c | 7 +++++--
> > 4 files changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> > index 1e681bf62561..b816db0349c4 100644
> > --- a/arch/x86/xen/grant-table.c
> > +++ b/arch/x86/xen/grant-table.c
> > @@ -104,7 +104,7 @@ static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
> > area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
> > if (area->ptes == NULL)
> > return -ENOMEM;
> > - area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
> > + area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_XEN);
> > if (!area->area)
> > goto out_free_ptes;
> > if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
> > diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> > index 32835b4b9bc5..b9c81a2d578b 100644
> > --- a/drivers/xen/xenbus/xenbus_client.c
> > +++ b/drivers/xen/xenbus/xenbus_client.c
> > @@ -758,7 +758,7 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
> > bool leaked = false;
> > int err = -ENOMEM;
> >
> > - area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_IOREMAP);
> > + area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_XEN);
> > if (!area)
> > return -ENOMEM;
> > if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index c720be70c8dd..223e51c243bc 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -28,6 +28,7 @@ struct iov_iter; /* in uio.h */
> > #define VM_FLUSH_RESET_PERMS 0x00000100 /* reset direct map and flush TLB on unmap, can't be freed in atomic context */
> > #define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
> > #define VM_ALLOW_HUGE_VMAP 0x00000400 /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
> > +#define VM_XEN 0x00000800 /* xen use cases */
> >
> > #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
> > !defined(CONFIG_KASAN_VMALLOC)
>
> There's also VM_DEFER_KMEMLEAK a line below:
Ohh. Good catch. Will fix.
> I think it makes sense to use an enumeration for vm_flags, just like as
> Suren did for GFP
> (https://lore.kernel.org/linux-mm/20240224015800.2569851-1-surenb@google.com/)
Hmm. I'm pretty sure Christoph hates BIT macro obfuscation.
I'm not a fan of it either, though we use it in bpf in a few places.
If mm folks prefer that style, they can do such a conversion later.