linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
* [PATCH] mm: vmalloc: Support more granular vrealloc() sizing
@ 2025-04-24  2:31 Kees Cook
  2025-04-24  9:11 ` Danilo Krummrich
  0 siblings, 1 reply; 3+ messages in thread
From: Kees Cook @ 2025-04-24  2:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Erhard Furtner, Danilo Krummrich, Michal Hocko,
	Vlastimil Babka, Uladzislau Rezki, linux-mm, linux-kernel,
	linux-hardening

Introduce struct vm_struct::requested_size so that the requested
(re)allocation size is retained separately from the allocated area
size. This means that KASAN will correctly poison the correct spans
of requested bytes. This also means we can support growing the usable
portion of an allocation that can already be supported by the existing
area's existing allocation.

Reported-by: Erhard Furtner <erhard_f@mailbox.org>
Closes: https://lore.kernel.org/all/20250408192503.6149a816@outsider.home/
Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: <linux-mm@kvack.org>
---
 include/linux/vmalloc.h |  1 +
 mm/vmalloc.c            | 30 ++++++++++++++++++++++++------
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..5ca8d4dd149d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -61,6 +61,7 @@ struct vm_struct {
 	unsigned int		nr_pages;
 	phys_addr_t		phys_addr;
 	const void		*caller;
+	unsigned long		requested_size;
 };
 
 struct vmap_area {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ed720a787ec..bd8cf50f06b3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1940,7 +1940,7 @@ static inline void setup_vmalloc_vm(struct vm_struct *vm,
 {
 	vm->flags = flags;
 	vm->addr = (void *)va->va_start;
-	vm->size = va_size(va);
+	vm->size = vm->requested_size = va_size(va);
 	vm->caller = caller;
 	va->vm = vm;
 }
@@ -3133,6 +3133,7 @@ struct vm_struct *__get_vm_area_node(unsigned long size,
 
 	area->flags = flags;
 	area->caller = caller;
+	area->requested_size = requested_size;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
 	if (IS_ERR(va)) {
@@ -4063,6 +4064,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
  */
 void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 {
+	struct vm_struct *vm = NULL;
+	size_t alloced_size = 0;
 	size_t old_size = 0;
 	void *n;
 
@@ -4072,15 +4075,17 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 	}
 
 	if (p) {
-		struct vm_struct *vm;
-
 		vm = find_vm_area(p);
 		if (unlikely(!vm)) {
 			WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p);
 			return NULL;
 		}
 
-		old_size = get_vm_area_size(vm);
+		alloced_size = get_vm_area_size(vm);
+		old_size = vm->requested_size;
+		if (WARN(alloced_size < old_size,
+			 "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
+			return NULL;
 	}
 
 	/*
@@ -4088,14 +4093,27 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 	 * would be a good heuristic for when to shrink the vm_area?
 	 */
 	if (size <= old_size) {
-		/* Zero out spare memory. */
-		if (want_init_on_alloc(flags))
+		/* Zero out "freed" memory. */
+		if (want_init_on_free())
 			memset((void *)p + size, 0, old_size - size);
+		vm->requested_size = size;
 		kasan_poison_vmalloc(p + size, old_size - size);
 		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
 		return (void *)p;
 	}
 
+	/*
+	 * We already have the bytes available in the allocation; use them.
+	 */
+	if (size <= alloced_size) {
+		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
+		/* Zero out "alloced" memory. */
+		if (want_init_on_alloc(flags))
+			memset((void *)p + old_size, 0, size - old_size);
+		vm->requested_size = size;
+		kasan_poison_vmalloc(p + size, alloced_size - size);
+	}
+
 	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
 	n = __vmalloc_noprof(size, flags);
 	if (!n)
-- 
2.34.1



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] mm: vmalloc: Support more granular vrealloc() sizing
  2025-04-24  2:31 [PATCH] mm: vmalloc: Support more granular vrealloc() sizing Kees Cook
@ 2025-04-24  9:11 ` Danilo Krummrich
  2025-04-24 18:42   ` Kees Cook
  0 siblings, 1 reply; 3+ messages in thread
From: Danilo Krummrich @ 2025-04-24  9:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Erhard Furtner, Michal Hocko, Vlastimil Babka,
	Uladzislau Rezki, linux-mm, linux-kernel, linux-hardening

On Wed, Apr 23, 2025 at 07:31:23PM -0700, Kees Cook wrote:
> Introduce struct vm_struct::requested_size so that the requested
> (re)allocation size is retained separately from the allocated area
> size. This means that KASAN will correctly poison the correct spans
> of requested bytes. This also means we can support growing the usable
> portion of an allocation that can already be supported by the existing
> area's existing allocation.
> 
> Reported-by: Erhard Furtner <erhard_f@mailbox.org>
> Closes: https://lore.kernel.org/all/20250408192503.6149a816@outsider.home/
> Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")
> Signed-off-by: Kees Cook <kees@kernel.org>

Good catch!

One question below, otherwise

	Reviewed-by: Danilo Krummrich <dakr@kernel.org>

> @@ -4088,14 +4093,27 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  	 * would be a good heuristic for when to shrink the vm_area?
>  	 */
>  	if (size <= old_size) {
> -		/* Zero out spare memory. */
> -		if (want_init_on_alloc(flags))
> +		/* Zero out "freed" memory. */
> +		if (want_init_on_free())
>  			memset((void *)p + size, 0, old_size - size);
> +		vm->requested_size = size;
>  		kasan_poison_vmalloc(p + size, old_size - size);
>  		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
>  		return (void *)p;
>  	}
>  
> +	/*
> +	 * We already have the bytes available in the allocation; use them.
> +	 */
> +	if (size <= alloced_size) {
> +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> +		/* Zero out "alloced" memory. */
> +		if (want_init_on_alloc(flags))
> +			memset((void *)p + old_size, 0, size - old_size);
> +		vm->requested_size = size;
> +		kasan_poison_vmalloc(p + size, alloced_size - size);

Do we need this? We know that old_size < size <= alloced_size. And since
previously [p + old_size, p + alloced_size) must have been poisoned,
[p + size, p + alloced_size) must be poisoned already?

Maybe there was a reason, since in the above (size <= old_size) case
kasan_unpoison_vmalloc() seems unnecessary too.


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] mm: vmalloc: Support more granular vrealloc() sizing
  2025-04-24  9:11 ` Danilo Krummrich
@ 2025-04-24 18:42   ` Kees Cook
  0 siblings, 0 replies; 3+ messages in thread
From: Kees Cook @ 2025-04-24 18:42 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Andrew Morton, Erhard Furtner, Michal Hocko, Vlastimil Babka,
	Uladzislau Rezki, linux-mm, linux-kernel, linux-hardening

On Thu, Apr 24, 2025 at 11:11:47AM +0200, Danilo Krummrich wrote:
> On Wed, Apr 23, 2025 at 07:31:23PM -0700, Kees Cook wrote:
> > Introduce struct vm_struct::requested_size so that the requested
> > (re)allocation size is retained separately from the allocated area
> > size. This means that KASAN will correctly poison the correct spans
> > of requested bytes. This also means we can support growing the usable
> > portion of an allocation that can already be supported by the existing
> > area's existing allocation.
> > 
> > Reported-by: Erhard Furtner <erhard_f@mailbox.org>
> > Closes: https://lore.kernel.org/all/20250408192503.6149a816@outsider.home/
> > Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")
> > Signed-off-by: Kees Cook <kees@kernel.org>
> 
> Good catch!
> 
> One question below, otherwise
> 
> 	Reviewed-by: Danilo Krummrich <dakr@kernel.org>
> 
> > @@ -4088,14 +4093,27 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> >  	 * would be a good heuristic for when to shrink the vm_area?
> >  	 */
> >  	if (size <= old_size) {
> > -		/* Zero out spare memory. */
> > -		if (want_init_on_alloc(flags))
> > +		/* Zero out "freed" memory. */
> > +		if (want_init_on_free())
> >  			memset((void *)p + size, 0, old_size - size);
> > +		vm->requested_size = size;
> >  		kasan_poison_vmalloc(p + size, old_size - size);
> >  		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> >  		return (void *)p;
> >  	}
> >  
> > +	/*
> > +	 * We already have the bytes available in the allocation; use them.
> > +	 */
> > +	if (size <= alloced_size) {
> > +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> > +		/* Zero out "alloced" memory. */
> > +		if (want_init_on_alloc(flags))
> > +			memset((void *)p + old_size, 0, size - old_size);
> > +		vm->requested_size = size;
> > +		kasan_poison_vmalloc(p + size, alloced_size - size);
> 
> Do we need this? We know that old_size < size <= alloced_size. And since
> previously [p + old_size, p + alloced_size) must have been poisoned,
> [p + size, p + alloced_size) must be poisoned already?
> 
> Maybe there was a reason, since in the above (size <= old_size) case
> kasan_unpoison_vmalloc() seems unnecessary too.

Honestly I was just copying the logic from the prior case. But yeah, it
should be possible (in both cases) to just apply the changed span. For
the "size <= old_size" case, it would just be:

	kasan_poison_vmalloc(p + size, old_size - size);

(i.e. the kasan_unpoison_vmalloc() call isn't needed at all, as you say.)

And in the "size <= alloced_size" case, it would just be:

	kasan_unpoison_vmalloc(p + old_size, size - old_size, KASAN_VMALLOC_PROT_NORMAL);

and no kasan_poison_vmalloc() should be needed.

Do the KASAN folks on CC have any opinion on best practices here?

Thanks for looking it over!

-Kees

-- 
Kees Cook


^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2025-04-24 18:42 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-24  2:31 [PATCH] mm: vmalloc: Support more granular vrealloc() sizing Kees Cook
2025-04-24  9:11 ` Danilo Krummrich
2025-04-24 18:42   ` Kees Cook

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox