linux-mm.kvack.org archive mirror
* [PATCH v1 0/2] Fix KASAN support for KHO restored vmalloc regions
@ 2026-02-25 22:38 Pasha Tatashin
  2026-02-25 22:38 ` [PATCH v1 1/2] mm/vmalloc: export clear_vm_uninitialized_flag() Pasha Tatashin
  2026-02-25 22:38 ` [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions Pasha Tatashin
  0 siblings, 2 replies; 5+ messages in thread
From: Pasha Tatashin @ 2026-02-25 22:38 UTC (permalink / raw)
  To: pratyush, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, graf, pasha.tatashin, linux-mm, linux-kernel, surenb,
	mhocko, urezki

When KHO restores a vmalloc area, it maps the preserved physical pages into a
newly allocated virtual memory area. However, because the new area is never
unpoisoned, KASAN treats any access to the restored region as out-of-bounds,
as seen in the following trace:

BUG: KASAN: vmalloc-out-of-bounds in kho_test_restore_data.isra.0+0x17b/0x2cd
Read of size 8 at addr ffffc90000025000 by task swapper/0/1
[...]
Call Trace:
[...]
kasan_report+0xe8/0x120
kho_test_restore_data.isra.0+0x17b/0x2cd
kho_test_init+0x15a/0x1f0
do_one_initcall+0xd5/0x4b0

The fix involves deferring KASAN's default poisoning by using the
VM_UNINITIALIZED flag during allocation, manually unpoisoning the
memory once it is correctly mapped, and then clearing the uninitialized
flag using a newly exported helper.

Pasha Tatashin (2):
  mm/vmalloc: export clear_vm_uninitialized_flag()
  kho: fix KASAN support for restored vmalloc regions

 kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
 mm/internal.h                      |  2 ++
 mm/vmalloc.c                       |  2 +-
 3 files changed, 14 insertions(+), 2 deletions(-)

-- 
2.53.0.414.gf7e9f6c205-goog




* [PATCH v1 1/2] mm/vmalloc: export clear_vm_uninitialized_flag()
  2026-02-25 22:38 [PATCH v1 0/2] Fix KASAN support for KHO restored vmalloc regions Pasha Tatashin
@ 2026-02-25 22:38 ` Pasha Tatashin
  2026-02-26  9:52   ` Pratyush Yadav
  2026-02-25 22:38 ` [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions Pasha Tatashin
  1 sibling, 1 reply; 5+ messages in thread
From: Pasha Tatashin @ 2026-02-25 22:38 UTC (permalink / raw)
  To: pratyush, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, graf, pasha.tatashin, linux-mm, linux-kernel, surenb,
	mhocko, urezki

Make clear_vm_uninitialized_flag() available to other parts of the
kernel that need to manage vmalloc areas manually, such as KHO when
restoring preserved vmalloc allocations.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 2 ++
 mm/vmalloc.c  | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 39ab37bb0e1d..2daa6a744172 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1469,6 +1469,8 @@ int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 }
 #endif
 
+void clear_vm_uninitialized_flag(struct vm_struct *vm);
+
 int __must_check __vmap_pages_range_noflush(unsigned long addr,
 			       unsigned long end, pgprot_t prot,
 			       struct page **pages, unsigned int page_shift);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 56e3611c562a..33216b3c15de 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3189,7 +3189,7 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
-static void clear_vm_uninitialized_flag(struct vm_struct *vm)
+void clear_vm_uninitialized_flag(struct vm_struct *vm)
 {
 	/*
 	 * Before removing VM_UNINITIALIZED,
-- 
2.53.0.414.gf7e9f6c205-goog




* [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions
  2026-02-25 22:38 [PATCH v1 0/2] Fix KASAN support for KHO restored vmalloc regions Pasha Tatashin
  2026-02-25 22:38 ` [PATCH v1 1/2] mm/vmalloc: export clear_vm_uninitialized_flag() Pasha Tatashin
@ 2026-02-25 22:38 ` Pasha Tatashin
  2026-02-26 10:06   ` Pratyush Yadav
  1 sibling, 1 reply; 5+ messages in thread
From: Pasha Tatashin @ 2026-02-25 22:38 UTC (permalink / raw)
  To: pratyush, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, graf, pasha.tatashin, linux-mm, linux-kernel, surenb,
	mhocko, urezki

Restored vmalloc regions are currently not properly marked for KASAN,
causing KASAN to treat accesses to these regions as out-of-bounds.

Fix this by properly unpoisoning the restored vmalloc area using
kasan_unpoison_vmalloc(). This requires setting the VM_UNINITIALIZED
flag during the initial area allocation and clearing it after the pages
have been mapped and unpoisoned, using the clear_vm_uninitialized_flag()
helper.

Reported-by: Pratyush Yadav <pratyush@kernel.org>
Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 410098bae0bf..747a35107c84 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -14,6 +14,7 @@
 #include <linux/cma.h>
 #include <linux/kmemleak.h>
 #include <linux/count_zeros.h>
+#include <linux/kasan.h>
 #include <linux/kexec.h>
 #include <linux/kexec_handover.h>
 #include <linux/kho_radix_tree.h>
@@ -1077,6 +1078,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 {
 	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
+	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_PROT_NORMAL;
 	unsigned int align, order, shift, vm_flags;
 	unsigned long total_pages, contig_pages;
 	unsigned long addr, size;
@@ -1128,7 +1130,8 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 		goto err_free_pages_array;
 
 	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
-				  vm_flags, VMALLOC_START, VMALLOC_END,
+				  vm_flags | VM_UNINITIALIZED,
+				  VMALLOC_START, VMALLOC_END,
 				  NUMA_NO_NODE, GFP_KERNEL,
 				  __builtin_return_address(0));
 	if (!area)
@@ -1143,6 +1146,13 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 	area->nr_pages = total_pages;
 	area->pages = pages;
 
+	if (vm_flags & VM_ALLOC)
+		kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
+
+	area->addr = kasan_unpoison_vmalloc(area->addr, total_pages * PAGE_SIZE,
+					    kasan_flags);
+	clear_vm_uninitialized_flag(area);
+
 	return area->addr;
 
 err_free_vm_area:
-- 
2.53.0.414.gf7e9f6c205-goog




* Re: [PATCH v1 1/2] mm/vmalloc: export clear_vm_uninitialized_flag()
  2026-02-25 22:38 ` [PATCH v1 1/2] mm/vmalloc: export clear_vm_uninitialized_flag() Pasha Tatashin
@ 2026-02-26  9:52   ` Pratyush Yadav
  0 siblings, 0 replies; 5+ messages in thread
From: Pratyush Yadav @ 2026-02-26  9:52 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: pratyush, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, graf, linux-mm, linux-kernel, surenb, mhocko, urezki

On Wed, Feb 25 2026, Pasha Tatashin wrote:

> Make clear_vm_uninitialized_flag() available to other parts of the
> kernel that need to manage vmalloc areas manually, such as KHO for
> restoring vmallocs.
>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Acked-by: Pratyush Yadav (Google) <pratyush@kernel.org>

[...]

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions
  2026-02-25 22:38 ` [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions Pasha Tatashin
@ 2026-02-26 10:06   ` Pratyush Yadav
  0 siblings, 0 replies; 5+ messages in thread
From: Pratyush Yadav @ 2026-02-26 10:06 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: pratyush, akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, graf, linux-mm, linux-kernel, surenb, mhocko, urezki

Hi Pasha,

On Wed, Feb 25 2026, Pasha Tatashin wrote:

> Restored vmalloc regions are currently not properly marked for KASAN,
> causing KASAN to treat accesses to these regions as out-of-bounds.
>
> Fix this by properly unpoisoning the restored vmalloc area using
> kasan_unpoison_vmalloc(). This requires setting the VM_UNINITIALIZED
> flag during the initial area allocation and clearing it after the pages
> have been mapped and unpoisoned, using the clear_vm_uninitialized_flag()
> helper.
>
> Reported-by: Pratyush Yadav <pratyush@kernel.org>
> Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 410098bae0bf..747a35107c84 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -14,6 +14,7 @@
>  #include <linux/cma.h>
>  #include <linux/kmemleak.h>
>  #include <linux/count_zeros.h>
> +#include <linux/kasan.h>
>  #include <linux/kexec.h>
>  #include <linux/kexec_handover.h>
>  #include <linux/kho_radix_tree.h>
> @@ -1077,6 +1078,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
>  void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  {
>  	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
> +	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_PROT_NORMAL;
>  	unsigned int align, order, shift, vm_flags;
>  	unsigned long total_pages, contig_pages;
>  	unsigned long addr, size;
> @@ -1128,7 +1130,8 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  		goto err_free_pages_array;
>  
>  	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
> -				  vm_flags, VMALLOC_START, VMALLOC_END,
> +				  vm_flags | VM_UNINITIALIZED,
> +				  VMALLOC_START, VMALLOC_END,
>  				  NUMA_NO_NODE, GFP_KERNEL,
>  				  __builtin_return_address(0));
>  	if (!area)
> @@ -1143,6 +1146,13 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  	area->nr_pages = total_pages;
>  	area->pages = pages;
>  
> +	if (vm_flags & VM_ALLOC)
> +		kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
> +
> +	area->addr = kasan_unpoison_vmalloc(area->addr, total_pages * PAGE_SIZE,
> +					    kasan_flags);

Ugh, this is tricky. Say I do vmalloc(sizeof(unsigned long)). After KHO,
this would unpoison the whole page, effectively missing all
out-of-bounds access within that page.

We need to either store the buffer size in struct kho_vmalloc, or only
allow preserving PAGE_SIZE aligned allocations, or just live with this
missed coverage. I kind of prefer the second option, but no strong
opinions.

Anyway, I think this is a clear improvement regardless of this problem.
So,

Reviewed-by: Pratyush Yadav (Google) <pratyush@kernel.org>
Tested-by: Pratyush Yadav (Google) <pratyush@kernel.org>

Thanks for fixing it.

> +	clear_vm_uninitialized_flag(area);
> +
>  	return area->addr;
>  
>  err_free_vm_area:

-- 
Regards,
Pratyush Yadav


