linux-mm.kvack.org archive mirror
* [PATCH 0/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
@ 2025-11-20 14:41 ranxiaokai627
  2025-11-20 14:41 ` [PATCH 1/2] mm: kmemleak: introduce kmemleak_no_scan_phys() helper ranxiaokai627
  2025-11-20 14:41 ` [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
  0 siblings, 2 replies; 7+ messages in thread
From: ranxiaokai627 @ 2025-11-20 14:41 UTC (permalink / raw)
  To: catalin.marinas, akpm, graf, rppt, pasha.tatashin, pratyush, changyuanl
  Cc: linux-kernel, linux-mm, kexec, ran.xiaokai, ranxiaokai627

From: Ran Xiaokai <ran.xiaokai@zte.com.cn>

When booting with debug_pagealloc=on while having:
CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y  
CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n  
the system fails to boot due to page faults during kmemleak scanning.

Crash logs:
BUG: unable to handle page fault for address: ffff8880cd400000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 11de00067 P4D 11de00067 PUD 11af2b067 PMD 11aec1067 PTE 800fffff32bff020
Oops: Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
RIP: 0010:scan_block+0x43/0xb0
Call Trace:
 <TASK>
 scan_gray_list+0x2b5/0x2f0
 kmemleak_scan+0x3b1/0xcf0
 kmemleak_scan_thread+0x7d/0xc0
 kthread+0x11c/0x240
 ret_from_fork+0x2d3/0x370
 ret_from_fork_asm+0x11/0x20
 </TASK>

This occurs because:
With debug_pagealloc enabled, __free_pages() invokes
debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
freed pages in the direct mapping.
Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
releases the KHO scratch region via init_cma_reserved_pageblock(),
unmapping its physical pages. Subsequent kmemleak scanning accesses
these unmapped pages, triggering fatal page faults.
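
For context, kmemleak's scan_block() walks each candidate memory range
word by word and dereferences every aligned location looking for pointer
values. A simplified sketch (not the exact mm/kmemleak.c code) of where
the fault happens once the scratch pages are unmapped:

	/*
	 * Simplified illustration of kmemleak's block scan: every aligned
	 * word in [start, end) is read directly through the direct map.
	 */
	static void scan_block_sketch(void *_start, void *_end)
	{
		unsigned long *start = PTR_ALIGN(_start, sizeof(unsigned long));
		unsigned long *end = _end - sizeof(unsigned long) + 1;
		unsigned long *ptr;

		for (ptr = start; ptr < end; ptr++) {
			/*
			 * This read faults if debug_pagealloc cleared the
			 * _PAGE_PRESENT bit when the page was freed.
			 */
			unsigned long pointer = READ_ONCE(*ptr);

			/* ... look up 'pointer' in the object tree ... */
		}
	}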

This series introduces kmemleak_no_scan_phys(phys_addr_t), a
physical-address variant of kmemleak_no_scan(), which marks the
kmemleak object tracking a memblock region with OBJECT_NO_SCAN.

We invoke this from kho_reserve_scratch() to exclude the reserved
region from scanning before it is released to the buddy allocator.

This series is based on linux-next (next-20251119).

Ran Xiaokai (2):
  mm: kmemleak: introduce kmemleak_no_scan_phys() helper
  liveupdate: Fix boot failure due to kmemleak access to unmapped pages

 include/linux/kmemleak.h           |  4 ++++
 kernel/liveupdate/kexec_handover.c |  4 ++++
 mm/kmemleak.c                      | 15 ++++++++++++---
 3 files changed, 20 insertions(+), 3 deletions(-)

-- 
2.25.1





* [PATCH 1/2] mm: kmemleak: introduce kmemleak_no_scan_phys() helper
  2025-11-20 14:41 [PATCH 0/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
@ 2025-11-20 14:41 ` ranxiaokai627
  2025-11-20 14:41 ` [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
  1 sibling, 0 replies; 7+ messages in thread
From: ranxiaokai627 @ 2025-11-20 14:41 UTC (permalink / raw)
  To: catalin.marinas, akpm, graf, rppt, pasha.tatashin, pratyush, changyuanl
  Cc: linux-kernel, linux-mm, kexec, ran.xiaokai, ranxiaokai627

From: Ran Xiaokai <ran.xiaokai@zte.com.cn>

Introduce kmemleak_no_scan_phys(phys_addr_t), a physical-address
variant of kmemleak_no_scan(). This helper marks memory regions
as non-scannable using physical addresses directly.

It is specifically designed to prevent kmemleak from accessing pages
that have been unmapped by debug_pagealloc after being freed to
the buddy allocator. The kexec handover (KHO) subsystem will call
this helper to exclude the kho_scratch reservation region from scanning,
thereby avoiding fatal page faults during boot when debug_pagealloc=on.

Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
 include/linux/kmemleak.h |  4 ++++
 mm/kmemleak.c            | 15 ++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
index fbd424b2abb1..e955ad441b8a 100644
--- a/include/linux/kmemleak.h
+++ b/include/linux/kmemleak.h
@@ -31,6 +31,7 @@ extern void kmemleak_ignore(const void *ptr) __ref;
 extern void kmemleak_ignore_percpu(const void __percpu *ptr) __ref;
 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref;
 extern void kmemleak_no_scan(const void *ptr) __ref;
+extern void kmemleak_no_scan_phys(phys_addr_t phys) __ref;
 extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
 				gfp_t gfp) __ref;
 extern void kmemleak_free_part_phys(phys_addr_t phys, size_t size) __ref;
@@ -113,6 +114,9 @@ static inline void kmemleak_erase(void **ptr)
 static inline void kmemleak_no_scan(const void *ptr)
 {
 }
+static inline void kmemleak_no_scan_phys(phys_addr_t phys)
+{
+}
 static inline void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
 				       gfp_t gfp)
 {
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 1ac56ceb29b6..b2b8374e19c3 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1058,12 +1058,12 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref)
  * pointer. Such object will not be scanned by kmemleak but references to it
  * are searched.
  */
-static void object_no_scan(unsigned long ptr)
+static void object_no_scan_flags(unsigned long ptr, unsigned long objflags)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
 
-	object = find_and_get_object(ptr, 0);
+	object = __find_and_get_object(ptr, 0, objflags);
 	if (!object) {
 		kmemleak_warn("Not scanning unknown object at 0x%08lx\n", ptr);
 		return;
@@ -1328,10 +1328,19 @@ void __ref kmemleak_no_scan(const void *ptr)
 	pr_debug("%s(0x%px)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		object_no_scan((unsigned long)ptr);
+		object_no_scan_flags((unsigned long)ptr, 0);
 }
 EXPORT_SYMBOL(kmemleak_no_scan);
 
+void __ref kmemleak_no_scan_phys(phys_addr_t phys)
+{
+	pr_debug("%s(%pap)\n", __func__, &phys);
+
+	if (kmemleak_enabled)
+		object_no_scan_flags((unsigned long)phys, OBJECT_PHYS);
+}
+EXPORT_SYMBOL(kmemleak_no_scan_phys);
+
 /**
  * kmemleak_alloc_phys - similar to kmemleak_alloc but taking a physical
  *			 address argument
-- 
2.25.1





* [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
  2025-11-20 14:41 [PATCH 0/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
  2025-11-20 14:41 ` [PATCH 1/2] mm: kmemleak: introduce kmemleak_no_scan_phys() helper ranxiaokai627
@ 2025-11-20 14:41 ` ranxiaokai627
  2025-11-20 16:17   ` Pratyush Yadav
  2025-11-21 13:36   ` Mike Rapoport
  1 sibling, 2 replies; 7+ messages in thread
From: ranxiaokai627 @ 2025-11-20 14:41 UTC (permalink / raw)
  To: catalin.marinas, akpm, graf, rppt, pasha.tatashin, pratyush, changyuanl
  Cc: linux-kernel, linux-mm, kexec, ran.xiaokai, ranxiaokai627

From: Ran Xiaokai <ran.xiaokai@zte.com.cn>

When booting with debug_pagealloc=on while having:
CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
the system fails to boot due to page faults during kmemleak scanning.

This occurs because:
With debug_pagealloc enabled, __free_pages() invokes
debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
freed pages in the direct mapping.
Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
releases the KHO scratch region via init_cma_reserved_pageblock(),
unmapping its physical pages. Subsequent kmemleak scanning accesses
these unmapped pages, triggering fatal page faults.

Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
exclude the reserved region from scanning before
it is released to the buddy allocator.

Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
 kernel/liveupdate/kexec_handover.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 224bdf5becb6..dd4942d1d76c 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -11,6 +11,7 @@
 
 #include <linux/cleanup.h>
 #include <linux/cma.h>
+#include <linux/kmemleak.h>
 #include <linux/count_zeros.h>
 #include <linux/kexec.h>
 #include <linux/kexec_handover.h>
@@ -654,6 +655,7 @@ static void __init kho_reserve_scratch(void)
 	if (!addr)
 		goto err_free_scratch_desc;
 
+	kmemleak_no_scan_phys(addr);
 	kho_scratch[i].addr = addr;
 	kho_scratch[i].size = size;
 	i++;
@@ -664,6 +666,7 @@ static void __init kho_reserve_scratch(void)
 	if (!addr)
 		goto err_free_scratch_areas;
 
+	kmemleak_no_scan_phys(addr);
 	kho_scratch[i].addr = addr;
 	kho_scratch[i].size = size;
 	i++;
@@ -676,6 +679,7 @@ static void __init kho_reserve_scratch(void)
 		if (!addr)
 			goto err_free_scratch_areas;
 
+		kmemleak_no_scan_phys(addr);
 		kho_scratch[i].addr = addr;
 		kho_scratch[i].size = size;
 		i++;
-- 
2.25.1





* Re: [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
  2025-11-20 14:41 ` [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
@ 2025-11-20 16:17   ` Pratyush Yadav
  2025-11-22 17:57     ` ranxiaokai627
  2025-11-21 13:36   ` Mike Rapoport
  1 sibling, 1 reply; 7+ messages in thread
From: Pratyush Yadav @ 2025-11-20 16:17 UTC (permalink / raw)
  To: ranxiaokai627
  Cc: catalin.marinas, akpm, graf, rppt, pasha.tatashin, pratyush,
	changyuanl, linux-kernel, linux-mm, kexec, ran.xiaokai

On Thu, Nov 20 2025, ranxiaokai627@163.com wrote:

> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>
> When booting with debug_pagealloc=on while having:
> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
> the system fails to boot due to page faults during kmemleak scanning.
>
> This occurs because:
> With debug_pagealloc enabled, __free_pages() invokes
> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
> freed pages in the direct mapping.
> Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> releases the KHO scratch region via init_cma_reserved_pageblock(),
> unmapping its physical pages. Subsequent kmemleak scanning accesses
> these unmapped pages, triggering fatal page faults.

I don't know how kmemleak works. Why does kmemleak access the unmapped
pages? If pages are not mapped, it should learn to not access them,
right?

>
> Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
> exclude the reserved region from scanning before
> it is released to the buddy allocator.

kho_reserve_scratch() is called on the first boot. It allocates the
scratch areas for subsequent boots. On every KHO boot after this,
kho_reserve_scratch() is not called and kho_release_scratch() is called
instead since the scratch areas already exist from previous boot.

Eventually both paths converge to kho_init() and call
init_cma_reserved_pageblock().

So shouldn't you call kmemleak_no_scan_phys() from kho_init() instead?
This would reduce code duplication and cover both paths.
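
Something along these lines, perhaps (rough, untested sketch; I'm
assuming the kho_scratch[]/kho_scratch_cnt bookkeeping and the existing
init_cma_reserved_pageblock() loop in kho_init() stay as they are):

	/* sketch: mark every scratch area no-scan in one place, in kho_init() */
	unsigned int i;

	for (i = 0; i < kho_scratch_cnt; i++) {
		kmemleak_no_scan_phys(kho_scratch[i].addr);

		/*
		 * ... existing loop calling init_cma_reserved_pageblock()
		 * for each pageblock of this scratch area ...
		 */
	}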

>
> Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
>  kernel/liveupdate/kexec_handover.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 224bdf5becb6..dd4942d1d76c 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -11,6 +11,7 @@
>  
>  #include <linux/cleanup.h>
>  #include <linux/cma.h>
> +#include <linux/kmemleak.h>
>  #include <linux/count_zeros.h>
>  #include <linux/kexec.h>
>  #include <linux/kexec_handover.h>
> @@ -654,6 +655,7 @@ static void __init kho_reserve_scratch(void)
>  	if (!addr)
>  		goto err_free_scratch_desc;
>  
> +	kmemleak_no_scan_phys(addr);
>  	kho_scratch[i].addr = addr;
>  	kho_scratch[i].size = size;
>  	i++;
> @@ -664,6 +666,7 @@ static void __init kho_reserve_scratch(void)
>  	if (!addr)
>  		goto err_free_scratch_areas;
>  
> +	kmemleak_no_scan_phys(addr);
>  	kho_scratch[i].addr = addr;
>  	kho_scratch[i].size = size;
>  	i++;
> @@ -676,6 +679,7 @@ static void __init kho_reserve_scratch(void)
>  		if (!addr)
>  			goto err_free_scratch_areas;
>  
> +		kmemleak_no_scan_phys(addr);
>  		kho_scratch[i].addr = addr;
>  		kho_scratch[i].size = size;
>  		i++;

-- 
Regards,
Pratyush Yadav



* Re: [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
  2025-11-20 14:41 ` [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages ranxiaokai627
  2025-11-20 16:17   ` Pratyush Yadav
@ 2025-11-21 13:36   ` Mike Rapoport
  2025-11-22 18:07     ` ranxiaokai627
  1 sibling, 1 reply; 7+ messages in thread
From: Mike Rapoport @ 2025-11-21 13:36 UTC (permalink / raw)
  To: ranxiaokai627
  Cc: catalin.marinas, akpm, graf, pasha.tatashin, pratyush,
	changyuanl, linux-kernel, linux-mm, kexec, ran.xiaokai

On Thu, Nov 20, 2025 at 02:41:47PM +0000, ranxiaokai627@163.com wrote:
> Subject: liveupdate: Fix boot failure due to kmemleak access to unmapped pages

Please prefix kexec handover patches with kho: rather than liveupdate.

> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> 
> When booting with debug_pagealloc=on while having:
> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
> the system fails to boot due to page faults during kmemleak scanning.
> 
> This occurs because:
> With debug_pagealloc enabled, __free_pages() invokes
> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
> freed pages in the direct mapping.
> Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> releases the KHO scratch region via init_cma_reserved_pageblock(),
> unmapping its physical pages. Subsequent kmemleak scanning accesses
> these unmapped pages, triggering fatal page faults.
> 
> Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
> exclude the reserved region from scanning before
> it is released to the buddy allocator.
> 
> Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
>  kernel/liveupdate/kexec_handover.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 224bdf5becb6..dd4942d1d76c 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -11,6 +11,7 @@
>  
>  #include <linux/cleanup.h>
>  #include <linux/cma.h>
> +#include <linux/kmemleak.h>
>  #include <linux/count_zeros.h>
>  #include <linux/kexec.h>
>  #include <linux/kexec_handover.h>
> @@ -654,6 +655,7 @@ static void __init kho_reserve_scratch(void)
>  	if (!addr)
>  		goto err_free_scratch_desc;
>  
> +	kmemleak_no_scan_phys(addr);

There's kmemleak_ignore_phys() that can be called after the scratch areas
are allocated from memblock, and with that kmemleak should not access them.

Take a look at __cma_declare_contiguous_nid().
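
Roughly, the pattern there is (paraphrased from my reading of mm/cma.c,
not an exact quote):

	/* paraphrased from __cma_declare_contiguous_nid(), mm/cma.c */
	addr = memblock_alloc_range_nid(size, alignment, base,
					limit, nid, true);
	/*
	 * kmemleak scans/reads tracked objects for pointers to other
	 * objects but this address isn't mapped and accessible
	 */
	kmemleak_ignore_phys(addr);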

>  	kho_scratch[i].addr = addr;
>  	kho_scratch[i].size = size;
>  	i++;

-- 
Sincerely yours,
Mike.



* Re: [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
  2025-11-20 16:17   ` Pratyush Yadav
@ 2025-11-22 17:57     ` ranxiaokai627
  0 siblings, 0 replies; 7+ messages in thread
From: ranxiaokai627 @ 2025-11-22 17:57 UTC (permalink / raw)
  To: pratyush
  Cc: akpm, catalin.marinas, changyuanl, graf, kexec, linux-kernel,
	linux-mm, pasha.tatashin, ran.xiaokai, ranxiaokai627, rppt

>On Thu, Nov 20 2025, ranxiaokai627@163.com wrote:
>
>> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>>
>> When booting with debug_pagealloc=on while having:
>> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
>> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
>> the system fails to boot due to page faults during kmemleak scanning.
>>
>> This occurs because:
>> With debug_pagealloc enabled, __free_pages() invokes
>> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
>> freed pages in the direct mapping.
>> Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
>> releases the KHO scratch region via init_cma_reserved_pageblock(),
>> unmapping its physical pages. Subsequent kmemleak scanning accesses
>> these unmapped pages, triggering fatal page faults.
>
>I don't know how kmemleak works. Why does kmemleak access the unmapped
>pages? If pages are not mapped, it should learn to not access them,
>right?
>
>>
>> Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
>> exclude the reserved region from scanning before
>> it is released to the buddy allocator.
>
>kho_reserve_scratch() is called on the first boot. It allocates the
>scratch areas for subsequent boots. On every KHO boot after this,
>kho_reserve_scratch() is not called and kho_release_scratch() is called
>instead since the scratch areas already exist from previous boot.
>
>Eventually both paths converge to kho_init() and call
>init_cma_reserved_pageblock().
>
>So shouldn't you call kmemleak_no_scan_phys() from kho_init() instead?
>This would reduce code duplication and cover both paths.

Thanks for your review!

Yes, both paths converge on kho_init(). On the first boot, kho_get_fdt()
returns NULL and init_cma_reserved_pageblock() is called, while on a KHO
boot kho_get_fdt() returns non-NULL and kho_init() returns before calling
init_cma_reserved_pageblock().

However, on a KHO boot calling kmemleak_no_scan_phys() is unnecessary,
because kmemleak objects are only created when memblock_phys_alloc() is
called and a KHO boot does not invoke memblock_phys_alloc(). Moving the
kmemleak_no_scan_phys() call into kho_init() both resolves the issue and
reduces code duplication.
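
(For context, the kmemleak object in question comes from the memblock
allocation path, roughly like this; paraphrased from my reading of
mm/memblock.c, not an exact quote:)

	/* paraphrased from memblock_alloc_range_nid(), mm/memblock.c */
	if (end != MEMBLOCK_ALLOC_NOLEAKTRACE)
		/* memblock allocations are tracked by physical address */
		kmemleak_alloc_phys(found, size, 0);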




* Re: [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
  2025-11-21 13:36   ` Mike Rapoport
@ 2025-11-22 18:07     ` ranxiaokai627
  0 siblings, 0 replies; 7+ messages in thread
From: ranxiaokai627 @ 2025-11-22 18:07 UTC (permalink / raw)
  To: rppt
  Cc: akpm, catalin.marinas, changyuanl, graf, kexec, linux-kernel,
	linux-mm, pasha.tatashin, pratyush, ran.xiaokai, ranxiaokai627

>On Thu, Nov 20, 2025 at 02:41:47PM +0000, ranxiaokai627@163.com wrote:
>> Subject: liveupdate: Fix boot failure due to kmemleak access to unmapped pages
>
>Please prefix kexec handover patches with kho: rather than liveupdate.

Thanks for your review, I will update the patch subject.

>> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>> 
>> When booting with debug_pagealloc=on while having:
>> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
>> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
>> the system fails to boot due to page faults during kmemleak scanning.
>> 
>> This occurs because:
>> With debug_pagealloc enabled, __free_pages() invokes
>> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
>> freed pages in the direct mapping.
>> Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
>> releases the KHO scratch region via init_cma_reserved_pageblock(),
>> unmapping its physical pages. Subsequent kmemleak scanning accesses
>> these unmapped pages, triggering fatal page faults.
>> 
>> Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
>> exclude the reserved region from scanning before
>> it is released to the buddy allocator.
>> 
>> Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
>> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>> ---
>>  kernel/liveupdate/kexec_handover.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>> 
>> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
>> index 224bdf5becb6..dd4942d1d76c 100644
>> --- a/kernel/liveupdate/kexec_handover.c
>> +++ b/kernel/liveupdate/kexec_handover.c
>> @@ -11,6 +11,7 @@
>>  
>>  #include <linux/cleanup.h>
>>  #include <linux/cma.h>
>> +#include <linux/kmemleak.h>
>>  #include <linux/count_zeros.h>
>>  #include <linux/kexec.h>
>>  #include <linux/kexec_handover.h>
>> @@ -654,6 +655,7 @@ static void __init kho_reserve_scratch(void)
>>  	if (!addr)
>>  		goto err_free_scratch_desc;
>>  
>> +	kmemleak_no_scan_phys(addr);
>
>There's kmemleak_ignore_phys() that can be called after the scratch areas
>allocated from memblock and with that kmemleak should not access them.
>
>Take a look at __cma_declare_contiguous_nid().

Thanks for catching this.
Since kmemleak_ignore_phys() perfectly handles this issue,
introducing another helper is unnecessary.
I'll post v2 shortly.
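
The kexec_handover.c hunks in v2 would then change roughly like this
(sketch, untested; exact placement still depends on the kho_init()
suggestion elsewhere in this thread), with patch 1 dropped:

	-	kmemleak_no_scan_phys(addr);
	+	kmemleak_ignore_phys(addr);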

>>  	kho_scratch[i].addr = addr;
>>  	kho_scratch[i].size = size;
>>  	i++;
>
>-- 
>Sincerely yours,
>Mike.



