* Re: [PATCH] kmsan: fix kmsan_handle_dma() to avoid false positives
2025-10-02 5:10 ` [PATCH] kmsan: fix kmsan_handle_dma() to avoid false positives Shigeru Yoshida
@ 2025-10-02 13:59 ` Jason Gunthorpe
2025-10-03 6:59 ` Marek Szyprowski
1 sibling, 0 replies; 3+ messages in thread
From: Jason Gunthorpe @ 2025-10-02 13:59 UTC (permalink / raw)
To: Shigeru Yoshida
Cc: glider, elver, dvyukov, akpm, leon, m.szyprowski, kasan-dev,
linux-mm, linux-kernel
On Thu, Oct 02, 2025 at 02:10:24PM +0900, Shigeru Yoshida wrote:
> KMSAN reports an uninitialized value issue in dma_map_phys()[1]. This
> is a false positive caused by the way the virtual address is handled
> in kmsan_handle_dma(). Fix it by translating the physical address to
> a virtual address using phys_to_virt().
This is the same sort of thinko as the one found in the alpha patch; it is
tricky!
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> @@ -339,13 +339,12 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
> void kmsan_handle_dma(phys_addr_t phys, size_t size,
> enum dma_data_direction dir)
> {
> - struct page *page = phys_to_page(phys);
This throws away the page_offset encoded in phys.
> u64 page_offset, to_go;
> void *addr;
>
> if (PhysHighMem(phys))
> return;
> - addr = page_to_virt(page);
And this gives an addr whose page_offset is now 0, which is not right.
> + addr = phys_to_virt(phys);
Makes more sense anyhow when combined with PhysHighMem(), and gives the
right page_offset.
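(Illustration only, not kernel code: a minimal user-space sketch, assuming
4 KiB pages and a made-up physical address, of how rounding through the page
start drops the in-page offset that phys_to_virt() keeps.)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
        /* hypothetical phys: page frame 0x367ed plus in-page offset 0xc0 */
        uint64_t phys = 0x367ed0c0ULL;

        /* page_to_virt(phys_to_page(phys)) ends up at the page start */
        uint64_t via_page = phys & PAGE_MASK;

        /* phys_to_virt(phys) keeps the in-page offset */
        uint64_t via_phys = phys;

        printf("via the page: 0x%llx (offset lost)\n",
               (unsigned long long)via_page);
        printf("via phys:     0x%llx (offset kept)\n",
               (unsigned long long)via_phys);
        return 0;
}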
Jason
^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] kmsan: fix kmsan_handle_dma() to avoid false positives
2025-10-02 5:10 ` [PATCH] kmsan: fix kmsan_handle_dma() to avoid false positives Shigeru Yoshida
2025-10-02 13:59 ` Jason Gunthorpe
@ 2025-10-03 6:59 ` Marek Szyprowski
1 sibling, 0 replies; 3+ messages in thread
From: Marek Szyprowski @ 2025-10-03 6:59 UTC (permalink / raw)
To: Shigeru Yoshida, glider, elver, dvyukov, akpm, jgg, leon
Cc: kasan-dev, linux-mm, linux-kernel
On 02.10.2025 07:10, Shigeru Yoshida wrote:
> KMSAN reports an uninitialized value issue in dma_map_phys()[1]. This
> is a false positive caused by the way the virtual address is handled
> in kmsan_handle_dma(). Fix it by translating the physical address to
> a virtual address using phys_to_virt().
>
> [1]
> BUG: KMSAN: uninit-value in dma_map_phys+0xdc5/0x1060
> dma_map_phys+0xdc5/0x1060
> dma_map_page_attrs+0xcf/0x130
> e1000_xmit_frame+0x3c51/0x78f0
> dev_hard_start_xmit+0x22f/0xa30
> sch_direct_xmit+0x3b2/0xcf0
> __dev_queue_xmit+0x3588/0x5e60
> neigh_resolve_output+0x9c5/0xaf0
> ip6_finish_output2+0x24e0/0x2d30
> ip6_finish_output+0x903/0x10d0
> ip6_output+0x331/0x600
> mld_sendpack+0xb4a/0x1770
> mld_ifc_work+0x1328/0x19b0
> process_scheduled_works+0xb91/0x1d80
> worker_thread+0xedf/0x1590
> kthread+0xd5c/0xf00
> ret_from_fork+0x1f5/0x4c0
> ret_from_fork_asm+0x1a/0x30
>
> Uninit was created at:
> __kmalloc_cache_noprof+0x8f5/0x16b0
> syslog_print+0x9a/0xef0
> do_syslog+0x849/0xfe0
> __x64_sys_syslog+0x97/0x100
> x64_sys_call+0x3cf8/0x3e30
> do_syscall_64+0xd9/0xfa0
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> Bytes 0-89 of 90 are uninitialized
> Memory access of size 90 starts at ffff8880367ed000
>
> CPU: 1 UID: 0 PID: 1552 Comm: kworker/1:2 Not tainted 6.17.0-next-20250929 #26 PREEMPT(none)
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-5.fc42 04/01/2014
> Workqueue: mld mld_ifc_work
>
> Fixes: 6eb1e769b2c1 ("kmsan: convert kmsan_handle_dma to use physical addresses")
> Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
Applied to dma-mapping-for-next (for v6.18-rc1) branch. Thanks!
> ---
> The hash in the "Fixes" tag comes from the linux-next tree
> (next-20250929), as it has not yet been included in the mainline tree.
> ---
> mm/kmsan/hooks.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
> index 90bee565b9bc..2cee59d89c80 100644
> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -339,13 +339,12 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
> void kmsan_handle_dma(phys_addr_t phys, size_t size,
> enum dma_data_direction dir)
> {
> - struct page *page = phys_to_page(phys);
> u64 page_offset, to_go;
> void *addr;
>
> if (PhysHighMem(phys))
> return;
> - addr = page_to_virt(page);
> + addr = phys_to_virt(phys);
> /*
> * The kernel may occasionally give us adjacent DMA pages not belonging
> * to the same allocation. Process them separately to avoid triggering
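(Context for readers: the loop that follows the quoted hunk splits the buffer
at page boundaries using the in-page offset of addr; the sketch below is a
user-space paraphrase with made-up values, not the upstream function. With
addr rounded down to the page start, that offset comes out as 0 and KMSAN
checks bytes in front of the real buffer, which is the false positive above.)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE         4096ULL
#define offset_in_page(a) ((a) & (PAGE_SIZE - 1))

/* Walk a buffer page by page, printing the range each step would check. */
static void walk(uint64_t addr, uint64_t size)
{
        while (size > 0) {
                uint64_t off = offset_in_page(addr);
                uint64_t to_go = PAGE_SIZE - off < size ? PAGE_SIZE - off : size;

                printf("  check [0x%llx, 0x%llx)\n",
                       (unsigned long long)addr,
                       (unsigned long long)(addr + to_go));
                addr += to_go;
                size -= to_go;
        }
}

int main(void)
{
        uint64_t buf = 0x367ed0c0ULL; /* hypothetical 90-byte buffer */

        printf("offset kept (phys_to_virt):\n");
        walk(buf, 90);
        printf("offset dropped (page start):\n");
        walk(buf & ~(PAGE_SIZE - 1), 90);
        return 0;
}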
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
^ permalink raw reply [flat|nested] 3+ messages in thread