linux-mm.kvack.org archive mirror
* [syzbot] [mm?] KCSAN: data-race in try_to_migrate_one / zap_page_range_single
@ 2025-02-18  9:11 syzbot
  2025-03-15 11:52 ` Ignacio Encinas Rubio
  0 siblings, 1 reply; 2+ messages in thread
From: syzbot @ 2025-02-18  9:11 UTC (permalink / raw)
  To: akpm, hughd, linux-kernel, linux-mm, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    2408a807bfc3 Merge tag 'vfs-6.14-rc4.fixes' of git://git.k..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17481898580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=ce4e433ac2a58cc2
dashboard link: https://syzkaller.appspot.com/bug?extid=419c4b42acc36c420ad3
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2b5bc36288cc/disk-2408a807.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/75982289b311/vmlinux-2408a807.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c94f6df111e0/bzImage-2408a807.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+419c4b42acc36c420ad3@syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in try_to_migrate_one / zap_page_range_single

write to 0xffff88810005d8f8 of 8 bytes by task 8962 on cpu 1:
 update_hiwater_rss include/linux/mm.h:2669 [inline]
 zap_page_range_single+0x1a0/0x2f0 mm/memory.c:2012
 unmap_mapping_range_vma mm/memory.c:3834 [inline]
 unmap_mapping_range_tree mm/memory.c:3851 [inline]
 unmap_mapping_pages mm/memory.c:3917 [inline]
 unmap_mapping_range+0x15c/0x1a0 mm/memory.c:3954
 shmem_fallocate+0x278/0x860 mm/shmem.c:3672
 vfs_fallocate+0x368/0x3b0 fs/open.c:338
 madvise_remove mm/madvise.c:1025 [inline]
 madvise_vma_behavior mm/madvise.c:1260 [inline]
 madvise_walk_vmas mm/madvise.c:1502 [inline]
 do_madvise+0x14da/0x2ad0 mm/madvise.c:1689
 __do_sys_madvise mm/madvise.c:1705 [inline]
 __se_sys_madvise mm/madvise.c:1703 [inline]
 __x64_sys_madvise+0x61/0x70 mm/madvise.c:1703
 x64_sys_call+0x23ab/0x2dc0 arch/x86/include/generated/asm/syscalls_64.h:29
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xc9/0x1c0 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff88810005d8f8 of 8 bytes by task 8928 on cpu 0:
 update_hiwater_rss include/linux/mm.h:2668 [inline]
 try_to_migrate_one+0x775/0x12e0 mm/rmap.c:2183
 rmap_walk_file+0x272/0x3c0 mm/rmap.c:2708
 try_to_migrate+0x108/0x150
 migrate_folio_unmap mm/migrate.c:1320 [inline]
 migrate_pages_batch+0x786/0x1930 mm/migrate.c:1866
 migrate_pages_sync mm/migrate.c:1989 [inline]
 migrate_pages+0xf02/0x1840 mm/migrate.c:2098
 do_mbind mm/mempolicy.c:1394 [inline]
 kernel_mbind mm/mempolicy.c:1537 [inline]
 __do_sys_mbind mm/mempolicy.c:1611 [inline]
 __se_sys_mbind+0xfd1/0x11c0 mm/mempolicy.c:1607
 __x64_sys_mbind+0x78/0x90 mm/mempolicy.c:1607
 x64_sys_call+0x2662/0x2dc0 arch/x86/include/generated/asm/syscalls_64.h:238
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xc9/0x1c0 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x00000000000021af -> 0x00000000000021b9

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 8928 Comm: syz.4.2451 Not tainted 6.14.0-rc3-syzkaller-00012-g2408a807bfc3 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup



* Re: [syzbot] [mm?] KCSAN: data-race in try_to_migrate_one / zap_page_range_single
  2025-02-18  9:11 [syzbot] [mm?] KCSAN: data-race in try_to_migrate_one / zap_page_range_single syzbot
@ 2025-03-15 11:52 ` Ignacio Encinas Rubio
  0 siblings, 0 replies; 2+ messages in thread
From: Ignacio Encinas Rubio @ 2025-03-15 11:52 UTC (permalink / raw)
  To: syzbot+419c4b42acc36c420ad3
  Cc: akpm, hughd, linux-kernel, linux-mm, syzkaller-bugs,
	linux-kernel-mentees, skhan

Hello!

This seems to be a recurring complaint from KCSAN; see [1] and [2] for
other instances.

From what I understand of the source code, imprecise reporting of the
maximum resident set size (the RSS high-water mark) is considered acceptable.

Annotating `hiwater_rss` as `__data_racy` [3] should make KCSAN happy.
Would this be an acceptable change? Another possibility would be making
the update in `update_hiwater_rss` atomic, but that doesn't seem worth
the potential slowdown for a purely statistical counter.

Let me know if my proposal works for you.

Thanks!

[1] https://syzkaller.appspot.com/bug?extid=73d65fc86d6338db5990
[2] https://syzkaller.appspot.com/bug?extid=31cd52ba68feee156393
[3] https://docs.kernel.org/dev-tools/kcsan.html#selective-analysis


