linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: syzbot <syzbot+2b99589e33edbe9475ca@syzkaller.appspotmail.com>,
	Liam.Howlett@oracle.com, akpm@linux-foundation.org,
	baolin.wang@linux.alibaba.com, dev.jain@arm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	lorenzo.stoakes@oracle.com, npache@redhat.com,
	ryan.roberts@arm.com, syzkaller-bugs@googlegroups.com,
	ziy@nvidia.com, Matthew Wilcox <willy@infradead.org>
Subject: Re: [syzbot] [mm?] WARNING in folio_large_mapcount
Date: Mon, 19 May 2025 15:26:28 +0200	[thread overview]
Message-ID: <5cdc53ff-ff48-4deb-9551-92bd47590a53@redhat.com> (raw)
In-Reply-To: <6828470d.a70a0220.38f255.000c.GAE@google.com>

On 17.05.25 10:21, syzbot wrote:
> Hello,
> 
> syzbot found the following issue on:
> 
> HEAD commit:    627277ba7c23 Merge tag 'arm64_cbpf_mitigation_2025_05_08' ..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=1150f670580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=5929ac65be9baf3c
> dashboard link: https://syzkaller.appspot.com/bug?extid=2b99589e33edbe9475ca
> compiler:       Debian clang version 20.1.2 (++20250402124445+58df0ef89dd6-1~exp1~20250402004600.97), Debian LLD 20.1.2
> 
> Unfortunately, I don't have any reproducer for this issue yet.
> 
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/0a42ae72fe0e/disk-627277ba.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/0be88297bb66/vmlinux-627277ba.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/31808a4b1210/bzImage-627277ba.xz
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+2b99589e33edbe9475ca@syzkaller.appspotmail.com
> 
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 38 at ./include/linux/mm.h:1335 folio_large_mapcount+0xd0/0x110 include/linux/mm.h:1335

The warning at mm.h:1335 corresponds to the

VM_WARN_ON_FOLIO(!folio_test_large(folio), folio);
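
That is the check in folio_large_mapcount(). For context, the helper looks
roughly like this (simplified from memory, not verbatim include/linux/mm.h):

static inline int folio_large_mapcount(const struct folio *folio)
{
	/* Only valid for large folios; fires if the folio was split under us. */
	VM_WARN_ON_FOLIO(!folio_test_large(folio), folio);
	return atomic_read(&folio->_large_mapcount) + 1;
}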

> Modules linked in:
> CPU: 1 UID: 0 PID: 38 Comm: khugepaged Not tainted 6.15.0-rc6-syzkaller-00025-g627277ba7c23 #0 PREEMPT(full)
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
> RIP: 0010:folio_large_mapcount+0xd0/0x110 include/linux/mm.h:1335
> Code: 04 38 84 c0 75 29 8b 03 ff c0 5b 41 5e 41 5f e9 96 d2 2b 09 cc e8 d0 cb 99 ff 48 89 df 48 c7 c6 20 de 77 8b e8 a1 dc de ff 90 <0f> 0b 90 eb b6 89 d9 80 e1 07 80 c1 03 38 c1 7c cb 48 89 df e8 87
> RSP: 0018:ffffc90000af77e0 EFLAGS: 00010246
> RAX: e1fcb38c0ff8ce00 RBX: ffffea00014c8000 RCX: e1fcb38c0ff8ce00
> RDX: 0000000000000001 RSI: ffffffff8d9226df RDI: ffff88801e2fbc00
> RBP: ffffc90000af7b50 R08: ffff8880b8923e93 R09: 1ffff110171247d2
> R10: dffffc0000000000 R11: ffffed10171247d3 R12: 1ffffd4000299000
> R13: dffffc0000000000 R14: 0000000000000000 R15: dffffc0000000000
> FS:  0000000000000000(0000) GS:ffff8881261fb000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007ffe58f12dc0 CR3: 0000000030e04000 CR4: 00000000003526f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
>   <TASK>
>   folio_mapcount include/linux/mm.h:1369 [inline]

And here, in folio_mapcount(), we come through

if (likely(!folio_test_large(folio))) {
	...
}
return folio_large_mapcount(folio);


So the folio got split concurrently. And I think there is nothing
stopping it from getting freed, either.

We do an xas_for_each() under RCU. So yes, this is racy.
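
For reference, the scan side is roughly the following (heavily simplified,
not verbatim mm/khugepaged.c):

rcu_read_lock();
xas_for_each(&xas, folio, start + HPAGE_PMD_NR - 1) {
	if (xas_retry(&xas, folio))
		continue;
	/*
	 * No reference is held on the folio at this point, so a
	 * concurrent split or free can happen while we inspect it.
	 */
	if (!is_refcount_suitable(folio)) {	/* -> folio_mapcount() */
		result = SCAN_PAGE_COUNT;
		break;
	}
	...
}
rcu_read_unlock();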

In collapse_file(), we re-validate everything.

We could

(A) Take proper pagecache locks

(B) Try grabbing a temporary folio reference

(C) Try snapshotting the folio

Probably, in this code, (B) might be cleanest for now? Handling it just 
like other code in mm/filemap.c does, along the lines of the sketch below.
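
A rough sketch of what (B) could look like, following the pattern used by
e.g. filemap_get_entry() in mm/filemap.c (an idea only, not a tested patch):

xas_for_each(&xas, folio, start + HPAGE_PMD_NR - 1) {
	if (xas_retry(&xas, folio))
		continue;

	/* Pin the folio so it cannot be split or freed under us. */
	if (!folio_try_get(folio)) {
		xas_reset(&xas);
		continue;
	}

	/* Make sure it is still the entry we looked up. */
	if (unlikely(folio != xas_reload(&xas))) {
		folio_put(folio);
		xas_reset(&xas);
		continue;
	}

	/*
	 * ... existing folio_order()/refcount checks; these would have
	 * to account for the extra reference we now hold ...
	 */

	folio_put(folio);
}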

>   is_refcount_suitable+0x350/0x430 mm/khugepaged.c:553
>   hpage_collapse_scan_file+0x6d4/0x4200 mm/khugepaged.c:2323
>   khugepaged_scan_mm_slot mm/khugepaged.c:2447 [inline]
>   khugepaged_do_scan mm/khugepaged.c:2548 [inline]
>   khugepaged+0xa2a/0x1690 mm/khugepaged.c:2604
>   kthread+0x70e/0x8a0 kernel/kthread.c:464
>   ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
>   ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
>   </TASK>
-- 
Cheers,

David / dhildenb




Thread overview: 7+ messages
2025-05-17  8:21 syzbot
2025-05-19 13:26 ` David Hildenbrand [this message]
2025-05-20  5:45   ` Shivank Garg
2025-05-20  5:46     ` syzbot
2025-05-20 14:05     ` David Hildenbrand
2025-05-22  4:57       ` Shivank Garg
2025-05-22  7:11         ` David Hildenbrand
