From: Qu Wenruo <wqu@suse.com>
To: Linux Memory Management List <linux-mm@kvack.org>,
linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Sleep inside atomic for aarch64, triggered by generic/482
Date: Sun, 20 Apr 2025 20:24:12 +0930 [thread overview]
Message-ID: <d6c22895-b3b5-454e-b889-9d0bd148e2fb@suse.com> (raw)
Hi,
Recently I hit two dmesg reports from generic/482, on aarch64 with 64K
page size and 4K fs block size.
The involved warning looks like this:
[117645.139610] BTRFS info (device dm-13): using free-space-tree
[117645.146707] BTRFS info (device dm-13): start tree-log replay
[117645.158598] BTRFS info (device dm-13): last unmount of filesystem
214efad4-5c63-49b6-ad29-f09c4966de33
[117645.322288] BUG: sleeping function called from invalid context at
mm/util.c:743
[117645.322312] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid:
46, name: kcompactd0
[117645.322325] preempt_count: 1, expected: 0
[117645.322329] RCU nest depth: 0, expected: 0
[117645.322338] CPU: 3 UID: 0 PID: 46 Comm: kcompactd0 Tainted: G
W OE 6.15.0-rc2-custom+ #116 PREEMPT(voluntary)
[117645.322343] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[117645.322345] Hardware name: QEMU KVM Virtual Machine, BIOS unknown
2/2/2022
[117645.322347] Call trace:
[117645.322349] show_stack+0x34/0x98 (C)
[117645.322360] dump_stack_lvl+0x60/0x80
[117645.322366] dump_stack+0x18/0x24
[117645.322370] __might_resched+0x130/0x168
[117645.322375] folio_mc_copy+0x54/0xa8
[117645.322382] __migrate_folio.isra.0+0x5c/0x1f8
[117645.322387] __buffer_migrate_folio+0x28c/0x2a0
[117645.322391] buffer_migrate_folio_norefs+0x1c/0x30
[117645.322395] move_to_new_folio+0x94/0x2c0
[117645.322398] migrate_pages_batch+0x7e4/0xd10
[117645.322402] migrate_pages_sync+0x88/0x240
[117645.322405] migrate_pages+0x4d0/0x660
[117645.322409] compact_zone+0x454/0x718
[117645.322414] compact_node+0xa4/0x1b8
[117645.322418] kcompactd+0x300/0x458
[117645.322421] kthread+0x11c/0x140
[117645.322426] ret_from_fork+0x10/0x20
[117645.400370] BTRFS: device fsid 214efad4-5c63-49b6-ad29-f09c4966de33
devid 1 transid 31 /dev/mapper/thin-vol.482 (253:13) scanned by mount
(924470)
[117645.404282] BTRFS info (device dm-13): first mount of filesystem
214efad4-5c63-49b6-ad29-f09c4966de33
Again, btrfs is not involved anywhere in the call trace.
This looks exactly like the report here:
https://lore.kernel.org/linux-mm/67f6e11f.050a0220.25d1c8.000b.GAE@google.com/
However, there is something new here:
- The target fs is btrfs, which has no large folio support yet
  At least the branch I'm testing (based on v6.15-rc2) doesn't support
  large folios.
  Furthermore, since it's btrfs, there is no buffer_head usage involved.
  (But the rootfs is indeed ext4)
- Arm64 64K page size with 4K fs block size
  This is a less common setup than x86_64.
Fortunately I can reproduce the bug reliably; it takes around 3~10 runs
to hit it.
Hope this report helps a little.
Thanks,
Qu
Thread overview: 4+ messages
2025-04-20 10:54 Qu Wenruo [this message]
2025-04-21 17:29 ` Darrick J. Wong
2025-04-21 20:09 ` Qu Wenruo
2025-04-22 1:10 ` Darrick J. Wong