From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Alex Xu (Hello71)" <alex_y_xu@yahoo.ca>,
linux-mm@kvack.org, Daniel Gomez <da.gomez@samsung.com>
Cc: Barry Song <baohua@kernel.org>,
David Hildenbrand <david@redhat.com>,
Hugh Dickins <hughd@google.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Lance Yang <ioworker0@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: Hang when swapping huge=within_size tmpfs from zram
Date: Wed, 5 Feb 2025 09:55:09 +0800 [thread overview]
Message-ID: <25e2d5e4-8214-40de-99d3-2b657181a9fd@linux.alibaba.com> (raw)
In-Reply-To: <1738717785.im3r5g2vxc.none@localhost>
Hi Alex,
On 2025/2/5 09:23, Alex Xu (Hello71) wrote:
> Hi all,
>
> On 6.14-rc1, I found that creating a lot of files in tmpfs then deleting
> them reliably hangs when tmpfs is mounted with huge=within_size, and it
> is swapped out to zram (zstd/zsmalloc/no backing dev). I bisected this
> to acd7ccb284b "mm: shmem: add large folio support for tmpfs".
>
> When the issue occurs, rm uses 100% CPU, cannot be killed, and has no
> output in /proc/pid/stack or wchan. Eventually, an RCU stall is
> detected:
Thanks for your report. Let me try to reproduce the issue locally and
see what happens.
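
The reproduction steps described above (tmpfs mounted with huge=within_size, swap backed by zstd zram, many files created then removed) can be sketched as the following reproducer. This is a hypothetical sketch, not a confirmed trigger: it assumes root, a kernel with zram and THP shmem support, and illustrative file counts/sizes; memory pressure sufficient to swap the tmpfs pages out may also be needed, which this script does not force.

```shell
#!/bin/sh
# Hypothetical reproducer sketch for the reported hang.
# Assumes: root, CONFIG_ZRAM, and enough files to exceed free memory
# so that tmpfs pages actually get swapped out to zram.
set -e

# Set up a zram swap device with zstd compression, no backing dev.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm
echo 2G   > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0

# Mount tmpfs with per-inode large folios enabled.
mkdir -p /mnt/thp-tmpfs
mount -t tmpfs -o huge=within_size tmpfs /mnt/thp-tmpfs

# Create a lot of files (count/size are illustrative, tune to
# available memory so swapout to zram actually occurs)...
i=0
while [ "$i" -lt 20000 ]; do
    dd if=/dev/urandom of=/mnt/thp-tmpfs/f$i bs=64k count=1 status=none
    i=$((i + 1))
done

# ...then delete them; on the affected kernel, rm reportedly spins
# at 100% CPU in shmem_undo_range()/find_lock_entries() here.
rm -rf /mnt/thp-tmpfs/*
```

The zram sysfs knobs (`comp_algorithm`, `disksize`) and the tmpfs `huge=within_size` mount option are standard kernel interfaces; everything else (paths, counts, sizes) is made up for illustration. The script is destructive to `/dev/zram0` and needs root, so it is not something to run on a production box.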
> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> rcu: Tasks blocked on level-0 rcu_node (CPUs 0-11): P25160
> rcu: (detected by 10, t=2102 jiffies, g=532677, q=4997 ncpus=12)
> task:rm state:R running task stack:0 pid:25160 tgid:25160 ppid:24309 task_flags:0x400000 flags:0x00004004
> Call Trace:
> <TASK>
> ? __schedule+0x388/0x1000
> ? kmem_cache_free.part.0+0x23d/0x280
> ? sysvec_apic_timer_interrupt+0xa/0x80
> ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> ? xas_load+0x12/0xc0
> ? xas_load+0x8/0xc0
> ? xas_find+0x144/0x190
> ? find_lock_entries+0x75/0x260
> ? shmem_undo_range+0xe6/0x5f0
> ? shmem_evict_inode+0xe4/0x230
> ? mtree_erase+0x7e/0xe0
> ? inode_set_ctime_current+0x2e/0x1f0
> ? evict+0xe9/0x260
> ? _atomic_dec_and_lock+0x31/0x50
> ? do_unlinkat+0x270/0x2b0
> ? __x64_sys_unlinkat+0x30/0x50
> ? do_syscall_64+0x37/0xe0
> ? entry_SYSCALL_64_after_hwframe+0x50/0x58
> </TASK>
>
> Let me know what information is needed to further troubleshoot this
> issue.
>
> Thanks,
> Alex.
Thread overview: 11+ messages
[not found] <1738717785.im3r5g2vxc.none.ref@localhost>
2025-02-05 1:23 ` Alex Xu (Hello71)
2025-02-05 1:55 ` Baolin Wang [this message]
2025-02-05 6:38 ` Baolin Wang
2025-02-05 14:39 ` Lance Yang
2025-02-07 7:23 ` Baolin Wang
2025-02-23 17:53 ` Kairui Song
2025-02-23 18:22 ` Kairui Song
2025-02-24 3:21 ` Baolin Wang
2025-02-24 8:47 ` [PATCH] mm: shmem: fix potential data corruption during shmem swapin Baolin Wang
2025-02-24 17:50 ` Kairui Song
2025-02-25 1:07 ` Baolin Wang