linux-mm.kvack.org archive mirror
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Lance Yang <ioworker0@gmail.com>
Cc: "Alex Xu (Hello71)" <alex_y_xu@yahoo.ca>,
	linux-mm@kvack.org, Daniel Gomez <da.gomez@samsung.com>,
	Barry Song <baohua@kernel.org>,
	David Hildenbrand <david@redhat.com>,
	Hugh Dickins <hughd@google.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Matthew Wilcox <willy@infradead.org>,
	Ryan Roberts <ryan.roberts@arm.com>,
	linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: Hang when swapping huge=within_size tmpfs from zram
Date: Fri, 7 Feb 2025 15:23:54 +0800	[thread overview]
Message-ID: <e7b276eb-960a-4e05-9f84-6152de9ac2ea@linux.alibaba.com> (raw)
In-Reply-To: <CAK1f24ni707gcGpYKXqsb9XHxjx3froLs3DzVqkkNZdca_pw4Q@mail.gmail.com>



On 2025/2/5 22:39, Lance Yang wrote:
> On Wed, Feb 5, 2025 at 2:38 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 2025/2/5 09:55, Baolin Wang wrote:
>>> Hi Alex,
>>>
>>> On 2025/2/5 09:23, Alex Xu (Hello71) wrote:
>>>> Hi all,
>>>>
>>>> On 6.14-rc1, I found that creating a lot of files in tmpfs then deleting
>>>> them reliably hangs when tmpfs is mounted with huge=within_size, and it
>>>> is swapped out to zram (zstd/zsmalloc/no backing dev). I bisected this
>>>> to acd7ccb284b "mm: shmem: add large folio support for tmpfs".
>>>>
>>>> When the issue occurs, rm uses 100% CPU, cannot be killed, and has no
>>>> output in /proc/pid/stack or wchan. Eventually, an RCU stall is
>>>> detected:
>>>
>>> Thanks for your report. Let me try to reproduce the issue locally and
>>> see what happens.
>>>
>>>> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
>>>> rcu:     Tasks blocked on level-0 rcu_node (CPUs 0-11): P25160
>>>> rcu:     (detected by 10, t=2102 jiffies, g=532677, q=4997 ncpus=12)
>>>> task:rm              state:R  running task     stack:0     pid:25160
>>>> tgid:25160 ppid:24309  task_flags:0x400000 flags:0x00004004
>>>> Call Trace:
>>>>    <TASK>
>>>>    ? __schedule+0x388/0x1000
>>>>    ? kmem_cache_free.part.0+0x23d/0x280
>>>>    ? sysvec_apic_timer_interrupt+0xa/0x80
>>>>    ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>    ? xas_load+0x12/0xc0
>>>>    ? xas_load+0x8/0xc0
>>>>    ? xas_find+0x144/0x190
>>>>    ? find_lock_entries+0x75/0x260
>>>>    ? shmem_undo_range+0xe6/0x5f0
>>>>    ? shmem_evict_inode+0xe4/0x230
>>>>    ? mtree_erase+0x7e/0xe0
>>>>    ? inode_set_ctime_current+0x2e/0x1f0
>>>>    ? evict+0xe9/0x260
>>>>    ? _atomic_dec_and_lock+0x31/0x50
>>>>    ? do_unlinkat+0x270/0x2b0
>>>>    ? __x64_sys_unlinkat+0x30/0x50
>>>>    ? do_syscall_64+0x37/0xe0
>>>>    ? entry_SYSCALL_64_after_hwframe+0x50/0x58
>>>>    </TASK>
>>>>
>>>> Let me know what information is needed to further troubleshoot this
>>>> issue.
>>
>> Sorry, I can't reproduce this issue, and my testing process is as follows:
>> 1. Mount tmpfs with huge=within_size
>> 2. Create and write a tmpfs file
>> 3. Swap out the large folios of the tmpfs file to zram
>> 4. Execute 'rm' command to remove the tmpfs file
> 
> I’m unable to reproduce the issue either, following steps similar
> to Baolin's process:
> 
> 1) Mount tmpfs with the huge=within_size option and enable swap (using
> zstd/zsmalloc without a backing device).
> 2) Create and write over 10,000 files in the tmpfs.
> 3) Swap out the large folios of these tmpfs files to zram.
> 4) Use the rm command to delete all the files from the tmpfs.
> 
> I tested with both 2MiB and 64KiB large folio sizes, and with
> shmem_enabled=within_size, but everything works as expected.
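The steps above can be sketched as a shell script. This is a hypothetical reproduction harness, not a script from the thread: the mount point, zram disksize, file sizes, and the memory-pressure step are all assumptions, and it must be run as root.

```shell
#!/bin/sh
# Sketch of the reproduction steps discussed above. All paths and
# sizes are assumed values, not taken from the original reports.
set -eu

MNT=/mnt/tmpfs-test        # assumed mount point
NFILES=10000               # "over 10,000 files" per the steps above

setup_zram() {
    # Step 1 (swap side): zstd/zsmalloc zram swap, no backing device.
    modprobe zram num_devices=1
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 2G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon /dev/zram0
}

repro() {
    setup_zram

    # Step 1 (fs side): mount tmpfs with huge=within_size.
    mkdir -p "$MNT"
    mount -t tmpfs -o huge=within_size tmpfs "$MNT"

    # Step 2: create and write many files so large folios are allocated.
    i=1
    while [ "$i" -le "$NFILES" ]; do
        dd if=/dev/zero of="$MNT/f$i" bs=64k count=32 status=none
        i=$((i + 1))
    done

    # Step 3: push the tmpfs large folios out to zram. tmpfs pages are
    # only swapped under memory pressure, so one assumed approach is to
    # run a memory hog, or place the writer in a memcg with a low limit,
    # until 'grep Zram /proc/swaps' style accounting shows usage.

    # Step 4: the hang was reported during bulk removal.
    rm -f "$MNT"/f*
}

# The steps require root and a scratch machine; run explicitly:
#   sh repro.sh run
if [ "${1:-}" = "run" ]; then
    repro
fi
```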

Thanks, Lance, for confirming again.

Alex, could you give more hints on how to reproduce this issue?



Thread overview: 11+ messages
     [not found] <1738717785.im3r5g2vxc.none.ref@localhost>
2025-02-05  1:23 ` Alex Xu (Hello71)
2025-02-05  1:55   ` Baolin Wang
2025-02-05  6:38     ` Baolin Wang
2025-02-05 14:39       ` Lance Yang
2025-02-07  7:23         ` Baolin Wang [this message]
2025-02-23 17:53           ` Kairui Song
2025-02-23 18:22             ` Kairui Song
2025-02-24  3:21               ` Baolin Wang
2025-02-24  8:47                 ` [PATCH] mm: shmem: fix potential data corruption during shmem swapin Baolin Wang
2025-02-24 17:50                   ` Kairui Song
2025-02-25  1:07                     ` Baolin Wang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=e7b276eb-960a-4e05-9f84-6152de9ac2ea@linux.alibaba.com \
    --to=baolin.wang@linux.alibaba.com \
    --cc=akpm@linux-foundation.org \
    --cc=alex_y_xu@yahoo.ca \
    --cc=baohua@kernel.org \
    --cc=da.gomez@samsung.com \
    --cc=david@redhat.com \
    --cc=hughd@google.com \
    --cc=ioworker0@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=ryan.roberts@arm.com \
    --cc=wangkefeng.wang@huawei.com \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* Be sure your reply has a Subject: header at the top and a
  blank line before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox.