Message-ID: <731904cf-d862-4c0e-ae5b-26444faff253@linux.alibaba.com>
Date: Mon, 24 Feb 2025 11:21:09 +0800
Subject: Re: Hang when swapping huge=within_size tmpfs from zram
To: Kairui Song, "Alex Xu (Hello71)"
Cc: Lance Yang, linux-mm@kvack.org, Daniel Gomez, Barry Song, David Hildenbrand, Hugh Dickins, Kefeng Wang, Matthew Wilcox, Ryan Roberts, linux-kernel@vger.kernel.org, Andrew Morton, ziy@nvidia.com
References: <1738717785.im3r5g2vxc.none.ref@localhost> <1738717785.im3r5g2vxc.none@localhost> <25e2d5e4-8214-40de-99d3-2b657181a9fd@linux.alibaba.com> <5dd39b03-c40e-4f34-bf89-b3e5a12753dc@linux.alibaba.com>
From: Baolin Wang

Hi Kairui,

On 2025/2/24 02:22, Kairui Song wrote:
> On Mon, Feb 24, 2025 at 1:53 AM Kairui Song wrote:
>>
>> On Fri, Feb 7, 2025 at 3:24 PM Baolin Wang wrote:
>>>
>>> On 2025/2/5 22:39, Lance Yang wrote:
>>>> On Wed, Feb 5, 2025 at 2:38 PM Baolin Wang wrote:
>>>>> On 2025/2/5 09:55, Baolin Wang wrote:
>>>>>> Hi Alex,
>>>>>>
>>>>>> On 2025/2/5 09:23, Alex Xu (Hello71) wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> On 6.14-rc1, I found that creating a lot of files in tmpfs then deleting
>>>>>>> them reliably hangs when tmpfs is mounted with huge=within_size, and it
>>>>>>> is swapped out to zram (zstd/zsmalloc/no backing dev). I bisected this
>>>>>>> to acd7ccb284b "mm: shmem: add large folio support for tmpfs".
>>>>>>>
>>>>>>> When the issue occurs, rm uses 100% CPU, cannot be killed, and has no
>>>>>>> output in /proc/pid/stack or wchan. Eventually, an RCU stall is
>>>>>>> detected:
>>>>>>
>>>>>> Thanks for your report. Let me try to reproduce the issue locally and
>>>>>> see what happens.
>>>>>>
>>>>>>> rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
>>>>>>> rcu: Tasks blocked on level-0 rcu_node (CPUs 0-11): P25160
>>>>>>> rcu: (detected by 10, t=2102 jiffies, g=532677, q=4997 ncpus=12)
>>>>>>> task:rm state:R running task stack:0 pid:25160
>>>>>>> tgid:25160 ppid:24309 task_flags:0x400000 flags:0x00004004
>>>>>>> Call Trace:
>>>>>>>
>>>>>>> ? __schedule+0x388/0x1000
>>>>>>> ? kmem_cache_free.part.0+0x23d/0x280
>>>>>>> ? sysvec_apic_timer_interrupt+0xa/0x80
>>>>>>> ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>>> ? xas_load+0x12/0xc0
>>>>>>> ? xas_load+0x8/0xc0
>>>>>>> ? xas_find+0x144/0x190
>>>>>>> ? find_lock_entries+0x75/0x260
>>>>>>> ? shmem_undo_range+0xe6/0x5f0
>>>>>>> ? shmem_evict_inode+0xe4/0x230
>>>>>>> ? mtree_erase+0x7e/0xe0
>>>>>>> ? inode_set_ctime_current+0x2e/0x1f0
>>>>>>> ? evict+0xe9/0x260
>>>>>>> ? _atomic_dec_and_lock+0x31/0x50
>>>>>>> ? do_unlinkat+0x270/0x2b0
>>>>>>> ? __x64_sys_unlinkat+0x30/0x50
>>>>>>> ? do_syscall_64+0x37/0xe0
>>>>>>> ? entry_SYSCALL_64_after_hwframe+0x50/0x58
>>>>>>>
>>>>>>>
>>>>>>> Let me know what information is needed to further troubleshoot this
>>>>>>> issue.
>>>>>
>>>>> Sorry, I can't reproduce this issue, and my testing process is as follows:
>>>>> 1. Mount tmpfs with huge=within_size
>>>>> 2. Create and write a tmpfs file
>>>>> 3. Swap out the large folios of the tmpfs file to zram
>>>>> 4. Execute 'rm' command to remove the tmpfs file
>>>>
>>>> I'm unable to reproduce the issue either, and am following steps similar
>>>> to Baolin's process:
>>>>
>>>> 1) Mount tmpfs with the huge=within_size option and enable swap (using
>>>> zstd/zsmalloc without a backing device).
>>>> 2) Create and write over 10,000 files in the tmpfs.
>>>> 3) Swap out the large folios of these tmpfs files to zram.
>>>> 4) Use the rm command to delete all the files from the tmpfs.
>>>>
>>>> Testing with both 2MiB and 64KiB large folio sizes, and with
>>>> shmem_enabled=within_size, everything works as expected.
>>>
>>> Thanks Lance for confirming again.
>>>
>>> Alex, could you give more hints on how to reproduce this issue?
>>>
>>
>> Hi Baolin,
>>
>> I can reproduce this issue very easily with a Linux kernel build test,
>> and the failure rate is very high. I'm not exactly sure it's the same
>> bug, but very likely. My test steps:
>>
>> 1. Create a 10G ZRAM device and set up SWAP on it.
>> 2. Create a 1G memcg, and spawn a shell in it.
>> 3. Mount tmpfs with huge=within_size, and then untar the Linux kernel
>> source code into it.
>> 4. Build with make -j32 (higher or lower job numbers may also work);
>> the build always fails within 10s due to corrupted files.

Much appreciated for your reproducer; I can now reproduce the issue locally.

>> After some debugging, the reason is in shmem_swapin_folio: when the swap
>> cache is hit, `folio = swap_cache_get_folio(swap, NULL, 0);` sets folio
>> to an order-0 folio, then the following shmem_add_to_page_cache will
>> insert an order-0 folio overriding a high-order entry in shmem's
>> xarray, so data are lost. A swap cache hit could happen for many reasons;
>> in this case it's the readahead.

Yes, thanks for your analysis. I missed that swap readahead can swap in order-0 folios asynchronously.
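
To make the data-loss mechanism above concrete, here is a toy userspace model (not kernel code and no real xarray; the order, offsets and folio id are invented for illustration) of an order-0 swap-cache folio being stored while the shmem mapping still holds a large swap entry:

/*
 * Toy userspace model only: four file pages are backed by ONE order-2 swap
 * entry; readahead brings page 0 back as an order-0 folio, and inserting
 * that folio without splitting the large entry first overrides the entry
 * for all four slots.
 */
#include <stdio.h>

#define ORDER   2
#define NR      (1 << ORDER)

struct slot {
        int is_swap;            /* 1: swap entry, 0: folio */
        unsigned long val;      /* swap offset or folio id */
};

int main(void)
{
        struct slot mapping[NR];
        int i;

        /* the large swap entry is logically visible at every index it covers */
        for (i = 0; i < NR; i++)
                mapping[i] = (struct slot){ .is_swap = 1, .val = 0x100 };

        /*
         * Swap cache hit: readahead already swapped index 0 back in as an
         * order-0 folio (id 42). Storing it without a prior split replaces
         * the whole multi-index entry, not just slot 0.
         */
        for (i = 0; i < NR; i++)
                mapping[i] = (struct slot){ .is_swap = 0, .val = 42 };

        /* slots 1..3 no longer record swap offsets 0x101..0x103: data lost */
        for (i = 1; i < NR; i++)
                printf("index %d: folio %lu (swap offset 0x%x unreachable)\n",
                       i, mapping[i].val, 0x100 + i);
        return 0;
}

The only point of the toy is that the single order-0 folio ends up standing in for the whole former range, so the swap locations of the other pages covered by the old entry can no longer be found through the mapping.
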
>>
>> One quick fix is to just always split the entry upon a shmem fault of an
>> order-0 folio, like this:
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 4ea6109a8043..c8e5c419c675 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -2341,6 +2341,10 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>                  }
>>          }
>>
>> +        /* Swapin of 0 order folio must always ensure the entries are split */
>> +        if (!folio_order(folio))
>> +                shmem_split_large_entry(inode, index, swap, gfp);
>> +
>>  alloced:
>>          /* We have to do this with folio locked to prevent races */
>>          folio_lock(folio);

I don't think we should always split the large entry when getting a folio
from the swap cache. Instead, the split should only be done when the order
stored in the shmem mapping is inconsistent with the folio order, and the
swap value should be updated as well.

Could you try the fix below? I tested it and it works well with your
reproducer. Thanks a lot.

diff --git a/mm/shmem.c b/mm/shmem.c
index 671f63063fd4..7e70081a96d4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2253,7 +2253,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
         struct folio *folio = NULL;
         bool skip_swapcache = false;
         swp_entry_t swap;
-        int error, nr_pages;
+        int error, nr_pages, order, split_order;

         VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
         swap = radix_to_swp_entry(*foliop);
@@ -2272,10 +2272,9 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,

         /* Look it up and read it in.. */
         folio = swap_cache_get_folio(swap, NULL, 0);
+        order = xa_get_order(&mapping->i_pages, index);
         if (!folio) {
-                int order = xa_get_order(&mapping->i_pages, index);
                 bool fallback_order0 = false;
-                int split_order;

                 /* Or update major stats only when swapin succeeds?? */
                 if (fault_type) {
@@ -2339,6 +2338,29 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
                         error = -ENOMEM;
                         goto failed;
                 }
+        } else if (order != folio_order(folio)) {
+                /*
+                 * Swap readahead may swap in order 0 folios into swapcache
+                 * asynchronously, while the shmem mapping can still store
+                 * large swap entries. In such cases, we should split the
+                 * large swap entry to prevent possible data loss.
+                 */
+                split_order = shmem_split_large_entry(inode, index, swap, gfp);
+                if (split_order < 0) {
+                        error = split_order;
+                        goto failed;
+                }
+
+                /*
+                 * If the large swap entry has already been split, it is
+                 * necessary to recalculate the new swap entry based on
+                 * the old order alignment.
+                 */
+                if (split_order > 0) {
+                        pgoff_t offset = index - round_down(index, 1 << split_order);
+
+                        swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+                }
         }

 alloced:
@@ -2346,7 +2368,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
         folio_lock(folio);
         if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
             folio->swap.val != swap.val ||
-            !shmem_confirm_swap(mapping, index, swap)) {
+            !shmem_confirm_swap(mapping, index, swap) ||
+            xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
                 error = -EEXIST;
                 goto unlock;
         }

>> And Hi Alex, can you help confirm if the above patch fixes your
>> reported bug?
>>
>> If we are OK with this, this should be merged into 6.14 I think, but
>> for the long term, it might be a good idea to just share a similar
>> logic of (or just reuse) __filemap_add_folio for shmem?
>> __filemap_add_folio will split the entry on insert, and the code will be
>> much cleaner.
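
As an aside, the swap-entry recalculation in the patch above is easy to check in isolation. Below is a minimal userspace sketch (toy index, order and offset values; the kernel's round_down() is reimplemented locally, and the swap entry encoding is reduced to a bare offset) of that arithmetic:

/*
 * Userspace sketch only: models the round_down()-based recalculation in
 * shmem_swapin_folio() with made-up values, not kernel code.
 */
#include <stdio.h>

static unsigned long round_down_ul(unsigned long x, unsigned long align)
{
        return x & ~(align - 1);        /* what the kernel's round_down() computes */
}

int main(void)
{
        unsigned long index = 0x205;            /* faulting page index (example) */
        int split_order = 4;                    /* old large entry covered 1 << 4 = 16 pages (example) */
        unsigned long old_offset = 0x1000;      /* swap offset stored in the old large entry (example) */

        unsigned long off_in_entry = index - round_down_ul(index, 1UL << split_order);
        unsigned long new_offset = old_offset + off_in_entry;

        /* index 0x205 is 5 pages into the 16-page-aligned range at 0x200 */
        printf("offset within old entry: %lu, per-page swap offset: 0x%lx\n",
               off_in_entry, new_offset);       /* prints 5 and 0x1005 */
        return 0;
}

With these made-up numbers, the faulting index lies 5 pages into the old entry's range, so the per-page swap entry is the old swap offset plus 5.
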
>
> Some extra comments for above patch: If it raced with another split,
> or the entry used for swap cache lookup is wrongly aligned due to
> large entry, the shmem_add_to_page_cache below will fail with -EEXIST
> and try again. So that seems to be working well in my test.