linux-mm.kvack.org archive mirror
From: Liu Shixin <liushixin2@huawei.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>, <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Barry Song <baohua@kernel.org>,
	David Hildenbrand <david@redhat.com>,
	Hugh Dickins <hughd@google.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Lance Yang <ioworker0@gmail.com>,
	Matthew Wilcox <willy@infradead.org>,
	Ryan Roberts <ryan.roberts@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: Softlockup when test shmem swapout-swapin and compaction
Date: Fri, 28 Feb 2025 09:52:00 +0800	[thread overview]
Message-ID: <ffea2791-817e-b1a5-ba41-10ed2a5d9636@huawei.com> (raw)
In-Reply-To: <696E1819-D7E3-42BD-B3F0-8B3AC67A8ADB@nvidia.com>



On 2025/2/28 7:43, Zi Yan wrote:
> On 27 Feb 2025, at 2:04, Liu Shixin wrote:
>
>> On 2025/2/26 15:22, Baolin Wang wrote:
>>> Add Zi.
>>>
>>> On 2025/2/26 15:03, Liu Shixin wrote:
>>>> Hi all,
>>>>
>>>> I found a softlockup when testing shmem large folio swapout-swapin and compaction:
>>>>
>>>>   watchdog: BUG: soft lockup - CPU#30 stuck for 179s! [folio_swap:4714]
>>>>   Modules linked in: zram xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype iptable_filter ip_tantel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl cixt4 mbcache jbd2 sr_mod cdrom ata_generic ata_piix virtio_net net_failover ghash_clmulni_intel libata sha512_ssse3
>>>>   CPU: 30 UID: 0 PID: 4714 Comm: folio_swap Kdump: loaded Tainted: G             L     6.14.0-rc4-next-20250225+ #2
>>>>   Tainted: [L]=SOFTLOCKUP
>>>>   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
>>>>   RIP: 0010:xas_load+0x5d/0xc0
>>>>   Code: 08 48 d3 ea 83 e2 3f 89 d0 48 83 c0 04 48 8b 44 c6 08 48 89 73 18 48 89 c1 83 e1 03 48 83 f9 02 75 08 48 3d
>>>>   RSP: 0000:ffffadf142f1ba60 EFLAGS: 00000293
>>>>   RAX: ffffe524cc4f6700 RBX: ffffadf142f1ba90 RCX: 0000000000000000
>>>>   RDX: 0000000000000011 RSI: ffff9a3e058acb68 RDI: ffffadf142f1ba90
>>>>   RBP: fffffffffffffffe R08: ffffadf142f1bb50 R09: 0000000000000392
>>>>   R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000011
>>>>   R13: ffffadf142f1bb48 R14: ffff9a3e04e9c588 R15: 0000000000000000
>>>>   FS:  00007fd957666740(0000) GS:ffff9a41ac0e5000(0000) knlGS:0000000000000000
>>>>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>   CR2: 00007fd922860000 CR3: 000000025c360001 CR4: 0000000000772ef0
>>>>   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>>>   DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>>>   PKRU: 55555554
>>>>   Call Trace:
>>>>    <IRQ>
>>>>    ? watchdog_timer_fn+0x1c9/0x250
>>>>    ? __pfx_watchdog_timer_fn+0x10/0x10
>>>>    ? __hrtimer_run_queues+0x10e/0x250
>>>>    ? hrtimer_interrupt+0xfb/0x240
>>>>    ? __sysvec_apic_timer_interrupt+0x4e/0xe0
>>>>    ? sysvec_apic_timer_interrupt+0x68/0x90
>>>>    </IRQ>
>>>>    <TASK>
>>>>    ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>    ? xas_load+0x5d/0xc0
>>>>    xas_find+0x153/0x1a0
>>>>    find_get_entries+0x73/0x280
>>>>    shmem_undo_range+0x1fc/0x640
>>>>    shmem_evict_inode+0x109/0x270
>>>>    evict+0x107/0x240
>>>>    ? fsnotify_destroy_marks+0x25/0x180
>>>>    ? _atomic_dec_and_lock+0x35/0x50
>>>>    __dentry_kill+0x71/0x190
>>>>    dput+0xd1/0x190
>>>>    __fput+0x128/0x2a0
>>>>    task_work_run+0x57/0x90
>>>>    syscall_exit_to_user_mode+0x1cb/0x1e0
>>>>    do_syscall_64+0x67/0x170
>>>>    entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>   RIP: 0033:0x7fd95776eb8b
>>>>
>>>> If CONFIG_DEBUG_VM is enabled, we also hit VM_BUG_ON_FOLIO(!folio_test_locked(folio)) in
>>>> shmem_add_to_page_cache().  The problem seems to be related to memory migration or
>>>> compaction, which is necessary for reproduction, although I don't yet have a clear explanation why.
>>>>
>>>> To reproduce the problem, we first need a zram device as the swap backend, and then run the
>>>> reproduction program. The reproduction program consists of three parts:
>>>>   1. A process that constantly changes the shmem large folio setting through these interfaces:
>>>>          /sys/kernel/mm/transparent_hugepage/hugepages-<size>/shmem_enabled
>>>>   2. A process that constantly runs echo 1 > /proc/sys/vm/compact_memory
>>>>   3. A process that constantly allocates, frees, swaps out and swaps in shmem large folios
>>>>      (a rough sketch of this loop is shown below).
>>>>
>>>> I'm not sure whether the first process is necessary, but the second and third are. In addition,
>>>> I tried hacking compaction_alloc() to return NULL, and the problem disappeared,
>>>> so I guess the problem is in migration.
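>>>>
>>>> For reference, the third process does something roughly like the sketch below. This is
>>>> only an illustration of the workload, not the actual folio_swap source; the memfd usage,
>>>> the size and the reliance on MADV_PAGEOUT are assumptions, and error handling is omitted:
>>>>
>>>>   #define _GNU_SOURCE
>>>>   #include <string.h>
>>>>   #include <sys/mman.h>
>>>>   #include <unistd.h>
>>>>
>>>>   int main(void)
>>>>   {
>>>>           size_t len = 64 << 20;                 /* 64MB of shmem */
>>>>           int fd = memfd_create("shmem-large-folio", 0);
>>>>
>>>>           ftruncate(fd, len);
>>>>           for (;;) {
>>>>                   char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
>>>>                                  MAP_SHARED, fd, 0);
>>>>                   volatile char *vp = p;
>>>>
>>>>                   memset(p, 0x5a, len);          /* fault in (large) folios */
>>>>                   madvise(p, len, MADV_PAGEOUT); /* push them out to swap */
>>>>                   for (size_t i = 0; i < len; i += 4096)
>>>>                           (void)vp[i];           /* swap them back in */
>>>>                   munmap(p, len);
>>>>                   ftruncate(fd, 0);              /* drop all folios */
>>>>                   ftruncate(fd, len);
>>>>           }
>>>>   }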
>>>>
>>>> The problem is different from https://lore.kernel.org/all/1738717785.im3r5g2vxc.none@localhost/
>>>> since I have confirmed that this problem still exists after merging that fix.
>>> Could you check if your version includes Zi's fix[1]? Not sure if it's related to the shmem large folio split.
>>>
>>> [1] https://lore.kernel.org/all/AF487A7A-F685-485D-8D74-756C843D6F0A@nvidia.com/
>>> .
>>>
>> That patch was already included in my test.
> Hi Shixin,
>
> Can you try the diff below? It fixed my local repro.
>
> The issue is that after Baolin’s patch, shmem folios now use a high-order
> entry, so the migration code should not update multiple xarray slots.
Looks reasonable. I'll try it, thanks.
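
If I understand correctly: when the mapping holds nr separate order-0 entries
for the folio, the update loop has to store newfolio into every slot, but with
a single high-order entry one store at the head index already covers the whole
range, and writing into the sibling slots is what goes wrong. Roughly (just an
illustration, not the actual kernel code):

    XA_STATE(xas, &mapping->i_pages, folio->index);

    /* old layout: nr order-0 entries, one store per slot */
    for (i = 0; i < nr; i++) {
            xas_store(&xas, newfolio);
            xas_next(&xas);
    }

    /* new layout: one high-order entry, a single store covers all indices */
    xas_store(&xas, newfolio);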
>
> Hi Baolin,
>
> Does your patch also affect anonymous swap-out? If so, we can remove
> the for loop that updates the xarray in __folio_migrate_mapping().
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 365c6daa8d1b..be77932596b3 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -44,6 +44,7 @@
>  #include <linux/sched/sysctl.h>
>  #include <linux/memory-tiers.h>
>  #include <linux/pagewalk.h>
> +#include <linux/shmem_fs.h>
>
>  #include <asm/tlbflush.h>
>
> @@ -524,7 +525,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>  			folio_set_swapcache(newfolio);
>  			newfolio->private = folio_get_private(folio);
>  		}
> -		entries = nr;
> +		/* shmem now uses high-order entry */
> +		if (folio->mapping && shmem_mapping(folio->mapping))
> +			entries = 1;
> +		else
> +			entries = nr;
>  	} else {
>  		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>  		entries = 1;
>
>
> Best Regards,
> Yan, Zi
> .
>



Thread overview: 10+ messages
2025-02-26  7:03 Liu Shixin
2025-02-26  7:22 ` Baolin Wang
2025-02-27  7:04   ` Liu Shixin
2025-02-27 23:43     ` Zi Yan
2025-02-28  1:52       ` Liu Shixin [this message]
2025-02-28  3:39       ` Baolin Wang
2025-02-28 15:19         ` Zi Yan
2025-03-05 16:27   ` Zi Yan
2025-03-05 18:44     ` Zi Yan
2025-03-05 18:51       ` Matthew Wilcox
