From: "Chris Murphy" <lists@colorremedies.com>
To: "Linux List" <linux-mm@kvack.org>
Subject: 6.14.0-rc6 lockdep warning kswapd
Date: Wed, 12 Mar 2025 21:52:45 -0600
Message-ID: <ab3b5b2e-2736-40d7-92bd-67d6c3082eb7@app.fastmail.com>
6.14.0-0.rc6.49.fc42.x86_64+debug
Swap is enabled using a swapfile on btrfs (which is also used for /).
zswap is enabled using zsmalloc/zstd.

The downstream bug report includes the full dmesg.log as an attachment:
https://bugzilla.redhat.com/show_bug.cgi?id=2351794
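
The report doesn't capture the exact zswap settings beyond the summary above, but with zsmalloc/zstd this is presumably the stock module-parameter configuration, i.e. something like:

  zswap.enabled=1 zswap.compressor=zstd zswap.zpool=zsmalloc

on the kernel command line, or the same values written at runtime under /sys/module/zswap/parameters/.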
[ 4898.604852] perf: interrupt took too long (6155 > 6142), lowering kernel.perf_event_max_sample_rate to 32000
[ 5009.879938] ======================================================
[ 5009.879948] WARNING: possible circular locking dependency detected
[ 5009.879958] 6.14.0-0.rc6.49.fc42.x86_64+debug #1 Not tainted
[ 5009.879971] ------------------------------------------------------
[ 5009.879980] kswapd0/97 is trying to acquire lock:
[ 5009.879991] ffffe8fffea2bf00 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_compress+0x123/0x630
[ 5009.880036]
but task is already holding lock:
[ 5009.880046] ffffffff91a52ec0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x19d/0x1040
[ 5009.880083]
which lock already depends on the new lock.
[ 5009.880092]
the existing dependency chain (in reverse order) is:
[ 5009.880101]
-> #2 (fs_reclaim){+.+.}-{0:0}:
[ 5009.880130] lock_acquire.part.0+0x125/0x360
[ 5009.880147] fs_reclaim_acquire+0xc9/0x110
[ 5009.880163] __kmalloc_cache_node_noprof+0x61/0x4f0
[ 5009.880178] __get_vm_area_node+0xf6/0x2a0
[ 5009.880194] __vmalloc_node_range_noprof+0x1fe/0x4c0
[ 5009.880206] __vmalloc_node_noprof+0xb1/0x180
[ 5009.880218] crypto_scomp_init_tfm+0x113/0x340
[ 5009.880229] crypto_create_tfm_node+0xe9/0x2d0
[ 5009.880239] crypto_init_scomp_ops_async+0x5a/0x1c0
[ 5009.880252] crypto_create_tfm_node+0xe9/0x2d0
[ 5009.880265] crypto_alloc_tfm_node+0xd7/0x1e0
[ 5009.880280] alg_test_comp+0x10e/0x2c0
[ 5009.880294] alg_test+0x365/0xff0
[ 5009.880306] cryptomgr_test+0x54/0x80
[ 5009.880320] kthread+0x39d/0x760
[ 5009.880332] ret_from_fork+0x31/0x70
[ 5009.880344] ret_from_fork_asm+0x1a/0x30
[ 5009.880357]
-> #1 (scomp_lock){+.+.}-{4:4}:
[ 5009.880376] lock_acquire.part.0+0x125/0x360
[ 5009.880386] __mutex_lock+0x1b3/0x1430
[ 5009.880395] crypto_exit_scomp_ops_async+0x42/0x80
[ 5009.880405] crypto_destroy_tfm+0xd8/0x250
[ 5009.880413] zswap_cpu_comp_dead+0x11d/0x1c0
[ 5009.880420] cpuhp_invoke_callback+0x190/0xa70
[ 5009.880431] cpuhp_issue_call+0x13a/0x8a0
[ 5009.880439] __cpuhp_state_remove_instance+0x214/0x510
[ 5009.880448] __zswap_pool_release+0x48/0x110
[ 5009.880455] process_one_work+0x896/0x14b0
[ 5009.880465] worker_thread+0x5e5/0xfb0
[ 5009.880473] kthread+0x39d/0x760
[ 5009.880481] ret_from_fork+0x31/0x70
[ 5009.880488] ret_from_fork_asm+0x1a/0x30
[ 5009.880495]
-> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
[ 5009.880511] check_prev_add+0x1ab/0x23c0
[ 5009.880519] __lock_acquire+0x22d6/0x2e30
[ 5009.880527] lock_acquire.part.0+0x125/0x360
[ 5009.880535] __mutex_lock+0x1b3/0x1430
[ 5009.880543] zswap_compress+0x123/0x630
[ 5009.880550] zswap_store_page+0xf0/0xb50
[ 5009.880562] zswap_store+0x72f/0xb90
[ 5009.880575] swap_writepage+0x384/0x790
[ 5009.880588] shmem_writepage+0xd14/0x14b0
[ 5009.880602] pageout+0x372/0xa60
[ 5009.880615] shrink_folio_list+0x26da/0x3880
[ 5009.880628] evict_folios+0x670/0x1c40
[ 5009.880640] try_to_shrink_lruvec+0x422/0x9d0
[ 5009.880654] shrink_one+0x36d/0x820
[ 5009.880667] shrink_many+0x337/0xc90
[ 5009.880680] shrink_node+0x2f5/0x1460
[ 5009.880694] balance_pgdat+0x544/0x1040
[ 5009.880708] kswapd+0x2f9/0x510
[ 5009.880722] kthread+0x39d/0x760
[ 5009.880736] ret_from_fork+0x31/0x70
[ 5009.880751] ret_from_fork_asm+0x1a/0x30
[ 5009.880764]
other info that might help us debug this:
[ 5009.880774] Chain exists of:
&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock --> fs_reclaim
[ 5009.880811] Possible unsafe locking scenario:
[ 5009.880819] CPU0 CPU1
[ 5009.880825] ---- ----
[ 5009.880831]   lock(fs_reclaim);
[ 5009.880848]                                lock(scomp_lock);
[ 5009.880866]                                lock(fs_reclaim);
[ 5009.880883]   lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
[ 5009.880900]
*** DEADLOCK ***
[ 5009.880906] 1 lock held by kswapd0/97:
[ 5009.880918] #0: ffffffff91a52ec0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x19d/0x1040
[ 5009.880960]
stack backtrace:
[ 5009.880971] CPU: 3 UID: 0 PID: 97 Comm: kswapd0 Not tainted 6.14.0-0.rc6.49.fc42.x86_64+debug #1
[ 5009.880984] Hardware name: LENOVO 20QDS3E200/20QDS3E200, BIOS N2HET77W (1.60 ) 02/06/2024
[ 5009.880989] Call Trace:
[ 5009.880994] <TASK>
[ 5009.881002] dump_stack_lvl+0x84/0xd0
[ 5009.881015] print_circular_bug.cold+0x38/0x48
[ 5009.881031] check_noncircular+0x309/0x3e0
[ 5009.881044] ? __pfx_check_noncircular+0x10/0x10
[ 5009.881061] ? mark_lock+0x75/0x890
[ 5009.881074] ? alloc_chain_hlocks+0x4c2/0x6f0
[ 5009.881086] check_prev_add+0x1ab/0x23c0
[ 5009.881104] __lock_acquire+0x22d6/0x2e30
[ 5009.881125] ? __pfx___lock_acquire+0x10/0x10
[ 5009.881135] ? __lock_release.isra.0+0x4ab/0xa30
[ 5009.881144] ? __lock_acquired+0x22b/0x880
[ 5009.881157] lock_acquire.part.0+0x125/0x360
[ 5009.881167] ? zswap_compress+0x123/0x630
[ 5009.881180] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 5009.881195] ? rcu_is_watching+0x15/0xe0
[ 5009.881206] ? lock_acquire+0x1a6/0x210
[ 5009.881220] __mutex_lock+0x1b3/0x1430
[ 5009.881230] ? zswap_compress+0x123/0x630
[ 5009.881237] ? kmem_cache_alloc_node_noprof+0x153/0x4e0
[ 5009.881250] ? swap_writepage+0x384/0x790
[ 5009.881257] ? zswap_compress+0x123/0x630
[ 5009.881265] ? pageout+0x372/0xa60
[ 5009.881271] ? shrink_folio_list+0x26da/0x3880
[ 5009.881279] ? evict_folios+0x670/0x1c40
[ 5009.881286] ? try_to_shrink_lruvec+0x422/0x9d0
[ 5009.881295] ? shrink_one+0x36d/0x820
[ 5009.881302] ? shrink_many+0x337/0xc90
[ 5009.881313] ? __pfx___mutex_lock+0x10/0x10
[ 5009.881321] ? ret_from_fork+0x31/0x70
[ 5009.881330] ? ret_from_fork_asm+0x1a/0x30
[ 5009.881354] ? zswap_compress+0x123/0x630
[ 5009.881362] zswap_compress+0x123/0x630
[ 5009.881374] ? __pfx_zswap_compress+0x10/0x10
[ 5009.881389] ? rcu_is_watching+0x15/0xe0
[ 5009.881401] ? zswap_store_page+0xd6/0xb50
[ 5009.881417] zswap_store_page+0xf0/0xb50
[ 5009.881430] zswap_store+0x72f/0xb90
[ 5009.881442] ? __pfx_zswap_store+0x10/0x10
[ 5009.881450] ? folio_free_swap+0x169/0x470
[ 5009.881466] swap_writepage+0x384/0x790
[ 5009.881479] shmem_writepage+0xd14/0x14b0
[ 5009.881495] ? __pfx_shmem_writepage+0x10/0x10
[ 5009.881504] ? mark_usage+0x11e/0x330
[ 5009.881521] ? folio_clear_dirty_for_io+0x115/0x6a0
[ 5009.881537] pageout+0x372/0xa60
[ 5009.881547] ? __pfx_pageout+0x10/0x10
[ 5009.881586] ? folio_check_references.isra.0+0x79/0x2f0
[ 5009.881596] ? __pfx_folio_check_references.isra.0+0x10/0x10
[ 5009.881610] ? folio_evictable+0xa5/0x200
[ 5009.881627] shrink_folio_list+0x26da/0x3880
[ 5009.881645] ? __pfx_shrink_folio_list+0x10/0x10
[ 5009.881661] ? __pfx_scan_folios+0x10/0x10
[ 5009.881692] ? mark_held_locks+0x96/0xe0
[ 5009.881704] ? _raw_spin_unlock_irq+0x28/0x60
[ 5009.881717] evict_folios+0x670/0x1c40
[ 5009.881739] ? mark_usage+0x11e/0x330
[ 5009.881749] ? __pfx_evict_folios+0x10/0x10
[ 5009.881760] ? mark_lock+0x75/0x890
[ 5009.881782] ? __pfx___might_resched+0x10/0x10
[ 5009.881800] try_to_shrink_lruvec+0x422/0x9d0
[ 5009.881821] ? __lock_release.isra.0+0x4ab/0xa30
[ 5009.881833] ? __pfx_try_to_shrink_lruvec+0x10/0x10
[ 5009.881845] ? mark_lock+0x75/0x890
[ 5009.881859] shrink_one+0x36d/0x820
[ 5009.881870] ? shrink_many+0x312/0xc90
[ 5009.881882] shrink_many+0x337/0xc90
[ 5009.881891] ? shrink_many+0x312/0xc90
[ 5009.881909] shrink_node+0x2f5/0x1460
[ 5009.881932] ? __pfx_shrink_node+0x10/0x10
[ 5009.881951] ? pgdat_balanced+0xb3/0x1a0
[ 5009.881965] balance_pgdat+0x544/0x1040
[ 5009.881978] ? __pfx_balance_pgdat+0x10/0x10
[ 5009.881986] ? set_pgdat_percpu_threshold+0x1bd/0x300
[ 5009.882000] ? _raw_spin_unlock_irq+0x38/0x60
[ 5009.882005] ? __refrigerator+0x110/0x260
[ 5009.882015] kswapd+0x2f9/0x510
[ 5009.882023] ? __pfx_kswapd+0x10/0x10
[ 5009.882029] ? __kthread_parkme+0xb0/0x1e0
[ 5009.882037] ? __pfx_kswapd+0x10/0x10
[ 5009.882042] kthread+0x39d/0x760
[ 5009.882048] ? __pfx_kthread+0x10/0x10
[ 5009.882056] ? _raw_spin_unlock_irq+0x28/0x60
[ 5009.882060] ? __pfx_kthread+0x10/0x10
[ 5009.882067] ret_from_fork+0x31/0x70
[ 5009.882072] ? __pfx_kthread+0x10/0x10
[ 5009.882077] ret_from_fork_asm+0x1a/0x30
[ 5009.882089] </TASK>
[ 5380.636730] show_signal_msg: 6 callbacks suppressed
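
For anyone who'd rather not parse the whole trace, the cycle reduces to: the pool release path takes the per-CPU acomp_ctx mutex and then scomp_lock (#1); tfm creation holds scomp_lock and then allocates, and any allocation can recurse into reclaim, i.e. fs_reclaim (#2); and kswapd holds fs_reclaim and then takes the acomp_ctx mutex in zswap_compress() (#0). Below is a minimal userspace sketch of that same ordering, with hypothetical pthread stand-ins. Note fs_reclaim is a lockdep annotation in the kernel, not an actual mutex; this is an illustration of the reported ordering, not the kernel code.

/* Illustration only: the three-lock cycle from the report, modeled as
 * plain pthread mutexes. Names mirror the lockdep output; none of this
 * is actual kernel code. Build with: cc -pthread cycle.c */
#include <pthread.h>

static pthread_mutex_t acomp_ctx_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t scomp_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t fs_reclaim      = PTHREAD_MUTEX_INITIALIZER;

/* -> #1: zswap_cpu_comp_dead() holds the per-CPU ctx mutex, then
 * crypto_destroy_tfm() takes scomp_lock */
static void *pool_release(void *arg)
{
	pthread_mutex_lock(&acomp_ctx_mutex);
	pthread_mutex_lock(&scomp_lock);
	pthread_mutex_unlock(&scomp_lock);
	pthread_mutex_unlock(&acomp_ctx_mutex);
	return NULL;
}

/* -> #2: crypto_scomp_init_tfm() holds scomp_lock, then allocates,
 * which can recurse into reclaim (fs_reclaim) */
static void *tfm_create(void *arg)
{
	pthread_mutex_lock(&scomp_lock);
	pthread_mutex_lock(&fs_reclaim);
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&scomp_lock);
	return NULL;
}

/* -> #0: kswapd is in reclaim (fs_reclaim held), then zswap_compress()
 * takes the per-CPU ctx mutex, closing the cycle */
static void *reclaim(void *arg)
{
	pthread_mutex_lock(&fs_reclaim);
	pthread_mutex_lock(&acomp_ctx_mutex);
	pthread_mutex_unlock(&acomp_ctx_mutex);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2, t3;

	pthread_create(&t1, NULL, pool_release, NULL);
	pthread_create(&t2, NULL, tfm_create, NULL);
	pthread_create(&t3, NULL, reclaim, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	pthread_join(t3, NULL);
	/* may never get here: the three orders can interleave into deadlock,
	 * which is exactly the "possible unsafe locking scenario" above */
	return 0;
}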
--
Chris Murphy