linux-mm.kvack.org archive mirror
* 6.14.0-rc6 lockdep warning kswapd
@ 2025-03-13  3:52 Chris Murphy
  2025-03-20  4:31 ` Chris Murphy
  0 siblings, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2025-03-13  3:52 UTC (permalink / raw)
  To: Linux List

6.14.0-0.rc6.49.fc42.x86_64+debug
swap is enabled using a swapfile on btrfs (which is also used for /)
zswap is enabled using zsmalloc/zstd

Downstream bug report includes full dmesg.log attachment
https://bugzilla.redhat.com/show_bug.cgi?id=2351794


[ 4898.604852] perf: interrupt took too long (6155 > 6142), lowering kernel.perf_event_max_sample_rate to 32000
[ 5009.879938] ======================================================
[ 5009.879948] WARNING: possible circular locking dependency detected
[ 5009.879958] 6.14.0-0.rc6.49.fc42.x86_64+debug #1 Not tainted
[ 5009.879971] ------------------------------------------------------
[ 5009.879980] kswapd0/97 is trying to acquire lock:
[ 5009.879991] ffffe8fffea2bf00 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_compress+0x123/0x630
[ 5009.880036] 
               but task is already holding lock:
[ 5009.880046] ffffffff91a52ec0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x19d/0x1040
[ 5009.880083] 
               which lock already depends on the new lock.

[ 5009.880092] 
               the existing dependency chain (in reverse order) is:
[ 5009.880101] 
               -> #2 (fs_reclaim){+.+.}-{0:0}:
[ 5009.880130]        lock_acquire.part.0+0x125/0x360
[ 5009.880147]        fs_reclaim_acquire+0xc9/0x110
[ 5009.880163]        __kmalloc_cache_node_noprof+0x61/0x4f0
[ 5009.880178]        __get_vm_area_node+0xf6/0x2a0
[ 5009.880194]        __vmalloc_node_range_noprof+0x1fe/0x4c0
[ 5009.880206]        __vmalloc_node_noprof+0xb1/0x180
[ 5009.880218]        crypto_scomp_init_tfm+0x113/0x340
[ 5009.880229]        crypto_create_tfm_node+0xe9/0x2d0
[ 5009.880239]        crypto_init_scomp_ops_async+0x5a/0x1c0
[ 5009.880252]        crypto_create_tfm_node+0xe9/0x2d0
[ 5009.880265]        crypto_alloc_tfm_node+0xd7/0x1e0
[ 5009.880280]        alg_test_comp+0x10e/0x2c0
[ 5009.880294]        alg_test+0x365/0xff0
[ 5009.880306]        cryptomgr_test+0x54/0x80
[ 5009.880320]        kthread+0x39d/0x760
[ 5009.880332]        ret_from_fork+0x31/0x70
[ 5009.880344]        ret_from_fork_asm+0x1a/0x30
[ 5009.880357] 
               -> #1 (scomp_lock){+.+.}-{4:4}:
[ 5009.880376]        lock_acquire.part.0+0x125/0x360
[ 5009.880386]        __mutex_lock+0x1b3/0x1430
[ 5009.880395]        crypto_exit_scomp_ops_async+0x42/0x80
[ 5009.880405]        crypto_destroy_tfm+0xd8/0x250
[ 5009.880413]        zswap_cpu_comp_dead+0x11d/0x1c0
[ 5009.880420]        cpuhp_invoke_callback+0x190/0xa70
[ 5009.880431]        cpuhp_issue_call+0x13a/0x8a0
[ 5009.880439]        __cpuhp_state_remove_instance+0x214/0x510
[ 5009.880448]        __zswap_pool_release+0x48/0x110
[ 5009.880455]        process_one_work+0x896/0x14b0
[ 5009.880465]        worker_thread+0x5e5/0xfb0
[ 5009.880473]        kthread+0x39d/0x760
[ 5009.880481]        ret_from_fork+0x31/0x70
[ 5009.880488]        ret_from_fork_asm+0x1a/0x30
[ 5009.880495] 
               -> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
[ 5009.880511]        check_prev_add+0x1ab/0x23c0
[ 5009.880519]        __lock_acquire+0x22d6/0x2e30
[ 5009.880527]        lock_acquire.part.0+0x125/0x360
[ 5009.880535]        __mutex_lock+0x1b3/0x1430
[ 5009.880543]        zswap_compress+0x123/0x630
[ 5009.880550]        zswap_store_page+0xf0/0xb50
[ 5009.880562]        zswap_store+0x72f/0xb90
[ 5009.880575]        swap_writepage+0x384/0x790
[ 5009.880588]        shmem_writepage+0xd14/0x14b0
[ 5009.880602]        pageout+0x372/0xa60
[ 5009.880615]        shrink_folio_list+0x26da/0x3880
[ 5009.880628]        evict_folios+0x670/0x1c40
[ 5009.880640]        try_to_shrink_lruvec+0x422/0x9d0
[ 5009.880654]        shrink_one+0x36d/0x820
[ 5009.880667]        shrink_many+0x337/0xc90
[ 5009.880680]        shrink_node+0x2f5/0x1460
[ 5009.880694]        balance_pgdat+0x544/0x1040
[ 5009.880708]        kswapd+0x2f9/0x510
[ 5009.880722]        kthread+0x39d/0x760
[ 5009.880736]        ret_from_fork+0x31/0x70
[ 5009.880751]        ret_from_fork_asm+0x1a/0x30
[ 5009.880764] 
               other info that might help us debug this:

[ 5009.880774] Chain exists of:
                 &per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock --> fs_reclaim

[ 5009.880811]  Possible unsafe locking scenario:

[ 5009.880819]        CPU0                    CPU1
[ 5009.880825]        ----                    ----
[ 5009.880831]   lock(fs_reclaim);
[ 5009.880848]                                lock(scomp_lock);
[ 5009.880866]                                lock(fs_reclaim);
[ 5009.880883]   lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
[ 5009.880900] 
                *** DEADLOCK ***

[ 5009.880906] 1 lock held by kswapd0/97:
[ 5009.880918]  #0: ffffffff91a52ec0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x19d/0x1040
[ 5009.880960] 
               stack backtrace:
[ 5009.880971] CPU: 3 UID: 0 PID: 97 Comm: kswapd0 Not tainted 6.14.0-0.rc6.49.fc42.x86_64+debug #1
[ 5009.880984] Hardware name: LENOVO 20QDS3E200/20QDS3E200, BIOS N2HET77W (1.60 ) 02/06/2024
[ 5009.880989] Call Trace:
[ 5009.880994]  <TASK>
[ 5009.881002]  dump_stack_lvl+0x84/0xd0
[ 5009.881015]  print_circular_bug.cold+0x38/0x48
[ 5009.881031]  check_noncircular+0x309/0x3e0
[ 5009.881044]  ? __pfx_check_noncircular+0x10/0x10
[ 5009.881061]  ? mark_lock+0x75/0x890
[ 5009.881074]  ? alloc_chain_hlocks+0x4c2/0x6f0
[ 5009.881086]  check_prev_add+0x1ab/0x23c0
[ 5009.881104]  __lock_acquire+0x22d6/0x2e30
[ 5009.881125]  ? __pfx___lock_acquire+0x10/0x10
[ 5009.881135]  ? __lock_release.isra.0+0x4ab/0xa30
[ 5009.881144]  ? __lock_acquired+0x22b/0x880
[ 5009.881157]  lock_acquire.part.0+0x125/0x360
[ 5009.881167]  ? zswap_compress+0x123/0x630
[ 5009.881180]  ? __pfx_lock_acquire.part.0+0x10/0x10
[ 5009.881195]  ? rcu_is_watching+0x15/0xe0
[ 5009.881206]  ? lock_acquire+0x1a6/0x210
[ 5009.881220]  __mutex_lock+0x1b3/0x1430
[ 5009.881230]  ? zswap_compress+0x123/0x630
[ 5009.881237]  ? kmem_cache_alloc_node_noprof+0x153/0x4e0
[ 5009.881250]  ? swap_writepage+0x384/0x790
[ 5009.881257]  ? zswap_compress+0x123/0x630
[ 5009.881265]  ? pageout+0x372/0xa60
[ 5009.881271]  ? shrink_folio_list+0x26da/0x3880
[ 5009.881279]  ? evict_folios+0x670/0x1c40
[ 5009.881286]  ? try_to_shrink_lruvec+0x422/0x9d0
[ 5009.881295]  ? shrink_one+0x36d/0x820
[ 5009.881302]  ? shrink_many+0x337/0xc90
[ 5009.881313]  ? __pfx___mutex_lock+0x10/0x10
[ 5009.881321]  ? ret_from_fork+0x31/0x70
[ 5009.881330]  ? ret_from_fork_asm+0x1a/0x30
[ 5009.881354]  ? zswap_compress+0x123/0x630
[ 5009.881362]  zswap_compress+0x123/0x630
[ 5009.881374]  ? __pfx_zswap_compress+0x10/0x10
[ 5009.881389]  ? rcu_is_watching+0x15/0xe0
[ 5009.881401]  ? zswap_store_page+0xd6/0xb50
[ 5009.881417]  zswap_store_page+0xf0/0xb50
[ 5009.881430]  zswap_store+0x72f/0xb90
[ 5009.881442]  ? __pfx_zswap_store+0x10/0x10
[ 5009.881450]  ? folio_free_swap+0x169/0x470
[ 5009.881466]  swap_writepage+0x384/0x790
[ 5009.881479]  shmem_writepage+0xd14/0x14b0
[ 5009.881495]  ? __pfx_shmem_writepage+0x10/0x10
[ 5009.881504]  ? mark_usage+0x11e/0x330
[ 5009.881521]  ? folio_clear_dirty_for_io+0x115/0x6a0
[ 5009.881537]  pageout+0x372/0xa60
[ 5009.881547]  ? __pfx_pageout+0x10/0x10
[ 5009.881586]  ? folio_check_references.isra.0+0x79/0x2f0
[ 5009.881596]  ? __pfx_folio_check_references.isra.0+0x10/0x10
[ 5009.881610]  ? folio_evictable+0xa5/0x200
[ 5009.881627]  shrink_folio_list+0x26da/0x3880
[ 5009.881645]  ? __pfx_shrink_folio_list+0x10/0x10
[ 5009.881661]  ? __pfx_scan_folios+0x10/0x10
[ 5009.881692]  ? mark_held_locks+0x96/0xe0
[ 5009.881704]  ? _raw_spin_unlock_irq+0x28/0x60
[ 5009.881717]  evict_folios+0x670/0x1c40
[ 5009.881739]  ? mark_usage+0x11e/0x330
[ 5009.881749]  ? __pfx_evict_folios+0x10/0x10
[ 5009.881760]  ? mark_lock+0x75/0x890
[ 5009.881782]  ? __pfx___might_resched+0x10/0x10
[ 5009.881800]  try_to_shrink_lruvec+0x422/0x9d0
[ 5009.881821]  ? __lock_release.isra.0+0x4ab/0xa30
[ 5009.881833]  ? __pfx_try_to_shrink_lruvec+0x10/0x10
[ 5009.881845]  ? mark_lock+0x75/0x890
[ 5009.881859]  shrink_one+0x36d/0x820
[ 5009.881870]  ? shrink_many+0x312/0xc90
[ 5009.881882]  shrink_many+0x337/0xc90
[ 5009.881891]  ? shrink_many+0x312/0xc90
[ 5009.881909]  shrink_node+0x2f5/0x1460
[ 5009.881932]  ? __pfx_shrink_node+0x10/0x10
[ 5009.881951]  ? pgdat_balanced+0xb3/0x1a0
[ 5009.881965]  balance_pgdat+0x544/0x1040
[ 5009.881978]  ? __pfx_balance_pgdat+0x10/0x10
[ 5009.881986]  ? set_pgdat_percpu_threshold+0x1bd/0x300
[ 5009.882000]  ? _raw_spin_unlock_irq+0x38/0x60
[ 5009.882005]  ? __refrigerator+0x110/0x260
[ 5009.882015]  kswapd+0x2f9/0x510
[ 5009.882023]  ? __pfx_kswapd+0x10/0x10
[ 5009.882029]  ? __kthread_parkme+0xb0/0x1e0
[ 5009.882037]  ? __pfx_kswapd+0x10/0x10
[ 5009.882042]  kthread+0x39d/0x760
[ 5009.882048]  ? __pfx_kthread+0x10/0x10
[ 5009.882056]  ? _raw_spin_unlock_irq+0x28/0x60
[ 5009.882060]  ? __pfx_kthread+0x10/0x10
[ 5009.882067]  ret_from_fork+0x31/0x70
[ 5009.882072]  ? __pfx_kthread+0x10/0x10
[ 5009.882077]  ret_from_fork_asm+0x1a/0x30
[ 5009.882089]  </TASK>
[ 5380.636730] show_signal_msg: 6 callbacks suppressed
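
To make the cycle easier to see: the three numbered chains above boil
down to three lock-acquisition orders. Modeled in userspace pthreads
(illustrative only -- fs_reclaim is really a lockdep annotation, not a
mutex, and none of the code below is kernel code):

#include <pthread.h>

/* Stand-ins for the three locks in the report above. */
static pthread_mutex_t fs_reclaim  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t scomp_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t acomp_mutex = PTHREAD_MUTEX_INITIALIZER;

/* -> #2: tfm allocation vmallocs scratch space while holding scomp_lock,
 * and vmalloc may enter reclaim: scomp_lock --> fs_reclaim. */
static void tfm_alloc_path(void)
{
        pthread_mutex_lock(&scomp_lock);
        pthread_mutex_lock(&fs_reclaim);
        pthread_mutex_unlock(&fs_reclaim);
        pthread_mutex_unlock(&scomp_lock);
}

/* -> #1: zswap_cpu_comp_dead() destroys the tfm (which takes scomp_lock)
 * while holding the per-CPU ctx mutex: acomp_mutex --> scomp_lock. */
static void cpuhp_dead_path(void)
{
        pthread_mutex_lock(&acomp_mutex);
        pthread_mutex_lock(&scomp_lock);
        pthread_mutex_unlock(&scomp_lock);
        pthread_mutex_unlock(&acomp_mutex);
}

/* -> #0: kswapd is already inside reclaim when zswap_compress() takes
 * the ctx mutex: fs_reclaim --> acomp_mutex, which closes the cycle. */
static void kswapd_path(void)
{
        pthread_mutex_lock(&fs_reclaim);
        pthread_mutex_lock(&acomp_mutex);
        pthread_mutex_unlock(&acomp_mutex);
        pthread_mutex_unlock(&fs_reclaim);
}

int main(void)
{
        /* Run back to back these never deadlock, but a lock-order
         * checker sees acomp_mutex --> scomp_lock --> fs_reclaim -->
         * acomp_mutex; run concurrently, the paths can interleave into
         * the CPU0/CPU1 scenario lockdep prints. */
        tfm_alloc_path();
        cpuhp_dead_path();
        kswapd_path();
        return 0;
}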


--
Chris Murphy



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-13  3:52 6.14.0-rc6 lockdep warning kswapd Chris Murphy
@ 2025-03-20  4:31 ` Chris Murphy
  2025-03-21  4:28   ` Nhat Pham
  0 siblings, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2025-03-20  4:31 UTC (permalink / raw)
  To: Linux List



On Wed, Mar 12, 2025, at 9:52 PM, Chris Murphy wrote:
> 6.14.0-0.rc6.49.fc42.x86_64+debug
> swap is enabled using a swapfile on btrfs (which is also used for /)
> zswap is enabled using zsmalloc/zstd
>
> Downstream bug report includes full dmesg.log attachment
> https://bugzilla.redhat.com/show_bug.cgi?id=2351794

Also occurs with 6.14.0-0.rc7.56.fc42.x86_64

It's not reproducible in any way I'm aware of, but it isn't uncommon. I've updated the bug report with the rc7 dmesg.


-- 
Chris Murphy



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-20  4:31 ` Chris Murphy
@ 2025-03-21  4:28   ` Nhat Pham
  2025-03-21 18:57     ` Yosry Ahmed
  2025-03-21 23:24     ` Chris Murphy
  0 siblings, 2 replies; 7+ messages in thread
From: Nhat Pham @ 2025-03-21  4:28 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Linux List, Andrew Morton, Yosry Ahmed, Yosry Ahmed

On Wed, Mar 19, 2025 at 9:31 PM Chris Murphy <lists@colorremedies.com> wrote:
>
>
>
> On Wed, Mar 12, 2025, at 9:52 PM, Chris Murphy wrote:
> > 6.14.0-0.rc6.49.fc42.x86_64+debug
> > swap is enabled using a swapfile on btrfs (which is also used for /)
> > zswap is enabled using zsmalloc/zstd
> >
> > Downstream bug report includes full dmesg.log attachment
> > https://bugzilla.redhat.com/show_bug.cgi?id=2351794
>
> Also occurs with 6.14.0-0.rc7.56.fc42.x86_64
>
> It's not reproducible in any way I'm aware of, but isn't uncommon. Updated bug report with rc7 dmesg.
>
>
> --
> Chris Murphy
>

Eyeballing the trace, this looks awfully similar to the problem that
Yosry was fixing here:

https://lore.kernel.org/all/20250226185625.2672936-1-yosry.ahmed@linux.dev/

That should break the following link in the chain:

&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock

since we release the first lock before acquiring the second.
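
Roughly, the fix detaches the crypto resources while holding the per-CPU
mutex and frees them only after dropping it. A sketch of the shape (not
the actual patch; the field names assume the 6.14-rc layout of struct
crypto_acomp_ctx in mm/zswap.c):

/* Sketch only, not the actual patch: drop acomp_ctx->mutex before
 * crypto_free_acomp(), whose teardown path takes scomp_lock. */
static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
{
        struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
        struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
        struct crypto_acomp *acomp;
        struct acomp_req *req;
        u8 *buffer;

        /* Detach everything while the mutex is held... */
        mutex_lock(&acomp_ctx->mutex);
        req = acomp_ctx->req;
        acomp = acomp_ctx->acomp;
        buffer = acomp_ctx->buffer;
        acomp_ctx->req = NULL;
        acomp_ctx->acomp = NULL;
        acomp_ctx->buffer = NULL;
        mutex_unlock(&acomp_ctx->mutex);

        /* ...and free it all after dropping the mutex, so the
         * acomp_ctx->mutex --> scomp_lock ordering is never created. */
        if (req)
                acomp_request_free(req);
        if (!IS_ERR_OR_NULL(acomp))
                crypto_free_acomp(acomp);
        kfree(buffer);

        return 0;
}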

I don't think this is in rc6 yet. Can you apply the patch and test again?



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-21  4:28   ` Nhat Pham
@ 2025-03-21 18:57     ` Yosry Ahmed
  2025-03-21 23:24     ` Chris Murphy
  1 sibling, 0 replies; 7+ messages in thread
From: Yosry Ahmed @ 2025-03-21 18:57 UTC (permalink / raw)
  To: Nhat Pham; +Cc: Chris Murphy, Linux List, Andrew Morton

On Thu, Mar 20, 2025 at 09:28:12PM -0700, Nhat Pham wrote:
> On Wed, Mar 19, 2025 at 9:31 PM Chris Murphy <lists@colorremedies.com> wrote:
> >
> >
> >
> > On Wed, Mar 12, 2025, at 9:52 PM, Chris Murphy wrote:
> > > 6.14.0-0.rc6.49.fc42.x86_64+debug
> > > swap is enabled using a swapfile on btrfs (which is also used for /)
> > > zswap is enabled using zsmalloc/zstd
> > >
> > > Downstream bug report includes full dmesg.log attachment
> > > https://bugzilla.redhat.com/show_bug.cgi?id=2351794
> >
> > Also occurs with 6.14.0-0.rc7.56.fc42.x86_64
> >
> > It's not reproducible in any way I'm aware of, but isn't uncommon. Updated bug report with rc7 dmesg.
> >
> >
> > --
> > Chris Murphy
> >
> 
> Eyeballing the trace, this looks awfully similar to the problem that
> Yosry was fixing here:
> 
> https://lore.kernel.org/all/20250226185625.2672936-1-yosry.ahmed@linux.dev/
> 
> That should break the following link in the chain:
> 
> &per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock
> 
> since we release the first lock, before acquiring the second lock.
> 
> I don't think this is in rc6 yet. Can you apply the patch and test again?

Yeah, I don't see it in any MM branches. I suspect that's because there
were discussions about fixing this on the zswap side vs. the crypto side,
although I mentioned that I'd prefer we take the zswap fix anyway.

Andrew, is it too late to squeeze that patch in? If it is, maybe we can
just land it in v6.15 and backport it to v6.14 via stable.



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-21  4:28   ` Nhat Pham
  2025-03-21 18:57     ` Yosry Ahmed
@ 2025-03-21 23:24     ` Chris Murphy
  2025-03-24 17:06       ` Nhat Pham
  1 sibling, 1 reply; 7+ messages in thread
From: Chris Murphy @ 2025-03-21 23:24 UTC (permalink / raw)
  To: Nhat Pham; +Cc: Linux List, Andrew Morton, Yosry Ahmed, Yosry Ahmed



On Thu, Mar 20, 2025, at 10:28 PM, Nhat Pham wrote:

> I don't think this is in rc6 yet. Can you apply the patch and test again?

At the moment I don't have a way to test the patch.

Also, I failed to mention that this is a swapfile on btrfs on dm-crypt.

-- 
Chris Murphy



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-21 23:24     ` Chris Murphy
@ 2025-03-24 17:06       ` Nhat Pham
  2025-03-24 18:24         ` Yosry Ahmed
  0 siblings, 1 reply; 7+ messages in thread
From: Nhat Pham @ 2025-03-24 17:06 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Linux List, Andrew Morton, Yosry Ahmed, Yosry Ahmed

On Fri, Mar 21, 2025 at 7:25 PM Chris Murphy <lists@colorremedies.com> wrote:
>
>
>
> On Thu, Mar 20, 2025, at 10:28 PM, Nhat Pham wrote:
>
> > I don't think this is in rc6 yet. Can you apply the patch and test again?
>
> At the moment I don't have a way to test the patch.
>
> Also I failed to mention this is swapfile on btrfs on dm-crypt.
>
> --
> Chris Murphy

I checked out mm-unstable and reverted Yosry's fix, then hammered the
box with my usual "firehose" stress test script.

[   99.324986]
[   99.325146] ======================================================
[   99.325569] WARNING: possible circular locking dependency detected
[   99.325983] 6.14.0-rc6-ge211b31825f4 #5 Not tainted
[   99.326383] ------------------------------------------------------
[   99.326803] kswapd0/49 is trying to acquire lock:
[   99.327122] ffffd8893fc1e0e0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store+0x3ec/0xf40
[   99.327812]
[   99.327812] but task is already holding lock:
[   99.328202] ffffffffa654fa80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x3cd/0x760
[   99.328726]
[   99.328726] which lock already depends on the new lock.
[   99.328726]
[   99.329266]
[   99.329266] the existing dependency chain (in reverse order) is:
[   99.329851]
[   99.329851] -> #2 (fs_reclaim){+.+.}-{0:0}:
[   99.330415]        fs_reclaim_acquire+0x9d/0xd0
[   99.330908]        __kmalloc_cache_node_noprof+0x57/0x3e0
[   99.331497]        __get_vm_area_node+0x85/0x140
[   99.331998]        __vmalloc_node_range_noprof+0x13e/0x800
[   99.332573]        __vmalloc_node_noprof+0x4e/0x70
[   99.333062]        crypto_scomp_init_tfm+0xc2/0xe0
[   99.333584]        crypto_create_tfm_node+0x47/0xe0
[   99.334107]        crypto_init_scomp_ops_async+0x38/0xa0
[   99.334649]        crypto_create_tfm_node+0x47/0xe0
[   99.334963]        crypto_alloc_tfm_node+0x5e/0xe0
[   99.335279]        zswap_cpu_comp_prepare+0x78/0x180
[   99.335623]        cpuhp_invoke_callback+0xbd/0x6b0
[   99.335957]        cpuhp_issue_call+0xa4/0x250
[   99.336279]        __cpuhp_state_add_instance_cpuslocked+0x9d/0x160
[   99.336906]        __cpuhp_state_add_instance+0x75/0x190
[   99.337448]        zswap_pool_create+0x17e/0x2b0
[   99.337783]        zswap_setup+0x27d/0x5a0
[   99.338059]        do_one_initcall+0x5d/0x390
[   99.338355]        kernel_init_freeable+0x22f/0x410
[   99.338689]        kernel_init+0x1a/0x1d0
[   99.338966]        ret_from_fork+0x34/0x50
[   99.339249]        ret_from_fork_asm+0x1a/0x30
[   99.339558]
[   99.339558] -> #1 (scomp_lock){+.+.}-{4:4}:
[   99.340155]        __mutex_lock+0x95/0xda0
[   99.340431]        crypto_exit_scomp_ops_async+0x23/0x50
[   99.340795]        crypto_destroy_tfm+0x61/0xc0
[   99.341104]        zswap_cpu_comp_dead+0x6c/0x90
[   99.341415]        cpuhp_invoke_callback+0xbd/0x6b0
[   99.341744]        cpuhp_issue_call+0xa4/0x250
[   99.342043]        __cpuhp_state_remove_instance+0x100/0x250
[   99.342423]        __zswap_pool_release+0x46/0xa0
[   99.342747]        process_one_work+0x210/0x5e0
[   99.343054]        worker_thread+0x183/0x320
[   99.343346]        kthread+0xef/0x230
[   99.343605]        ret_from_fork+0x34/0x50
[   99.343881]        ret_from_fork_asm+0x1a/0x30
[   99.344179]
[   99.344179] -> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
[   99.344732]        __lock_acquire+0x15e3/0x23e0
[   99.345040]        lock_acquire+0xc9/0x2d0
[   99.345315]        __mutex_lock+0x95/0xda0
[   99.345597]        zswap_store+0x3ec/0xf40
[   99.345875]        swap_writepage+0x114/0x530
[   99.346183]        pageout+0xf9/0x2d0
[   99.346435]        shrink_folio_list+0x7f8/0xdf0
[   99.346750]        shrink_lruvec+0x7d2/0xda0
[   99.347040]        shrink_node+0x30a/0x840
[   99.347315]        balance_pgdat+0x349/0x760
[   99.347610]        kswapd+0x1db/0x3c0
[   99.347858]        kthread+0xef/0x230
[   99.348107]        ret_from_fork+0x34/0x50
[   99.348381]        ret_from_fork_asm+0x1a/0x30
[   99.348687]
[   99.348687] other info that might help us debug this:
[   99.348687]
[   99.349228] Chain exists of:
[   99.349228]   &per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock --> fs_reclaim
[   99.349228]
[   99.350214]  Possible unsafe locking scenario:
[   99.350214]
[   99.350612]        CPU0                    CPU1
[   99.350916]        ----                    ----
[   99.351219]   lock(fs_reclaim);
[   99.351437]                                lock(scomp_lock);
[   99.351820]                                lock(fs_reclaim);
[   99.352196]   lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
[   99.352588]
[   99.352588]  *** DEADLOCK ***
[   99.352588]
[   99.353021] 1 lock held by kswapd0/49:
[   99.353468]  #0: ffffffffa654fa80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x3cd/0x760
[   99.354017]
[   99.354017] stack backtrace:
[   99.354316] CPU: 0 UID: 0 PID: 49 Comm: kswapd0 Not tainted 6.14.0-rc6-ge211b31825f4 #5
[   99.354318] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[   99.354320] Call Trace:
[   99.354321]  <TASK>
[   99.354323]  dump_stack_lvl+0x82/0xd0
[   99.354326]  print_circular_bug+0x26a/0x330
[   99.354329]  check_noncircular+0x14e/0x170
[   99.354331]  ? lock_acquire+0xc9/0x2d0
[   99.354334]  __lock_acquire+0x15e3/0x23e0
[   99.354337]  lock_acquire+0xc9/0x2d0
[   99.354339]  ? zswap_store+0x3ec/0xf40
[   99.354342]  ? find_held_lock+0x2b/0x80
[   99.354346]  __mutex_lock+0x95/0xda0
[   99.354348]  ? zswap_store+0x3ec/0xf40
[   99.354350]  ? __create_object+0x5e/0x90
[   99.354353]  ? zswap_store+0x3ec/0xf40
[   99.354355]  ? kmem_cache_alloc_node_noprof+0x2de/0x3e0
[   99.354358]  ? zswap_store+0x3ec/0xf40
[   99.354360]  zswap_store+0x3ec/0xf40
[   99.354364]  ? _raw_spin_unlock+0x23/0x30
[   99.354368]  ? folio_free_swap+0x98/0x190
[   99.354371]  swap_writepage+0x114/0x530
[   99.354373]  pageout+0xf9/0x2d0
[   99.354381]  shrink_folio_list+0x7f8/0xdf0
[   99.354385]  ? __mod_memcg_lruvec_state+0x20f/0x280
[   99.354389]  ? isolate_lru_folios+0x465/0x610
[   99.354392]  ? find_held_lock+0x2b/0x80
[   99.354395]  ? mark_held_locks+0x49/0x80
[   99.354398]  shrink_lruvec+0x7d2/0xda0
[   99.354402]  ? find_held_lock+0x2b/0x80
[   99.354407]  ? shrink_node+0x30a/0x840
[   99.354409]  shrink_node+0x30a/0x840
[   99.354413]  balance_pgdat+0x349/0x760
[   99.354418]  kswapd+0x1db/0x3c0
[   99.354421]  ? __pfx_autoremove_wake_function+0x10/0x10
[   99.354424]  ? __pfx_kswapd+0x10/0x10
[   99.354426]  kthread+0xef/0x230
[   99.354429]  ? __pfx_kthread+0x10/0x10
[   99.354432]  ret_from_fork+0x34/0x50
[   99.354435]  ? __pfx_kthread+0x10/0x10
[   99.354437]  ret_from_fork_asm+0x1a/0x30
[   99.354441]  </TASK>

With Yosry's fix, it goes away.

So I guess:

Tested-by: Nhat Pham <nphamcs@gmail.com>



* Re: 6.14.0-rc6 lockdep warning kswapd
  2025-03-24 17:06       ` Nhat Pham
@ 2025-03-24 18:24         ` Yosry Ahmed
  0 siblings, 0 replies; 7+ messages in thread
From: Yosry Ahmed @ 2025-03-24 18:24 UTC (permalink / raw)
  To: Nhat Pham; +Cc: Chris Murphy, Linux List, Andrew Morton

On Mon, Mar 24, 2025 at 01:06:32PM -0400, Nhat Pham wrote:
> I checked out mm-unstable and reverted Yosry's fix, then hammered the
> box with my usual "firehose" stress test script.
>
> [... lockdep splat snipped; see the previous message for the full trace ...]
>
> With Yosry's fix, it goes away.
> 
> So I guess:
> 
> Tested-by: Nhat Pham <nphamcs@gmail.com>

Thanks for confirming this! That said, I think replying with your
Tested-by on the original patch thread would make it easier for Andrew
to track down.


