From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Nhat Pham <nphamcs@gmail.com>
Cc: Chris Murphy <lists@colorremedies.com>,
	Linux List <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: 6.14.0-rc6 lockdep warning kswapd
Date: Mon, 24 Mar 2025 18:24:44 +0000
Message-ID: <Z-GjbPTEEoo76uQu@google.com>
In-Reply-To: <CAKEwX=P91a9ZQVUpwVU6gP95ZfWRN7mvzgE_v9wCdp7Bww-j7g@mail.gmail.com>

On Mon, Mar 24, 2025 at 01:06:32PM -0400, Nhat Pham wrote:
> On Fri, Mar 21, 2025 at 7:25 PM Chris Murphy <lists@colorremedies.com> wrote:
> >
> >
> >
> > On Thu, Mar 20, 2025, at 10:28 PM, Nhat Pham wrote:
> >
> > > I don't think this is in rc6 yet. Can you apply the patch and test again?
> >
> > At the moment I don't have a way to test the patch.
> >
> > Also, I failed to mention that this is a swapfile on btrfs on dm-crypt.
> >
> > --
> > Chris Murphy
> 
> I checked out mm-unstable and reverted Yosry's fix. With my usual
> "firehose" stress test script, I was able to trigger:
> 
> [   99.324986]
> [   99.325146] ======================================================
> [   99.325569] WARNING: possible circular locking dependency detected
> [   99.325983] 6.14.0-rc6-ge211b31825f4 #5 Not tainted
> [   99.326383] ------------------------------------------------------
> [   99.326803] kswapd0/49 is trying to acquire lock:
> [   99.327122] ffffd8893fc1e0e0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store+0x3ec/0xf40
> [   99.327812]
> [   99.327812] but task is already holding lock:
> [   99.328202] ffffffffa654fa80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x3cd/0x760
> [   99.328726]
> [   99.328726] which lock already depends on the new lock.
> [   99.328726]
> [   99.329266]
> [   99.329266] the existing dependency chain (in reverse order) is:
> [   99.329851]
> [   99.329851] -> #2 (fs_reclaim){+.+.}-{0:0}:
> [   99.330415]        fs_reclaim_acquire+0x9d/0xd0
> [   99.330908]        __kmalloc_cache_node_noprof+0x57/0x3e0
> [   99.331497]        __get_vm_area_node+0x85/0x140
> [   99.331998]        __vmalloc_node_range_noprof+0x13e/0x800
> [   99.332573]        __vmalloc_node_noprof+0x4e/0x70
> [   99.333062]        crypto_scomp_init_tfm+0xc2/0xe0
> [   99.333584]        crypto_create_tfm_node+0x47/0xe0
> [   99.334107]        crypto_init_scomp_ops_async+0x38/0xa0
> [   99.334649]        crypto_create_tfm_node+0x47/0xe0
> [   99.334963]        crypto_alloc_tfm_node+0x5e/0xe0
> [   99.335279]        zswap_cpu_comp_prepare+0x78/0x180
> [   99.335623]        cpuhp_invoke_callback+0xbd/0x6b0
> [   99.335957]        cpuhp_issue_call+0xa4/0x250
> [   99.336279]        __cpuhp_state_add_instance_cpuslocked+0x9d/0x160
> [   99.336906]        __cpuhp_state_add_instance+0x75/0x190
> [   99.337448]        zswap_pool_create+0x17e/0x2b0
> [   99.337783]        zswap_setup+0x27d/0x5a0
> [   99.338059]        do_one_initcall+0x5d/0x390
> [   99.338355]        kernel_init_freeable+0x22f/0x410
> [   99.338689]        kernel_init+0x1a/0x1d0
> [   99.338966]        ret_from_fork+0x34/0x50
> [   99.339249]        ret_from_fork_asm+0x1a/0x30
> [   99.339558]
> [   99.339558] -> #1 (scomp_lock){+.+.}-{4:4}:
> [   99.340155]        __mutex_lock+0x95/0xda0
> [   99.340431]        crypto_exit_scomp_ops_async+0x23/0x50
> [   99.340795]        crypto_destroy_tfm+0x61/0xc0
> [   99.341104]        zswap_cpu_comp_dead+0x6c/0x90
> [   99.341415]        cpuhp_invoke_callback+0xbd/0x6b0
> [   99.341744]        cpuhp_issue_call+0xa4/0x250
> [   99.342043]        __cpuhp_state_remove_instance+0x100/0x250
> [   99.342423]        __zswap_pool_release+0x46/0xa0
> [   99.342747]        process_one_work+0x210/0x5e0
> [   99.343054]        worker_thread+0x183/0x320
> [   99.343346]        kthread+0xef/0x230
> [   99.343605]        ret_from_fork+0x34/0x50
> [   99.343881]        ret_from_fork_asm+0x1a/0x30
> [   99.344179]
> [   99.344179] -> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
> [   99.344732]        __lock_acquire+0x15e3/0x23e0
> [   99.345040]        lock_acquire+0xc9/0x2d0
> [   99.345315]        __mutex_lock+0x95/0xda0
> [   99.345597]        zswap_store+0x3ec/0xf40
> [   99.345875]        swap_writepage+0x114/0x530
> [   99.346183]        pageout+0xf9/0x2d0
> [   99.346435]        shrink_folio_list+0x7f8/0xdf0
> [   99.346750]        shrink_lruvec+0x7d2/0xda0
> [   99.347040]        shrink_node+0x30a/0x840
> [   99.347315]        balance_pgdat+0x349/0x760
> [   99.347610]        kswapd+0x1db/0x3c0
> [   99.347858]        kthread+0xef/0x230
> [   99.348107]        ret_from_fork+0x34/0x50
> [   99.348381]        ret_from_fork_asm+0x1a/0x30
> [   99.348687]
> [   99.348687] other info that might help us debug this:
> [   99.348687]
> [   99.349228] Chain exists of:
> [   99.349228]   &per_cpu_ptr(pool->acomp_ctx, cpu)->mutex --> scomp_lock --> fs_reclaim
> [   99.349228]
> [   99.350214]  Possible unsafe locking scenario:
> [   99.350214]
> [   99.350612]        CPU0                    CPU1
> [   99.350916]        ----                    ----
> [   99.351219]   lock(fs_reclaim);
> [   99.351437]                                lock(scomp_lock);
> [   99.351820]                                lock(fs_reclaim);
> [   99.352196]   lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
> [   99.352588]
> [   99.352588]  *** DEADLOCK ***
> [   99.352588]
> [   99.353021] 1 lock held by kswapd0/49:
> [   99.353468]  #0: ffffffffa654fa80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x3cd/0x760
> [   99.354017]
> [   99.354017] stack backtrace:
> [   99.354316] CPU: 0 UID: 0 PID: 49 Comm: kswapd0 Not tainted 6.14.0-rc6-ge211b31825f4 #5
> [   99.354318] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> [   99.354320] Call Trace:
> [   99.354321]  <TASK>
> [   99.354323]  dump_stack_lvl+0x82/0xd0
> [   99.354326]  print_circular_bug+0x26a/0x330
> [   99.354329]  check_noncircular+0x14e/0x170
> [   99.354331]  ? lock_acquire+0xc9/0x2d0
> [   99.354334]  __lock_acquire+0x15e3/0x23e0
> [   99.354337]  lock_acquire+0xc9/0x2d0
> [   99.354339]  ? zswap_store+0x3ec/0xf40
> [   99.354342]  ? find_held_lock+0x2b/0x80
> [   99.354346]  __mutex_lock+0x95/0xda0
> [   99.354348]  ? zswap_store+0x3ec/0xf40
> [   99.354350]  ? __create_object+0x5e/0x90
> [   99.354353]  ? zswap_store+0x3ec/0xf40
> [   99.354355]  ? kmem_cache_alloc_node_noprof+0x2de/0x3e0
> [   99.354358]  ? zswap_store+0x3ec/0xf40
> [   99.354360]  zswap_store+0x3ec/0xf40
> [   99.354364]  ? _raw_spin_unlock+0x23/0x30
> [   99.354368]  ? folio_free_swap+0x98/0x190
> [   99.354371]  swap_writepage+0x114/0x530
> [   99.354373]  pageout+0xf9/0x2d0
> [   99.354381]  shrink_folio_list+0x7f8/0xdf0
> [   99.354385]  ? __mod_memcg_lruvec_state+0x20f/0x280
> [   99.354389]  ? isolate_lru_folios+0x465/0x610
> [   99.354392]  ? find_held_lock+0x2b/0x80
> [   99.354395]  ? mark_held_locks+0x49/0x80
> [   99.354398]  shrink_lruvec+0x7d2/0xda0
> [   99.354402]  ? find_held_lock+0x2b/0x80
> [   99.354407]  ? shrink_node+0x30a/0x840
> [   99.354409]  shrink_node+0x30a/0x840
> [   99.354413]  balance_pgdat+0x349/0x760
> [   99.354418]  kswapd+0x1db/0x3c0
> [   99.354421]  ? __pfx_autoremove_wake_function+0x10/0x10
> [   99.354424]  ? __pfx_kswapd+0x10/0x10
> [   99.354426]  kthread+0xef/0x230
> [   99.354429]  ? __pfx_kthread+0x10/0x10
> [   99.354432]  ret_from_fork+0x34/0x50
> [   99.354435]  ? __pfx_kthread+0x10/0x10
> [   99.354437]  ret_from_fork_asm+0x1a/0x30
> [   99.354441]  </TASK>
> 
> With Yosry's fix, it goes away.
> 
> So I guess:
> 
> Tested-by: Nhat Pham <nphamcs@gmail.com>

Thanks for confirming this! That said, replying with your Tested-by on
the original patch thread would probably make it easier for Andrew to
track.
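
For reference, the cycle lockdep is reporting involves three locks:
fs_reclaim -> acomp_ctx mutex (kswapd entering zswap_store() during
reclaim, chain #0), acomp_ctx mutex -> scomp_lock (the CPU-hotplug dead
callback destroying the compression tfm, chain #1), and scomp_lock ->
fs_reclaim (the prepare callback allocating under scomp_lock, chain #2).
Below is a minimal userspace sketch of the same ordering pattern; the
lock and function names only mirror the trace above, and fs_reclaim
(really a lockdep annotation for reclaim context) is modeled as a plain
mutex. This is illustrative pthread code, not the actual zswap/crypto
paths.

/* Illustrative only: reproduces the three-lock ordering cycle from the
 * lockdep report with pthread mutexes.  Whether it actually deadlocks
 * depends on timing; lockdep flags the *potential* cycle.
 */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t fs_reclaim     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t scomp_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t acomp_ctx_lock = PTHREAD_MUTEX_INITIALIZER;

/* kswapd: in reclaim (fs_reclaim held), then takes the per-CPU
 * acomp_ctx mutex in zswap_store() (chain #0 above). */
static void *reclaim_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&fs_reclaim);
        pthread_mutex_lock(&acomp_ctx_lock);
        pthread_mutex_unlock(&acomp_ctx_lock);
        pthread_mutex_unlock(&fs_reclaim);
        return NULL;
}

/* CPU-hotplug dead callback: holds the acomp_ctx mutex, then
 * crypto_destroy_tfm() takes scomp_lock (chain #1 above). */
static void *cpuhp_dead_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&acomp_ctx_lock);
        pthread_mutex_lock(&scomp_lock);
        pthread_mutex_unlock(&scomp_lock);
        pthread_mutex_unlock(&acomp_ctx_lock);
        return NULL;
}

/* CPU-hotplug prepare callback: crypto_scomp_init_tfm() holds
 * scomp_lock while vmalloc'ing scratch buffers, which can enter
 * reclaim, i.e. fs_reclaim (chain #2 above). */
static void *cpuhp_prepare_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&scomp_lock);
        pthread_mutex_lock(&fs_reclaim);
        pthread_mutex_unlock(&fs_reclaim);
        pthread_mutex_unlock(&scomp_lock);
        return NULL;
}

int main(void)
{
        pthread_t t[3];

        pthread_create(&t[0], NULL, reclaim_path, NULL);
        pthread_create(&t[1], NULL, cpuhp_dead_path, NULL);
        pthread_create(&t[2], NULL, cpuhp_prepare_path, NULL);
        for (int i = 0; i < 3; i++)
                pthread_join(t[i], NULL);
        return 0;
}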


Thread overview (7 messages):
2025-03-13  3:52 Chris Murphy
2025-03-20  4:31 ` Chris Murphy
2025-03-21  4:28   ` Nhat Pham
2025-03-21 18:57     ` Yosry Ahmed
2025-03-21 23:24     ` Chris Murphy
2025-03-24 17:06       ` Nhat Pham
2025-03-24 18:24         ` Yosry Ahmed [this message]
