From: Dave Chinner <david@fromorbit.com>
To: Alex Shi <seakeel@gmail.com>
Cc: linux-xfs@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: xfs deadlock on mm-unstable kernel?
Date: Mon, 8 Jul 2024 20:14:44 +1000 [thread overview]
Message-ID: <Zou8FCgPKqqWXKyS@dread.disaster.area> (raw)
In-Reply-To: <e5814465-b39a-44d8-aa3d-427773c9ae16@gmail.com>
On Mon, Jul 08, 2024 at 04:36:08PM +0800, Alex Shi wrote:
> [ 372.297234][ T3001] ============================================
> [ 372.297530][ T3001] WARNING: possible recursive locking detected
> [ 372.297827][ T3001] 6.10.0-rc6-00453-g2be3de2b70e6 #64 Not tainted
> [ 372.298137][ T3001] --------------------------------------------
> [ 372.298436][ T3001] cc1/3001 is trying to acquire lock:
> [ 372.298701][ T3001] ffff88802cb910d8 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_reclaim_inode+0x59e/0x710
> [ 372.299242][ T3001]
> [ 372.299242][ T3001] but task is already holding lock:
> [ 372.299679][ T3001] ffff88800e145e58 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_ilock_data_map_shared+0x4d/0x60
> [ 372.300258][ T3001]
> [ 372.300258][ T3001] other info that might help us debug this:
> [ 372.300650][ T3001] Possible unsafe locking scenario:
> [ 372.300650][ T3001]
> [ 372.301031][ T3001] CPU0
> [ 372.301231][ T3001] ----
> [ 372.301386][ T3001] lock(&xfs_dir_ilock_class);
> [ 372.301623][ T3001] lock(&xfs_dir_ilock_class);
> [ 372.301860][ T3001]
> [ 372.301860][ T3001] *** DEADLOCK ***
> [ 372.301860][ T3001]
> [ 372.302325][ T3001] May be due to missing lock nesting notation
> [ 372.302325][ T3001]
> [ 372.302723][ T3001] 3 locks held by cc1/3001:
> [ 372.302944][ T3001] #0: ffff88800e146078 (&inode->i_sb->s_type->i_mutex_dir_key){++++}-{3:3}, at: walk_component+0x2a5/0x500
> [ 372.303554][ T3001] #1: ffff88800e145e58 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_ilock_data_map_shared+0x4d/0x60
> [ 372.304183][ T3001] #2: ffff8880040190e0 (&type->s_umount_key#48){++++}-{3:3}, at: super_cache_scan+0x82/0x4e0
False positive. The inode locked above in the lookup path must be
actively referenced, while inodes accessed by xfs_reclaim_inode() must
have no references and have already been evicted and destroyed by the
VFS. So there is no way that an unreferenced inode being locked for
reclaim in xfs_reclaim_inode() can deadlock against the referenced
inode locked by the inode lookup code.
Unfortunately, we don't have enough lockdep subclasses available to
annotate this correctly - we're already using all
MAX_LOCKDEP_SUBCLASSES to tell lockdep about all the ways we can
nest inode locks. That leaves us no space to add a "reclaim"
annotation for locking done from super_cache_scan() paths that would
avoid these false positives....
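For illustration only, not a proposed patch: the sketch below shows
roughly what such a "reclaim" subclass annotation would look like if a
spare subclass slot existed. down_write_nested() and
MAX_LOCKDEP_SUBCLASSES (8) are the real kernel API and constant; the
subclass name and function here are hypothetical, not XFS code.

	/*
	 * Hypothetical sketch -- not XFS code. A dedicated lockdep
	 * subclass would let the reclaim-side ilock acquisition be
	 * treated as a different nesting level than the lookup-side
	 * acquisition, which is what would silence the report above.
	 */
	#include <linux/lockdep.h>
	#include <linux/rwsem.h>

	/* hypothetical: needs a free slot below MAX_LOCKDEP_SUBCLASSES (8) */
	#define ILOCK_RECLAIM_SUBCLASS	7

	static void reclaim_lock_demo(struct rw_semaphore *ilock)
	{
		/*
		 * Lookup paths take the ilock with subclass 0; taking it
		 * here with a distinct subclass tells lockdep this is not
		 * recursive locking on the same lock class.
		 */
		down_write_nested(ilock, ILOCK_RECLAIM_SUBCLASS);

		/* ... reclaim work on the unreferenced inode ... */

		up_write(ilock);
	}

The problem described above is that all eight subclasses are already
consumed by the existing inode lock nesting annotations, so there is
no slot left for a reclaim subclass like this.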
-Dave.
--
Dave Chinner
david@fromorbit.com