From: Chuck Lever <cel@kernel.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel test robot <oliver.sang@intel.com>,
Chuck Lever <chuck.lever@oracle.com>,
oliver.sang@intel.com, oe-lkp@lists.linux.dev,
ying.huang@intel.com, feng.tang@intel.com, fengwei.yin@intel.com
Subject: [PATCH RFC] libfs: Remove parent dentry locking in offset_iterate_dir()
Date: Tue, 25 Jul 2023 14:31:04 -0400
Message-ID: <169030957098.157536.9938425508695693348.stgit@manet.1015granger.net>
From: Chuck Lever <chuck.lever@oracle.com>

Since offset_iterate_dir() neither walks the parent's d_subdirs list
nor manipulates the d_child linkage of any child, there doesn't seem
to be a reason to hold the parent's d_lock. The offset_ctx's xarray
can be sufficiently protected with just the RCU read lock.

Flame graph data captured during the git regression run shows a
20% reduction in CPU cycles consumed in offset_find_next().
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202307171640.e299f8d5-oliver.sang@intel.com
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
fs/libfs.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
This is a possible fix for the will-it-scale regression recently
reported by the kernel test robot. It passes the git regression
test over NFS and doesn't seem to perturb xfstests.
I'm not able to run lkp here yet, so if anyone can run the
will-it-scale test, please report the results. Many thanks.
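Not part of the patch, just for reviewers: below is a minimal sketch of
how I read offset_find_next() after this change. The rcu_read_lock()/
out: framing is inferred from the existing function and the commit
message and is not visible in the hunk itself, and the function name is
a stand-in. The point is that the xarray walk is protected by RCU
alone, and because no parent lock is held around it, the child's d_lock
no longer needs the DENTRY_D_LOCK_NESTED class (hence the first hunk's
switch from spin_lock_nested() to spin_lock()).

#include <linux/dcache.h>
#include <linux/fs.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/xarray.h>

/* Sketch only: approximates the post-patch offset_find_next() flow. */
static struct dentry *offset_find_next_sketch(struct xa_state *xas)
{
	struct dentry *child, *found = NULL;

	rcu_read_lock();
	/* The xarray walk is protected by RCU, not by the parent's d_lock. */
	child = xas_next_entry(xas, U32_MAX);
	if (!child)
		goto out;
	/* Only the child's lock is taken, so no nesting annotation is needed. */
	spin_lock(&child->d_lock);
	if (simple_positive(child))
		found = dget_dlock(child);
	spin_unlock(&child->d_lock);
out:
	rcu_read_unlock();
	return found;
}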
diff --git a/fs/libfs.c b/fs/libfs.c
index fcc0f1f3c2dc..b69c41fb3c63 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -406,7 +406,7 @@ static struct dentry *offset_find_next(struct xa_state *xas)
 	child = xas_next_entry(xas, U32_MAX);
 	if (!child)
 		goto out;
-	spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
+	spin_lock(&child->d_lock);
 	if (simple_positive(child))
 		found = dget_dlock(child);
 	spin_unlock(&child->d_lock);
@@ -424,17 +424,14 @@ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
 			  inode->i_ino, fs_umode_to_dtype(inode->i_mode));
 }
 
-static void offset_iterate_dir(struct dentry *dir, struct dir_context *ctx)
+static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx)
 {
-	struct inode *inode = d_inode(dir);
 	struct offset_ctx *so_ctx = inode->i_op->get_offset_ctx(inode);
 	XA_STATE(xas, &so_ctx->xa, ctx->pos);
 	struct dentry *dentry;
 
 	while (true) {
-		spin_lock(&dir->d_lock);
 		dentry = offset_find_next(&xas);
-		spin_unlock(&dir->d_lock);
 		if (!dentry)
 			break;
 
@@ -478,7 +475,7 @@ static int offset_readdir(struct file *file, struct dir_context *ctx)
 	if (!dir_emit_dots(file, ctx))
 		return 0;
 
-	offset_iterate_dir(dir, ctx);
+	offset_iterate_dir(d_inode(dir), ctx);
 	return 0;
 }
 