From: David Howells <dhowells@redhat.com>
To: Christian Brauner <christian@brauner.io>,
Steve French <smfrench@gmail.com>,
Matthew Wilcox <willy@infradead.org>
Cc: David Howells <dhowells@redhat.com>,
Jeff Layton <jlayton@kernel.org>,
Gao Xiang <hsiangkao@linux.alibaba.com>,
Dominique Martinet <asmadeus@codewreck.org>,
Marc Dionne <marc.dionne@auristor.com>,
Paulo Alcantara <pc@manguebit.com>,
Shyam Prasad N <sprasad@microsoft.com>,
Tom Talpey <tom@talpey.com>,
Eric Van Hensbergen <ericvh@kernel.org>,
Ilya Dryomov <idryomov@gmail.com>,
netfs@lists.linux.dev, linux-afs@lists.infradead.org,
linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/25] netfs: Set the request work function upon allocation
Date: Wed, 14 Aug 2024 21:38:30 +0100
Message-ID: <20240814203850.2240469-11-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>

Set the work function in the netfs_io_request work_struct when we allocate
the request rather than doing this later.  This reduces the number of
places we need to set it in future code.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton <jlayton@kernel.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/internal.h | 1 +
fs/netfs/io.c | 4 +---
fs/netfs/objects.c | 9 ++++++++-
fs/netfs/write_issue.c | 1 -
4 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 9e6e0e59d7e4..f2920b4ee726 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -29,6 +29,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
/*
* io.c
*/
+void netfs_rreq_work(struct work_struct *work);
int netfs_begin_read(struct netfs_io_request *rreq, bool sync);
/*
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index ce3e821b4e4f..8b9aaa99d787 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -422,7 +422,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
netfs_rreq_completed(rreq, was_async);
}
-static void netfs_rreq_work(struct work_struct *work)
+void netfs_rreq_work(struct work_struct *work)
{
struct netfs_io_request *rreq =
container_of(work, struct netfs_io_request, work);
@@ -734,8 +734,6 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
// TODO: Use bounce buffer if requested
rreq->io_iter = rreq->iter;
- INIT_WORK(&rreq->work, netfs_rreq_work);
-
/* Chop the read into slices according to what the cache and the netfs
* want and submit each one.
*/
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 0294df70c3ff..d6e9785ce7a3 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -48,9 +48,16 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
INIT_LIST_HEAD(&rreq->io_streams[0].subrequests);
INIT_LIST_HEAD(&rreq->io_streams[1].subrequests);
INIT_LIST_HEAD(&rreq->subrequests);
- INIT_WORK(&rreq->work, NULL);
refcount_set(&rreq->ref, 1);
+ if (origin == NETFS_READAHEAD ||
+ origin == NETFS_READPAGE ||
+ origin == NETFS_READ_FOR_WRITE ||
+ origin == NETFS_DIO_READ)
+ INIT_WORK(&rreq->work, netfs_rreq_work);
+ else
+ INIT_WORK(&rreq->work, netfs_write_collection_worker);
+
__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
if (file && file->f_flags & O_NONBLOCK)
__set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags);
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 34e541afd79b..41db709ca1d3 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -109,7 +109,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
wreq->contiguity = wreq->start;
wreq->cleaned_to = wreq->start;
- INIT_WORK(&wreq->work, netfs_write_collection_worker);
wreq->io_streams[0].stream_nr = 0;
wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER;