From: David Howells <dhowells@redhat.com>
To: Christian Brauner <christian@brauner.io>,
Steve French <smfrench@gmail.com>,
Matthew Wilcox <willy@infradead.org>
Cc: David Howells <dhowells@redhat.com>,
Jeff Layton <jlayton@kernel.org>,
Gao Xiang <hsiangkao@linux.alibaba.com>,
Dominique Martinet <asmadeus@codewreck.org>,
Marc Dionne <marc.dionne@auristor.com>,
Paulo Alcantara <pc@manguebit.com>,
Shyam Prasad N <sprasad@microsoft.com>,
Tom Talpey <tom@talpey.com>,
Eric Van Hensbergen <ericvh@kernel.org>,
Ilya Dryomov <idryomov@gmail.com>,
netfs@lists.linux.dev, linux-afs@lists.infradead.org,
linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Steve French <sfrench@samba.org>,
Enzo Matsumiya <ematsumiya@suse.de>
Subject: [PATCH v2 14/25] cifs: Provide the capability to extract from ITER_FOLIOQ to RDMA SGEs
Date: Wed, 14 Aug 2024 21:38:34 +0100
Message-ID: <20240814203850.2240469-15-dhowells@redhat.com>
In-Reply-To: <20240814203850.2240469-1-dhowells@redhat.com>
Make smb_extract_iter_to_rdma() extract page fragments from an ITER_FOLIOQ
iterator into RDMA SGEs.  To support this, the iterator advance is moved out
of the caller and into the individual extraction routines, since the folioq
extractor updates the iterator state (queue, slot and offset) directly.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <sfrench@samba.org>
cc: Paulo Alcantara <pc@manguebit.com>
cc: Tom Talpey <tom@talpey.com>
cc: Enzo Matsumiya <ematsumiya@suse.de>
cc: linux-cifs@vger.kernel.org
---
fs/smb/client/smbdirect.c | 71 +++++++++++++++++++++++++++++++++++++--
1 file changed, 68 insertions(+), 3 deletions(-)
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index 7bcc379014ca..c946b38ca825 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -6,6 +6,7 @@
*/
#include <linux/module.h>
#include <linux/highmem.h>
+#include <linux/folio_queue.h>
#include "smbdirect.h"
#include "cifs_debug.h"
#include "cifsproto.h"
@@ -2463,6 +2464,8 @@ static ssize_t smb_extract_bvec_to_rdma(struct iov_iter *iter,
start = 0;
}
+ if (ret > 0)
+ iov_iter_advance(iter, ret);
return ret;
}
@@ -2519,6 +2522,65 @@ static ssize_t smb_extract_kvec_to_rdma(struct iov_iter *iter,
start = 0;
}
+ if (ret > 0)
+ iov_iter_advance(iter, ret);
+ return ret;
+}
+
+/*
+ * Extract folio fragments from a FOLIOQ-class iterator and add them to an RDMA
+ * list. The folios are not pinned.
+ */
+static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
+ struct smb_extract_to_rdma *rdma,
+ ssize_t maxsize)
+{
+ const struct folio_queue *folioq = iter->folioq;
+ unsigned int slot = iter->folioq_slot;
+ ssize_t ret = 0;
+ size_t offset = iter->iov_offset;
+
+ BUG_ON(!folioq);
+
+ if (slot >= folioq_nr_slots(folioq)) {
+ folioq = folioq->next;
+ if (WARN_ON_ONCE(!folioq))
+ return -EIO;
+ slot = 0;
+ }
+
+ do {
+ struct folio *folio = folioq_folio(folioq, slot);
+ size_t fsize = folioq_folio_size(folioq, slot);
+
+ if (offset < fsize) {
+ size_t part = umin(maxsize - ret, fsize - offset);
+
+ if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part))
+ return -EIO;
+
+ offset += part;
+ ret += part;
+ }
+
+ if (offset >= fsize) {
+ offset = 0;
+ slot++;
+ if (slot >= folioq_nr_slots(folioq)) {
+ if (!folioq->next) {
+ WARN_ON_ONCE(ret < iter->count);
+ break;
+ }
+ folioq = folioq->next;
+ slot = 0;
+ }
+ }
+ } while (rdma->nr_sge < rdma->max_sge && ret < maxsize);
+
+ iter->folioq = folioq;
+ iter->folioq_slot = slot;
+ iter->iov_offset = offset;
+ iter->count -= ret;
return ret;
}
@@ -2563,6 +2625,8 @@ static ssize_t smb_extract_xarray_to_rdma(struct iov_iter *iter,
}
rcu_read_unlock();
+ if (ret > 0)
+ iov_iter_advance(iter, ret);
return ret;
}
@@ -2590,6 +2654,9 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
case ITER_KVEC:
ret = smb_extract_kvec_to_rdma(iter, rdma, len);
break;
+ case ITER_FOLIOQ:
+ ret = smb_extract_folioq_to_rdma(iter, rdma, len);
+ break;
case ITER_XARRAY:
ret = smb_extract_xarray_to_rdma(iter, rdma, len);
break;
@@ -2598,9 +2665,7 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
return -EIO;
}
- if (ret > 0) {
- iov_iter_advance(iter, ret);
- } else if (ret < 0) {
+ if (ret < 0) {
while (rdma->nr_sge > before) {
struct ib_sge *sge = &rdma->sge[rdma->nr_sge--];