From: Chuck Lever <chuck.lever@oracle.com>
To: linux-nfs@vger.kernel.org, linux-mm@kvack.org
Cc: neilb@suse.de
Subject: [PATCH v3 0/3] Bulk-release pages during NFSD read splice
Date: Thu, 08 Jul 2021 11:26:16 -0400
Message-ID: <162575623717.2532.8517369487503961860.stgit@klimt.1015granger.net>
I'm using "v3" simply because the v2 series of NFSD page allocator
work included the same bulk-release concept in a different form.
v2 has now been merged (thanks, Mel!), but the bulk-release part
of that series was postponed.

Consider v3 to be an RFC refresh.

As with the page allocation side, I'm trying to reduce the average
number of times NFSD invokes the page allocation and release APIs,
both because they can be expensive and because the page allocator is
a resource shared amongst all nfsd threads, so access to it is
partially serialized. This small series tackles a code path that is
frequently invoked when NFSD handles READ operations on local
filesystems that support splice (i.e., most of the popular ones).

The previous version of this proposal placed the unused pages on a
local list and then re-used those pages directly in svc_alloc_arg(),
invoking alloc_pages_bulk_array() only to fill in the remaining
missing rq_pages entries.

This meant that some workloads could cause pages to accrue without
bound, so the finished version of that logic would have had to be
complex, possibly involving a shrinker.

In this version, I'm simply handing the pages back to the page
allocator, so all that complexity vanishes. What makes it more
efficient is that instead of calling put_page() for each page, the
code collects the unused pages in a per-nfsd-thread array and
returns them to the allocator with a bulk-free API (release_pages())
when the array is full.

In this version of the series, each nfsd thread never accrues more
than 16 pages. We can easily make that limit larger or smaller, but
16 already reduces the rate of put_page() calls to a minute fraction
of what it was, and does not consume much additional space in struct
svc_rqst.
Comments welcome!
---
Chuck Lever (3):
NFSD: Clean up splice actor
SUNRPC: Add svc_rqst_replace_page() API
NFSD: Batch release pages during splice read
fs/nfsd/vfs.c | 20 +++++---------------
include/linux/sunrpc/svc.h | 5 +++++
net/sunrpc/svc.c | 29 +++++++++++++++++++++++++++++
3 files changed, 39 insertions(+), 15 deletions(-)
--
Chuck Lever
Thread overview: 8+ messages
2021-07-08 15:26 Chuck Lever [this message]
2021-07-08 15:26 ` [PATCH v3 1/3] NFSD: Clean up splice actor Chuck Lever
2021-07-08 15:26 ` [PATCH v3 2/3] SUNRPC: Add svc_rqst_replace_page() API Chuck Lever
2021-07-08 23:30 ` Matthew Wilcox
2021-07-09 3:04 ` Chuck Lever III
2021-07-08 15:26 ` [PATCH v3 3/3] NFSD: Batch release pages during splice read Chuck Lever
2021-07-08 23:23 ` [PATCH v3 0/3] Bulk-release pages during NFSD read splice NeilBrown
2021-07-09 2:58 ` Chuck Lever III