From: Nathan Chancellor <nathan@kernel.org>
To: Simon Horman <horms@kernel.org>,
David Howells <dhowells@redhat.com>,
Christian Brauner <christian@brauner.io>
Cc: Steve French <smfrench@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
Jeff Layton <jlayton@kernel.org>,
Gao Xiang <hsiangkao@linux.alibaba.com>,
Dominique Martinet <asmadeus@codewreck.org>,
Marc Dionne <marc.dionne@auristor.com>,
Paulo Alcantara <pc@manguebit.com>,
Shyam Prasad N <sprasad@microsoft.com>,
Tom Talpey <tom@talpey.com>,
Eric Van Hensbergen <ericvh@kernel.org>,
Ilya Dryomov <idryomov@gmail.com>,
netfs@lists.linux.dev, linux-afs@lists.infradead.org,
linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 18/24] netfs: Speed up buffered reading
Date: Thu, 1 Aug 2024 11:53:21 -0700 [thread overview]
Message-ID: <20240801185321.GA2534812@thelio-3990X> (raw)
In-Reply-To: <20240731190742.GS1967603@kernel.org>
On Wed, Jul 31, 2024 at 08:07:42PM +0100, Simon Horman wrote:
> On Mon, Jul 29, 2024 at 05:19:47PM +0100, David Howells wrote:
>
> ...
>
> > diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
>
> ...
>
> > +/*
> > + * Perform a read to the pagecache from a series of sources of different types,
> > + * slicing up the region to be read according to available cache blocks and
> > + * network rsize.
> > + */
> > +static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
> > +{
> > + struct netfs_inode *ictx = netfs_inode(rreq->inode);
> > + unsigned long long start = rreq->start;
> > + ssize_t size = rreq->len;
> > + int ret = 0;
> > +
> > + atomic_inc(&rreq->nr_outstanding);
> > +
> > + do {
> > + struct netfs_io_subrequest *subreq;
> > + enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER;
> > + ssize_t slice;
> > +
> > + subreq = netfs_alloc_subrequest(rreq);
> > + if (!subreq) {
> > + ret = -ENOMEM;
> > + break;
> > + }
> > +
> > + subreq->start = start;
> > + subreq->len = size;
> > +
> > + atomic_inc(&rreq->nr_outstanding);
> > + spin_lock_bh(&rreq->lock);
> > + list_add_tail(&subreq->rreq_link, &rreq->subrequests);
> > + subreq->prev_donated = rreq->prev_donated;
> > + rreq->prev_donated = 0;
> > + trace_netfs_sreq(subreq, netfs_sreq_trace_added);
> > + spin_unlock_bh(&rreq->lock);
> > +
> > + source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
> > + subreq->source = source;
> > + if (source == NETFS_DOWNLOAD_FROM_SERVER) {
> > + unsigned long long zp = umin(ictx->zero_point, rreq->i_size);
> > + size_t len = subreq->len;
> > +
> > + if (subreq->start >= zp) {
> > + subreq->source = source = NETFS_FILL_WITH_ZEROES;
> > + goto fill_with_zeroes;
> > + }
> > +
> > + if (len > zp - subreq->start)
> > + len = zp - subreq->start;
> > + if (len == 0) {
> > + pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx",
> > + rreq->debug_id, subreq->debug_index,
> > + subreq->len, size,
> > + subreq->start, ictx->zero_point, rreq->i_size);
> > + break;
> > + }
> > + subreq->len = len;
> > +
> > + netfs_stat(&netfs_n_rh_download);
> > + if (rreq->netfs_ops->prepare_read) {
> > + ret = rreq->netfs_ops->prepare_read(subreq);
> > + if (ret < 0) {
> > + atomic_dec(&rreq->nr_outstanding);
> > + netfs_put_subrequest(subreq, false,
> > + netfs_sreq_trace_put_cancel);
> > + break;
> > + }
> > + trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
> > + }
> > +
> > + slice = netfs_prepare_read_iterator(subreq);
> > + if (slice < 0) {
> > + atomic_dec(&rreq->nr_outstanding);
> > + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
> > + ret = slice;
> > + break;
> > + }
> > +
> > + rreq->netfs_ops->issue_read(subreq);
> > + goto done;
> > + }
> > +
> > + fill_with_zeroes:
> > + if (source == NETFS_FILL_WITH_ZEROES) {
> > + subreq->source = NETFS_FILL_WITH_ZEROES;
> > + trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
> > + netfs_stat(&netfs_n_rh_zero);
> > + slice = netfs_prepare_read_iterator(subreq);
> > + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
> > + netfs_read_subreq_terminated(subreq, 0, false);
> > + goto done;
> > + }
> > +
> > + if (source == NETFS_READ_FROM_CACHE) {
> > + trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
> > + slice = netfs_prepare_read_iterator(subreq);
> > + netfs_read_cache_to_pagecache(rreq, subreq);
> > + goto done;
> > + }
> > +
> > + if (source == NETFS_INVALID_READ)
> > + break;
>
> Hi David,
>
> I feel a sense of deja vu here. So apologies if this was already
> discussed in the past.
>
> If the code ever reaches this line, then slice will be used
> uninitialised below.
>
> Flagged by W=1 allmodconfig builds on x86_64 with Clang 18.1.8.
This warning now breaks the build in next-20240801:
fs/netfs/buffered_read.c:304:7: error: variable 'slice' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
304 | if (source == NETFS_INVALID_READ)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/netfs/buffered_read.c:308:11: note: uninitialized use occurs here
308 | size -= slice;
| ^~~~~
fs/netfs/buffered_read.c:304:3: note: remove the 'if' if its condition is always true
304 | if (source == NETFS_INVALID_READ)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
305 | break;
fs/netfs/buffered_read.c:221:16: note: initialize the variable 'slice' to silence this warning
221 | ssize_t slice;
| ^
| = 0
1 error generated.
If source has to be one of these values, perhaps converting this to a
switch statement with a default case containing a WARN_ON() (or
something similar) would convey that to the compiler?
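Something along these lines is what I have in mind (completely
untested, just a sketch of the structure; the 'abandon' label is
hypothetical and stands in for however you want to leave the do-while
loop):

	switch (source) {
	case NETFS_DOWNLOAD_FROM_SERVER:
		/* existing download path, which assigns 'slice' via
		 * netfs_prepare_read_iterator() before issuing the read */
		break;
	case NETFS_FILL_WITH_ZEROES:
		/* existing zero-fill path, which also assigns 'slice' */
		break;
	case NETFS_READ_FROM_CACHE:
		/* existing cache-read path, which also assigns 'slice' */
		break;
	case NETFS_INVALID_READ:
	default:
		WARN_ON(1);
		goto abandon;	/* hypothetical: bail out of the loop */
	}

	size -= slice;

The other option the note suggests, initializing slice to 0, would
silence the warning but would also hide the problem if a new source
type is ever added without a corresponding branch here.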
Cheers,
Nathan
Thread overview: 31+ messages
2024-07-29 16:19 [PATCH 00/24] netfs: Read/write improvements David Howells
2024-07-29 16:19 ` [PATCH 01/24] fs/netfs/fscache_cookie: add missing "n_accesses" check David Howells
2024-07-29 16:19 ` [PATCH 02/24] cachefiles: Fix non-taking of sb_writers around set/removexattr David Howells
2024-07-29 16:19 ` [PATCH 03/24] netfs: Adjust labels in /proc/fs/netfs/stats David Howells
2024-07-29 16:19 ` [PATCH 04/24] netfs: Record contention stats for writeback lock David Howells
2024-07-29 16:19 ` [PATCH 05/24] netfs: Reduce number of conditional branches in netfs_perform_write() David Howells
2024-07-29 16:19 ` [PATCH 06/24] netfs, cifs: Move CIFS_INO_MODIFIED_ATTR to netfs_inode David Howells
2024-07-29 16:19 ` [PATCH 07/24] netfs: Move max_len/max_nr_segs from netfs_io_subrequest to netfs_io_stream David Howells
2024-07-29 16:19 ` [PATCH 08/24] netfs: Reserve netfs_sreq_source 0 as unset/unknown David Howells
2024-07-29 16:19 ` [PATCH 09/24] netfs: Remove NETFS_COPY_TO_CACHE David Howells
2024-07-29 16:19 ` [PATCH 10/24] netfs: Set the request work function upon allocation David Howells
2024-07-29 16:19 ` [PATCH 11/24] netfs: Use bh-disabling spinlocks for rreq->lock David Howells
2024-07-29 16:19 ` [PATCH 12/24] mm: Define struct folio_queue and ITER_FOLIOQ to handle a sequence of folios David Howells
2024-07-29 16:19 ` [PATCH 13/24] cifs: Provide the capability to extract from ITER_FOLIOQ to RDMA SGEs David Howells
2024-07-29 16:19 ` [PATCH 14/24] netfs: Use new folio_queue data type and iterator instead of xarray iter David Howells
2024-07-29 16:19 ` [PATCH 15/24] netfs: Provide an iterator-reset function David Howells
2024-07-29 16:19 ` [PATCH 16/24] netfs: Simplify the writeback code David Howells
2024-07-29 16:19 ` [PATCH 17/24] afs: Make read subreqs async David Howells
2024-07-29 16:19 ` [PATCH 18/24] netfs: Speed up buffered reading David Howells
2024-07-31 19:07 ` Simon Horman
2024-08-01 18:53 ` Nathan Chancellor [this message]
2024-08-02 14:18 ` David Howells
2024-08-02 14:44 ` Simon Horman
2024-07-29 16:19 ` [PATCH 19/24] netfs: Remove fs/netfs/io.c David Howells
2024-07-29 16:19 ` [PATCH 20/24] cachefiles, netfs: Fix write to partial block at EOF David Howells
2024-07-29 16:19 ` [PATCH 21/24] netfs: Cancel dirty folios that have no storage destination David Howells
2024-07-29 16:19 ` [PATCH 22/24] cifs: Use iterate_and_advance*() routines directly for hashing David Howells
2024-07-29 16:19 ` [PATCH 23/24] cifs: Switch crypto buffer to use a folio_queue rather than an xarray David Howells
2024-07-29 16:19 ` [PATCH 24/24] cifs: Don't support ITER_XARRAY David Howells
2024-07-30 10:36 ` (subset) [PATCH 00/24] netfs: Read/write improvements Christian Brauner
2024-07-30 10:38 ` Christian Brauner