From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH 3/7] iomap: optional zero range dirty folio processing
Date: Tue, 10 Jun 2025 07:55:52 -0700
Message-ID: <20250610145552.GM6156@frogsfrogsfrogs>
In-Reply-To: <aEgjMtAONSHz6yJT@bfoster>
On Tue, Jun 10, 2025 at 08:21:06AM -0400, Brian Foster wrote:
> On Mon, Jun 09, 2025 at 09:04:20AM -0700, Darrick J. Wong wrote:
> > On Thu, Jun 05, 2025 at 01:33:53PM -0400, Brian Foster wrote:
> > > The only way zero range can currently process unwritten mappings
> > > with dirty pagecache is to check whether the range is dirty before
> > > mapping lookup and then flush when at least one underlying mapping
> > > is unwritten. This ordering is required to prevent iomap lookup from
> > > racing with folio writeback and reclaim.
> > >
> > > Since zero range can skip ranges of unwritten mappings that are
> > > clean in cache, this operation can be improved by allowing the
> > > filesystem to provide a set of dirty folios that require zeroing. In
> > > turn, rather than flush or iterate file offsets, zero range can
> > > iterate on folios in the batch and advance over clean or uncached
> > > ranges in between.
> > >
> > > Add a folio_batch in struct iomap and provide a helper for fs' to
> > > populate the batch at lookup time. Update the folio lookup path to
> > > return the next folio in the batch, if provided, and advance the
> > > iter if the folio starts beyond the current offset.
> > >
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> > > fs/iomap/buffered-io.c | 73 +++++++++++++++++++++++++++++++++++++++---
> > > fs/iomap/iter.c | 6 ++++
> > > include/linux/iomap.h | 4 +++
> > > 3 files changed, 78 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > index 16499655e7b0..cf2f4f869920 100644
> > > --- a/fs/iomap/buffered-io.c
> > > +++ b/fs/iomap/buffered-io.c
> > > @@ -750,6 +750,16 @@ static struct folio *__iomap_get_folio(struct iomap_iter *iter, size_t len)
> > > if (!mapping_large_folio_support(iter->inode->i_mapping))
> > > len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
> > >
> > > + if (iter->fbatch) {
> > > + struct folio *folio = folio_batch_next(iter->fbatch);
> > > +
> > > + if (folio) {
> > > + folio_get(folio);
> > > + folio_lock(folio);
> >
> > Hrm. So each folio that is added to the batch isn't locked, nor does
> > the batch (or iomap) hold a refcount on the folio until we get here. Do
> > we have to re-check that folio->{mapping,index} match what iomap is
> > trying to process? Or can we assume that nobody has removed the folio
> > from the mapping?
> >
>
> The filemap helper grabs a reference to the folio but doesn't
> necessarily lock it. The ref is effectively transferred to the batch
> there and the _get() here creates the iomap reference (i.e. that is
> analogous to the traditional iomap get folio path). The batch is
> ultimately released via folio_batch_release() and the iomap refs dropped
> in the same way regardless of whether iomap grabbed it itself or was
> part of a batch.
Oh, ok, so that's really iomap getting its own ref on the folio to
remain independent of whatever the fbatch code does (or might some day
do).
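
So if I'm reading the series correctly, the refcount lifecycle works
out to roughly this (a sketch using names from this patchset, not the
verbatim code):

	/* fs fills the batch; the filemap lookup's refs move into it */
	iomap_fill_dirty_folios(&iter, pos, len);

	/* __iomap_get_folio(): iomap takes its own ref plus the lock */
	folio = folio_batch_next(iter->fbatch);
	if (folio) {
		folio_get(folio);	/* iomap's reference */
		folio_lock(folio);
	}

	/* ...iomap drops its ref on the way out as usual... */

	/* iomap_iter_reset_iomap() finally drops the batch's refs */
	folio_batch_release(iter->fbatch);
	kfree(iter->fbatch);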
> > I'm wondering because __filemap_get_folio/filemap_get_entry seem to do
> > all that for us. I think the folio_pos check below might cover some of
> > that revalidation?
> >
>
> I'm not totally sure the folio revalidation is necessarily required
> here.. If it is, I'd also need to think about whether it's ok to skip
> such folios or whether the approach here needs revisiting. I'll take a
> look and also try to document this better and get some feedback from
> people who know this code better in the next go around..
Hrmm. On closer examination, at least for xfs we've taken i_rwsem and
the invalidate_lock, so I think you shouldn't need to revalidate. I
think the same locks are held for iomap_unshare_range (mentioned
elsewhere in this thread), though that doesn't apply to regular
pagecache writes.
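
Roughly the calling convention I have in mind, simplified from memory
and not the verbatim xfs code, so take the details with a grain of
salt:

	/* e.g. the truncate/fallocate style callers */
	xfs_ilock(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL);
	/* XFS_MMAPLOCK_EXCL takes mapping->invalidate_lock these days */
	error = xfs_zero_range(ip, pos, len, ...);
	xfs_iunlock(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL);

With both of those held, nobody should be able to remove a folio from
the mapping out from under us.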
> > > + }
> > > + return folio;
> > > + }
> > > +
> > > if (folio_ops && folio_ops->get_folio)
> > > return folio_ops->get_folio(iter, pos, len);
> > > else
> ...
> > > @@ -819,6 +831,12 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
> > > if (IS_ERR(folio))
> > > return PTR_ERR(folio);
> > >
> > > + /* no folio means we're done with a batch */
> >
> > ...ran out of folios but *plen is nonzero, i.e. we still have range to
> > cover?
> >
>
> Yes I suppose that is implied by being in this path.. will fix.
>
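Maybe spell it out along these lines, if I'm reading the batch
semantics right (just a suggestion):

	/*
	 * A NULL folio means the batch ran out of folios with range
	 * still remaining; anything left is clean or uncached and
	 * needs no zeroing.
	 */
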
> > > + if (!folio) {
> > > + WARN_ON_ONCE(!iter->fbatch);
> > > + return 0;
> > > + }
> > > +
> > > /*
> > > * Now we have a locked folio, before we do anything with it we need to
> > > * check that the iomap we have cached is not stale. The inode extent
> ...
> > > +
> > > int
> > > iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> > > const struct iomap_ops *ops, void *private)
> ...
> > > @@ -1445,13 +1503,18 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> > > * if dirty and the fs returns a mapping that might convert on
> > > * writeback.
> > > */
> > > - range_dirty = filemap_range_needs_writeback(inode->i_mapping,
> > > - iter.pos, iter.pos + iter.len - 1);
> > > + range_dirty = filemap_range_needs_writeback(mapping, iter.pos,
> > > + iter.pos + iter.len - 1);
> > > while ((ret = iomap_iter(&iter, ops)) > 0) {
> > > const struct iomap *srcmap = iomap_iter_srcmap(&iter);
> > >
> > > - if (srcmap->type == IOMAP_HOLE ||
> > > - srcmap->type == IOMAP_UNWRITTEN) {
> > > + if (WARN_ON_ONCE(iter.fbatch &&
> > > + srcmap->type != IOMAP_UNWRITTEN))
> >
> > I wonder, are you planning to expand the folio batching to other
> > buffered-io.c operations? Such that the iter.fbatch checks might some
> > day go away?
> >
>
> Yes.. but I'm not totally sure about the impact on the fbatch checks quite
> yet. The next thing I wanted to look at is addressing the same unwritten
> mapping vs. dirty folios issue in the seek data/hole path. It's been a
> little while since I last investigated there (and that was also before
> the whole granular advance approach was devised), but IIRC it would look
> rather similar to what this is doing for zero range. That may or may
> not justify just making the batch required for both operations and
> potentially simplifying this logic further. I'll keep that in mind when
> I get to it..
>
> After that, I may play around with the buffered write path, but that is
> a larger change with slightly different scope and requirements..
<nod>
--D
> Brian
>
> > --D
> >
> > > + return -EIO;
> > > +
> > > + if (!iter.fbatch &&
> > > + (srcmap->type == IOMAP_HOLE ||
> > > + srcmap->type == IOMAP_UNWRITTEN)) {
> > > s64 status;
> > >
> > > if (range_dirty) {
> > > diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
> > > index 6ffc6a7b9ba5..89bd5951a6fd 100644
> > > --- a/fs/iomap/iter.c
> > > +++ b/fs/iomap/iter.c
> > > @@ -9,6 +9,12 @@
> > >
> > > static inline void iomap_iter_reset_iomap(struct iomap_iter *iter)
> > > {
> > > + if (iter->fbatch) {
> > > + folio_batch_release(iter->fbatch);
> > > + kfree(iter->fbatch);
> > > + iter->fbatch = NULL;
> > > + }
> > > +
> > > iter->status = 0;
> > > memset(&iter->iomap, 0, sizeof(iter->iomap));
> > > memset(&iter->srcmap, 0, sizeof(iter->srcmap));
> > > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > > index 522644d62f30..0b9b460b2873 100644
> > > --- a/include/linux/iomap.h
> > > +++ b/include/linux/iomap.h
> > > @@ -9,6 +9,7 @@
> > > #include <linux/types.h>
> > > #include <linux/mm_types.h>
> > > #include <linux/blkdev.h>
> > > +#include <linux/pagevec.h>
> > >
> > > struct address_space;
> > > struct fiemap_extent_info;
> > > @@ -239,6 +240,7 @@ struct iomap_iter {
> > > unsigned flags;
> > > struct iomap iomap;
> > > struct iomap srcmap;
> > > + struct folio_batch *fbatch;
> > > void *private;
> > > };
> > >
> > > @@ -345,6 +347,8 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
> > > bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio);
> > > int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
> > > const struct iomap_ops *ops);
> > > +loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,
> > > + loff_t length);
> > > int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
> > > bool *did_zero, const struct iomap_ops *ops, void *private);
> > > int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
> > > --
> > > 2.49.0
> > >
> > >
> >
>
>