Date: Tue, 10 Jun 2025 07:55:52 -0700
From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 3/7] iomap: optional zero range dirty folio processing
Message-ID: <20250610145552.GM6156@frogsfrogsfrogs>
References: <20250605173357.579720-1-bfoster@redhat.com>
 <20250605173357.579720-4-bfoster@redhat.com>
 <20250609160420.GC6156@frogsfrogsfrogs>

On Tue, Jun 10, 2025 at 08:21:06AM -0400, Brian Foster wrote:
> On Mon, Jun 09, 2025 at 09:04:20AM -0700, Darrick J. Wong wrote:
> > On Thu, Jun 05, 2025 at 01:33:53PM -0400, Brian Foster wrote:
> > > The only way zero range can currently process unwritten mappings
> > > with dirty pagecache is to check whether the range is dirty before
> > > mapping lookup and then flush when at least one underlying mapping
> > > is unwritten. This ordering is required to prevent iomap lookup from
> > > racing with folio writeback and reclaim.
> > >
> > > Since zero range can skip ranges of unwritten mappings that are
> > > clean in cache, this operation can be improved by allowing the
> > > filesystem to provide a set of dirty folios that require zeroing. In
> > > turn, rather than flush or iterate file offsets, zero range can
> > > iterate on folios in the batch and advance over clean or uncached
> > > ranges in between.
> > >
> > > Add a folio_batch in struct iomap and provide a helper for fs' to
> > > populate the batch at lookup time. Update the folio lookup path to
> > > return the next folio in the batch, if provided, and advance the
> > > iter if the folio starts beyond the current offset.
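
Just to check my understanding of how a filesystem is expected to drive
this, here's a rough sketch of the fs side.  Only the
iomap_fill_dirty_folios() prototype further down is from the patch; the
helper name, the trimming idea, and the return value semantics are all
guesses on my part:

/*
 * Hypothetical fs-side usage for a zero range over an unwritten
 * mapping.  How the filesystem gets at the iomap_iter from its
 * ->iomap_begin callback is glossed over here.
 */
static void example_prepare_zero_range(struct iomap_iter *iter,
				       loff_t offset, loff_t length)
{
	loff_t end;

	if (iter->iomap.type != IOMAP_UNWRITTEN)
		return;

	/* collect the dirty folios backing this range into iter->fbatch */
	end = iomap_fill_dirty_folios(iter, offset, length);

	/*
	 * Guessing the return value means "scanned up to here": if the
	 * batch filled up before covering the whole range, trim the
	 * mapping so the next iteration refills the batch.
	 */
	if (end < offset + length)
		iter->iomap.length = end - iter->iomap.offset;
}
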
> > >
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> > >  fs/iomap/buffered-io.c | 73 +++++++++++++++++++++++++++++++++++++++---
> > >  fs/iomap/iter.c        |  6 ++++
> > >  include/linux/iomap.h  |  4 +++
> > >  3 files changed, 78 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > index 16499655e7b0..cf2f4f869920 100644
> > > --- a/fs/iomap/buffered-io.c
> > > +++ b/fs/iomap/buffered-io.c
> > > @@ -750,6 +750,16 @@ static struct folio *__iomap_get_folio(struct iomap_iter *iter, size_t len)
> > >  	if (!mapping_large_folio_support(iter->inode->i_mapping))
> > >  		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
> > >  
> > > +	if (iter->fbatch) {
> > > +		struct folio *folio = folio_batch_next(iter->fbatch);
> > > +
> > > +		if (folio) {
> > > +			folio_get(folio);
> > > +			folio_lock(folio);
> >
> > Hrm.  So each folio that is added to the batch isn't locked, nor does
> > the batch (or iomap) hold a refcount on the folio until we get here.  Do
> > we have to re-check that folio->{mapping,index} match what iomap is
> > trying to process?  Or can we assume that nobody has removed the folio
> > from the mapping?
> >
> The filemap helper grabs a reference to the folio but doesn't
> necessarily lock it. The ref is effectively transferred to the batch
> there and the _get() here creates the iomap reference (i.e. that is
> analogous to the traditional iomap get folio path). The batch is
> ultimately released via folio_batch_release() and the iomap refs dropped
> in the same way regardless of whether iomap grabbed it itself or it was
> part of a batch.

Oh, ok, so that's really iomap getting its own ref on the folio to
remain independent of whatever the fbatch code does (or might some day
do).

> > I'm wondering because __filemap_get_folio/filemap_get_entry seem to do
> > all that for us.  I think the folio_pos check below might cover some of
> > that revalidation?
> >
> 
> I'm not totally sure the folio revalidation is necessarily required
> here.. If it is, I'd also need to think about whether it's ok to skip
> such folios or the approach here needs revisiting. I'll take a closer
> look and also try to document this better and get some feedback from
> people who know this code better in the next go around..

Hrmm.  On closer examination, at least for xfs we've taken i_rwsem and
the invalidate_lock so I think it should be the case that you don't
need to revalidate.  I think the same locks are held for
iomap_unshare_range (mentioned elsewhere in this thread) though it
doesn't apply to regular pagecache writes.

> > > +		}
> > > +		return folio;
> > > +	}
> > > +
> > >  	if (folio_ops && folio_ops->get_folio)
> > >  		return folio_ops->get_folio(iter, pos, len);
> > >  	else
> ...
> > > @@ -819,6 +831,12 @@ static int iomap_write_begin(struct iomap_iter *iter, struct folio **foliop,
> > >  	if (IS_ERR(folio))
> > >  		return PTR_ERR(folio);
> > >  
> > > +	/* no folio means we're done with a batch */
> >
> > ...ran out of folios but *plen is nonzero, i.e. we still have range to
> > cover?
> >
> 
> Yes I suppose that is implied by being in this path.. will fix.
> 
> > > +	if (!folio) {
> > > +		WARN_ON_ONCE(!iter->fbatch);
> > > +		return 0;
> > > +	}
> > > +
> > >  	/*
> > >  	 * Now we have a locked folio, before we do anything with it we need to
> > >  	 * check that the iomap we have cached is not stale. The inode extent
> ...
> > > +
> > >  int
> > >  iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> > >  		const struct iomap_ops *ops, void *private)
> ...
> > > @@ -1445,13 +1503,18 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> > >  	 * if dirty and the fs returns a mapping that might convert on
> > >  	 * writeback.
> > >  	 */
> > > -	range_dirty = filemap_range_needs_writeback(inode->i_mapping,
> > > -			iter.pos, iter.pos + iter.len - 1);
> > > +	range_dirty = filemap_range_needs_writeback(mapping, iter.pos,
> > > +			iter.pos + iter.len - 1);
> > >  	while ((ret = iomap_iter(&iter, ops)) > 0) {
> > >  		const struct iomap *srcmap = iomap_iter_srcmap(&iter);
> > >  
> > > -		if (srcmap->type == IOMAP_HOLE ||
> > > -		    srcmap->type == IOMAP_UNWRITTEN) {
> > > +		if (WARN_ON_ONCE(iter.fbatch &&
> > > +				 srcmap->type != IOMAP_UNWRITTEN))
> >
> > I wonder, are you planning to expand the folio batching to other
> > buffered-io.c operations?  Such that the iter.fbatch checks might some
> > day go away?
> >
> 
> Yes.. but I'm not totally sure wrt impact on the fbatch checks quite
> yet. The next thing I wanted to look at is addressing the same unwritten
> mapping vs. dirty folios issue in the seek data/hole path. It's been a
> little while since I last investigated there (and that was also before
> the whole granular advance approach was devised), but IIRC it would look
> rather similar to what this is doing for zero range. That may or may
> not justify just making the batch required for both operations and
> potentially simplifying this logic further. I'll keep that in mind when
> I get to it..
> 
> After that, I may play around with the buffered write path, but that is
> a larger change with slightly different scope and requirements..

--D

> Brian
> 
> > --D
> > > +			return -EIO;
> > > +
> > > +		if (!iter.fbatch &&
> > > +		    (srcmap->type == IOMAP_HOLE ||
> > > +		     srcmap->type == IOMAP_UNWRITTEN)) {
> > >  			s64 status;
> > >  
> > >  			if (range_dirty) {
> > > diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
> > > index 6ffc6a7b9ba5..89bd5951a6fd 100644
> > > --- a/fs/iomap/iter.c
> > > +++ b/fs/iomap/iter.c
> > > @@ -9,6 +9,12 @@
> > >  
> > >  static inline void iomap_iter_reset_iomap(struct iomap_iter *iter)
> > >  {
> > > +	if (iter->fbatch) {
> > > +		folio_batch_release(iter->fbatch);
> > > +		kfree(iter->fbatch);
> > > +		iter->fbatch = NULL;
> > > +	}
> > > +
> > >  	iter->status = 0;
> > >  	memset(&iter->iomap, 0, sizeof(iter->iomap));
> > >  	memset(&iter->srcmap, 0, sizeof(iter->srcmap));
> > > diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> > > index 522644d62f30..0b9b460b2873 100644
> > > --- a/include/linux/iomap.h
> > > +++ b/include/linux/iomap.h
> > > @@ -9,6 +9,7 @@
> > >  #include 
> > >  #include 
> > >  #include 
> > > +#include 
> > >  
> > >  struct address_space;
> > >  struct fiemap_extent_info;
> > > @@ -239,6 +240,7 @@ struct iomap_iter {
> > >  	unsigned flags;
> > >  	struct iomap iomap;
> > >  	struct iomap srcmap;
> > > +	struct folio_batch *fbatch;
> > >  	void *private;
> > >  };
> > >  
> > > @@ -345,6 +347,8 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
> > >  bool iomap_dirty_folio(struct address_space *mapping, struct folio *folio);
> > >  int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len,
> > >  		const struct iomap_ops *ops);
> > > +loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset,
> > > +		loff_t length);
> > >  int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
> > >  		bool *did_zero, const struct iomap_ops *ops, void *private);
> > >  int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
> > > -- 
> > > 2.49.0
> > > 
> > 
> 
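
P.S. For anyone following along, here is the folio reference and lock
pairing for a batched folio as I read the series, condensed into one
sketch.  This is only an illustration of the hunks quoted above, not
code from the patch, and it skips all error handling:

static void example_batched_folio_lifetime(struct iomap_iter *iter)
{
	struct folio *folio;

	/*
	 * The fs-side iomap_fill_dirty_folios() call has already taken
	 * the pagecache reference that the batch owns.
	 */

	/* __iomap_get_folio() path when a batch is present */
	folio = folio_batch_next(iter->fbatch);
	if (folio) {
		folio_get(folio);	/* iomap's own reference */
		folio_lock(folio);

		/* ... zero the dirty portion of the folio ... */

		/* normal put path: drop iomap's lock and reference */
		folio_unlock(folio);
		folio_put(folio);
	}

	/* iomap_iter_reset_iomap(): drop the batch's references */
	folio_batch_release(iter->fbatch);
	kfree(iter->fbatch);
	iter->fbatch = NULL;
}

The extra folio_get() is what keeps iomap's use of the folio independent
of the batch's lifetime, which is the point discussed above.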