From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Darrick J. Wong",
	linux-block@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Wong" , linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 08/11] iomap: Change iomap_write_begin calling convention Date: Mon, 24 Aug 2020 16:16:57 +0100 Message-Id: <20200824151700.16097-9-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200824151700.16097-1-willy@infradead.org> References: <20200824151700.16097-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: A38E6180F76B2 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass (up to) the remaining length of the extent to iomap_write_begin() and have it return the number of bytes that will fit in the page. That lets us copy more bytes per call to iomap_write_begin() if the page cache has already allocated a THP (and will in future allow us to pass a hint to the page cache that it should try to allocate a larger page if there are none in the cache). Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 61 +++++++++++++++++++++++------------------- 1 file changed, 33 insertions(+), 28 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index d14de8886d5c..f43a15aaa381 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -566,14 +566,14 @@ iomap_read_page_sync(loff_t block_start, struct pag= e *page, unsigned poff, return submit_bio_wait(&bio); } =20 -static int -__iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int f= lags, - struct page *page, struct iomap *srcmap) +static ssize_t __iomap_write_begin(struct inode *inode, loff_t pos, + size_t len, int flags, struct page *page, struct iomap *srcmap) { loff_t block_size =3D i_blocksize(inode); loff_t block_start =3D pos & ~(block_size - 1); loff_t block_end =3D (pos + len + block_size - 1) & ~(block_size - 1); - unsigned from =3D offset_in_page(pos), to =3D from + len; + size_t from =3D offset_in_thp(page, pos); + size_t to =3D from + len; size_t poff, plen; int status; =20 @@ -609,12 +609,13 @@ __iomap_write_begin(struct inode *inode, loff_t pos= , unsigned len, int flags, return 0; } =20 -static int -iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigne= d flags, - struct page **pagep, struct iomap *iomap, struct iomap *srcmap) +static ssize_t iomap_write_begin(struct inode *inode, loff_t pos, loff_t= len, + unsigned flags, struct page **pagep, struct iomap *iomap, + struct iomap *srcmap) { const struct iomap_page_ops *page_ops =3D iomap->page_ops; struct page *page; + size_t offset; int status =3D 0; =20 BUG_ON(pos + len > iomap->offset + iomap->length); @@ -625,6 +626,8 @@ iomap_write_begin(struct inode *inode, loff_t pos, un= signed len, unsigned flags, return -EINTR; =20 if (page_ops && page_ops->page_prepare) { + if (len > UINT_MAX) + len =3D UINT_MAX; status =3D page_ops->page_prepare(inode, pos, len, iomap); if (status) return status; @@ -636,6 +639,10 @@ iomap_write_begin(struct inode *inode, loff_t pos, u= nsigned len, unsigned flags, status =3D -ENOMEM; goto out_no_page; } + page =3D thp_head(page); + offset =3D offset_in_thp(page, pos); + if (len > thp_size(page) - offset) + len =3D thp_size(page) - offset; =20 if (srcmap->type =3D=3D IOMAP_INLINE) iomap_read_inline_data(inode, page, srcmap); @@ -645,11 +652,11 @@ iomap_write_begin(struct inode *inode, loff_t pos, = unsigned len, unsigned flags, 
 		status = __iomap_write_begin(inode, pos, len, flags, page,
 				srcmap);
 
-	if (unlikely(status))
+	if (status < 0)
 		goto out_unlock;
 
 	*pagep = page;
-	return 0;
+	return len;
 
 out_unlock:
 	unlock_page(page);
@@ -805,8 +812,10 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 		status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap,
 				srcmap);
-		if (unlikely(status))
+		if (status < 0)
 			break;
+		/* We may be partway through a THP */
+		offset = offset_in_thp(page, pos);
 
 		if (mapping_writably_mapped(inode->i_mapping))
 			flush_dcache_page(page);
@@ -866,7 +875,6 @@ static loff_t
 iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap, struct iomap *srcmap)
 {
-	long status = 0;
 	loff_t written = 0;
 
 	/* don't bother with blocks that are not shared to start with */
@@ -877,25 +885,24 @@ iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		return length;
 
 	do {
-		unsigned long offset = offset_in_page(pos);
-		unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length);
 		struct page *page;
+		ssize_t bytes;
 
-		status = iomap_write_begin(inode, pos, bytes,
+		bytes = iomap_write_begin(inode, pos, length,
 				IOMAP_WRITE_F_UNSHARE, &page, iomap, srcmap);
-		if (unlikely(status))
-			return status;
+		if (bytes < 0)
+			return bytes;
 
-		status = iomap_write_end(inode, pos, bytes, bytes, page, iomap,
+		bytes = iomap_write_end(inode, pos, bytes, bytes, page, iomap,
 				srcmap);
-		if (WARN_ON_ONCE(status == 0))
+		if (WARN_ON_ONCE(bytes == 0))
 			return -EIO;
 
 		cond_resched();
 
-		pos += status;
-		written += status;
-		length -= status;
+		pos += bytes;
+		written += bytes;
+		length -= bytes;
 
 		balance_dirty_pages_ratelimited(inode->i_mapping);
 	} while (length);
@@ -926,15 +933,13 @@ static loff_t iomap_zero(struct inode *inode, loff_t pos, u64 length,
 		struct iomap *iomap, struct iomap *srcmap)
 {
 	struct page *page;
-	int status;
-	unsigned offset = offset_in_page(pos);
-	unsigned bytes = min_t(u64, PAGE_SIZE - offset, length);
+	ssize_t bytes;
 
-	status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap, srcmap);
-	if (status)
-		return status;
+	bytes = iomap_write_begin(inode, pos, length, 0, &page, iomap, srcmap);
+	if (bytes < 0)
+		return bytes;
 
-	zero_user(page, offset, bytes);
+	zero_user(page, offset_in_thp(page, pos), bytes);
 	mark_page_accessed(page);
 
 	return iomap_write_end(inode, pos, bytes, bytes, page, iomap, srcmap);
-- 
2.28.0
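
[Editor's note, not part of the patch] A minimal user-space sketch of the
caller pattern the new calling convention enables, assuming only what the
commit message states: the caller passes the remaining length of the
extent, and write_begin returns either a negative errno or the number of
bytes that fit in the page containing pos.  The names model_write_begin
and MODEL_PAGE_SIZE are invented for illustration; in the kernel the limit
comes from thp_size(page), not a fixed constant.

#include <stdio.h>
#include <sys/types.h>

/*
 * Stand-in for thp_size(page): how many bytes the page containing 'pos'
 * can hold.  A THP would make this larger than the base page size.
 */
#define MODEL_PAGE_SIZE	16384UL

/*
 * Model of the new convention: take (up to) the remaining length of the
 * extent, return how many bytes fit in the page at 'pos', or a negative
 * errno on failure (never produced in this toy version).
 */
static ssize_t model_write_begin(off_t pos, size_t remaining)
{
	size_t offset = pos & (MODEL_PAGE_SIZE - 1);	/* offset_in_thp() analogue */
	size_t fit = MODEL_PAGE_SIZE - offset;

	return remaining < fit ? remaining : fit;
}

int main(void)
{
	off_t pos = 1000;	/* start partway through a page */
	size_t length = 50000;	/* bytes left in the extent */

	while (length) {
		ssize_t bytes = model_write_begin(pos, length);

		if (bytes < 0)
			return 1;	/* a real caller would propagate the errno */

		/* ...copy 'bytes' bytes, then the write_end step... */
		printf("pos %lld: write_begin granted %zd bytes\n",
		       (long long)pos, bytes);
		pos += bytes;
		length -= bytes;
	}
	return 0;
}

With a base-page-sized MODEL_PAGE_SIZE the loop behaves like the old
per-page code; with a larger page already in the cache, one iteration
covers several base pages, which is the gain the commit message describes.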