From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 07/19] mm/filemap: Use head pages in generic_file_buffered_read
Date: Thu, 29 Oct 2020 19:33:53 +0000
Message-Id: <20201029193405.29125-8-willy@infradead.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20201029193405.29125-1-willy@infradead.org>
References: <20201029193405.29125-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add mapping_get_read_heads(), which returns the head pages representing a
contiguous range of bytes in the file.  It stops on encountering a page
marked Readahead or !Uptodate (but still returns that page) so that
gfbr_get_pages() can handle it appropriately.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 78 ++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 61 insertions(+), 17 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1bfd87d85bfd..c0161f42f37d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2166,6 +2166,48 @@ static void shrink_readahead_size_eio(struct file_ra_state *ra)
 	ra->ra_pages /= 4;
 }
 
+static unsigned mapping_get_read_heads(struct address_space *mapping,
+		pgoff_t index, unsigned int nr_pages, struct page **pages)
+{
+	XA_STATE(xas, &mapping->i_pages, index);
+	struct page *head;
+	unsigned int ret = 0;
+
+	if (unlikely(!nr_pages))
+		return 0;
+
+	rcu_read_lock();
+	for (head = xas_load(&xas); head; head = xas_next(&xas)) {
+		if (xas_retry(&xas, head))
+			continue;
+		if (xa_is_value(head))
+			break;
+		if (!page_cache_get_speculative(head))
+			goto retry;
+
+		/* Has the page moved or been split? */
+		if (unlikely(head != xas_reload(&xas)))
+			goto put_page;
+
+		pages[ret++] = head;
+		if (ret == nr_pages)
+			break;
+		if (!PageUptodate(head))
+			break;
+		if (PageReadahead(head))
+			break;
+		xas.xa_index = head->index + thp_nr_pages(head) - 1;
+		xas.xa_offset = (xas.xa_index >> xas.xa_shift) & XA_CHUNK_MASK;
+		continue;
+put_page:
+		put_page(head);
+retry:
+		xas_reset(&xas);
+	}
+	rcu_read_unlock();
+	return ret;
+}
+
 static int lock_page_for_iocb(struct kiocb *iocb, struct page *page)
 {
 	if (iocb->ki_flags & IOCB_WAITQ)
@@ -2328,14 +2370,14 @@ static int gfbr_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 	struct file_ra_state *ra = &filp->f_ra;
 	pgoff_t index = iocb->ki_pos >> PAGE_SHIFT;
 	pgoff_t last_index = (iocb->ki_pos + iter->count + PAGE_SIZE-1) >> PAGE_SHIFT;
-	int i, j, nr_got, err = 0;
+	int i, nr_got, err = 0;
 
 	nr = min_t(unsigned long, last_index - index, nr);
 find_page:
 	if (fatal_signal_pending(current))
 		return -EINTR;
 
-	nr_got = find_get_pages_contig(mapping, index, nr, pages);
+	nr_got = mapping_get_read_heads(mapping, index, nr, pages);
 	if (nr_got)
 		goto got_pages;
 
@@ -2344,7 +2386,7 @@ static int gfbr_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 
 	page_cache_sync_readahead(mapping, ra, filp, index, last_index - index);
 
-	nr_got = find_get_pages_contig(mapping, index, nr, pages);
+	nr_got = mapping_get_read_heads(mapping, index, nr, pages);
 	if (nr_got)
 		goto got_pages;
 
@@ -2355,15 +2397,14 @@ static int gfbr_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 got_pages:
 	for (i = 0; i < nr_got; i++) {
 		struct page *page = pages[i];
-		pgoff_t pg_index = index + i;
+		pgoff_t pg_index = page->index;
 		loff_t pg_pos = max(iocb->ki_pos,
 				    (loff_t) pg_index << PAGE_SHIFT);
 		loff_t pg_count = iocb->ki_pos + iter->count - pg_pos;
 
 		if (PageReadahead(page)) {
 			if (iocb->ki_flags & IOCB_NOIO) {
-				for (j = i; j < nr_got; j++)
-					put_page(pages[j]);
+				put_page(page);
 				nr_got = i;
 				err = -EAGAIN;
 				break;
@@ -2375,8 +2416,7 @@ static int gfbr_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 		if (!PageUptodate(page)) {
 			if ((iocb->ki_flags & IOCB_NOWAIT) ||
 					((iocb->ki_flags & IOCB_WAITQ) && i)) {
-				for (j = i; j < nr_got; j++)
-					put_page(pages[j]);
+				put_page(page);
 				nr_got = i;
 				err = -EAGAIN;
 				break;
@@ -2385,8 +2425,6 @@ static int gfbr_get_pages(struct kiocb *iocb, struct iov_iter *iter,
 		page = gfbr_update_page(iocb, mapping, iter, page,
 					pg_pos, pg_count);
 		if (IS_ERR_OR_NULL(page)) {
-			for (j = i + 1; j < nr_got; j++)
-				put_page(pages[j]);
 			nr_got = i;
 			err = PTR_ERR_OR_ZERO(page);
 			break;
@@ -2500,20 +2538,26 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 			mark_page_accessed(pages[i]);
 
 		for (i = 0; i < pg_nr; i++) {
-			unsigned int offset = iocb->ki_pos & ~PAGE_MASK;
-			unsigned int bytes = min_t(loff_t, end_offset - iocb->ki_pos,
-						   PAGE_SIZE - offset);
-			unsigned int copied;
+			struct page *page = pages[i];
+			size_t page_size = thp_size(page);
+			size_t offset = iocb->ki_pos & (page_size - 1);
+			size_t bytes = min_t(loff_t, end_offset - iocb->ki_pos,
+					     page_size - offset);
+			size_t copied;
 
 			/*
 			 * If users can be writing to this page using arbitrary
 			 * virtual addresses, take care about potential aliasing
 			 * before reading the page on the kernel side.
 			 */
-			if (writably_mapped)
-				flush_dcache_page(pages[i]);
+			if (writably_mapped) {
+				int j;
+
+				for (j = 0; j < thp_nr_pages(page); j++)
+					flush_dcache_page(page + j);
+			}
 
-			copied = copy_page_to_iter(pages[i], offset, bytes, iter);
+			copied = copy_page_to_iter(page, offset, bytes, iter);
 
 			written += copied;
 			iocb->ki_pos += copied;
-- 
2.28.0
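
Editor's note, not part of the patch: a minimal sketch of how a caller might
consume the head pages returned by mapping_get_read_heads(), following the
semantics described in the commit message and mirrored by gfbr_get_pages()
above (each entry is a head page covering thp_size() bytes; only the last
entry can be !Uptodate or marked Readahead).  The helper name
count_readable_bytes() and the batch size of 16 are hypothetical, chosen only
for illustration.

/*
 * Illustrative sketch only -- not kernel API.  Counts how many cached,
 * uptodate bytes are readable starting at @pos, using the head pages
 * returned by mapping_get_read_heads().
 */
static size_t count_readable_bytes(struct address_space *mapping, loff_t pos)
{
	struct page *pages[16];		/* arbitrary batch for the example */
	pgoff_t index = pos >> PAGE_SHIFT;
	unsigned int i, nr;
	size_t bytes = 0;

	nr = mapping_get_read_heads(mapping, index, ARRAY_SIZE(pages), pages);
	for (i = 0; i < nr; i++) {
		struct page *head = pages[i];
		size_t page_size = thp_size(head);	/* whole compound page */
		size_t offset = pos & (page_size - 1);

		if (!PageUptodate(head) || PageReadahead(head)) {
			/*
			 * Only the final entry can need I/O or readahead; a
			 * real caller handles it as gfbr_get_pages() does.
			 */
			put_page(head);
			break;
		}

		bytes += page_size - offset;
		pos += page_size - offset;
		put_page(head);		/* drop the reference the lookup took */
	}
	return bytes;
}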