From: Matthew Wilcox <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, jlayton@kernel.org, hch@infradead.org
Subject: [PATCH 4/8] mm/fs: Add a_ops->readahead
Date: Mon, 13 Jan 2020 07:37:42 -0800
Message-Id: <20200113153746.26654-5-willy@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200113153746.26654-1-willy@infradead.org>
References: <20200113153746.26654-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This will replace ->readpages with a saner interface:
 - No return type (errors are ignored for read ahead anyway)
 - Pages are already in the page cache when ->readahead is called
 - Pages are passed in a pagevec instead of a linked list

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/filesystems/locking.rst |  8 +++++-
 Documentation/filesystems/vfs.rst     |  9 ++++++
 include/linux/fs.h                    |  3 ++
 mm/readahead.c                        | 40 ++++++++++++++++++++++++++-
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 5057e4d9dcd1..1e2f1186fd1a 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -239,6 +239,8 @@ prototypes::
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	int (*readahead)(struct file *, struct address_space *,
+			struct pagevec *, pgoff_t index);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +273,8 @@ writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks
 writepages:
 set_page_dirty		no
-readpages:
+readpages:		no
+readahead:		yes, unlocks
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -298,6 +301,9 @@ completion.
 ->readpages() populates the pagecache with the passed pages and starts
 I/O against them.  They come unlocked upon I/O completion.
 
+->readahead() starts I/O against the pages.  They come unlocked upon
+I/O completion.
+
 ->writepage() is used for two purposes: for "memory cleansing" and for
 "sync".  These are quite different operations and the behaviour may differ
 depending upon the mode.
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 7d4d09dd5e6d..63d0f0dbbf9c 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	int (*readahead)(struct file *, struct address_space *,
+			struct pagevec *, pgoff_t index);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,6 +783,13 @@ cache in your filesystem.  The following members are defined:
 	If defined, it should set the PageDirty flag, and the
 	PAGECACHE_TAG_DIRTY tag in the radix tree.
 
+``readahead``
+	called by the VM to read pages associated with the address_space
+	object.  This is essentially a vector version of readpage.
+	Instead of just one page, several pages are requested.
+	Since this is readahead, attempt to start I/O on each page and
+	let the I/O completion path set errors on the page.
+
 ``readpages``
 	called by the VM to read pages associated with the address_space
 	object.  This is essentially just a vector version of readpage.
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 98e0349adb52..2769f89666fb 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -52,6 +52,7 @@ struct hd_geometry;
 struct iovec;
 struct kiocb;
 struct kobject;
+struct pagevec;
 struct pipe_inode_info;
 struct poll_table_struct;
 struct kstatfs;
@@ -375,6 +376,8 @@ struct address_space_operations {
 	 */
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
+	void (*readahead)(struct file *, struct address_space *,
+			struct pagevec *, pgoff_t offset);
 
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
diff --git a/mm/readahead.c b/mm/readahead.c
index 76a70a4406b5..2fe0974173ea 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -123,7 +123,45 @@ static unsigned read_pages(struct address_space *mapping, struct file *filp,
 	struct page *page;
 	unsigned int nr_pages = pagevec_count(pvec);
 
-	if (mapping->a_ops->readpages) {
+	if (mapping->a_ops->readahead) {
+		/*
+		 * When we remove support for ->readpages, we'll call
+		 * add_to_page_cache_lru() in the parent and all this
+		 * grot goes away.
+		 */
+		unsigned char first = pvec->first;
+		unsigned char saved_nr = pvec->nr;
+		pgoff_t base = offset;
+		pagevec_for_each(pvec, page) {
+			if (!add_to_page_cache_lru(page, mapping, offset++,
+					gfp)) {
+				unsigned char saved_first = pvec->first;
+
+				pvec->nr = pvec->first - 1;
+				pvec->first = first;
+				mapping->a_ops->readahead(filp, mapping, pvec,
+						base + first);
+				first = pvec->nr + 1;
+				pvec->nr = saved_nr;
+				pvec->first = saved_first;
+
+				put_page(page);
+			}
+		}
+		pvec->first = first;
+		offset = base + first;
+		mapping->a_ops->readahead(filp, mapping, pvec, offset);
+		/*
+		 * Ideally the implementation would at least attempt to
+		 * start I/O against all the pages, but there are times
+		 * when it makes more sense to just give up.  Take care
+		 * of any un-attempted pages here.
+		 */
+		pagevec_for_each(pvec, page) {
+			unlock_page(page);
+			put_page(page);
+		}
+	} else if (mapping->a_ops->readpages) {
 		LIST_HEAD(pages);
 
 		pagevec_for_each(pvec, page) {
-- 
2.24.1
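
As a rough illustration of the proposed contract (not part of the patch above), a filesystem-side ->readahead() might look something like the sketch below. It assumes the pagevec_for_each() iterator and pvec cursor introduced earlier in this series, and myfs_read_page_async() is an invented stand-in for whatever bio/iomap plumbing a real filesystem would use. Per the locking.rst and vfs.rst hunks, each page arrives locked and already in the page cache, the I/O completion path unlocks it and records any error, and pages the implementation never attempts are unlocked and dropped by read_pages().

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>

/*
 * Hypothetical helper: queue an asynchronous read for one locked,
 * already-in-cache page; its completion handler is assumed to unlock
 * the page and set PageError on failure.
 */
int myfs_read_page_async(struct inode *inode, struct page *page, pgoff_t index);

/* Sketch of the proposed hook: start I/O, report nothing back. */
static void myfs_readahead(struct file *file, struct address_space *mapping,
			   struct pagevec *pvec, pgoff_t index)
{
	struct page *page;

	/* pagevec_for_each() comes from earlier patches in this series. */
	pagevec_for_each(pvec, page) {
		if (myfs_read_page_async(mapping->host, page, index++) < 0)
			break;	/* read_pages() unlocks/drops un-attempted pages */
	}
}

static const struct address_space_operations myfs_aops = {
	.readahead	= myfs_readahead,
	/* .readpage and the other ops are omitted from this sketch */
};

Compared with a ->readpages() implementation, there is no list walking, no add_to_page_cache_lru() call and no return value to get wrong: the hook only starts I/O, which is the point of the interface change described in the commit message.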