From mboxrd@z Thu Jan  1 00:00:00 1970
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, Christoph Hellwig, Chris Mason
Subject: [PATCH v2 5/9] mm: Add readahead address space operation
Date: Tue, 14 Jan 2020 18:38:39 -0800
Message-Id: <20200115023843.31325-6-willy@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200115023843.31325-1-willy@infradead.org>
References: <20200115023843.31325-1-willy@infradead.org>
MIME-Version: 1.0

From: "Matthew Wilcox (Oracle)"

This replaces ->readpages with a saner interface:
 - Return the number of pages not read instead of an ignored error code.
 - Pages are already in the page cache when ->readahead is called.
 - Implementation looks up the pages in the page cache instead of
   having them passed in a linked list.

Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  7 ++++++-
 Documentation/filesystems/vfs.rst     | 11 +++++++++++
 include/linux/fs.h                    |  2 ++
 include/linux/pagemap.h               | 12 ++++++++++++
 mm/readahead.c                        | 13 ++++++++++++-
 5 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 5057e4d9dcd1..d8a5dde914b5 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -239,6 +239,8 @@ prototypes::
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	unsigned (*readahead)(struct file *, struct address_space *,
+			pgoff_t start, unsigned nr_pages);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +273,8 @@ writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks
 writepages:
 set_page_dirty		no
-readpages:
+readahead:		yes, unlocks
+readpages:		no
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -295,6 +298,8 @@ the request handler (/dev/loop).
 ->readpage() unlocks the page, either synchronously or via I/O
 completion.
 
+->readahead() unlocks the page like ->readpage().
+
 ->readpages() populates the pagecache with the passed pages and starts
 I/O against them.  They come unlocked upon I/O completion.
 
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 7d4d09dd5e6d..bb06fb7b120b 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	unsigned (*readahead)(struct file *filp, struct address_space *mapping,
+			pgoff_t start, unsigned nr_pages);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,6 +783,15 @@ cache in your filesystem.  The following members are defined:
 	If defined, it should set the PageDirty flag, and the
 	PAGECACHE_TAG_DIRTY tag in the radix tree.
 
+``readahead``
+	called by the VM to read pages associated with the address_space
+	object.  The pages are consecutive in the page cache and are
+	locked.  The implementation should decrement the page refcount after
+	attempting I/O on each page.  Usually the page will be unlocked by
+	the I/O completion handler.  If the function does not attempt I/O on
+	some pages, return the number of pages which were not read so the
+	common code can unlock the pages for you.
+
 ``readpages``
 	called by the VM to read pages associated with the address_space
 	object.  This is essentially just a vector version of readpage.
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 98e0349adb52..a10f3a72e5ac 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -375,6 +375,8 @@ struct address_space_operations {
 	 */
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
+	unsigned (*readahead)(struct file *, struct address_space *,
+			pgoff_t start, unsigned nr_pages);
 
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 37a4d9e32cd3..0821f584c43c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -630,6 +630,18 @@ static inline int add_to_page_cache(struct page *page,
 	return error;
 }
 
+/*
+ * Only call this from a ->readahead implementation.
+ */
+static inline
+struct page *readahead_page(struct address_space *mapping, loff_t pos)
+{
+	struct page *page = xa_load(&mapping->i_pages, pos / PAGE_SIZE);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	return page;
+}
+
 static inline unsigned long dir_pages(struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
diff --git a/mm/readahead.c b/mm/readahead.c
index 5a6676640f20..6d65dae6dad0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 
 	blk_start_plug(&plug);
 
-	if (mapping->a_ops->readpages) {
+	if (mapping->a_ops->readahead) {
+		unsigned left = mapping->a_ops->readahead(filp, mapping,
+				start, nr_pages);
+
+		while (left) {
+			struct page *page = readahead_page(mapping,
+					start + nr_pages - left - 1);
+			unlock_page(page);
+			put_page(page);
+			left--;
+		}
+	} else if (mapping->a_ops->readpages) {
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-- 
2.24.1
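
For illustration only (this sketch is not part of the patch): a filesystem
that already provides ->readpage() could wire up the new ->readahead()
operation roughly as follows.  The function name example_readahead() and the
example_fs_is_shut_down() early-exit check are made-up placeholders, and the
sketch assumes readahead_page() is given a byte position, matching the
pos / PAGE_SIZE lookup in its definition above.

/*
 * Illustrative sketch only, not part of the patch.  The pages in
 * [start, start + nr_pages) were already added to the page cache,
 * locked, by the caller.
 */
static unsigned example_readahead(struct file *file,
		struct address_space *mapping, pgoff_t start,
		unsigned nr_pages)
{
	unsigned i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page;

		/*
		 * Hypothetical bail-out: if I/O cannot be issued, report
		 * how many pages were left untouched; the common code
		 * unlocks and releases them.
		 */
		if (example_fs_is_shut_down(mapping->host))
			return nr_pages - i;

		/* Look up the locked page by its byte offset. */
		page = readahead_page(mapping,
				(loff_t)(start + i) << PAGE_SHIFT);

		/*
		 * ->readpage() consumes the page lock; on success the page
		 * is unlocked by the I/O completion handler.  Readahead is
		 * best-effort, so any error is simply ignored here.
		 */
		mapping->a_ops->readpage(file, page);

		/* Drop the reference now that I/O has been attempted. */
		put_page(page);
	}

	return 0;
}

With such an implementation, read_pages() above prefers ->readahead() over
->readpages() and uses the returned count to unlock and release the pages
the filesystem did not touch.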