From: Matthew Wilcox <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, Christoph Hellwig, Chris Mason
Subject: [PATCH v2 7/9] cifs: Convert from readpages to readahead
Date: Tue, 14 Jan 2020 18:38:41 -0800
Message-Id: <20200115023843.31325-8-willy@infradead.org>
In-Reply-To: <20200115023843.31325-1-willy@infradead.org>
References: <20200115023843.31325-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use the new readahead operation in CIFS.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/cifs/file.c | 143 +++++++++----------------------------
 1 file changed, 25 insertions(+), 118 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 043288b5c728..c9162380ad22 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4280,70 +4280,11 @@ cifs_readpages_copy_into_pages(struct TCP_Server_Info *server,
 	return readpages_fill_pages(server, rdata, iter, iter->count);
 }
 
-static int
-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
-		    unsigned int rsize, struct list_head *tmplist,
-		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
-{
-	struct page *page, *tpage;
-	unsigned int expected_index;
-	int rc;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-
-	INIT_LIST_HEAD(tmplist);
-
-	page = lru_to_page(page_list);
-
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
-
-	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
-		return rc;
-	}
-
-	/* move first page to the tmplist */
-	*offset = (loff_t)page->index << PAGE_SHIFT;
-	*bytes = PAGE_SIZE;
-	*nr_pages = 1;
-	list_move_tail(&page->lru, tmplist);
-
-	/* now try and add more pages onto the request */
-	expected_index = page->index + 1;
-	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-		/* discontinuity ? */
-		if (page->index != expected_index)
-			break;
-
-		/* would this page push the read over the rsize? */
-		if (*bytes + PAGE_SIZE > rsize)
-			break;
-
-		__SetPageLocked(page);
-		if (add_to_page_cache_locked(page, mapping, page->index, gfp)) {
-			__ClearPageLocked(page);
-			break;
-		}
-		list_move_tail(&page->lru, tmplist);
-		(*bytes) += PAGE_SIZE;
-		expected_index++;
-		(*nr_pages)++;
-	}
-	return rc;
-}
-
-static int cifs_readpages(struct file *file, struct address_space *mapping,
-	struct list_head *page_list, unsigned num_pages)
+static unsigned cifs_readahead(struct file *file,
+		struct address_space *mapping, pgoff_t start,
+		unsigned num_pages)
 {
 	int rc;
-	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
 	struct TCP_Server_Info *server;
@@ -4358,11 +4299,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * After this point, every page in the list might have PG_fscache set,
 	 * so we will need to clean that up off of every page we don't use.
 	 */
-	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
-					 &num_pages);
+	rc = -ENOBUFS;
 	if (rc == 0) {
 		free_xid(xid);
-		return rc;
+		return num_pages;
 	}
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
@@ -4373,24 +4313,11 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	rc = 0;
 	server = tlink_tcon(open_file->tlink)->ses->server;
 
-	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
-		 __func__, file, mapping, num_pages);
+	cifs_dbg(FYI, "%s: file=%p mapping=%p start=%lu num_pages=%u\n",
+		 __func__, file, mapping, start, num_pages);
 
-	/*
-	 * Start with the page at end of list and move it to private
-	 * list. Do the same with any following pages until we hit
-	 * the rsize limit, hit an index discontinuity, or run out of
-	 * pages. Issue the async read and then start the loop again
-	 * until the list is empty.
-	 *
-	 * Note that list order is important. The page_list is in
-	 * the order of declining indexes. When we put the pages in
-	 * the rdata->pages, then we want them in increasing order.
-	 */
-	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes, rsize;
-		loff_t offset;
-		struct page *page, *tpage;
+	while (num_pages) {
+		unsigned int i, nr_pages, rsize;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
@@ -4408,21 +4335,14 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		if (rc)
 			break;
 
+		nr_pages = min_t(unsigned, rsize / PAGE_SIZE, num_pages);
 		/*
 		 * Give up immediately if rsize is too small to read an entire
 		 * page. The VFS will fall back to readpage. We should never
-		 * reach this point however since we set ra_pages to 0 when the
-		 * rsize is smaller than a cache page.
+		 * reach this point however since we set ra_pages to 0 when
+		 * the rsize is smaller than a cache page.
 		 */
-		if (unlikely(rsize < PAGE_SIZE)) {
-			add_credits_and_wake_if(server, credits, 0);
-			free_xid(xid);
-			return 0;
-		}
-
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
-		if (rc) {
+		if (unlikely(nr_pages < 1)) {
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
@@ -4430,21 +4350,15 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
 			/* best to give up if we're out of mem */
-			list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-				list_del(&page->lru);
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			rc = -ENOMEM;
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
 		rdata->cfile = cifsFileInfo_get(open_file);
 		rdata->mapping = mapping;
-		rdata->offset = offset;
-		rdata->bytes = bytes;
+		rdata->offset = start;
+		rdata->nr_pages = nr_pages;
+		rdata->bytes = nr_pages * PAGE_SIZE;
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->tailsz = PAGE_SIZE;
@@ -4452,10 +4366,8 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata->copy_into_pages = cifs_readpages_copy_into_pages;
 		rdata->credits = credits_on_stack;
 
-		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-			list_del(&page->lru);
-			rdata->pages[rdata->nr_pages++] = page;
-		}
+		for (i = 0; i < nr_pages; i++)
+			rdata->pages[i] = readahead_page(mapping, start++);
 
 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
 
@@ -4468,27 +4380,22 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 
 		if (rc) {
 			add_credits_and_wake_if(server, &rdata->credits, 0);
-			for (i = 0; i < rdata->nr_pages; i++) {
-				page = rdata->pages[i];
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			/* Fallback to the readpage in error/reconnect cases */
 			kref_put(&rdata->refcount, cifs_readdata_release);
 			break;
 		}
 
 		kref_put(&rdata->refcount, cifs_readdata_release);
+		num_pages -= nr_pages;
 	}
 
 	/* Any pages that have been shown to fscache but didn't get added to
 	 * the pagecache must be uncached before they get returned to the
 	 * allocator.
 	 */
-	cifs_fscache_readpages_cancel(mapping->host, page_list);
+//	cifs_fscache_readpages_cancel(mapping->host, page_list);
 	free_xid(xid);
-	return rc;
+
+	return num_pages;
 }
 
 /*
@@ -4806,7 +4713,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
-	.readpages = cifs_readpages,
+	.readahead = cifs_readahead,
 	.writepage = cifs_writepage,
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
@@ -4819,9 +4726,9 @@ const struct address_space_operations cifs_addr_ops = {
 };
 
 /*
- * cifs_readpages requires the server to support a buffer large enough to
+ * cifs_readahead requires the server to support a buffer large enough to
  * contain the header plus one complete page of data.  Otherwise, we need
- * to leave cifs_readpages out of the address space operations.
+ * to leave cifs_readahead out of the address space operations.
  */
const struct address_space_operations cifs_addr_ops_smallbuf = {
 	.readpage = cifs_readpage,
-- 
2.24.1