From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 02 Jun 2020 13:10:42 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, darrick.wong@oracle.com, dchinner@redhat.com,
 ebiggers@google.com, gaoxiang25@huawei.com, hch@lst.de, jaegeuk@kernel.org,
 jhubbard@nvidia.com, johannes.thumshirn@wdc.com, joseph.qi@linux.alibaba.com,
 junxiao.bi@oracle.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, mszeredi@redhat.com, torvalds@linux-foundation.org,
 william.kucharski@oracle.com, willy@infradead.org, xiyou.wangcong@gmail.com,
 yuchao0@huawei.com, ziy@nvidia.com
Subject: [patch 014/128] mm: move readahead prototypes from mm.h
Message-ID: <20200602201042.VA2uCl-L4%akpm@linux-foundation.org>
In-Reply-To: <20200602130930.8e8f10fa6f19e3766e70921f@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: mm: move readahead prototypes from mm.h

Patch series "Change readahead API", v11.

This series adds a readahead address_space operation to replace the
readpages operation.  The key difference is that pages are added to the
page cache as they are allocated (and then looked up by the filesystem)
instead of passing them on a list to the readpages operation and having
the filesystem add them to the page cache.  It's a net reduction in code
for each implementation, more efficient than walking a list, and solves
the direct-write vs buffered-read problem reported by yu kuai at
http://lkml.kernel.org/r/20200116063601.39201-1-yukuai3@huawei.com

The only unconverted filesystems are those which use fscache.  Their
conversion is pending Dave Howells' rewrite, which will make it
substantially easier.  This should be completed by the end of the year.

I want to thank the reviewers/testers; Dave Chinner, John Hubbard, Eric
Biggers, Johannes Thumshirn, Dave Sterba, Zi Yan, Christoph Hellwig and
Miklos Szeredi have done a marvellous job of providing constructive
criticism.

These patches pass an xfstests run on ext4, xfs & btrfs with no
regressions as far as I can tell (some of the tests seem a little flaky
before and remain flaky afterwards).

This patch (of 25):

The readahead code is part of the page cache, so it should be found in
pagemap.h rather than mm.h.  force_page_cache_readahead is only used
within mm, so move it to mm/internal.h instead.  Remove the parameter
names where they add no value, and rename the ones which were actively
misleading.
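To make the readpages/readahead difference described at the top of this
series concrete, the shapes of the two address_space operations are
sketched below.  This sketch is illustrative only and not part of this
patch; struct readahead_control and its accessors are introduced by
later patches in the series.

	/* Old operation: pages arrive on a list and the filesystem
	 * must add each one to the page cache itself. */
	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);

	/* New operation (added later in this series): the pages are
	 * already in the page cache when the filesystem is called. */
	void (*readahead)(struct readahead_control *rac);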
Link: http://lkml.kernel.org/r/20200414150233.24495-1-willy@infradead.org
Link: http://lkml.kernel.org/r/20200414150233.24495-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Gao Xiang <gaoxiang25@huawei.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 block/blk-core.c        |    1 +
 include/linux/mm.h      |   19 -------------------
 include/linux/pagemap.h |    8 ++++++++
 mm/fadvise.c            |    2 ++
 mm/internal.h           |    2 ++
 5 files changed, 13 insertions(+), 19 deletions(-)

--- a/block/blk-core.c~mm-move-readahead-prototypes-from-mmh
+++ a/block/blk-core.c
@@ -20,6 +20,7 @@
 #include <linux/blk-mq.h>
 #include <linux/highmem.h>
 #include <linux/mm.h>
+#include <linux/pagemap.h>
 #include <linux/kernel_stat.h>
 #include <linux/string.h>
 #include <linux/init.h>
--- a/include/linux/mm.h~mm-move-readahead-prototypes-from-mmh
+++ a/include/linux/mm.h
@@ -2608,25 +2608,6 @@ extern vm_fault_t filemap_page_mkwrite(s
 int __must_check write_one_page(struct page *page);
 void task_dirty_inc(struct task_struct *tsk);
 
-/* readahead.c */
-#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
-
-int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
-			pgoff_t offset, unsigned long nr_to_read);
-
-void page_cache_sync_readahead(struct address_space *mapping,
-			       struct file_ra_state *ra,
-			       struct file *filp,
-			       pgoff_t offset,
-			       unsigned long size);
-
-void page_cache_async_readahead(struct address_space *mapping,
-				struct file_ra_state *ra,
-				struct file *filp,
-				struct page *pg,
-				pgoff_t offset,
-				unsigned long size);
-
 extern unsigned long stack_guard_gap;
 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
 extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
--- a/include/linux/pagemap.h~mm-move-readahead-prototypes-from-mmh
+++ a/include/linux/pagemap.h
@@ -618,6 +618,14 @@ int replace_page_cache_page(struct page
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
+#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
+
+void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
+		struct file *, pgoff_t index, unsigned long req_count);
+void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
+		struct file *, struct page *, pgoff_t index,
+		unsigned long req_count);
+
 /*
  * Like add_to_page_cache_locked, but used to add newly allocated pages:
  * the page is new, so we can just run __SetPageLocked() against it.
--- a/mm/fadvise.c~mm-move-readahead-prototypes-from-mmh
+++ a/mm/fadvise.c
@@ -22,6 +22,8 @@
 
 #include <asm/unistd.h>
 
+#include "internal.h"
+
 /*
  * POSIX_FADV_WILLNEED could set PG_Referenced, and POSIX_FADV_NOREUSE could
  * deactivate the pages and clear PG_Referenced.
--- a/mm/internal.h~mm-move-readahead-prototypes-from-mmh
+++ a/mm/internal.h
@@ -49,6 +49,8 @@ void unmap_page_range(struct mmu_gather
 		unsigned long addr, unsigned long end,
 		struct zap_details *details);
 
+int force_page_cache_readahead(struct address_space *, struct file *,
+		pgoff_t index, unsigned long nr_to_read);
 extern unsigned int __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size);
_
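For callers, nothing changes beyond which header provides the
prototypes: linux/pagemap.h now declares what linux/mm.h used to.  A
minimal, hypothetical sketch of kicking off synchronous readahead from
a file's read path follows; example_readahead() is an invented name for
illustration, not a kernel function.

	#include <linux/fs.h>
	#include <linux/pagemap.h>	/* readahead prototypes now live here */

	/* Invented example: read nr_to_read pages starting at index into
	 * the page cache, in the style of generic_file_buffered_read(). */
	static void example_readahead(struct file *file, pgoff_t index,
			unsigned long nr_to_read)
	{
		page_cache_sync_readahead(file->f_mapping, &file->f_ra,
				file, index, nr_to_read);
	}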