From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Wilcox
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, Christoph Hellwig, Chris Mason
Subject: [PATCH v2 8/9] mm: Remove add_to_page_cache_locked
Date: Tue, 14 Jan 2020 18:38:42 -0800
Message-Id: <20200115023843.31325-9-willy@infradead.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200115023843.31325-1-willy@infradead.org>
References: <20200115023843.31325-1-willy@infradead.org>
MIME-Version: 1.0

From: "Matthew Wilcox (Oracle)"

The only remaining caller is add_to_page_cache(), and the only
caller of that is hugetlbfs, so move add_to_page_cache() into filemap.c
and call __add_to_page_cache_locked() directly.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 20 ++------------------
 mm/filemap.c            | 23 ++++++++---------------
 2 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0821f584c43c..75075065dd0b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -604,8 +604,8 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 	return 0;
 }
 
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+		pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
@@ -614,22 +614,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
-/*
- * Like add_to_page_cache_locked, but used to add newly allocated pages:
- * the page is new, so we can just run __SetPageLocked() against it.
- */
-static inline int add_to_page_cache(struct page *page,
-		struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask)
-{
-	int error;
-
-	__SetPageLocked(page);
-	error = add_to_page_cache_locked(page, mapping, offset, gfp_mask);
-	if (unlikely(error))
-		__ClearPageLocked(page);
-	return error;
-}
-
 /*
  * Only call this from a ->readahead implementation.
  */
diff --git a/mm/filemap.c b/mm/filemap.c
index bf6aa30be58d..fb87f5fa75e6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -913,25 +913,18 @@ static int __add_to_page_cache_locked(struct page *page,
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 
-/**
- * add_to_page_cache_locked - add a locked page to the pagecache
- * @page: page to add
- * @mapping: the page's address_space
- * @offset: page index
- * @gfp_mask: page allocation mode
- *
- * This function is used to add a page to the pagecache. It must be locked.
- * This function does not add the page to the LRU. The caller must do that.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
+int add_to_page_cache(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
+	int err;
+
+	__SetPageLocked(page);
+	err = __add_to_page_cache_locked(page, mapping, offset,
 					  gfp_mask, NULL);
+	if (unlikely(err))
+		__ClearPageLocked(page);
+	return err;
 }
-EXPORT_SYMBOL(add_to_page_cache_locked);
 
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
-- 
2.24.1