From: Matthew Wilcox
To: linux-mm@kvack.org
Subject: [PATCH 5/4] mm: Return head pages from find_get_entry
Date: Sat, 11 Jul 2020 04:31:56 +0100
Message-ID: <20200711033156.GS12769@casper.infradead.org>
References: <20200710202642.21794-1-willy@infradead.org>
In-Reply-To: <20200710202642.21794-1-willy@infradead.org>

This was the original destination of the prior patch series.
I have a few places in the THP patch set which add calls to
find_get_entry(), and it annoyed me that I was carefully calling
find_subpage() in find_get_entry() only to immediately call thp_head()
to get back to the head page.  I'm not sure it's worth applying this as
part of the patch series, which is why I left it out earlier.

There are going to be some other functions which return only head
pages.  I currently have a find_get_heads_range_tag() in my tree, which
is probably going to become find_get_entries_range_tag().

--- 8< ---

mm: Return head pages from find_get_entry

All the callers of find_get_entry() call compound_head() to get back to
the head page.  They still do, because compound_head() calls are hidden
in such functions as put_page() and lock_page(), but it lets us get rid
of a few explicit calls.

diff --git a/mm/filemap.c b/mm/filemap.c
index f0ae9a6308cb..7e0a7d02e7aa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1487,9 +1487,9 @@ EXPORT_SYMBOL(page_cache_prev_miss);
 /**
  * find_get_entry - find and get a page cache entry
  * @mapping: the address_space to search
- * @offset: the page cache index
+ * @index: the page cache index
  *
- * Looks up the page cache slot at @mapping & @offset. If there is a
+ * Looks up the page cache slot at @mapping & @index. If there is a
  * page cache page, it is returned with an increased refcount.
  *
  * If the slot holds a shadow entry of a previously evicted page, or a
@@ -1497,9 +1497,9 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  *
  * Return: the found page or shadow entry, %NULL if nothing is found.
  */
-struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
+struct page *find_get_entry(struct address_space *mapping, pgoff_t index)
 {
-	XA_STATE(xas, &mapping->i_pages, offset);
+	XA_STATE(xas, &mapping->i_pages, index);
 	struct page *page;
 
 	rcu_read_lock();
@@ -1527,7 +1527,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 		put_page(page);
 		goto repeat;
 	}
-	page = find_subpage(page, offset);
 
 out:
 	rcu_read_unlock();
@@ -1537,9 +1536,9 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 /**
  * find_lock_entry - locate, pin and lock a page cache entry
  * @mapping: the address_space to search
- * @offset: the page cache index
+ * @index: the page cache index
  *
- * Looks up the page cache slot at @mapping & @offset. If there is a
+ * Looks up the page cache slot at @mapping & @index. If there is a
  * page cache page, it is returned locked and with an increased
  * refcount.
  *
@@ -1550,21 +1549,22 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
  *
  * Return: the found page or shadow entry, %NULL if nothing is found.
  */
-struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
+struct page *find_lock_entry(struct address_space *mapping, pgoff_t index)
 {
 	struct page *page;
 
 repeat:
-	page = find_get_entry(mapping, offset);
+	page = find_get_entry(mapping, index);
 	if (page && !xa_is_value(page)) {
 		lock_page(page);
 		/* Has the page been truncated? */
-		if (unlikely(page_mapping(page) != mapping)) {
+		if (unlikely(page->mapping != mapping)) {
 			unlock_page(page);
 			put_page(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
+		page = find_subpage(page, index);
+		VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page);
 	}
 	return page;
 }
@@ -1620,12 +1620,13 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 		}
 
 		/* Has the page been truncated? */
-		if (unlikely(compound_head(page)->mapping != mapping)) {
+		if (unlikely(page->mapping != mapping)) {
 			unlock_page(page);
 			put_page(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page->index != index, page);
+		VM_BUG_ON_PAGE(page->index !=
+				(index & ~(thp_nr_pages(page) - 1)), page);
 	}
 
 	if (fgp_flags & FGP_ACCESSED)
@@ -1666,7 +1667,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 		unlock_page(page);
 	}
 
-	return page;
+	return find_subpage(page, index);
 }
 EXPORT_SYMBOL(pagecache_get_page);
 
diff --git a/mm/mincore.c b/mm/mincore.c
index abc24ca6f0f7..cda857110d44 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -58,8 +58,10 @@ struct page *find_get_swap_page(struct address_space *mapping, pgoff_t index)
 	struct swap_info_struct *si;
 	struct page *page = find_get_entry(mapping, index);
 
-	if (!xa_is_value(page))
+	if (!page)
 		return page;
+	if (!xa_is_value(page))
+		return find_subpage(page, index);
 
 	if (!IS_ENABLED(CONFIG_SWAP) || !shmem_mapping(mapping))
 		return NULL;