From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v9 73/96] mm/lru: Add folio_lru and folio_is_file_lru
Date: Wed, 5 May 2021 16:06:05 +0100
Message-Id: <20210505150628.111735-74-willy@infradead.org>
In-Reply-To: <20210505150628.111735-1-willy@infradead.org>
References: <20210505150628.111735-1-willy@infradead.org>
MIME-Version: 1.0
Convert page_lru() to call folio_lru_list() and convert
page_is_file_lru() to call folio_is_file_lru().  All pages on the LRUs
are folios (because tail pages use the space for the LRU list as
compound_head), so all callers can be converted.

Saves 637 bytes of kernel text; no functions grow.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm_inline.h | 44 ++++++++++++++++++++++++---------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..c03b12ea0b7b 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -6,22 +6,27 @@
 #include
 
 /**
- * page_is_file_lru - should the page be on a file LRU or anon LRU?
- * @page: the page to test
+ * folio_is_file_lru - should the folio be on a file LRU or anon LRU?
+ * @folio: the folio to test
  *
- * Returns 1 if @page is a regular filesystem backed page cache page or a lazily
- * freed anonymous page (e.g. via MADV_FREE).  Returns 0 if @page is a normal
- * anonymous page, a tmpfs page or otherwise ram or swap backed page.  Used by
- * functions that manipulate the LRU lists, to sort a page onto the right LRU
- * list.
+ * Returns 1 if @folio is a regular filesystem backed page cache folio
+ * or a lazily freed anonymous folio (e.g. via MADV_FREE).  Returns 0 if
+ * @folio is a normal anonymous folio, a tmpfs folio or otherwise ram or
+ * swap backed folio.  Used by functions that manipulate the LRU lists,
+ * to sort a folio onto the right LRU list.
  *
  * We would like to get this info without a page flag, but the state
- * needs to survive until the page is last deleted from the LRU, which
+ * needs to survive until the folio is last deleted from the LRU, which
  * could be as far down as __page_cache_release.
  */
+static inline int folio_is_file_lru(struct folio *folio)
+{
+	return !folio_swapbacked(folio);
+}
+
 static inline int page_is_file_lru(struct page *page)
 {
-	return !PageSwapBacked(page);
+	return folio_is_file_lru(page_folio(page));
 }
 
 static __always_inline void update_lru_size(struct lruvec *lruvec,
@@ -57,28 +62,33 @@ static __always_inline void __clear_page_lru_flags(struct page *page)
 }
 
 /**
- * page_lru - which LRU list should a page be on?
- * @page: the page to test
+ * folio_lru_list - which LRU list should a folio be on?
+ * @folio: the folio to test
  *
- * Returns the LRU list a page should be on, as an index
+ * Returns the LRU list a folio should be on, as an index
  * into the array of LRU lists.
  */
-static __always_inline enum lru_list page_lru(struct page *page)
+static __always_inline enum lru_list folio_lru_list(struct folio *folio)
 {
 	enum lru_list lru;
 
-	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
+	VM_BUG_ON_FOLIO(folio_active(folio) && folio_unevictable(folio), folio);
 
-	if (PageUnevictable(page))
+	if (folio_unevictable(folio))
 		return LRU_UNEVICTABLE;
 
-	lru = page_is_file_lru(page) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
-	if (PageActive(page))
+	lru = folio_is_file_lru(folio) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
+	if (folio_active(folio))
 		lru += LRU_ACTIVE;
 
 	return lru;
 }
 
+static __always_inline enum lru_list page_lru(struct page *page)
+{
+	return folio_lru_list(page_folio(page));
+}
+
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
-- 
2.30.2