From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 03/22] mm/swap: Make __pagevec_lru_add static
Date: Fri, 17 Jun 2022 18:50:01 +0100
Message-Id: <20220617175020.717127-4-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220617175020.717127-1-willy@infradead.org>
References: <20220617175020.717127-1-willy@infradead.org>

__pagevec_lru_add has no callers outside swap.c, so make it static,
and move it to a more logical position in the file.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagevec.h |   1 -
 mm/swap.c               | 126 ++++++++++++++++++++--------------------
 2 files changed, 63 insertions(+), 64 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 6649154a2115..215eb6c3bdc9 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -26,7 +26,6 @@ struct pagevec {
 };
 
 void __pagevec_release(struct pagevec *pvec);
-void __pagevec_lru_add(struct pagevec *pvec);
 unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
 		struct address_space *mapping, pgoff_t *index, pgoff_t end,
 		xa_mark_t tag);
diff --git a/mm/swap.c b/mm/swap.c
index a983a1b93e73..6b015096ef4a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -228,6 +228,69 @@ static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
 
 typedef void (*move_fn_t)(struct lruvec *lruvec, struct folio *folio);
 
+static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
+{
+	int was_unevictable = folio_test_clear_unevictable(folio);
+	long nr_pages = folio_nr_pages(folio);
+
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+
+	folio_set_lru(folio);
+	/*
+	 * Is an smp_mb__after_atomic() still required here, before
+	 * folio_evictable() tests PageMlocked, to rule out the possibility
+	 * of stranding an evictable folio on an unevictable LRU? I think
+	 * not, because __munlock_page() only clears PageMlocked while the LRU
+	 * lock is held.
+	 *
+	 * (That is not true of __page_cache_release(), and not necessarily
+	 * true of release_pages(): but those only clear PageMlocked after
+	 * put_page_testzero() has excluded any other users of the page.)
+	 */
+	if (folio_evictable(folio)) {
+		if (was_unevictable)
+			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
+	} else {
+		folio_clear_active(folio);
+		folio_set_unevictable(folio);
+		/*
+		 * folio->mlock_count = !!folio_test_mlocked(folio)?
+		 * But that leaves __mlock_page() in doubt whether another
+		 * actor has already counted the mlock or not. Err on the
+		 * safe side, underestimate, let page reclaim fix it, rather
+		 * than leaving a page on the unevictable LRU indefinitely.
+		 */
+		folio->mlock_count = 0;
+		if (!was_unevictable)
+			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
+	}
+
+	lruvec_add_folio(lruvec, folio);
+	trace_mm_lru_insertion(folio);
+}
+
+/*
+ * Add the passed pages to the LRU, then drop the caller's refcount
+ * on them. Reinitialises the caller's pagevec.
+ */
+static void __pagevec_lru_add(struct pagevec *pvec)
+{
+	int i;
+	struct lruvec *lruvec = NULL;
+	unsigned long flags = 0;
+
+	for (i = 0; i < pagevec_count(pvec); i++) {
+		struct folio *folio = page_folio(pvec->pages[i]);
+
+		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
+		__pagevec_lru_add_fn(folio, lruvec);
+	}
+	if (lruvec)
+		unlock_page_lruvec_irqrestore(lruvec, flags);
+	release_pages(pvec->pages, pvec->nr);
+	pagevec_reinit(pvec);
+}
+
 static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 {
 	int i;
@@ -1036,69 +1099,6 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
-{
-	int was_unevictable = folio_test_clear_unevictable(folio);
-	long nr_pages = folio_nr_pages(folio);
-
-	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
-
-	folio_set_lru(folio);
-	/*
-	 * Is an smp_mb__after_atomic() still required here, before
-	 * folio_evictable() tests PageMlocked, to rule out the possibility
-	 * of stranding an evictable folio on an unevictable LRU? I think
-	 * not, because __munlock_page() only clears PageMlocked while the LRU
-	 * lock is held.
-	 *
-	 * (That is not true of __page_cache_release(), and not necessarily
-	 * true of release_pages(): but those only clear PageMlocked after
-	 * put_page_testzero() has excluded any other users of the page.)
-	 */
-	if (folio_evictable(folio)) {
-		if (was_unevictable)
-			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
-	} else {
-		folio_clear_active(folio);
-		folio_set_unevictable(folio);
-		/*
-		 * folio->mlock_count = !!folio_test_mlocked(folio)?
-		 * But that leaves __mlock_page() in doubt whether another
-		 * actor has already counted the mlock or not. Err on the
-		 * safe side, underestimate, let page reclaim fix it, rather
-		 * than leaving a page on the unevictable LRU indefinitely.
-		 */
-		folio->mlock_count = 0;
-		if (!was_unevictable)
-			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
-	}
-
-	lruvec_add_folio(lruvec, folio);
-	trace_mm_lru_insertion(folio);
-}
-
-/*
- * Add the passed pages to the LRU, then drop the caller's refcount
- * on them. Reinitialises the caller's pagevec.
- */
-void __pagevec_lru_add(struct pagevec *pvec)
-{
-	int i;
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct folio *folio = page_folio(pvec->pages[i]);
-
-		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
-		__pagevec_lru_add_fn(folio, lruvec);
-	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
-}
-
 /**
  * folio_batch_remove_exceptionals() - Prune non-folios from a batch.
  * @fbatch: The batch to prune
-- 
2.35.1
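
A side note for readers outside mm/: the loop in __pagevec_lru_add is a
lock-amortization pattern. folio_lruvec_relock_irqsave() keeps the current
lruvec's lock held across consecutive folios and only drops and re-takes it
when the next folio belongs to a different lruvec. The following is a
minimal, hypothetical userspace sketch of that pattern in plain pthreads,
not kernel code: struct lru, struct item and batch_add() are invented names
for illustration only.

#include <pthread.h>
#include <stdio.h>

struct lru {
	pthread_mutex_t lock;
	int nr_items;
};

struct item {
	struct lru *lru;	/* list this item should be added to */
};

/* Add a batch of items to their lists, relocking only on list change. */
static void batch_add(struct item **batch, int nr)
{
	struct lru *locked = NULL;	/* lru whose lock we currently hold */
	int i;

	for (i = 0; i < nr; i++) {
		struct lru *lru = batch[i]->lru;

		/* Same trick as folio_lruvec_relock_irqsave(): keep the
		 * lock across a run of items on the same list. */
		if (lru != locked) {
			if (locked)
				pthread_mutex_unlock(&locked->lock);
			pthread_mutex_lock(&lru->lock);
			locked = lru;
		}
		lru->nr_items++;	/* the "add to LRU" step */
	}
	if (locked)
		pthread_mutex_unlock(&locked->lock);
}

int main(void)
{
	struct lru a = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct lru b = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct item items[4] = { {&a}, {&a}, {&b}, {&b} };
	struct item *batch[4] = { &items[0], &items[1], &items[2], &items[3] };

	batch_add(batch, 4);
	printf("a=%d b=%d\n", a.nr_items, b.nr_items);	/* a=2 b=2 */
	return 0;
}

For a batch whose items cluster by list (the common case when pages come
from the same memcg/node), this takes one lock round-trip per run of
same-list items rather than one per item, which is why the kernel loop
only relocks when the folio's lruvec changes.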