From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 26/26] mm/migrate: Convert move_to_new_page() into move_to_new_folio()
Date: Wed, 4 May 2022 19:28:57 +0100
Message-Id: <20220504182857.4013401-27-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

Pass in the folios that we already have in each caller.  Saves a lot
of calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/migrate.c | 58 +++++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6c31ee1e1c9b..b664673d5833 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -836,21 +836,21 @@ static int fallback_migrate_page(struct address_space *mapping,
  * < 0 - error code
  * MIGRATEPAGE_SUCCESS - success
  */
-static int move_to_new_page(struct page *newpage, struct page *page,
+static int move_to_new_folio(struct folio *dst, struct folio *src,
 				enum migrate_mode mode)
 {
 	struct address_space *mapping;
 	int rc = -EAGAIN;
-	bool is_lru = !__PageMovable(page);
+	bool is_lru = !__PageMovable(&src->page);
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
+	VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
+	VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
 
-	mapping = page_mapping(page);
+	mapping = folio_mapping(src);
 
 	if (likely(is_lru)) {
 		if (!mapping)
-			rc = migrate_page(mapping, newpage, page, mode);
+			rc = migrate_page(mapping, &dst->page, &src->page, mode);
 		else if (mapping->a_ops->migratepage)
 			/*
 			 * Most pages have a mapping and most filesystems
@@ -859,54 +859,54 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 			 * migratepage callback. This is the most common path
 			 * for page migration.
 			 */
-			rc = mapping->a_ops->migratepage(mapping, newpage,
-							page, mode);
+			rc = mapping->a_ops->migratepage(mapping, &dst->page,
+							&src->page, mode);
 		else
-			rc = fallback_migrate_page(mapping, newpage,
-							page, mode);
+			rc = fallback_migrate_page(mapping, &dst->page,
+							&src->page, mode);
 	} else {
 		/*
 		 * In case of non-lru page, it could be released after
 		 * isolation step. In that case, we shouldn't try migration.
 		 */
-		VM_BUG_ON_PAGE(!PageIsolated(page), page);
-		if (!PageMovable(page)) {
+		VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
+		if (!folio_test_movable(src)) {
 			rc = MIGRATEPAGE_SUCCESS;
-			ClearPageIsolated(page);
+			folio_clear_isolated(src);
 			goto out;
 		}
 
-		rc = mapping->a_ops->migratepage(mapping, newpage,
-						page, mode);
+		rc = mapping->a_ops->migratepage(mapping, &dst->page,
+						&src->page, mode);
 		WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
-				!PageIsolated(page));
+				!folio_test_isolated(src));
 	}
 
 	/*
-	 * When successful, old pagecache page->mapping must be cleared before
-	 * page is freed; but stats require that PageAnon be left as PageAnon.
+	 * When successful, old pagecache src->mapping must be cleared before
+	 * src is freed; but stats require that PageAnon be left as PageAnon.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		if (__PageMovable(page)) {
-			VM_BUG_ON_PAGE(!PageIsolated(page), page);
+		if (__PageMovable(&src->page)) {
+			VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
 			/*
 			 * We clear PG_movable under page_lock so any compactor
 			 * cannot try to migrate this page.
 			 */
-			ClearPageIsolated(page);
+			folio_clear_isolated(src);
 		}
 
 		/*
-		 * Anonymous and movable page->mapping will be cleared by
+		 * Anonymous and movable src->mapping will be cleared by
 		 * free_pages_prepare so don't reset it here for keeping
 		 * the type to work PageAnon, for example.
 		 */
-		if (!PageMappingFlags(page))
-			page->mapping = NULL;
+		if (!folio_mapping_flags(src))
+			src->mapping = NULL;
 
-		if (likely(!is_zone_device_page(newpage)))
-			flush_dcache_folio(page_folio(newpage));
+		if (likely(!folio_is_zone_device(dst)))
+			flush_dcache_folio(dst);
 	}
 out:
 	return rc;
@@ -994,7 +994,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		goto out_unlock;
 
 	if (unlikely(!is_lru)) {
-		rc = move_to_new_page(newpage, page, mode);
+		rc = move_to_new_folio(dst, folio, mode);
 		goto out_unlock_both;
 	}
 
@@ -1025,7 +1025,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	}
 
 	if (!page_mapped(page))
-		rc = move_to_new_page(newpage, page, mode);
+		rc = move_to_new_folio(dst, folio, mode);
 
 	/*
 	 * When successful, push newpage to LRU immediately: so that if it
@@ -1256,7 +1256,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	}
 
 	if (!page_mapped(hpage))
-		rc = move_to_new_page(new_hpage, hpage, mode);
+		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
 		remove_migration_ptes(src,
-- 
2.34.1