From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 33/75] mm/vmscan: Convert __remove_mapping() to take a folio
Date: Fri, 4 Feb 2022 19:58:10 +0000
Message-Id: <20220204195852.1751729-34-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

This removes a few hidden calls to compound_head().
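For anyone wondering where the hidden calls live: each PageFoo() test
may be handed a tail page and so must resolve compound_head() before
touching the flags word, whereas a struct folio is by definition never
a tail page. Below is a simplified userspace model of the difference
(the real kernel definitions are macro-generated in
include/linux/page-flags.h and tag tail pages in bit 0 of a word
rather than using a plain pointer; this is only an illustration):

#include <stdbool.h>
#include <stdio.h>

struct page {
	unsigned long flags;	/* bit 0 stands in for PG_locked */
	struct page *head;	/* NULL if this is a head page */
};

/* A folio is always a head page, so no indirection is ever needed. */
struct folio {
	struct page page;
};

/* Tail pages must be resolved to their head before testing flags. */
static struct page *compound_head(struct page *page)
{
	return page->head ? page->head : page;
}

/* The page API hides a compound_head() call in every flag test. */
static bool PageLocked(struct page *page)
{
	return compound_head(page)->flags & 1UL;
}

/* The folio API reads the flags word directly. */
static bool folio_test_locked(struct folio *folio)
{
	return folio->page.flags & 1UL;
}

int main(void)
{
	struct folio folio = { .page = { .flags = 1UL } };
	struct page tail = { .head = &folio.page };

	printf("PageLocked(tail) = %d\n", PageLocked(&tail));
	printf("folio_test_locked(&folio) = %d\n", folio_test_locked(&folio));
	return 0;
}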
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 44 +++++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9f11960b1db8..15cbfae0d8ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1237,17 +1237,16 @@ static pageout_t pageout(struct page *page, struct address_space *mapping)
  * Same as remove_mapping, but if the page is removed from the mapping, it
  * gets returned with a refcount of 0.
  */
-static int __remove_mapping(struct address_space *mapping, struct page *page,
+static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 			    bool reclaimed, struct mem_cgroup *target_memcg)
 {
-	struct folio *folio = page_folio(page);
 	int refcount;
 	void *shadow = NULL;
 
-	BUG_ON(!PageLocked(page));
-	BUG_ON(mapping != page_mapping(page));
+	BUG_ON(!folio_test_locked(folio));
+	BUG_ON(mapping != folio_mapping(folio));
 
-	if (!PageSwapCache(page))
+	if (!folio_test_swapcache(folio))
 		spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
 	/*
@@ -1275,23 +1274,23 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	 * Note that if SetPageDirty is always performed via set_page_dirty,
 	 * and thus under the i_pages lock, then this ordering is not required.
 	 */
-	refcount = 1 + compound_nr(page);
-	if (!page_ref_freeze(page, refcount))
+	refcount = 1 + folio_nr_pages(folio);
+	if (!folio_ref_freeze(folio, refcount))
 		goto cannot_free;
 	/* note: atomic_cmpxchg in page_ref_freeze provides the smp_rmb */
-	if (unlikely(PageDirty(page))) {
-		page_ref_unfreeze(page, refcount);
+	if (unlikely(folio_test_dirty(folio))) {
+		folio_ref_unfreeze(folio, refcount);
 		goto cannot_free;
 	}
 
-	if (PageSwapCache(page)) {
-		swp_entry_t swap = { .val = page_private(page) };
+	if (folio_test_swapcache(folio)) {
+		swp_entry_t swap = folio_swap_entry(folio);
 		mem_cgroup_swapout(folio, swap);
 		if (reclaimed && !mapping_exiting(mapping))
 			shadow = workingset_eviction(folio, target_memcg);
-		__delete_from_swap_cache(page, swap, shadow);
+		__delete_from_swap_cache(&folio->page, swap, shadow);
 		xa_unlock_irq(&mapping->i_pages);
-		put_swap_page(page, swap);
+		put_swap_page(&folio->page, swap);
 	} else {
 		void (*freepage)(struct page *);
 
@@ -1312,7 +1311,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 		 * exceptional entries and shadow exceptional entries in the
 		 * same address_space.
 		 */
-		if (reclaimed && page_is_file_lru(page) &&
+		if (reclaimed && folio_is_file_lru(folio) &&
 		    !mapping_exiting(mapping) && !dax_mapping(mapping))
 			shadow = workingset_eviction(folio, target_memcg);
 		__filemap_remove_folio(folio, shadow);
@@ -1322,14 +1321,14 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 		spin_unlock(&mapping->host->i_lock);
 
 		if (freepage != NULL)
-			freepage(page);
+			freepage(&folio->page);
 	}
 
 	return 1;
 
 cannot_free:
 	xa_unlock_irq(&mapping->i_pages);
-	if (!PageSwapCache(page))
+	if (!folio_test_swapcache(folio))
 		spin_unlock(&mapping->host->i_lock);
 	return 0;
 }
@@ -1342,13 +1341,14 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
  */
 int remove_mapping(struct address_space *mapping, struct page *page)
 {
-	if (__remove_mapping(mapping, page, false, NULL)) {
+	struct folio *folio = page_folio(page);
+	if (__remove_mapping(mapping, folio, false, NULL)) {
 		/*
 		 * Unfreezing the refcount with 1 rather than 2 effectively
 		 * drops the pagecache ref for us without requiring another
 		 * atomic operation.
 		 */
-		page_ref_unfreeze(page, 1);
+		folio_ref_unfreeze(folio, 1);
 		return 1;
 	}
 	return 0;
@@ -1530,14 +1530,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 	while (!list_empty(page_list)) {
 		struct address_space *mapping;
 		struct page *page;
+		struct folio *folio;
 		enum page_references references = PAGEREF_RECLAIM;
 		bool dirty, writeback, may_enter_fs;
 		unsigned int nr_pages;
 
 		cond_resched();
 
-		page = lru_to_page(page_list);
-		list_del(&page->lru);
+		folio = lru_to_folio(page_list);
+		list_del(&folio->lru);
+		page = &folio->page;
 
 		if (!trylock_page(page))
 			goto keep;
@@ -1890,7 +1892,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			 */
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
-		} else if (!mapping || !__remove_mapping(mapping, page, true,
+		} else if (!mapping || !__remove_mapping(mapping, folio, true,
 							 sc->target_mem_cgroup))
 			goto keep_locked;
 
-- 
2.34.1
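
A note on the freeze pattern the patch carries over unchanged:
__remove_mapping() may only detach the folio if it can atomically take
every outstanding reference at once, that is, the caller's single
reference plus one per subpage held by the page/swap cache, hence
refcount = 1 + folio_nr_pages(folio). The following self-contained
userspace sketch models that freeze with C11 atomics (the mock_* names
are illustrative only, not the kernel API, which lives in
include/linux/page_ref.h):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct mock_folio {
	atomic_int refcount;
	int nr_pages;
};

/* Succeeds only if we hold every expected reference: cmpxchg to 0. */
static bool mock_folio_ref_freeze(struct mock_folio *folio, int count)
{
	int expected = count;
	return atomic_compare_exchange_strong(&folio->refcount, &expected, 0);
}

/* Restore the reference count after a failed removal attempt. */
static void mock_folio_ref_unfreeze(struct mock_folio *folio, int count)
{
	atomic_store(&folio->refcount, count);
}

int main(void)
{
	/* A 4-page folio: the cache holds 4 references and we hold 1. */
	struct mock_folio folio = { .nr_pages = 4 };
	atomic_init(&folio.refcount, 5);

	int refcount = 1 + folio.nr_pages;
	if (mock_folio_ref_freeze(&folio, refcount))
		printf("frozen: safe to remove from the mapping\n");

	/* If someone else grabs a reference, the freeze must fail. */
	mock_folio_ref_unfreeze(&folio, refcount + 1);
	if (!mock_folio_ref_freeze(&folio, refcount))
		printf("raced: extra reference held, cannot free\n");
	return 0;
}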