linux-mm.kvack.org archive mirror
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, Hugh Dickins <hughd@google.com>
Subject: [PATCH 05/22] mm: Convert page_remove_rmap() to use a folio internally
Date: Sat, 31 Dec 2022 21:45:53 +0000
Message-ID: <20221231214610.2800682-6-willy@infradead.org> (raw)
In-Reply-To: <20221231214610.2800682-1-willy@infradead.org>

The API for page_remove_rmap() needs to be page-based, because we can
remove mappings of pages individually.  But inside the function, we want
to call compound_head() only once and then use the folio APIs instead of
the page APIs, each of which calls compound_head().
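The pattern can be illustrated with a small userspace sketch (this is NOT kernel code; the structs and helper names below are simplified stand-ins that only mimic the shape of the real APIs):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the conversion: resolve the head once with
 * page_folio(), then use folio accessors, instead of page APIs
 * that each repeat the head lookup internally.
 */
struct folio {
	int nr_pages;		/* pages in this (possibly compound) unit */
	int head_lookups;	/* counts how often the head was resolved */
};

struct page {
	struct folio *head;	/* stand-in for compound_head() resolution */
};

static struct folio *page_folio(struct page *page)
{
	page->head->head_lookups++;	/* model the cost of the lookup */
	return page->head;
}

/* Old style: every page_*() helper resolves the head again. */
static int page_is_large(struct page *page)
{
	return page_folio(page)->nr_pages > 1;
}

static int page_count_pages(struct page *page)
{
	return page_folio(page)->nr_pages;
}

/* New style: resolve once, then use the folio directly. */
static int folio_style_count(struct page *page)
{
	struct folio *folio = page_folio(page);	/* one lookup */

	return folio->nr_pages > 1 ? folio->nr_pages : 1;
}
```

Calling two page-style helpers costs two head lookups; the folio-style
function pays for the lookup once no matter how many accessors follow.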

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/rmap.c | 51 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 23 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 17984eb9f990..7d51ed49ee9f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1378,42 +1378,42 @@ void page_add_file_rmap(struct page *page,
  *
  * The caller needs to hold the pte lock.
  */
-void page_remove_rmap(struct page *page,
-	struct vm_area_struct *vma, bool compound)
+void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
 {
-	atomic_t *mapped;
+	struct folio *folio = page_folio(page);
+	atomic_t *mapped = &folio->_nr_pages_mapped;
 	int nr = 0, nr_pmdmapped = 0;
 	bool last;
+	enum node_stat_item idx;
 
 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
 
 	/* Hugetlb pages are not counted in NR_*MAPPED */
-	if (unlikely(PageHuge(page))) {
+	if (unlikely(folio_test_hugetlb(folio))) {
 		/* hugetlb pages are always mapped with pmds */
-		atomic_dec(compound_mapcount_ptr(page));
+		atomic_dec(&folio->_entire_mapcount);
 		return;
 	}
 
-	lock_page_memcg(page);
+	folio_memcg_lock(folio);
 
 	/* Is page being unmapped by PTE? Is this its last map to be removed? */
 	if (likely(!compound)) {
 		last = atomic_add_negative(-1, &page->_mapcount);
 		nr = last;
-		if (last && PageCompound(page)) {
-			mapped = subpages_mapcount_ptr(compound_head(page));
+		if (last && folio_test_large(folio)) {
 			nr = atomic_dec_return_relaxed(mapped);
 			nr = (nr < COMPOUND_MAPPED);
 		}
-	} else if (PageTransHuge(page)) {
+	} else if (folio_test_large(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
 
-		last = atomic_add_negative(-1, compound_mapcount_ptr(page));
+		last = atomic_add_negative(-1, &folio->_entire_mapcount);
 		if (last) {
-			mapped = subpages_mapcount_ptr(page);
 			nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped);
 			if (likely(nr < COMPOUND_MAPPED)) {
-				nr_pmdmapped = thp_nr_pages(page);
+				nr_pmdmapped = folio_nr_pages(folio);
 				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of another remove and an add? */
 				if (unlikely(nr < 0))
@@ -1426,21 +1426,26 @@ void page_remove_rmap(struct page *page,
 	}
 
 	if (nr_pmdmapped) {
-		__mod_lruvec_page_state(page, PageAnon(page) ? NR_ANON_THPS :
-				(PageSwapBacked(page) ? NR_SHMEM_PMDMAPPED :
-				NR_FILE_PMDMAPPED), -nr_pmdmapped);
+		if (folio_test_anon(folio))
+			idx = NR_ANON_THPS;
+		else if (folio_test_swapbacked(folio))
+			idx = NR_SHMEM_PMDMAPPED;
+		else
+			idx = NR_FILE_PMDMAPPED;
+		__lruvec_stat_mod_folio(folio, idx, -nr_pmdmapped);
 	}
 	if (nr) {
-		__mod_lruvec_page_state(page, PageAnon(page) ? NR_ANON_MAPPED :
-				NR_FILE_MAPPED, -nr);
+		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+		__lruvec_stat_mod_folio(folio, idx, -nr);
+
 		/*
-		 * Queue anon THP for deferred split if at least one small
-		 * page of the compound page is unmapped, but at least one
-		 * small page is still mapped.
+		 * Queue anon THP for deferred split if at least one
+		 * page of the folio is unmapped, but at least one
+		 * page is still mapped.
 		 */
-		if (PageTransCompound(page) && PageAnon(page))
+		if (folio_test_large(folio) && folio_test_anon(folio))
 			if (!compound || nr < nr_pmdmapped)
-				deferred_split_huge_page(compound_head(page));
+				deferred_split_huge_page(&folio->page);
 	}
 
 	/*
@@ -1451,7 +1456,7 @@ void page_remove_rmap(struct page *page,
 	 * and remember that it's only reliable while mapped.
 	 */
 
-	unlock_page_memcg(page);
+	folio_memcg_unlock(folio);
 
 	munlock_vma_page(page, vma, compound);
 }
-- 
2.35.1
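For readers following the hunks above outside the kernel tree, the counter scheme the patch operates on can be sketched as a single-threaded toy model. This is NOT kernel code: the kernel uses atomics, and the struct below is a stand-in; the COMPOUND_MAPPED value matches the upstream definition at the time of the patch, and _entire_mapcount is stored biased so that -1 means "no PMD mapping".

```c
#include <assert.h>

/* Illustrative constants mirroring the upstream definitions. */
#define COMPOUND_MAPPED		0x800000
#define FOLIO_PAGES_MAPPED	(COMPOUND_MAPPED - 1)

struct toy_folio {
	int entire_mapcount;	/* PMD mappings, biased: -1 == unmapped */
	int nr_pages_mapped;	/* PTE-mapped pages, plus COMPOUND_MAPPED
				 * added once per PMD mapping */
	int nr_pages;
};

/*
 * Mirrors the "compound" branch of page_remove_rmap(): returns nr,
 * the number of pages whose last mapping just went away, and stores
 * the PMD-unmapped page count in *nr_pmdmapped.
 */
static int toy_remove_pmd_rmap(struct toy_folio *f, int *nr_pmdmapped)
{
	int nr = 0;
	/* models atomic_add_negative(-1, &folio->_entire_mapcount) */
	int last = (--f->entire_mapcount < 0);

	*nr_pmdmapped = 0;
	if (last) {
		f->nr_pages_mapped -= COMPOUND_MAPPED;
		nr = f->nr_pages_mapped;
		if (nr < COMPOUND_MAPPED) {
			*nr_pmdmapped = f->nr_pages;
			/* pages still PTE-mapped remain accounted */
			nr = *nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
			if (nr < 0)	/* raced remove vs. add */
				nr = 0;
		} else {
			nr = 0;
		}
	}
	return nr;
}
```

With one PMD mapping and no PTE mappings, removing the PMD mapping
unmaps all folio_nr_pages() pages; if some pages are still PTE-mapped,
they are subtracted from the total, which is why the real function can
queue the folio for deferred split when nr < nr_pmdmapped.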



Thread overview: 25+ messages
2022-12-31 21:45 [PATCH 00/22] Get rid of first tail page fields Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 01/22] mm: Remove folio_pincount_ptr() and head_compound_pincount() Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 02/22] mm: Convert head_subpages_mapcount() into folio_nr_pages_mapped() Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 03/22] doc: Clarify refcount section by referring to folios & pages Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 04/22] mm: Convert total_compound_mapcount() to folio_total_mapcount() Matthew Wilcox (Oracle)
2022-12-31 21:45 ` Matthew Wilcox (Oracle) [this message]
2022-12-31 21:45 ` [PATCH 06/22] mm: Convert page_add_anon_rmap() to use a folio internally Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 07/22] mm: Convert page_add_file_rmap() " Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 08/22] mm: Add folio_add_new_anon_rmap() Matthew Wilcox (Oracle)
2022-12-31 23:29   ` kernel test robot
2022-12-31 23:59   ` kernel test robot
2022-12-31 21:45 ` [PATCH 09/22] page_alloc: Use folio fields directly Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 10/22] mm: Use a folio in hugepage_add_anon_rmap() and hugepage_add_new_anon_rmap() Matthew Wilcox (Oracle)
2022-12-31 21:45 ` [PATCH 11/22] mm: Use entire_mapcount in __page_dup_rmap() Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 12/22] mm/debug: Remove call to head_compound_mapcount() Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 13/22] hugetlb: Remove uses of folio_mapcount_ptr Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 14/22] mm: Convert page_mapcount() to use folio_entire_mapcount() Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 15/22] mm: Remove head_compound_mapcount() and _ptr functions Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 16/22] mm: Reimplement compound_order() Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 17/22] mm: Reimplement compound_nr() Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 18/22] mm: Convert set_compound_page_dtor() and set_compound_order() to folios Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 19/22] mm: Convert is_transparent_hugepage() to use a folio Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 20/22] mm: Convert destroy_large_folio() to use folio_dtor Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 21/22] hugetlb: Remove uses of compound_dtor and compound_nr Matthew Wilcox (Oracle)
2022-12-31 21:46 ` [PATCH 22/22] mm: Remove 'First tail page' members from struct page Matthew Wilcox (Oracle)
