From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>,
linux-mm@kvack.org, david@redhat.com, linux-s390@vger.kernel.org,
Matthew Wilcox <willy@infradead.org>
Subject: [PATCH v3 09/10] mm: convert mm_counter() to take a folio
Date: Thu, 11 Jan 2024 15:24:28 +0000
Message-ID: <20240111152429.3374566-10-willy@infradead.org>
In-Reply-To: <20240111152429.3374566-1-willy@infradead.org>
From: Kefeng Wang <wangkefeng.wang@huawei.com>

Now that all callers of mm_counter() have a folio, convert mm_counter()
to take a folio. This saves a call to compound_head() hidden inside
PageAnon().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
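[Review note, not part of the commit: the saving comes from PageAnon()
having to resolve the page's folio (a compound_head() lookup) before it
can test the anon bit, whereas folio_test_anon() tests the bit directly.
Below is a self-contained sketch of the idea using simplified stand-in
types, not the real definitions from include/linux/mm.h and
include/linux/page-flags.h:

#include <stdbool.h>

#define PAGE_MAPPING_ANON	0x1UL

enum { MM_FILEPAGES, MM_ANONPAGES, MM_SWAPENTS, MM_SHMEMPAGES };

struct folio { unsigned long mapping; };	/* low bit set: anon folio */
struct page { struct folio *head; };		/* stand-in for compound page data */

/* Model of folio_test_anon(): test the anon bit on the folio itself. */
static inline bool folio_test_anon(const struct folio *folio)
{
	return (folio->mapping & PAGE_MAPPING_ANON) != 0;
}

/* Model of page_folio(): the page -> folio lookup that PageAnon() hides. */
static inline struct folio *page_folio(const struct page *page)
{
	return page->head;
}

/* Old form: every call pays for the page -> folio lookup. */
static inline int mm_counter_old(const struct page *page)
{
	if (folio_test_anon(page_folio(page)))
		return MM_ANONPAGES;
	return MM_FILEPAGES;	/* mm_counter_file()'s shmem case elided */
}

/* New form: the caller already has the folio, so the lookup disappears. */
static inline int mm_counter(const struct folio *folio)
{
	if (folio_test_anon(folio))
		return MM_ANONPAGES;
	return MM_FILEPAGES;	/* mm_counter_file()'s shmem case elided */
}

Note that the real mm_counter() below still passes &folio->page to
mm_counter_file(); that helper loses its struct page argument in
patch 10/10 of this series.]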
 arch/s390/mm/pgtable.c |  2 +-
 include/linux/mm.h     |  6 +++---
 mm/memory.c            | 10 +++++-----
 mm/rmap.c              |  8 ++++----
 mm/userfaultfd.c       |  2 +-
 5 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 7e5dd4b17664..b71432b15d66 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -723,7 +723,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 	else if (is_migration_entry(entry)) {
 		struct folio *folio = pfn_swap_entry_folio(entry);
 
-		dec_mm_counter(mm, mm_counter(&folio->page));
+		dec_mm_counter(mm, mm_counter(folio));
 	}
 	free_swap_and_cache(entry);
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..22e597b36b38 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2603,11 +2603,11 @@ static inline int mm_counter_file(struct page *page)
 	return MM_FILEPAGES;
 }
 
-static inline int mm_counter(struct page *page)
+static inline int mm_counter(struct folio *folio)
 {
-	if (PageAnon(page))
+	if (folio_test_anon(folio))
 		return MM_ANONPAGES;
-	return mm_counter_file(page);
+	return mm_counter_file(&folio->page);
 }
 
 static inline unsigned long get_mm_rss(struct mm_struct *mm)
diff --git a/mm/memory.c b/mm/memory.c
index b73322ab9fd6..53ef7ae96440 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -808,7 +808,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	} else if (is_migration_entry(entry)) {
 		folio = pfn_swap_entry_folio(entry);
 
-		rss[mm_counter(&folio->page)]++;
+		rss[mm_counter(folio)]++;
 
 		if (!is_readable_migration_entry(entry) &&
 		    is_cow_mapping(vm_flags)) {
@@ -840,7 +840,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 * keep things as they are.
 		 */
 		folio_get(folio);
-		rss[mm_counter(page)]++;
+		rss[mm_counter(folio)]++;
 		/* Cannot fail as these pages cannot get pinned. */
 		folio_try_dup_anon_rmap_pte(folio, page, src_vma);
 
@@ -1476,7 +1476,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				if (pte_young(ptent) && likely(vma_has_recency(vma)))
 					folio_mark_accessed(folio);
 			}
-			rss[mm_counter(page)]--;
+			rss[mm_counter(folio)]--;
 			if (!delay_rmap) {
 				folio_remove_rmap_pte(folio, page, vma);
 				if (unlikely(page_mapcount(page) < 0))
@@ -1504,7 +1504,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			 * see zap_install_uffd_wp_if_needed().
 			 */
 			WARN_ON_ONCE(!vma_is_anonymous(vma));
-			rss[mm_counter(page)]--;
+			rss[mm_counter(folio)]--;
 			if (is_device_private_entry(entry))
 				folio_remove_rmap_pte(folio, page, vma);
 			folio_put(folio);
@@ -1519,7 +1519,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			folio = pfn_swap_entry_folio(entry);
 			if (!should_zap_folio(details, folio))
 				continue;
-			rss[mm_counter(&folio->page)]--;
+			rss[mm_counter(folio)]--;
 		} else if (pte_marker_entry_uffd_wp(entry)) {
 			/*
 			 * For anon: always drop the marker; for file: only
diff --git a/mm/rmap.c b/mm/rmap.c
index f5d43edad529..4648cf1d8178 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1780,7 +1780,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -1795,7 +1795,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter(folio));
 		} else if (folio_test_anon(folio)) {
 			swp_entry_t entry = page_swap_entry(subpage);
 			pte_t swp_pte;
@@ -2181,7 +2181,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -2196,7 +2196,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter(folio));
 		} else {
 			swp_entry_t entry;
 			pte_t swp_pte;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 216ab4c8621f..662ab304cca3 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -124,7 +124,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	 * Must happen after rmap, as mm_counter() checks mapping (via
 	 * PageAnon()), which is set by __page_set_anon_rmap().
 	 */
-	inc_mm_counter(dst_mm, mm_counter(page));
+	inc_mm_counter(dst_mm, mm_counter(folio));
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 
--
2.43.0
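[Caller-side view, continuing the stand-in types from the sketch above;
zap_one_rss() is hypothetical and only mirrors the pattern visible in
the zap_pte_range() hunks, where the folio has already been resolved
from the page once, so passing it to mm_counter() costs nothing extra:

/* Hypothetical caller: resolve the folio once, then reuse it. */
static inline void zap_one_rss(int rss[], struct page *page)
{
	struct folio *folio = page_folio(page);	/* one lookup, up front */

	/* Was: rss[mm_counter(page)]--; which re-derived the folio. */
	rss[mm_counter(folio)]--;
}
]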