From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 3/8] mm: Add pmd_folio()
Date: Tue, 26 Mar 2024 20:28:23 +0000 [thread overview]
Message-ID: <20240326202833.523759-4-willy@infradead.org> (raw)
In-Reply-To: <20240326202833.523759-1-willy@infradead.org>
Convert directly from a pmd to a folio without going through another
representation first. For now this is just a slightly shorter way to
write it, but it might end up being more efficient later.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pgtable.h | 2 ++
 mm/huge_memory.c        | 6 +++---
 mm/madvise.c            | 2 +-
 mm/mempolicy.c          | 2 +-
 mm/mlock.c              | 2 +-
 mm/userfaultfd.c        | 2 +-
 6 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 600e17d03659..09c85c7bf9c2 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -50,6 +50,8 @@
 #define pmd_pgtable(pmd) pmd_page(pmd)
 #endif
 
+#define pmd_folio(pmd) page_folio(pmd_page(pmd))
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78f9132daa52..8ee09bfdfdb7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1816,7 +1816,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		goto out;
 	}
 
-	folio = pfn_folio(pmd_pfn(orig_pmd));
+	folio = pmd_folio(orig_pmd);
 	/*
 	 * If other processes are mapping this folio, we couldn't discard
 	 * the folio unless they all do MADV_FREE so let's skip the folio.
@@ -2086,7 +2086,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		folio = page_folio(pmd_page(*pmd));
+		folio = pmd_folio(*pmd);
 		toptier = node_is_toptier(folio_nid(folio));
 		/*
 		 * Skip scanning top tier node if normal numa
@@ -2663,7 +2663,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		 * It's safe to call pmd_page when folio is set because it's
 		 * guaranteed that pmd is present.
 		 */
-		if (folio && folio != page_folio(pmd_page(*pmd)))
+		if (folio && folio != pmd_folio(*pmd))
			goto out;
 		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
 	}
diff --git a/mm/madvise.c b/mm/madvise.c
index 7625830d6ae9..1f77a51baaac 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -363,7 +363,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			goto huge_unlock;
 		}
 
-		folio = pfn_folio(pmd_pfn(orig_pmd));
+		folio = pmd_folio(orig_pmd);
 
 		/* Do not interfere with other mappings of this folio */
 		if (folio_likely_mapped_shared(folio))
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6e7069ecf713..4f5d0923af8f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -509,7 +509,7 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
 		qp->nr_failed++;
 		return;
 	}
-	folio = pfn_folio(pmd_pfn(*pmd));
+	folio = pmd_folio(*pmd);
 	if (is_huge_zero_folio(folio)) {
 		walk->action = ACTION_CONTINUE;
 		return;
diff --git a/mm/mlock.c b/mm/mlock.c
index 1ed2f2ab37cd..30b51cdea89d 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -378,7 +378,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			goto out;
 		if (is_huge_zero_pmd(*pmd))
 			goto out;
-		folio = page_folio(pmd_page(*pmd));
+		folio = pmd_folio(*pmd);
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_folio(folio);
 		else
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4f1a392fe84f..f6267afe65d1 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1697,7 +1697,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 		/* Check if we can move the pmd without splitting it. */
 		if (move_splits_huge_pmd(dst_addr, src_addr, src_start + len) ||
 		    !pmd_none(dst_pmdval)) {
-			struct folio *folio = pfn_folio(pmd_pfn(*src_pmd));
+			struct folio *folio = pmd_folio(*src_pmd);
 
 			if (!folio || (!is_huge_zero_folio(folio) &&
 				       !PageAnonExclusive(&folio->page))) {
--
2.43.0
Thread overview: 11+ messages
2024-03-26 20:28 [PATCH 0/8] Convert huge_zero_page to huge_zero_folio Matthew Wilcox (Oracle)
2024-03-26 20:28 ` [PATCH 1/8] sparc: Use is_huge_zero_pmd() Matthew Wilcox (Oracle)
2024-03-26 20:28 ` [PATCH 2/8] mm: Add is_huge_zero_folio() Matthew Wilcox (Oracle)
2024-03-26 20:28 ` Matthew Wilcox (Oracle) [this message]
2024-04-04 19:39 ` [PATCH 3/8] mm: Add pmd_folio() David Hildenbrand
2024-03-26 20:28 ` [PATCH 4/8] mm: Convert migrate_vma_collect_pmd to use a folio Matthew Wilcox (Oracle)
2024-03-26 20:28 ` [PATCH 5/8] mm: Convert huge_zero_page to huge_zero_folio Matthew Wilcox (Oracle)
2024-03-26 20:28 ` [PATCH 6/8] mm: Convert do_huge_pmd_anonymous_page " Matthew Wilcox (Oracle)
2024-04-04 19:41 ` David Hildenbrand
2024-03-26 20:28 ` [PATCH 7/8] dax: Use huge_zero_folio Matthew Wilcox (Oracle)
2024-03-26 20:28 ` [PATCH 8/8] mm: Rename mm_put_huge_zero_page to mm_put_huge_zero_folio Matthew Wilcox (Oracle)