From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, linux-arch@vger.kernel.org,
David Hildenbrand <david@redhat.com>
Subject: [PATCH v2 07/11] mm: Add folio_mk_pte()
Date: Wed, 2 Apr 2025 19:17:01 +0100
Message-ID: <20250402181709.2386022-8-willy@infradead.org>
In-Reply-To: <20250402181709.2386022-1-willy@infradead.org>

Remove a cast from folio to page in four callers of mk_pte() by
converting them to the new folio_mk_pte() helper.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 15 +++++++++++++++
mm/memory.c | 6 +++---
mm/userfaultfd.c | 2 +-
3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5dc097b5d646..d657815305f7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1992,6 +1992,21 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
{
return pfn_pte(page_to_pfn(page), pgprot);
}
+
+/**
+ * folio_mk_pte - Create a PTE for this folio
+ * @folio: The folio to create a PTE for
+ * @pgprot: The page protection bits to use
+ *
+ * Create a page table entry for the first page of this folio.
+ * This is suitable for passing to set_ptes().
+ *
+ * Return: A page table entry suitable for mapping this folio.
+ */
+static inline pte_t folio_mk_pte(struct folio *folio, pgprot_t pgprot)
+{
+ return pfn_pte(folio_pfn(folio), pgprot);
+}
#endif
static inline bool folio_has_pincount(const struct folio *folio)
diff --git a/mm/memory.c b/mm/memory.c
index 68bcf639a78c..fc4d8152a2e4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -929,7 +929,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
rss[MM_ANONPAGES]++;
/* All done, just insert the new page copy in the child */
- pte = mk_pte(&new_folio->page, dst_vma->vm_page_prot);
+ pte = folio_mk_pte(new_folio, dst_vma->vm_page_prot);
pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
if (userfaultfd_pte_wp(dst_vma, ptep_get(src_pte)))
/* Uffd-wp needs to be delivered to dest pte as well */
@@ -3523,7 +3523,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
inc_mm_counter(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
- entry = mk_pte(&new_folio->page, vma->vm_page_prot);
+ entry = folio_mk_pte(new_folio, vma->vm_page_prot);
entry = pte_sw_mkyoung(entry);
if (unlikely(unshare)) {
if (pte_soft_dirty(vmf->orig_pte))
@@ -5013,7 +5013,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
*/
__folio_mark_uptodate(folio);
- entry = mk_pte(&folio->page, vma->vm_page_prot);
+ entry = folio_mk_pte(folio, vma->vm_page_prot);
entry = pte_sw_mkyoung(entry);
if (vma->vm_flags & VM_WRITE)
entry = pte_mkwrite(pte_mkdirty(entry), vma);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index fbf2cf62ab9f..3d77c8254228 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1063,7 +1063,7 @@ static int move_present_pte(struct mm_struct *mm,
folio_move_anon_rmap(src_folio, dst_vma);
src_folio->index = linear_page_index(dst_vma, dst_addr);
- orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
+ orig_dst_pte = folio_mk_pte(src_folio, dst_vma->vm_page_prot);
/* Follow mremap() behavior and treat the entry dirty after the move */
orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
--
2.47.2