Subject: [PATCH 1/2] hugetlb: Convert hugetlb_vma_maps_page() to hugetlb_vma_maps_pfn()
From: Matthew Wilcox (Oracle) <willy@infradead.org>
Date: 2025-02-26 16:31 UTC
To: Muchun Song
Cc: Matthew Wilcox (Oracle), linux-mm

pte_page() is more expensive than pte_pfn() (it is often defined as
pfn_to_page(pte_pfn())), so it makes sense to convert the folio to a
pfn once (by calling folio_pfn()) and compare pfns, rather than
converting the pte's pfn to a page on every call.

While this is a very small advantage, the main motivation is removing
a reference to folio->page.
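
To make the cost difference concrete, here is a small standalone
sketch (illustration only, not part of the patch) of the relationship
described above. It compiles in userspace; the names mirror the
kernel's pte_pfn()/pfn_to_page()/pte_page(), but the definitions are
simplified assumptions, not any architecture's real ones:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
/* assumed x86-64-style layout: pfn bits sit between the flag bits */
#define PTE_PFN_MASK	0x000ffffffffff000ULL

struct page { int dummy; };
/* stand-in for the kernel's memmap/vmemmap array of struct page */
static struct page memmap[1024];

typedef struct { uint64_t val; } pte_t;

static uint64_t pte_pfn(pte_t pte)
{
	/* cheap: mask off the flag bits, shift down to the frame number */
	return (pte.val & PTE_PFN_MASK) >> PAGE_SHIFT;
}

static struct page *pfn_to_page(uint64_t pfn)
{
	/* the extra step: index the memmap to get a struct page pointer */
	return &memmap[pfn];
}

static struct page *pte_page(pte_t pte)
{
	/* the generic definition the commit message refers to */
	return pfn_to_page(pte_pfn(pte));
}

int main(void)
{
	pte_t pte = { .val = (5ULL << PAGE_SHIFT) | 0x63 }; /* pfn 5 + flags */
	uint64_t pfn = 5;	/* what one up-front folio_pfn() call yields */

	/* old check: converts the pte's pfn to a page on every call */
	bool old_way = (pte_page(pte) == pfn_to_page(pfn));
	/* new check: compares raw pfns, skipping pfn_to_page() entirely */
	bool new_way = (pte_pfn(pte) == pfn);

	printf("old=%d new=%d\n", old_way, new_way);
	return 0;
}

With folio_pfn() hoisted out of the loop in hugetlb_unmap_file_folio(),
each hugetlb_vma_maps_pfn() call does only the mask-and-shift and never
touches the memmap.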

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hugetlbfs/inode.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 0fc179a59830..a427d41fbca0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -338,8 +338,8 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
  * mutex for the page in the mapping.  So, we can not race with page being
  * faulted into the vma.
  */
-static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
-				unsigned long addr, struct page *page)
+static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long pfn)
 {
 	pte_t *ptep, pte;
 
@@ -351,7 +351,7 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
 	if (huge_pte_none(pte) || !pte_present(pte))
 		return false;
 
-	if (pte_page(pte) == page)
+	if (pte_pfn(pte) == pfn)
 		return true;
 
 	return false;
@@ -396,7 +396,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 {
 	struct rb_root_cached *root = &mapping->i_mmap;
 	struct hugetlb_vma_lock *vma_lock;
-	struct page *page = &folio->page;
+	unsigned long pfn = folio_pfn(folio);
 	struct vm_area_struct *vma;
 	unsigned long v_start;
 	unsigned long v_end;
@@ -412,7 +412,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
-		if (!hugetlb_vma_maps_page(vma, v_start, page))
+		if (!hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			continue;
 
 		if (!hugetlb_vma_trylock_write(vma)) {
@@ -462,7 +462,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		 */
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		if (hugetlb_vma_maps_page(vma, v_start, page))
+		if (hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			unmap_hugepage_range(vma, v_start, v_end, NULL,
 					     ZAP_FLAG_DROP_MARKER);
 
-- 
2.47.2


