* [PATCH 1/2] hugetlb: Convert hugetlb_vma_maps_page() to hugetlb_vma_maps_pfn()
@ 2025-02-26 16:31 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2025-02-26 16:31 UTC
  To: Muchun Song; +Cc: Matthew Wilcox (Oracle), linux-mm

pte_page() is more expensive than pte_pfn() (often it's defined
as pfn_to_page(pte_pfn())), so it makes sense to convert the folio
to its pfn once (by calling folio_pfn()) rather than converting each
pte's pfn to a page for the comparison.

While this is a very small advantage, the main motivation is removing
a reference to folio->page.
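
For reference, a sketch of the definitions involved, as they appear
on e.g. x86 (the exact form is per-architecture, so treat this as
illustrative rather than definitive):

	/* pte_page() goes pte -> pfn -> struct page */
	#define pte_page(pte)	pfn_to_page(pte_pfn(pte))

pfn_to_page() may itself involve a memmap lookup depending on the
memory model, so comparing pfns directly skips that last step for
every PTE examined.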

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hugetlbfs/inode.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 0fc179a59830..a427d41fbca0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -338,8 +338,8 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
  * mutex for the page in the mapping.  So, we can not race with page being
  * faulted into the vma.
  */
-static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
-				unsigned long addr, struct page *page)
+static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long pfn)
 {
 	pte_t *ptep, pte;
 
@@ -351,7 +351,7 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
 	if (huge_pte_none(pte) || !pte_present(pte))
 		return false;
 
-	if (pte_page(pte) == page)
+	if (pte_pfn(pte) == pfn)
 		return true;
 
 	return false;
@@ -396,7 +396,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 {
 	struct rb_root_cached *root = &mapping->i_mmap;
 	struct hugetlb_vma_lock *vma_lock;
-	struct page *page = &folio->page;
+	unsigned long pfn = folio_pfn(folio);
 	struct vm_area_struct *vma;
 	unsigned long v_start;
 	unsigned long v_end;
@@ -412,7 +412,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
-		if (!hugetlb_vma_maps_page(vma, v_start, page))
+		if (!hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			continue;
 
 		if (!hugetlb_vma_trylock_write(vma)) {
@@ -462,7 +462,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		 */
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		if (hugetlb_vma_maps_page(vma, v_start, page))
+		if (hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			unmap_hugepage_range(vma, v_start, v_end, NULL,
 					     ZAP_FLAG_DROP_MARKER);
 
-- 
2.47.2




* [PATCH 2/2] hugetlb: Convert adjust_range_hwpoison() to take a folio
@ 2025-02-26 16:31 ` Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2025-02-26 16:31 UTC
  To: Muchun Song; +Cc: Matthew Wilcox (Oracle), linux-mm

Remove a use of folio->page by passing the folio into
adjust_range_hwpoison().  We need to convert to a page eventually,
but that can happen inside adjust_range_hwpoison().
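
For reference, folio_page() is itself currently defined in terms of
the same page arithmetic (a sketch; the exact definition may vary by
tree):

	#define folio_page(folio, n)	nth_page(&(folio)->page, n)

so the generated code should be unchanged; the folio->page reference
simply moves behind the folio API instead of being open-coded in
hugetlbfs.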

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hugetlbfs/inode.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a427d41fbca0..a6fcaf8317a0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -193,19 +193,21 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 }
 
 /*
- * Someone wants to read @bytes from a HWPOISON hugetlb @page from @offset.
+ * Someone wants to read @bytes from a HWPOISON hugetlb @folio from @offset.
  * Returns the maximum number of bytes one can read without touching the 1st raw
- * HWPOISON subpage.
+ * HWPOISON page.
  *
  * The implementation borrows the iteration logic from copy_page_to_iter*.
  */
-static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t bytes)
+static size_t adjust_range_hwpoison(struct folio *folio, size_t offset,
+		size_t bytes)
 {
+	struct page *page;
 	size_t n = 0;
 	size_t res = 0;
 
-	/* First subpage to start the loop. */
-	page = nth_page(page, offset / PAGE_SIZE);
+	/* First page to start the loop. */
+	page = folio_page(folio, offset / PAGE_SIZE);
 	offset %= PAGE_SIZE;
 	while (1) {
 		if (is_raw_hwpoison_page_in_hugepage(page))
@@ -278,10 +280,10 @@ static ssize_t hugetlbfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			else {
 				/*
 				 * Adjust how many bytes safe to read without
-				 * touching the 1st raw HWPOISON subpage after
+				 * touching the 1st raw HWPOISON page after
 				 * offset.
 				 */
-				want = adjust_range_hwpoison(&folio->page, offset, nr);
+				want = adjust_range_hwpoison(folio, offset, nr);
 				if (want == 0) {
 					folio_put(folio);
 					retval = -EIO;
-- 
2.47.2



