* [PATCH v2 1/4] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page
@ 2025-04-18 16:57 nifan.cxl
From: nifan.cxl @ 2025-04-18 16:57 UTC
  To: muchun.song, willy
  Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
	Fan Ni, Sidhartha Kumar

From: Fan Ni <fan.ni@samsung.com>

The function unmap_ref_private() has only one user, which passes in
&folio->page. Let it take a folio directly.

Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
v2:
Picked up Reviewed-by tags;

v1:
https://lore.kernel.org/linux-mm/aAHUluy7T32ZlYg7@debian/T/#mbf9f3e8b49497755b414e1887b2376b3902ffb76
---
 mm/hugetlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ccc4f08f8481..b5d1ac8290a7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6064,7 +6064,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
  * same region.
  */
 static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
-			      struct page *page, unsigned long address)
+			      struct folio *folio, unsigned long address)
 {
 	struct hstate *h = hstate_vma(vma);
 	struct vm_area_struct *iter_vma;
@@ -6108,7 +6108,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
 			unmap_hugepage_range(iter_vma, address,
-					     address + huge_page_size(h), page, 0);
+					     address + huge_page_size(h),
+					     folio_page(folio, 0), 0);
 	}
 	i_mmap_unlock_write(mapping);
 }
@@ -6231,8 +6232,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 			hugetlb_vma_unlock_read(vma);
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-			unmap_ref_private(mm, vma, &old_folio->page,
-					vmf->address);
+			unmap_ref_private(mm, vma, old_folio, vmf->address);
 
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 			hugetlb_vma_lock_read(vma);
-- 
2.47.2
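
For readers following the series: the callee unmap_hugepage_range() still
takes a struct page until it is converted in patch 2/4, so this patch
recovers the head page at the call site with folio_page(folio, 0), which
names the same page the old &folio->page expression did. The standalone
sketch below is a simplified model, NOT the kernel's real definitions
(struct page, struct folio, and folio_page() are reduced to the bare
pattern), showing why deriving the page at the boundary is equivalent:

/*
 * Simplified model of the refactor; in the kernel, a folio's first
 * member overlays the head page, so folio_page(folio, 0) and the old
 * &folio->page refer to the same page.
 */
#include <stdio.h>

struct page { unsigned long flags; };

/* A folio embeds its head page as the first member. */
struct folio { struct page page; };

/* Reduced stand-in for the kernel's folio_page(folio, n) helper. */
#define folio_page(folio, n) (&(folio)->page + (n))

/* Callee not yet converted: still takes a page (cf. patch 2/4). */
static void unmap_hugepage_range(struct page *ref_page)
{
	printf("unmapping around page %p\n", (void *)ref_page);
}

/* After this patch: takes the folio, derives the page at the boundary. */
static void unmap_ref_private(struct folio *folio)
{
	unmap_hugepage_range(folio_page(folio, 0));
}

int main(void)
{
	struct folio old_folio = { .page = { .flags = 0 } };

	/* Old call site: unmap_ref_private(&old_folio.page); */
	/* New call site passes the folio itself:             */
	unmap_ref_private(&old_folio);
	return 0;
}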





Thread overview: 16+ messages
2025-04-18 16:57 [PATCH v2 1/4] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page nifan.cxl
2025-04-18 16:57 ` [PATCH v2 2/4] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
2025-04-22  8:52   ` David Hildenbrand
2025-04-22  8:55   ` David Hildenbrand
2025-04-23  3:15   ` Matthew Wilcox
2025-04-18 16:57 ` [PATCH v2 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
2025-04-22  8:56   ` David Hildenbrand
2025-04-23  3:19   ` Matthew Wilcox
2025-04-18 16:57 ` [PATCH v2 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range() nifan.cxl
2025-04-21 15:08   ` Sidhartha Kumar
2025-04-22  9:00   ` David Hildenbrand
2025-04-23  3:22   ` Matthew Wilcox
2025-04-23 22:17     ` Andrew Morton
2025-04-22  8:50 ` [PATCH v2 1/4] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page David Hildenbrand
2025-04-23  3:17 ` Matthew Wilcox
2025-04-25  1:11   ` Fan Ni
