linux-mm.kvack.org archive mirror
* [PATCH v3 1/2] mm: khugepaged: convert set_huge_pmd() to take a folio
@ 2025-05-12  2:57 Baolin Wang
  2025-05-12  2:57 ` [PATCH v3 2/2] mm: convert do_set_pmd() " Baolin Wang
From: Baolin Wang @ 2025-05-12  2:57 UTC
  To: akpm, willy, david
  Cc: hannes, lorenzo.stoakes, Liam.Howlett, npache, ryan.roberts,
	dev.jain, ziy, vbabka, rppt, surenb, mhocko, baolin.wang,
	linux-mm, linux-kernel

We have already obtained the stable locked folio in
collapse_pte_mapped_thp(), so just pass the folio to set_huge_pmd() to
set the PMD entry, which is more straightforward.

Moreover, the folio size will be checked in do_set_pmd(), so the
unnecessary VM_BUG_ON() in set_huge_pmd() can be removed. While we are
at it, also remove PageTransHuge(), as it no longer has any callers.
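
For reference, a minimal sketch of the size check that do_set_pmd()
performs (paraphrased for illustration, not the verbatim mm/memory.c
code), which is what makes a caller-side THP assertion redundant:

    /*
     * Illustrative sketch only, not the actual upstream code:
     * do_set_pmd() rejects any folio that is not exactly PMD order
     * and falls back to PTE mapping, so callers need not assert
     * that they passed a THP.
     */
    static vm_fault_t do_set_pmd_sketch(struct vm_fault *vmf,
                                        struct folio *folio)
    {
            if (folio_order(folio) != HPAGE_PMD_ORDER)
                    return VM_FAULT_FALLBACK;

            /* ... install one PMD entry covering the whole folio ... */
            return 0;
    }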

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
Changes from v2:
 - Update the related comments.
 - Pass the 'page' for set_huge_pmd().
 - Add acked tag from David.

Changes from v1:
 - Remove the unnecessary VM_BUG_ON().
 - Remove the PageTransHuge().
---
 include/linux/page-flags.h | 15 ---------------
 mm/khugepaged.c            | 11 +++++------
 2 files changed, 5 insertions(+), 21 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 37b11f15dbd9..1c1d49554c71 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -907,20 +907,6 @@ FOLIO_FLAG_FALSE(partially_mapped)
 #define PG_head_mask ((1UL << PG_head))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/*
- * PageHuge() only returns true for hugetlbfs pages, but not for
- * normal or transparent huge pages.
- *
- * PageTransHuge() returns true for both transparent huge and
- * hugetlbfs pages, but not normal pages. PageTransHuge() can only be
- * called only in the core VM paths where hugetlbfs pages can't exist.
- */
-static inline int PageTransHuge(const struct page *page)
-{
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	return PageHead(page);
-}
-
 /*
  * PageTransCompound returns true for both transparent huge pages
  * and hugetlbfs pages, so it should only be called when it's known
@@ -931,7 +917,6 @@ static inline int PageTransCompound(const struct page *page)
 	return PageCompound(page);
 }
 #else
-TESTPAGEFLAG_FALSE(TransHuge, transhuge)
 TESTPAGEFLAG_FALSE(TransCompound, transcompound)
 #endif
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b04b6a770afe..33daea8f667e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1465,9 +1465,9 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
 }
 
 #ifdef CONFIG_SHMEM
-/* hpage must be locked, and mmap_lock must be held */
+/* folio must be locked, and mmap_lock must be held */
 static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
-			pmd_t *pmdp, struct page *hpage)
+			pmd_t *pmdp, struct folio *folio, struct page *page)
 {
 	struct vm_fault vmf = {
 		.vma = vma,
@@ -1476,13 +1476,12 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 		.pmd = pmdp,
 	};
 
-	VM_BUG_ON(!PageTransHuge(hpage));
 	mmap_assert_locked(vma->vm_mm);
 
-	if (do_set_pmd(&vmf, hpage))
+	if (do_set_pmd(&vmf, page))
 		return SCAN_FAIL;
 
-	get_page(hpage);
+	folio_get(folio);
 	return SCAN_SUCCEED;
 }
 
@@ -1689,7 +1688,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 maybe_install_pmd:
 	/* step 5: install pmd entry */
 	result = install_pmd
-			? set_huge_pmd(vma, haddr, pmd, &folio->page)
+			? set_huge_pmd(vma, haddr, pmd, folio, &folio->page)
 			: SCAN_SUCCEED;
 	goto drop_folio;
 abort:
-- 
2.43.5




* [PATCH v3 2/2] mm: convert do_set_pmd() to take a folio
  2025-05-12  2:57 [PATCH v3 1/2] mm: khugepaged: convert set_huge_pmd() to take a folio Baolin Wang
@ 2025-05-12  2:57 ` Baolin Wang
  2025-05-12  8:04   ` David Hildenbrand
From: Baolin Wang @ 2025-05-12  2:57 UTC
  To: akpm, willy, david
  Cc: hannes, lorenzo.stoakes, Liam.Howlett, npache, ryan.roberts,
	dev.jain, ziy, vbabka, rppt, surenb, mhocko, baolin.wang,
	linux-mm, linux-kernel

In do_set_pmd(), we always use folio->page to build PMD mappings for
the entire folio. Since all callers of do_set_pmd() already hold a
stable folio, converting do_set_pmd() to take a folio is safe and more
straightforward.

In addition, to keep do_set_pmd() extensible for supporting folios
larger than PMD size, retain the 'page' parameter to specify which
page within the folio should be mapped.

No functional changes expected.
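
For illustration, the caller-side shape after this conversion (the
first two lines mirror the filemap.c hunk below; the rest is a
hypothetical future use that motivates keeping the 'page' parameter):

    /* Map a PMD-sized folio; 'page' is the page at the fault offset. */
    struct page *page = folio_file_page(folio, start);
    vm_fault_t ret = do_set_pmd(vmf, folio, page);

    /*
     * Hypothetical, for a future folio larger than PMD size: 'page'
     * would then select which PMD-sized chunk of the folio backs this
     * PMD entry ('chunk_idx' is an illustrative chunk index within
     * the folio).
     */
    struct page *chunk = folio_page(folio, chunk_idx * HPAGE_PMD_NR);
    ret = do_set_pmd(vmf, folio, chunk);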

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
---
Changes from v2:
 - Add reviewed tag from Zi.

Changes from v1:
 - Keep the 'page' parameter of do_set_pmd().

Note: I ran the mm selftests and built the kernel on tmpfs/xfs
filesystems, and found no regressions.
---
 include/linux/mm.h |  2 +-
 mm/filemap.c       |  2 +-
 mm/khugepaged.c    |  2 +-
 mm/memory.c        | 11 +++++------
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 43748c8f3454..d5f578c91e77 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1237,7 +1237,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 	return pte;
 }
 
-vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
+vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *page);
 void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr);
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 7b90cbeb4a1a..09d005848f0d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3533,7 +3533,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
 
 	if (pmd_none(*vmf->pmd) && folio_test_pmd_mappable(folio)) {
 		struct page *page = folio_file_page(folio, start);
-		vm_fault_t ret = do_set_pmd(vmf, page);
+		vm_fault_t ret = do_set_pmd(vmf, folio, page);
 		if (!ret) {
 			/* The page is mapped successfully, reference consumed. */
 			folio_unlock(folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 33daea8f667e..ebcd7c8a4b44 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1478,7 +1478,7 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	mmap_assert_locked(vma->vm_mm);
 
-	if (do_set_pmd(&vmf, page))
+	if (do_set_pmd(&vmf, folio, page))
 		return SCAN_FAIL;
 
 	folio_get(folio);
diff --git a/mm/memory.c b/mm/memory.c
index 68c1d962d0ad..9c202c32ca66 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5176,9 +5176,8 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
 	vmf->prealloc_pte = NULL;
 }
 
-vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
+vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *page)
 {
-	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma = vmf->vma;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
@@ -5251,7 +5250,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	return ret;
 }
 #else
-vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
+vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *page)
 {
 	return VM_FAULT_FALLBACK;
 }
@@ -5345,6 +5344,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	else
 		page = vmf->page;
 
+	folio = page_folio(page);
 	/*
 	 * check even for read faults because we might have lost our CoWed
 	 * page
@@ -5356,8 +5356,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	}
 
 	if (pmd_none(*vmf->pmd)) {
-		if (PageTransCompound(page)) {
-			ret = do_set_pmd(vmf, page);
+		if (folio_test_pmd_mappable(folio)) {
+			ret = do_set_pmd(vmf, folio, page);
 			if (ret != VM_FAULT_FALLBACK)
 				return ret;
 		}
@@ -5368,7 +5368,6 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 			return VM_FAULT_OOM;
 	}
 
-	folio = page_folio(page);
 	nr_pages = folio_nr_pages(folio);
 
 	/*
-- 
2.43.5




* Re: [PATCH v3 2/2] mm: convert do_set_pmd() to take a folio
  2025-05-12  2:57 ` [PATCH v3 2/2] mm: convert do_set_pmd() " Baolin Wang
@ 2025-05-12  8:04   ` David Hildenbrand
From: David Hildenbrand @ 2025-05-12  8:04 UTC
  To: Baolin Wang, akpm, willy
  Cc: hannes, lorenzo.stoakes, Liam.Howlett, npache, ryan.roberts,
	dev.jain, ziy, vbabka, rppt, surenb, mhocko, linux-mm,
	linux-kernel

On 12.05.25 04:57, Baolin Wang wrote:
> In do_set_pmd(), we always use folio->page to build PMD mappings for
> the entire folio. Since all callers of do_set_pmd() already hold a
> stable folio, converting do_set_pmd() to take a folio is safe and more
> straightforward.
> 
> In addition, to keep do_set_pmd() extensible for supporting folios
> larger than PMD size, retain the 'page' parameter to specify which
> page within the folio should be mapped.
> 
> No functional changes expected.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



