linux-mm.kvack.org archive mirror
* [PATCH] mm: huge_memory: convert __do_huge_pmd_anonymous_page() to use folios
From: Jianfeng Wang @ 2024-04-22 18:12 UTC
  To: linux-mm, linux-kernel; +Cc: akpm, willy

Change __do_huge_pmd_anonymous_page() to take a folio as input, as its
caller already has the folio in hand. This saves one unnecessary call to
compound_head().
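
For reference, a minimal sketch of the round-trip being removed
(simplified; in the real kernel, page_folio() resolves the head page
via a compound_head() lookup):

	/* Before: the caller unwraps the folio, the callee rewraps it. */
	ret = __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
		/* ... inside the callee ... */
		struct folio *folio = page_folio(page);	/* compound_head() */

	/* After: the folio is passed straight through; no lookup needed. */
	ret = __do_huge_pmd_anonymous_page(vmf, folio, gfp);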

Signed-off-by: Jianfeng Wang <jianfeng.w.wang@oracle.com>
---
 mm/huge_memory.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 89f58c7603b2..83566ee738e0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -866,10 +866,9 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
-			struct page *page, gfp_t gfp)
+			struct folio *folio, gfp_t gfp)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio = page_folio(page);
 	pgtable_t pgtable;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	vm_fault_t ret = 0;
@@ -890,7 +889,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		goto release;
 	}
 
-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	clear_huge_page(&folio->page, vmf->address, HPAGE_PMD_NR);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * clear_huge_page writes become visible before the set_pmd_at()
@@ -918,7 +917,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 			return ret;
 		}
 
-		entry = mk_huge_pmd(page, vma->vm_page_prot);
+		entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		folio_add_new_anon_rmap(folio, vma, haddr);
 		folio_add_lru_vma(folio, vma);
@@ -1051,7 +1050,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
 	}
-	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
+	return __do_huge_pmd_anonymous_page(vmf, folio, gfp);
 }
 
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-- 
2.42.1




* Re: [PATCH] mm: huge_memory: convert __do_huge_pmd_anonymous_page() to use folios
From: Matthew Wilcox @ 2024-04-22 19:11 UTC
  To: Jianfeng Wang; +Cc: linux-mm, linux-kernel, akpm

On Mon, Apr 22, 2024 at 11:12:16AM -0700, Jianfeng Wang wrote:
> Change __do_huge_pmd_anonymous_page() to take a folio as input, as its
> caller already has the folio in hand. This saves one unnecessary call to
> compound_head().

I don't like this patch.  It makes the assumption that folios will never
be larger than PMD size, and I don't think that's an assumption that's
going to last another five years.  Look where you had to
do &folio->page:

> +	clear_huge_page(&folio->page, vmf->address, HPAGE_PMD_NR);

> +		entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);

For mk_huge_pmd() in particular, you need to know the precise page, and
not just use the first page of the folio.
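
To sketch the concern (illustrative only: folio_page() is the existing
nth-page accessor, but how the offset 'n' would be computed here is
hypothetical, since today this fault path always allocates an exactly
PMD-sized folio):

	/*
	 * If folios could span several PMDs, the page backing this
	 * particular PMD would be some folio_page(folio, n), with n
	 * derived from where haddr falls inside the folio, not
	 * always page 0, which is what &folio->page hard-codes.
	 */
	entry = mk_huge_pmd(folio_page(folio, n), vma->vm_page_prot);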



