* [PATCH v11 mm-new 05/10] mm: thp: enable THP allocation exclusively through khugepaged
From: Yafang Shao @ 2025-10-20  3:16 UTC
  To: akpm, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa,
	david, ziy, lorenzo.stoakes, Liam.Howlett, npache, ryan.roberts,
	dev.jain, hannes, usamaarif642, gutierrez.asier, willy,
	ameryhung, rientjes, corbet, 21cnbao, shakeel.butt, tj,
	lance.yang, rdunlap
  Cc: bpf, linux-mm, linux-doc, linux-kernel, Yafang Shao

khugepaged_enter_vma() ultimately invokes any attached BPF function with
the TVA_KHUGEPAGED flag set when determining whether to enable khugepaged
THP for a freshly faulted-in VMA.
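
For reference, that path has roughly the following shape (a simplified
sketch rather than the exact in-tree code; the MMF_VM_HUGEPAGE test and
__khugepaged_enter() are assumed to follow the current layout of
mm/khugepaged.c):

	void khugepaged_enter_vma(struct vm_area_struct *vma)
	{
		/* Each mm is added to the khugepaged scan list only once. */
		if (test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
			return;
		/*
		 * thp_vma_allowable_order(..., TVA_KHUGEPAGED, ...) is where
		 * any attached BPF policy is consulted for this VMA.
		 */
		if (thp_vma_allowable_order(vma, TVA_KHUGEPAGED, PMD_ORDER))
			__khugepaged_enter(vma->vm_mm);
	}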

Currently, on fault, we invoke this in do_huge_pmd_anonymous_page(), as
called from create_huge_pmd(), and only after we have already checked
that an allowable TVA_PAGEFAULT order is specified.

Since we might want to disallow THP on fault-in but allow it via
khugepaged, we move things around so we always attempt to enter
khugepaged upon fault.

This change is safe because:
- khugepaged operates at the MM level rather than per-VMA. Since THP
  allocation might fail during a page fault due to transient conditions
  (e.g., memory pressure), it is safe to add this MM to khugepaged for
  subsequent defragmentation.
- If __thp_vma_allowable_orders(TVA_PAGEFAULT) returns 0, then
  __thp_vma_allowable_orders(TVA_KHUGEPAGED) will also return 0; see the
  illustrative check below.
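
In sketch form (an illustrative assertion only, not part of this patch;
it reuses thp_vma_allowable_order() exactly as it is called in the diff
below):

	/* khugepaged must never be more permissive than the fault path. */
	if (!thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER))
		VM_WARN_ON_ONCE(thp_vma_allowable_order(vma, TVA_KHUGEPAGED,
							PMD_ORDER));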

While we could also extend prctl() to utilize this new policy, such a
change would require a uAPI modification to PR_SET_THP_DISABLE.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Lance Yang <lance.yang@linux.dev>
Cc: Usama Arif <usamaarif642@gmail.com>
---
 mm/huge_memory.c |  1 -
 mm/memory.c      | 13 ++++++++-----
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e105604868a5..45d13c798525 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1390,7 +1390,6 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	ret = vmf_anon_prepare(vmf);
 	if (ret)
 		return ret;
-	khugepaged_enter_vma(vma);
 
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm) &&
diff --git a/mm/memory.c b/mm/memory.c
index 7a242cb07d56..5007f7526694 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6327,11 +6327,14 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (pud_trans_unstable(vmf.pud))
 		goto retry_pud;
 
-	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER)) {
-		ret = create_huge_pmd(&vmf);
-		if (!(ret & VM_FAULT_FALLBACK))
-			return ret;
+	if (pmd_none(*vmf.pmd)) {
+		if (vma_is_anonymous(vma))
+			khugepaged_enter_vma(vma);
+		if (thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER)) {
+			ret = create_huge_pmd(&vmf);
+			if (!(ret & VM_FAULT_FALLBACK))
+				return ret;
+		}
 	} else {
 		vmf.orig_pmd = pmdp_get_lockless(vmf.pmd);
 
-- 
2.47.3


