* [PATCH v2] khugepaged: remove redundant index check for pmd-folios
@ 2026-02-27 14:35 Dev Jain
From: Dev Jain @ 2026-02-27 14:35 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes
Cc: ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain,
baohua, lance.yang, linux-mm, linux-kernel
Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
Proof: Both loops in hpage_collapse_scan_file() and collapse_file() that
iterate over the xarray maintain the invariant that
start <= folio->index < start + HPAGE_PMD_NR ... (i)
A folio is always naturally aligned in the pagecache, therefore
folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
thp_vma_allowable_order() -> thp_vma_suitable_order() requires that the
virtual address range in the VMA be aligned to the order, hence
IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
Combining (i), (ii) and (iii), the claim is proven.
Therefore, remove this check.
While at it, simplify the comments.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
v1->v2:
- Remove the check instead of converting to VM_WARN_ON
- While at it, simplify the comments
Based on mm-new (8982358e1c87).
mm/khugepaged.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5f668c1dd0fe4..b7b4680d27ab1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
* we locked the first folio, then a THP might be there already.
* This will be discovered on the first iteration.
*/
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
goto out_unlock;
}
@@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
continue;
}
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
/*
- * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
- * by the caller won't touch the page cache, and so
- * it's safe to skip LRU and refcount checks before
- * returning.
+ * PMD-sized THP implies that we can only try
+ * retracting the PTE table.
*/
folio_put(folio);
break;
--
2.34.1