* [PATCH v2] khugepaged: remove redundant index check for pmd-folios
@ 2026-02-27 14:35 Dev Jain
From: Dev Jain @ 2026-02-27 14:35 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes
Cc: ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain,
baohua, lance.yang, linux-mm, linux-kernel
Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
iterate on the xarray, have the invariant that
start <= folio->index < start + HPAGE_PMD_NR ... (i)
A folio is always naturally aligned in the pagecache, therefore
folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
offsets in the VMA are aligned to the order,
=> IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
Combining (i), (ii) and (iii), the claim is proven.
Therefore, remove this check.
While at it, simplify the comments.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
v1->v2:
- Remove the check instead of converting to VM_WARN_ON
- While at it, simplify the comments
Based on mm-new (8982358e1c87).
mm/khugepaged.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5f668c1dd0fe4..b7b4680d27ab1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
* we locked the first folio, then a THP might be there already.
* This will be discovered on the first iteration.
*/
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
goto out_unlock;
}
@@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
continue;
}
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
/*
- * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
- * by the caller won't touch the page cache, and so
- * it's safe to skip LRU and refcount checks before
- * returning.
+ * PMD-sized THP implies that we can only try
+ * retracting the PTE table.
*/
folio_put(folio);
break;
--
2.34.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
@ 2026-02-27 20:57 ` David Hildenbrand (Arm)
From: David Hildenbrand (Arm) @ 2026-02-27 20:57 UTC (permalink / raw)
To: Dev Jain, akpm, lorenzo.stoakes
Cc: ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, baohua,
lance.yang, linux-mm, linux-kernel
On 2/27/26 15:35, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
There might be some conflict with Nico's changes.
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
--
Cheers,
David
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
2026-02-27 20:57 ` David Hildenbrand (Arm)
@ 2026-02-28 4:44 ` Lance Yang
From: Lance Yang @ 2026-02-28 4:44 UTC (permalink / raw)
To: Dev Jain
Cc: ziy, david, lorenzo.stoakes, akpm, baolin.wang, Liam.Howlett,
npache, ryan.roberts, baohua, linux-mm, linux-kernel
On 2026/2/27 22:35, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;
LGTM!
The proof is sound: the combination of the loop invariant, natural
alignment, and VMA alignment requirements indeed makes the index
check redundant :D
Reviewed-by: Lance Yang <lance.yang@linux.dev>
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
2026-02-27 20:57 ` David Hildenbrand (Arm)
2026-02-28 4:44 ` Lance Yang
@ 2026-03-03 2:16 ` Baolin Wang
From: Baolin Wang @ 2026-03-03 2:16 UTC (permalink / raw)
To: Dev Jain, akpm, david, lorenzo.stoakes
Cc: ziy, Liam.Howlett, npache, ryan.roberts, baohua, lance.yang,
linux-mm, linux-kernel
On 2/27/26 10:35 PM, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
2026-03-03 2:16 ` Baolin Wang
@ 2026-03-03 9:56 ` Lorenzo Stoakes
From: Lorenzo Stoakes @ 2026-03-03 9:56 UTC (permalink / raw)
To: Dev Jain
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel
On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Very mathematical :)
LGTM so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;
> --
> 2.34.1
>
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
2026-03-03 9:56 ` Lorenzo Stoakes
@ 2026-03-04 8:27 ` Wei Yang
From: Wei Yang @ 2026-03-04 8:27 UTC (permalink / raw)
To: Dev Jain
Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel
On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
>Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
>Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>iterate on the xarray, have the invariant that
>start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
>A folio is always naturally aligned in the pagecache, therefore
>folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
This is because __filemap_add_folio() aligns the index to folio_order(), right?
>
>thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
>offsets in the VMA are aligned to the order,
>=> IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
>Combining (i), (ii) and (iii), the claim is proven.
>
>Therefore, remove this check.
>While at it, simplify the comments.
>
>Signed-off-by: Dev Jain <dev.jain@arm.com>
--
Wei Yang
Help you, Help me
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-03-04 8:27 ` Wei Yang
@ 2026-03-04 8:44 ` Dev Jain
From: Dev Jain @ 2026-03-04 8:44 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel
On 04/03/26 1:57 pm, Wei Yang wrote:
> On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
>> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>>
>> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>> iterate on the xarray, have the invariant that
>> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>>
>> A folio is always naturally aligned in the pagecache, therefore
>> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> This is because __filemap_add_folio() aligns the index to folio_order(), right?
No, see the code around instances of mapping_align_index(). We derive the max
order from the index, not the other way around. The caller already gives us
the index at which to put the folio, so we have to construct the order
from that.
>
>>
>> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
>> offsets in the VMA are aligned to the order,
>> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>>
>> Combining (i), (ii) and (iii), the claim is proven.
>>
>> Therefore, remove this check.
>> While at it, simplify the comments.
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>
* Re: [PATCH v2] khugepaged: remove redundant index check for pmd-folios
2026-02-27 14:35 [PATCH v2] khugepaged: remove redundant index check for pmd-folios Dev Jain
2026-03-04 8:27 ` Wei Yang
@ 2026-03-04 9:22 ` Anshuman Khandual
From: Anshuman Khandual @ 2026-03-04 9:22 UTC (permalink / raw)
To: Dev Jain, akpm, david, lorenzo.stoakes
Cc: ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, baohua,
lance.yang, linux-mm, linux-kernel
On 27/02/26 8:05 PM, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;