linux-mm.kvack.org archive mirror
* [PATCH] khugepaged: convert redundant check to WARN_ON
@ 2026-02-19  5:48 Dev Jain
  2026-02-19 22:33 ` Andrew Morton
  2026-02-24  8:21 ` Baolin Wang
  0 siblings, 2 replies; 6+ messages in thread
From: Dev Jain @ 2026-02-19  5:48 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes
  Cc: ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, baohua,
	lance.yang, linux-mm, linux-kernel, Dev Jain

Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.

Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
iterate on the xarray, have the invariant that
start <= folio->index < start + HPAGE_PMD_NR ... (i)
A folio is always naturally aligned in the pagecache, therefore
folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
offsets in the VMA are aligned to the order,
=> IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)

Combining (i), (ii) and (iii), the claim is proven.

Therefore, convert this to a VM_WARN_ON.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Based on mm-unstable (d9982f38eb6e). mm-selftests pass.

 mm/khugepaged.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fa1e57fd2c469..f27cbb4d1f62c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
 		 * we locked the first folio, then a THP might be there already.
 		 * This will be discovered on the first iteration.
 		 */
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
+			VM_WARN_ON(folio->index != start);
+
 			/* Maybe PMD-mapped */
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			goto out_unlock;
@@ -2329,8 +2330,9 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
 			continue;
 		}
 
-		if (folio_order(folio) == HPAGE_PMD_ORDER &&
-		    folio->index == start) {
+		if (folio_order(folio) == HPAGE_PMD_ORDER) {
+			VM_WARN_ON(folio->index != start);
+
 			/* Maybe PMD-mapped */
 			result = SCAN_PTE_MAPPED_HUGEPAGE;
 			/*
-- 
2.34.1



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] khugepaged: convert redundant check to WARN_ON
  2026-02-19  5:48 [PATCH] khugepaged: convert redundant check to WARN_ON Dev Jain
@ 2026-02-19 22:33 ` Andrew Morton
  2026-02-23  4:40   ` Dev Jain
  2026-02-24  8:21 ` Baolin Wang
  1 sibling, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2026-02-19 22:33 UTC (permalink / raw)
  To: Dev Jain
  Cc: david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel

On Thu, 19 Feb 2026 11:18:27 +0530 Dev Jain <dev.jain@arm.com> wrote:

> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
> 
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
> 
> Combining (i), (ii) and (iii), the claim is proven.
> 
> Therefore, convert this to a VM_WARN_ON.
> 
> ...
>
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>  		 * we locked the first folio, then a THP might be there already.
>  		 * This will be discovered on the first iteration.
>  		 */
> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
> -		    folio->index == start) {
> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
> +			VM_WARN_ON(folio->index != start);

It's a bit sad to remove unneeded code by retaining that code and
adding even more code.

Perhaps add a comment reminding us to remove this altogether at a later
date?




* Re: [PATCH] khugepaged: convert redundant check to WARN_ON
  2026-02-19 22:33 ` Andrew Morton
@ 2026-02-23  4:40   ` Dev Jain
  2026-02-24 10:51     ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 6+ messages in thread
From: Dev Jain @ 2026-02-23  4:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel


On 20/02/26 4:03 am, Andrew Morton wrote:
> On Thu, 19 Feb 2026 11:18:27 +0530 Dev Jain <dev.jain@arm.com> wrote:
>
>> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>>
>> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>> iterate on the xarray, have the invariant that
>> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>> A folio is always naturally aligned in the pagecache, therefore
>> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
>> offsets in the VMA are aligned to the order,
>> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>>
>> Combining (i), (ii) and (iii), the claim is proven.
>>
>> Therefore, convert this to a VM_WARN_ON.
>>
>> ...
>>
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>>  		 * we locked the first folio, then a THP might be there already.
>>  		 * This will be discovered on the first iteration.
>>  		 */
>> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
>> -		    folio->index == start) {
>> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
>> +			VM_WARN_ON(folio->index != start);
> It's a bit sad to remove unneeded code by retaining that code and
> adding even more code.
>
> Perhaps add a comment reminding us to remove this altogether at a later
> date?

But then shouldn't I remove it just now :) try_collapse_pte_mapped_thp is
anyway going to bail out if such a bug occurs, so nothing goes wrong. In
short, this check is actually useless, apart from the fact that it lets
us spot a bug (which would likely be caught by other code paths anyway).




* Re: [PATCH] khugepaged: convert redundant check to WARN_ON
  2026-02-19  5:48 [PATCH] khugepaged: convert redundant check to WARN_ON Dev Jain
  2026-02-19 22:33 ` Andrew Morton
@ 2026-02-24  8:21 ` Baolin Wang
  2026-02-24  8:45   ` Lance Yang
  1 sibling, 1 reply; 6+ messages in thread
From: Baolin Wang @ 2026-02-24  8:21 UTC (permalink / raw)
  To: Dev Jain, akpm, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, baohua, lance.yang,
	linux-mm, linux-kernel



On 2/19/26 1:48 PM, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
> 
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
> 
> Combining (i), (ii) and (iii), the claim is proven.
> 
> Therefore, convert this to a VM_WARN_ON.
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---

Makes sense to me. Personally, I'd like to keep this VM_WARN_ON() to
catch unexpected behavior.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> Based on mm-unstable (d9982f38eb6e). mm-selftests pass.
> 
>   mm/khugepaged.c | 10 ++++++----
>   1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index fa1e57fd2c469..f27cbb4d1f62c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>   		 * we locked the first folio, then a THP might be there already.
>   		 * This will be discovered on the first iteration.
>   		 */
> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
> -		    folio->index == start) {
> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
> +			VM_WARN_ON(folio->index != start);
> +
>   			/* Maybe PMD-mapped */
>   			result = SCAN_PTE_MAPPED_HUGEPAGE;
>   			goto out_unlock;
> @@ -2329,8 +2330,9 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
>   			continue;
>   		}
>   
> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
> -		    folio->index == start) {
> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
> +			VM_WARN_ON(folio->index != start);
> +
>   			/* Maybe PMD-mapped */
>   			result = SCAN_PTE_MAPPED_HUGEPAGE;
>   			/*




* Re: [PATCH] khugepaged: convert redundant check to WARN_ON
  2026-02-24  8:21 ` Baolin Wang
@ 2026-02-24  8:45   ` Lance Yang
  0 siblings, 0 replies; 6+ messages in thread
From: Lance Yang @ 2026-02-24  8:45 UTC (permalink / raw)
  To: Baolin Wang, Dev Jain
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, baohua, linux-mm,
	linux-kernel, akpm, david, lorenzo.stoakes



On 2026/2/24 16:21, Baolin Wang wrote:
> 
> 
> On 2/19/26 1:48 PM, Dev Jain wrote:
>> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>>
>> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>> iterate on the xarray, have the invariant that
>> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>> A folio is always naturally aligned in the pagecache, therefore
>> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, 
>> HPAGE_PMD_NR) == true ... (ii)
>> thp_vma_allowable_order -> thp_vma_suitable_order requires that the 
>> virtual
>> offsets in the VMA are aligned to the order,
>> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>>
>> Combining (i), (ii) and (iii), the claim is proven.
>>
>> Therefore, convert this to a VM_WARN_ON.
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
> 
> Makes sense to me. Personally, I'd like to keep this VM_WARN_ON() to
> catch unexpected behavior.

+1

Reviewed-by: Lance Yang <lance.yang@linux.dev>



* Re: [PATCH] khugepaged: convert redundant check to WARN_ON
  2026-02-23  4:40   ` Dev Jain
@ 2026-02-24 10:51     ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-24 10:51 UTC (permalink / raw)
  To: Dev Jain, Andrew Morton
  Cc: lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, lance.yang, linux-mm, linux-kernel

On 2/23/26 05:40, Dev Jain wrote:
> 
> On 20/02/26 4:03 am, Andrew Morton wrote:
>> On Thu, 19 Feb 2026 11:18:27 +0530 Dev Jain <dev.jain@arm.com> wrote:
>>
>>> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>>>
>>> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>>> iterate on the xarray, have the invariant that
>>> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>>> A folio is always naturally aligned in the pagecache, therefore
>>> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>>> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
>>> offsets in the VMA are aligned to the order,
>>> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>>>
>>> Combining (i), (ii) and (iii), the claim is proven.
>>>
>>> Therefore, convert this to a VM_WARN_ON.
>>>
>>> ...
>>>
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -2000,8 +2000,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>>>  		 * we locked the first folio, then a THP might be there already.
>>>  		 * This will be discovered on the first iteration.
>>>  		 */
>>> -		if (folio_order(folio) == HPAGE_PMD_ORDER &&
>>> -		    folio->index == start) {
>>> +		if (folio_order(folio) == HPAGE_PMD_ORDER) {
>>> +			VM_WARN_ON(folio->index != start);
>> It's a bit sad to remove unneeded code by retaining that code and
>> adding even more code.
>>
>> Perhaps add a comment reminding us to remove this altogether at a later
>> date?
> 
> But then shouldn't I remove it just now :) 

I'd say, remove it directly. Even if "folio->index != start", both
functions should do the right thing.

While at it, we could heavily simplify the comments to "If there is
already a PMD-sized THP, we can only try collapsing the PTE table".

Or something like that.

-- 
Cheers,

David



end of thread, other threads:[~2026-02-24 10:51 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
2026-02-19  5:48 [PATCH] khugepaged: convert redundant check to WARN_ON Dev Jain
2026-02-19 22:33 ` Andrew Morton
2026-02-23  4:40   ` Dev Jain
2026-02-24 10:51     ` David Hildenbrand (Arm)
2026-02-24  8:21 ` Baolin Wang
2026-02-24  8:45   ` Lance Yang
