linux-mm.kvack.org archive mirror
* [PATCH] khugepaged: Reduce race probability between migration and khugepaged
@ 2025-06-30  4:48 Dev Jain
  2025-06-30  7:46 ` Baolin Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Dev Jain @ 2025-06-30  4:48 UTC (permalink / raw)
  To: akpm, david
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel, Dev Jain

Suppose a folio is under migration, and khugepaged is also trying to
collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
page cache via filemap_lock_folio(), thus taking a reference on the folio
and sleeping on the folio lock, since the lock is held by the migration
path. Migration will then fail in
__folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
such a race happening (leading to migration failure) by bailing out
if we detect a PMD is marked with a migration entry.

This fixes the migration-shared-anon-thp testcase failure on Apple M3.

Note that this is not a "fix", since it only reduces the chance of
khugepaged interfering with migration; both kernel functionalities
are deemed "best-effort".

Signed-off-by: Dev Jain <dev.jain@arm.com>
---

This patch was part of
https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
but I have sent it separately at Lorenzo's suggestion, and also because
I plan to send the first two patches after David Hildenbrand's
folio_pte_batch series gets merged.

 mm/khugepaged.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1aa7ca67c756..99977bb9bf6a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -31,6 +31,7 @@ enum scan_result {
 	SCAN_FAIL,
 	SCAN_SUCCEED,
 	SCAN_PMD_NULL,
+	SCAN_PMD_MIGRATION,
 	SCAN_PMD_NONE,
 	SCAN_PMD_MAPPED,
 	SCAN_EXCEED_NONE_PTE,
@@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
 
 	if (pmd_none(pmde))
 		return SCAN_PMD_NONE;
+	if (is_pmd_migration_entry(pmde))
+		return SCAN_PMD_MIGRATION;
 	if (!pmd_present(pmde))
 		return SCAN_PMD_NULL;
 	if (pmd_trans_huge(pmde))
@@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
 		return SCAN_VMA_CHECK;
 
-	/* Fast check before locking page if already PMD-mapped */
+	/*
+	 * Fast check before locking folio if already PMD-mapped, or if the
+	 * folio is under migration
+	 */
 	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
-	if (result == SCAN_PMD_MAPPED)
+	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
 		return result;
 
 	/*
@@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
 	case SCAN_PAGE_LRU:
 	case SCAN_DEL_PAGE_LRU:
 	case SCAN_PAGE_FILLED:
+	case SCAN_PMD_MIGRATION:
 		return -EAGAIN;
 	/*
 	 * Other: Trying again likely not to succeed / error intrinsic to
@@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			goto handle_result;
 		/* Whitelisted set of results where continuing OK */
 		case SCAN_PMD_NULL:
+		case SCAN_PMD_MIGRATION:
 		case SCAN_PTE_NON_PRESENT:
 		case SCAN_PTE_UFFD_WP:
 		case SCAN_PAGE_RO:
-- 
2.30.2




* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  4:48 [PATCH] khugepaged: Reduce race probability between migration and khugepaged Dev Jain
@ 2025-06-30  7:46 ` Baolin Wang
  2025-06-30  7:55 ` Anshuman Khandual
  2025-06-30 13:27 ` Lorenzo Stoakes
  2 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2025-06-30  7:46 UTC (permalink / raw)
  To: Dev Jain, akpm, david
  Cc: ziy, lorenzo.stoakes, Liam.Howlett, npache, ryan.roberts, baohua,
	linux-mm, linux-kernel



On 2025/6/30 12:48, Dev Jain wrote:
> Suppose a folio is under migration, and khugepaged is also trying to
> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
> page cache via filemap_lock_folio(), thus taking a reference on the folio
> and sleeping on the folio lock, since the lock is held by the migration
> path. Migration will then fail in
> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
> such a race happening (leading to migration failure) by bailing out
> if we detect a PMD is marked with a migration entry.
> 
> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
> 
> Note that, this is not a "fix" since it only reduces the chance of
> interference of khugepaged with migration, wherein both the kernel
> functionalities are deemed "best-effort".
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---

Looks reasonable to me.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>



* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  4:48 [PATCH] khugepaged: Reduce race probability between migration and khugepaged Dev Jain
  2025-06-30  7:46 ` Baolin Wang
@ 2025-06-30  7:55 ` Anshuman Khandual
  2025-06-30  7:58   ` David Hildenbrand
  2025-06-30  8:12   ` Dev Jain
  2025-06-30 13:27 ` Lorenzo Stoakes
  2 siblings, 2 replies; 11+ messages in thread
From: Anshuman Khandual @ 2025-06-30  7:55 UTC (permalink / raw)
  To: Dev Jain, akpm, david
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel

On 30/06/25 10:18 AM, Dev Jain wrote:
> Suppose a folio is under migration, and khugepaged is also trying to
> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
> page cache via filemap_lock_folio(), thus taking a reference on the folio
> and sleeping on the folio lock, since the lock is held by the migration
> path. Migration will then fail in
> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
> such a race happening (leading to migration failure) by bailing out
> if we detect a PMD is marked with a migration entry.

Could the migration be re-attempted after such a failure? Seems like
the migration failure here is traded for a scan failure instead.

> 
> This fixes the migration-shared-anon-thp testcase failure on Apple M3.

Could you please provide some more context why this test case was
failing earlier and how does this change here fixes the problem ?

> 
> Note that, this is not a "fix" since it only reduces the chance of
> interference of khugepaged with migration, wherein both the kernel
> functionalities are deemed "best-effort".
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> 
> This patch was part of
> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
> but I have sent it separately on suggestion of Lorenzo, and also because
> I plan to send the first two patches after David Hildenbrand's
> folio_pte_batch series gets merged.
> 
>  mm/khugepaged.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1aa7ca67c756..99977bb9bf6a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -31,6 +31,7 @@ enum scan_result {
>  	SCAN_FAIL,
>  	SCAN_SUCCEED,
>  	SCAN_PMD_NULL,
> +	SCAN_PMD_MIGRATION,
>  	SCAN_PMD_NONE,
>  	SCAN_PMD_MAPPED,
>  	SCAN_EXCEED_NONE_PTE,
> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>  
>  	if (pmd_none(pmde))
>  		return SCAN_PMD_NONE;
> +	if (is_pmd_migration_entry(pmde))
> +		return SCAN_PMD_MIGRATION;
>  	if (!pmd_present(pmde))
>  		return SCAN_PMD_NULL;
>  	if (pmd_trans_huge(pmde))
> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>  		return SCAN_VMA_CHECK;
>  
> -	/* Fast check before locking page if already PMD-mapped */
> +	/*
> +	 * Fast check before locking folio if already PMD-mapped, or if the
> +	 * folio is under migration
> +	 */
>  	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
> -	if (result == SCAN_PMD_MAPPED)
> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
Should mapped PMD and migrating PMD be treated equally while scanning ?

>  		return result;
>  
>  	/*
> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>  	case SCAN_PAGE_LRU:
>  	case SCAN_DEL_PAGE_LRU:
>  	case SCAN_PAGE_FILLED:
> +	case SCAN_PMD_MIGRATION:
>  		return -EAGAIN;
>  	/*
>  	 * Other: Trying again likely not to succeed / error intrinsic to
> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>  			goto handle_result;
>  		/* Whitelisted set of results where continuing OK */
>  		case SCAN_PMD_NULL:
> +		case SCAN_PMD_MIGRATION:
>  		case SCAN_PTE_NON_PRESENT:
>  		case SCAN_PTE_UFFD_WP:
>  		case SCAN_PAGE_RO:



* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  7:55 ` Anshuman Khandual
@ 2025-06-30  7:58   ` David Hildenbrand
  2025-06-30  8:12   ` Dev Jain
  1 sibling, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2025-06-30  7:58 UTC (permalink / raw)
  To: Anshuman Khandual, Dev Jain, akpm
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel

On 30.06.25 09:55, Anshuman Khandual wrote:
> On 30/06/25 10:18 AM, Dev Jain wrote:
>> Suppose a folio is under migration, and khugepaged is also trying to
>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>> and sleeping on the folio lock, since the lock is held by the migration
>> path. Migration will then fail in
>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>> such a race happening (leading to migration failure) by bailing out
>> if we detect a PMD is marked with a migration entry.
> 
> Could the migration be re-attempted after such failure ? Seems like
> the migration failure here is traded for a scan failure instead.
> 
>>
>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
> 
> Could you please provide some more context why this test case was
> failing earlier and how does this change here fixes the problem ?
> 
>>
>> Note that, this is not a "fix" since it only reduces the chance of
>> interference of khugepaged with migration, wherein both the kernel
>> functionalities are deemed "best-effort".
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>>
>> This patch was part of
>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>> but I have sent it separately on suggestion of Lorenzo, and also because
>> I plan to send the first two patches after David Hildenbrand's
>> folio_pte_batch series gets merged.
>>
>>   mm/khugepaged.c | 12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1aa7ca67c756..99977bb9bf6a 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -31,6 +31,7 @@ enum scan_result {
>>   	SCAN_FAIL,
>>   	SCAN_SUCCEED,
>>   	SCAN_PMD_NULL,
>> +	SCAN_PMD_MIGRATION,
>>   	SCAN_PMD_NONE,
>>   	SCAN_PMD_MAPPED,
>>   	SCAN_EXCEED_NONE_PTE,
>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>   
>>   	if (pmd_none(pmde))
>>   		return SCAN_PMD_NONE;
>> +	if (is_pmd_migration_entry(pmde))
>> +		return SCAN_PMD_MIGRATION;
>>   	if (!pmd_present(pmde))
>>   		return SCAN_PMD_NULL;
>>   	if (pmd_trans_huge(pmde))
>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>   	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>   		return SCAN_VMA_CHECK;
>>   
>> -	/* Fast check before locking page if already PMD-mapped */
>> +	/*
>> +	 * Fast check before locking folio if already PMD-mapped, or if the
>> +	 * folio is under migration
>> +	 */
>>   	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>> -	if (result == SCAN_PMD_MAPPED)
>> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
> Should mapped PMD and migrating PMD be treated equally while scanning ?

Wanted to ask the same thing I think: why not simply use 
SCAN_PMD_MAPPED? After all, the folio is already pmd-mapped, just not 
using a present entry but (temporarily) using a migration entry.

-- 
Cheers,

David / dhildenb




* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  7:55 ` Anshuman Khandual
  2025-06-30  7:58   ` David Hildenbrand
@ 2025-06-30  8:12   ` Dev Jain
  2025-06-30  8:19     ` David Hildenbrand
  1 sibling, 1 reply; 11+ messages in thread
From: Dev Jain @ 2025-06-30  8:12 UTC (permalink / raw)
  To: Anshuman Khandual, akpm, david
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel


On 30/06/25 1:25 pm, Anshuman Khandual wrote:
> On 30/06/25 10:18 AM, Dev Jain wrote:
>> Suppose a folio is under migration, and khugepaged is also trying to
>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>> and sleeping on the folio lock, since the lock is held by the migration
>> path. Migration will then fail in
>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>> such a race happening (leading to migration failure) by bailing out
>> if we detect a PMD is marked with a migration entry.
> Could the migration be re-attempted after such failure ? Seems like
> the migration failure here is traded for a scan failure instead.

We already re-attempt migration. See NR_MAX_MIGRATE_PAGES_RETRY and
NR_MAX_MIGRATE_ASYNC_RETRY. Also, just before freezing the refcount,
we do a suitable refcount check in folio_migrate_mapping(). So the race
happens between that check and folio_ref_freeze() in
__folio_migrate_mapping(); therefore the window for the race is already
very small in the migration path, but large in the khugepaged path.
  

>
>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
> Could you please provide some more context why this test case was
> failing earlier and how does this change here fixes the problem ?

IMHO the explanation I have given in the patch description is clear
and succinct: the testcase is failing due to the race. This patch
shortens the race window, and the test on this particular hardware
does not hit the race window again.

>
>> Note that, this is not a "fix" since it only reduces the chance of
>> interference of khugepaged with migration, wherein both the kernel
>> functionalities are deemed "best-effort".
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>>
>> This patch was part of
>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>> but I have sent it separately on suggestion of Lorenzo, and also because
>> I plan to send the first two patches after David Hildenbrand's
>> folio_pte_batch series gets merged.
>>
>>   mm/khugepaged.c | 12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1aa7ca67c756..99977bb9bf6a 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -31,6 +31,7 @@ enum scan_result {
>>   	SCAN_FAIL,
>>   	SCAN_SUCCEED,
>>   	SCAN_PMD_NULL,
>> +	SCAN_PMD_MIGRATION,
>>   	SCAN_PMD_NONE,
>>   	SCAN_PMD_MAPPED,
>>   	SCAN_EXCEED_NONE_PTE,
>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>   
>>   	if (pmd_none(pmde))
>>   		return SCAN_PMD_NONE;
>> +	if (is_pmd_migration_entry(pmde))
>> +		return SCAN_PMD_MIGRATION;
>>   	if (!pmd_present(pmde))
>>   		return SCAN_PMD_NULL;
>>   	if (pmd_trans_huge(pmde))
>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>   	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>   		return SCAN_VMA_CHECK;
>>   
>> -	/* Fast check before locking page if already PMD-mapped */
>> +	/*
>> +	 * Fast check before locking folio if already PMD-mapped, or if the
>> +	 * folio is under migration
>> +	 */
>>   	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>> -	if (result == SCAN_PMD_MAPPED)
>> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
> Should mapped PMD and migrating PMD be treated equally while scanning ?

SCAN_PMD_MAPPED is used as an indicator to change the result to SCAN_SUCCEED
in khugepaged_scan_mm_slot(), after the call to collapse_pte_mapped_thp(). It
is also used in madvise_collapse() to do ++thps, which is used to set the
return value of madvise_collapse(). So I think this approach will be wrong.

>
>>   		return result;
>>   
>>   	/*
>> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>>   	case SCAN_PAGE_LRU:
>>   	case SCAN_DEL_PAGE_LRU:
>>   	case SCAN_PAGE_FILLED:
>> +	case SCAN_PMD_MIGRATION:
>>   		return -EAGAIN;
>>   	/*
>>   	 * Other: Trying again likely not to succeed / error intrinsic to
>> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>>   			goto handle_result;
>>   		/* Whitelisted set of results where continuing OK */
>>   		case SCAN_PMD_NULL:
>> +		case SCAN_PMD_MIGRATION:
>>   		case SCAN_PTE_NON_PRESENT:
>>   		case SCAN_PTE_UFFD_WP:
>>   		case SCAN_PAGE_RO:



* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  8:12   ` Dev Jain
@ 2025-06-30  8:19     ` David Hildenbrand
  2025-06-30  8:39       ` Dev Jain
  0 siblings, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2025-06-30  8:19 UTC (permalink / raw)
  To: Dev Jain, Anshuman Khandual, akpm
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel

On 30.06.25 10:12, Dev Jain wrote:
> 
> On 30/06/25 1:25 pm, Anshuman Khandual wrote:
>> On 30/06/25 10:18 AM, Dev Jain wrote:
>>> Suppose a folio is under migration, and khugepaged is also trying to
>>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>>> and sleeping on the folio lock, since the lock is held by the migration
>>> path. Migration will then fail in
>>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>>> such a race happening (leading to migration failure) by bailing out
>>> if we detect a PMD is marked with a migration entry.
>> Could the migration be re-attempted after such failure ? Seems like
>> the migration failure here is traded for a scan failure instead.
> 
> We already re-attempt migration. See NR_MAX_MIGRATE_PAGES_RETRY and
> NR_MAX_MIGRATE_ASYNC_RETRY. Also just before freezing the refcount,
> we do a suitable refcount check in folio_migrate_mapping(). So the
> race happens after this and folio_ref_freeze() in __folio_migrate_mapping(),
> therefore the window for the race is already very small in the migration
> path, but large in the khugepaged path.
>    
> 
>>
>>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>> Could you please provide some more context why this test case was
>> failing earlier and how does this change here fixes the problem ?
> 
> IMHO the explanation I have given in the patch description is clear
> and succinct: the testcase is failing due to the race. This patch
> shortens the race window, and the test on this particular hardware
> does not hit the race window again.
> 
>>
>>> Note that, this is not a "fix" since it only reduces the chance of
>>> interference of khugepaged with migration, wherein both the kernel
>>> functionalities are deemed "best-effort".
>>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>>
>>> This patch was part of
>>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>>> but I have sent it separately on suggestion of Lorenzo, and also because
>>> I plan to send the first two patches after David Hildenbrand's
>>> folio_pte_batch series gets merged.
>>>
>>>    mm/khugepaged.c | 12 ++++++++++--
>>>    1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1aa7ca67c756..99977bb9bf6a 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -31,6 +31,7 @@ enum scan_result {
>>>    	SCAN_FAIL,
>>>    	SCAN_SUCCEED,
>>>    	SCAN_PMD_NULL,
>>> +	SCAN_PMD_MIGRATION,
>>>    	SCAN_PMD_NONE,
>>>    	SCAN_PMD_MAPPED,
>>>    	SCAN_EXCEED_NONE_PTE,
>>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>>    
>>>    	if (pmd_none(pmde))
>>>    		return SCAN_PMD_NONE;
>>> +	if (is_pmd_migration_entry(pmde))
>>> +		return SCAN_PMD_MIGRATION;
>>>    	if (!pmd_present(pmde))
>>>    		return SCAN_PMD_NULL;
>>>    	if (pmd_trans_huge(pmde))
>>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>    	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>>    		return SCAN_VMA_CHECK;
>>>    
>>> -	/* Fast check before locking page if already PMD-mapped */
>>> +	/*
>>> +	 * Fast check before locking folio if already PMD-mapped, or if the
>>> +	 * folio is under migration
>>> +	 */
>>>    	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>>> -	if (result == SCAN_PMD_MAPPED)
>>> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>> Should mapped PMD and migrating PMD be treated equally while scanning ?
> 
> SCAN_PMD_MAPPED is used as an indicator to change result to SCAN_SUCCEED
> in khugepaged_scan_mm_slot: after the call to collapse_pte_mapped_thp. And,
> it is also used in madvise_collapse() to do ++thps which is used to set the
> return value of madvise_collapse. So I think this approach will be wrong.

But if it already is PMD mapped (just temporarily through a migration 
entry), isn't this exactly what we want?

-- 
Cheers,

David / dhildenb




* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  8:19     ` David Hildenbrand
@ 2025-06-30  8:39       ` Dev Jain
  0 siblings, 0 replies; 11+ messages in thread
From: Dev Jain @ 2025-06-30  8:39 UTC (permalink / raw)
  To: David Hildenbrand, Anshuman Khandual, akpm
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel


On 30/06/25 1:49 pm, David Hildenbrand wrote:
> On 30.06.25 10:12, Dev Jain wrote:
>>
>> On 30/06/25 1:25 pm, Anshuman Khandual wrote:
>>> On 30/06/25 10:18 AM, Dev Jain wrote:
>>>> Suppose a folio is under migration, and khugepaged is also trying to
>>>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from 
>>>> the
>>>> page cache via filemap_lock_folio(), thus taking a reference on the 
>>>> folio
>>>> and sleeping on the folio lock, since the lock is held by the 
>>>> migration
>>>> path. Migration will then fail in
>>>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>>>> such a race happening (leading to migration failure) by bailing out
>>>> if we detect a PMD is marked with a migration entry.
>>> Could the migration be re-attempted after such failure ? Seems like
>>> the migration failure here is traded for a scan failure instead.
>>
>> We already re-attempt migration. See NR_MAX_MIGRATE_PAGES_RETRY and
>> NR_MAX_MIGRATE_ASYNC_RETRY. Also just before freezing the refcount,
>> we do a suitable refcount check in folio_migrate_mapping(). So the
>> race happens after this and folio_ref_freeze() in 
>> __folio_migrate_mapping(),
>> therefore the window for the race is already very small in the migration
>> path, but large in the khugepaged path.
>>
>>>
>>>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>>> Could you please provide some more context why this test case was
>>> failing earlier and how does this change here fixes the problem ?
>>
>> IMHO the explanation I have given in the patch description is clear
>> and succinct: the testcase is failing due to the race. This patch
>> shortens the race window, and the test on this particular hardware
>> does not hit the race window again.
>>
>>>
>>>> Note that, this is not a "fix" since it only reduces the chance of
>>>> interference of khugepaged with migration, wherein both the kernel
>>>> functionalities are deemed "best-effort".
>>>>
>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>> ---
>>>>
>>>> This patch was part of
>>>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>>>> but I have sent it separately on suggestion of Lorenzo, and also 
>>>> because
>>>> I plan to send the first two patches after David Hildenbrand's
>>>> folio_pte_batch series gets merged.
>>>>
>>>>    mm/khugepaged.c | 12 ++++++++++--
>>>>    1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>> index 1aa7ca67c756..99977bb9bf6a 100644
>>>> --- a/mm/khugepaged.c
>>>> +++ b/mm/khugepaged.c
>>>> @@ -31,6 +31,7 @@ enum scan_result {
>>>>        SCAN_FAIL,
>>>>        SCAN_SUCCEED,
>>>>        SCAN_PMD_NULL,
>>>> +    SCAN_PMD_MIGRATION,
>>>>        SCAN_PMD_NONE,
>>>>        SCAN_PMD_MAPPED,
>>>>        SCAN_EXCEED_NONE_PTE,
>>>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>>>           if (pmd_none(pmde))
>>>>            return SCAN_PMD_NONE;
>>>> +    if (is_pmd_migration_entry(pmde))
>>>> +        return SCAN_PMD_MIGRATION;
>>>>        if (!pmd_present(pmde))
>>>>            return SCAN_PMD_NULL;
>>>>        if (pmd_trans_huge(pmde))
>>>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct 
>>>> *mm, unsigned long addr,
>>>>            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>>>            return SCAN_VMA_CHECK;
>>>>    -    /* Fast check before locking page if already PMD-mapped */
>>>> +    /*
>>>> +     * Fast check before locking folio if already PMD-mapped, or 
>>>> if the
>>>> +     * folio is under migration
>>>> +     */
>>>>        result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>>>> -    if (result == SCAN_PMD_MAPPED)
>>>> +    if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>>> Should mapped PMD and migrating PMD be treated equally while scanning ?
>>
>> SCAN_PMD_MAPPED is used as an indicator to change result to SCAN_SUCCEED
>> in khugepaged_scan_mm_slot: after the call to 
>> collapse_pte_mapped_thp. And,
>> it is also used in madvise_collapse() to do ++thps which is used to 
>> set the
>> return value of madvise_collapse. So I think this approach will be 
>> wrong.
>
> But if it already is PMD mapped (just temporarily through a migration 
> entry), isn't this exactly what we want?

Good point. I was about to ask "what about PMD-folio splitting during
migration?", but during unmapping of the folios, if we cannot migrate the
PMD-folios, they will be split via unmap_folio() along with the PMD.
Therefore we can be sure that if we encounter a PMD migration entry, it
will eventually be converted to a PMD leaf entry on migration success or
failure.

I'll merge this into SCAN_PMD_MAPPED, thanks.
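
The agreed-upon merge can be modeled in plain C. This is a standalone
userspace sketch, not the kernel function: pmd_t and the helpers below are
stand-in types and predicates, and only illustrate that a PMD migration
entry would now report the same result as an already PMD-mapped folio.

```c
#include <assert.h>
#include <stdbool.h>

enum scan_result { SCAN_SUCCEED, SCAN_PMD_NULL, SCAN_PMD_NONE, SCAN_PMD_MAPPED };

/* Stand-in for pmd_t: just enough state for the model. */
typedef struct {
	bool none;
	bool present;
	bool trans_huge;
	bool migration;
} pmd_t;

static bool pmd_none(pmd_t p)               { return p.none; }
static bool pmd_present(pmd_t p)            { return p.present; }
static bool pmd_trans_huge(pmd_t p)         { return p.trans_huge; }
static bool is_pmd_migration_entry(pmd_t p) { return p.migration; }

/* Model of check_pmd_state() after folding migration into MAPPED:
 * a PMD migration entry means the folio is already PMD-mapped, just
 * temporarily through a non-present entry. */
static enum scan_result check_pmd_state(pmd_t pmde)
{
	if (pmd_none(pmde))
		return SCAN_PMD_NONE;
	if (is_pmd_migration_entry(pmde) || pmd_trans_huge(pmde))
		return SCAN_PMD_MAPPED;
	if (!pmd_present(pmde))
		return SCAN_PMD_NULL;
	return SCAN_SUCCEED;	/* pte-mapped: collapse may proceed */
}
```

With this shape, collapse_pte_mapped_thp() bails out on the existing
SCAN_PMD_MAPPED check without needing a new scan_result value.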




* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30  4:48 [PATCH] khugepaged: Reduce race probability between migration and khugepaged Dev Jain
  2025-06-30  7:46 ` Baolin Wang
  2025-06-30  7:55 ` Anshuman Khandual
@ 2025-06-30 13:27 ` Lorenzo Stoakes
  2025-06-30 14:30   ` Dev Jain
  2 siblings, 1 reply; 11+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 13:27 UTC (permalink / raw)
  To: Dev Jain
  Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel

On Mon, Jun 30, 2025 at 10:18:37AM +0530, Dev Jain wrote:
> Suppose a folio is under migration, and khugepaged is also trying to
> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
> page cache via filemap_lock_folio(), thus taking a reference on the folio
> and sleeping on the folio lock, since the lock is held by the migration
> path. Migration will then fail in
> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
> such a race happening (leading to migration failure) by bailing out
> if we detect a PMD is marked with a migration entry.

This is a nice find!

>
> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>
> Note that, this is not a "fix" since it only reduces the chance of
> interference of khugepaged with migration, wherein both the kernel
> functionalities are deemed "best-effort".

Thanks for separating this out, appreciated!

>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>
> This patch was part of
> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
> but I have sent it separately on suggestion of Lorenzo, and also because
> I plan to send the first two patches after David Hildenbrand's
> folio_pte_batch series gets merged.
>
>  mm/khugepaged.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1aa7ca67c756..99977bb9bf6a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -31,6 +31,7 @@ enum scan_result {
>  	SCAN_FAIL,
>  	SCAN_SUCCEED,
>  	SCAN_PMD_NULL,
> +	SCAN_PMD_MIGRATION,
>  	SCAN_PMD_NONE,
>  	SCAN_PMD_MAPPED,
>  	SCAN_EXCEED_NONE_PTE,
> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>
>  	if (pmd_none(pmde))
>  		return SCAN_PMD_NONE;
> +	if (is_pmd_migration_entry(pmde))
> +		return SCAN_PMD_MIGRATION;

With David's suggestions I guess this boils down to simply adding this line.

Could we add a quick comment to explain why here?

Thanks!

>  	if (!pmd_present(pmde))
>  		return SCAN_PMD_NULL;
>  	if (pmd_trans_huge(pmde))
> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>  		return SCAN_VMA_CHECK;
>
> -	/* Fast check before locking page if already PMD-mapped */
> +	/*
> +	 * Fast check before locking folio if already PMD-mapped, or if the
> +	 * folio is under migration
> +	 */
>  	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
> -	if (result == SCAN_PMD_MAPPED)
> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>  		return result;
>
>  	/*
> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>  	case SCAN_PAGE_LRU:
>  	case SCAN_DEL_PAGE_LRU:
>  	case SCAN_PAGE_FILLED:
> +	case SCAN_PMD_MIGRATION:
>  		return -EAGAIN;
>  	/*
>  	 * Other: Trying again likely not to succeed / error intrinsic to
> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>  			goto handle_result;
>  		/* Whitelisted set of results where continuing OK */
>  		case SCAN_PMD_NULL:
> +		case SCAN_PMD_MIGRATION:
>  		case SCAN_PTE_NON_PRESENT:
>  		case SCAN_PTE_UFFD_WP:
>  		case SCAN_PAGE_RO:
> --
> 2.30.2
>



* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30 13:27 ` Lorenzo Stoakes
@ 2025-06-30 14:30   ` Dev Jain
  2025-07-01  4:30     ` Anshuman Khandual
  0 siblings, 1 reply; 11+ messages in thread
From: Dev Jain @ 2025-06-30 14:30 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel


On 30/06/25 6:57 pm, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 10:18:37AM +0530, Dev Jain wrote:
>> Suppose a folio is under migration, and khugepaged is also trying to
>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>> and sleeping on the folio lock, since the lock is held by the migration
>> path. Migration will then fail in
>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>> such a race happening (leading to migration failure) by bailing out
>> if we detect a PMD is marked with a migration entry.
> This is a nice find!
>
>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>>
>> Note that, this is not a "fix" since it only reduces the chance of
>> interference of khugepaged with migration, wherein both the kernel
>> functionalities are deemed "best-effort".
> Thanks for separating this out, appreciated!
>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>>
>> This patch was part of
>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>> but I have sent it separately on suggestion of Lorenzo, and also because
>> I plan to send the first two patches after David Hildenbrand's
>> folio_pte_batch series gets merged.
>>
>>   mm/khugepaged.c | 12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1aa7ca67c756..99977bb9bf6a 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -31,6 +31,7 @@ enum scan_result {
>>   	SCAN_FAIL,
>>   	SCAN_SUCCEED,
>>   	SCAN_PMD_NULL,
>> +	SCAN_PMD_MIGRATION,
>>   	SCAN_PMD_NONE,
>>   	SCAN_PMD_MAPPED,
>>   	SCAN_EXCEED_NONE_PTE,
>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>
>>   	if (pmd_none(pmde))
>>   		return SCAN_PMD_NONE;
>> +	if (is_pmd_migration_entry(pmde))
>> +		return SCAN_PMD_MIGRATION;
> With David's suggestions I guess this boils down to simply adding this line.

I think it should be

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1aa7ca67c756..8a6ba5c8ba4d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -941,10 +941,10 @@ static inline int check_pmd_state(pmd_t *pmd)
  
  	if (pmd_none(pmde))
  		return SCAN_PMD_NONE;
+	if (is_pmd_migration_entry(pmde) || pmd_trans_huge(pmde))
+		return SCAN_PMD_MAPPED;
  	if (!pmd_present(pmde))
  		return SCAN_PMD_NULL;
-	if (pmd_trans_huge(pmde))
-		return SCAN_PMD_MAPPED;
  	if (pmd_bad(pmde))
  		return SCAN_PMD_NULL;
  	return SCAN_SUCCEED;

I moved this check above since we don't want to exit prematurely
due to !pmd_present(pmde): a PMD under migration is a non-present
(swap-style) entry, so the pmd_present() check would return
SCAN_PMD_NULL before we ever reach the migration test.


>
> Could we add a quick comment to explain why here?

Sure.

>
> Thanks!
>
>>   	if (!pmd_present(pmde))
>>   		return SCAN_PMD_NULL;
>>   	if (pmd_trans_huge(pmde))
>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>   	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>   		return SCAN_VMA_CHECK;
>>
>> -	/* Fast check before locking page if already PMD-mapped */
>> +	/*
>> +	 * Fast check before locking folio if already PMD-mapped, or if the
>> +	 * folio is under migration
>> +	 */
>>   	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>> -	if (result == SCAN_PMD_MAPPED)
>> +	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>>   		return result;
>>
>>   	/*
>> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>>   	case SCAN_PAGE_LRU:
>>   	case SCAN_DEL_PAGE_LRU:
>>   	case SCAN_PAGE_FILLED:
>> +	case SCAN_PMD_MIGRATION:
>>   		return -EAGAIN;
>>   	/*
>>   	 * Other: Trying again likely not to succeed / error intrinsic to
>> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>>   			goto handle_result;
>>   		/* Whitelisted set of results where continuing OK */
>>   		case SCAN_PMD_NULL:
>> +		case SCAN_PMD_MIGRATION:
>>   		case SCAN_PTE_NON_PRESENT:
>>   		case SCAN_PTE_UFFD_WP:
>>   		case SCAN_PAGE_RO:
>> --
>> 2.30.2
>>



* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-06-30 14:30   ` Dev Jain
@ 2025-07-01  4:30     ` Anshuman Khandual
  2025-07-01  4:39       ` Dev Jain
  0 siblings, 1 reply; 11+ messages in thread
From: Anshuman Khandual @ 2025-07-01  4:30 UTC (permalink / raw)
  To: Dev Jain, Lorenzo Stoakes
  Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel



On 30/06/25 8:00 PM, Dev Jain wrote:
> 
> On 30/06/25 6:57 pm, Lorenzo Stoakes wrote:
>> On Mon, Jun 30, 2025 at 10:18:37AM +0530, Dev Jain wrote:
>>> Suppose a folio is under migration, and khugepaged is also trying to
>>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>>> and sleeping on the folio lock, since the lock is held by the migration
>>> path. Migration will then fail in
>>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>>> such a race happening (leading to migration failure) by bailing out
>>> if we detect a PMD is marked with a migration entry.
>> This is a nice find!
>>
>>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>>>
>>> Note that, this is not a "fix" since it only reduces the chance of
>>> interference of khugepaged with migration, wherein both the kernel
>>> functionalities are deemed "best-effort".
>> Thanks for separating this out, appreciated!
>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>>
>>> This patch was part of
>>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>>> but I have sent it separately on suggestion of Lorenzo, and also because
>>> I plan to send the first two patches after David Hildenbrand's
>>> folio_pte_batch series gets merged.
>>>
>>>   mm/khugepaged.c | 12 ++++++++++--
>>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1aa7ca67c756..99977bb9bf6a 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -31,6 +31,7 @@ enum scan_result {
>>>       SCAN_FAIL,
>>>       SCAN_SUCCEED,
>>>       SCAN_PMD_NULL,
>>> +    SCAN_PMD_MIGRATION,
>>>       SCAN_PMD_NONE,
>>>       SCAN_PMD_MAPPED,
>>>       SCAN_EXCEED_NONE_PTE,
>>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>>
>>>       if (pmd_none(pmde))
>>>           return SCAN_PMD_NONE;
>>> +    if (is_pmd_migration_entry(pmde))
>>> +        return SCAN_PMD_MIGRATION;
>> With David's suggestions I guess this boils down to simply adding this line.
> 
> I think it should be
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1aa7ca67c756..8a6ba5c8ba4d 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -941,10 +941,10 @@ static inline int check_pmd_state(pmd_t *pmd)
>  
>      if (pmd_none(pmde))
>          return SCAN_PMD_NONE;
> +    if (is_pmd_migration_entry(pmde) || pmd_trans_huge(pmde))
> +        return SCAN_PMD_MAPPED;
>      if (!pmd_present(pmde))
>          return SCAN_PMD_NULL;
> -    if (pmd_trans_huge(pmde))
> -        return SCAN_PMD_MAPPED;
>      if (pmd_bad(pmde))
>          return SCAN_PMD_NULL;
>      return SCAN_SUCCEED;
> 
> Moving this line above since we don't want to exit prematurely
> due to !pmd_present(pmde).

Might be cleaner to just add the migration test separately before
the pmd_present() check, without modifying the existing pmd_trans_huge() test.

	if (is_pmd_migration_entry(pmde))
		return SCAN_PMD_MAPPED;

> 
> 
>>
>> Could we add a quick comment to explain why here?
> 
> Sure.
> 
>>
>> Thanks!
>>
>>>       if (!pmd_present(pmde))
>>>           return SCAN_PMD_NULL;
>>>       if (pmd_trans_huge(pmde))
>>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>           !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>>           return SCAN_VMA_CHECK;
>>>
>>> -    /* Fast check before locking page if already PMD-mapped */
>>> +    /*
>>> +     * Fast check before locking folio if already PMD-mapped, or if the
>>> +     * folio is under migration
>>> +     */
>>>       result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>>> -    if (result == SCAN_PMD_MAPPED)
>>> +    if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>>>           return result;
>>>
>>>       /*
>>> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>>>       case SCAN_PAGE_LRU:
>>>       case SCAN_DEL_PAGE_LRU:
>>>       case SCAN_PAGE_FILLED:
>>> +    case SCAN_PMD_MIGRATION:
>>>           return -EAGAIN;
>>>       /*
>>>        * Other: Trying again likely not to succeed / error intrinsic to
>>> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>>>               goto handle_result;
>>>           /* Whitelisted set of results where continuing OK */
>>>           case SCAN_PMD_NULL:
>>> +        case SCAN_PMD_MIGRATION:
>>>           case SCAN_PTE_NON_PRESENT:
>>>           case SCAN_PTE_UFFD_WP:
>>>           case SCAN_PAGE_RO:
>>> -- 
>>> 2.30.2
>>>
> 




* Re: [PATCH] khugepaged: Reduce race probability between migration and khugepaged
  2025-07-01  4:30     ` Anshuman Khandual
@ 2025-07-01  4:39       ` Dev Jain
  0 siblings, 0 replies; 11+ messages in thread
From: Dev Jain @ 2025-07-01  4:39 UTC (permalink / raw)
  To: Anshuman Khandual, Lorenzo Stoakes
  Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, baohua, linux-mm, linux-kernel


On 01/07/25 10:00 am, Anshuman Khandual wrote:
>
> On 30/06/25 8:00 PM, Dev Jain wrote:
>> On 30/06/25 6:57 pm, Lorenzo Stoakes wrote:
>>> On Mon, Jun 30, 2025 at 10:18:37AM +0530, Dev Jain wrote:
>>>> Suppose a folio is under migration, and khugepaged is also trying to
>>>> collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
>>>> page cache via filemap_lock_folio(), thus taking a reference on the folio
>>>> and sleeping on the folio lock, since the lock is held by the migration
>>>> path. Migration will then fail in
>>>> __folio_migrate_mapping -> folio_ref_freeze. Reduce the probability of
>>>> such a race happening (leading to migration failure) by bailing out
>>>> if we detect a PMD is marked with a migration entry.
>>> This is a nice find!
>>>
>>>> This fixes the migration-shared-anon-thp testcase failure on Apple M3.
>>>>
>>>> Note that, this is not a "fix" since it only reduces the chance of
>>>> interference of khugepaged with migration, wherein both the kernel
>>>> functionalities are deemed "best-effort".
>>> Thanks for separating this out, appreciated!
>>>
>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>> ---
>>>>
>>>> This patch was part of
>>>> https://lore.kernel.org/all/20250625055806.82645-1-dev.jain@arm.com/
>>>> but I have sent it separately on suggestion of Lorenzo, and also because
>>>> I plan to send the first two patches after David Hildenbrand's
>>>> folio_pte_batch series gets merged.
>>>>
>>>>    mm/khugepaged.c | 12 ++++++++++--
>>>>    1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>> index 1aa7ca67c756..99977bb9bf6a 100644
>>>> --- a/mm/khugepaged.c
>>>> +++ b/mm/khugepaged.c
>>>> @@ -31,6 +31,7 @@ enum scan_result {
>>>>        SCAN_FAIL,
>>>>        SCAN_SUCCEED,
>>>>        SCAN_PMD_NULL,
>>>> +    SCAN_PMD_MIGRATION,
>>>>        SCAN_PMD_NONE,
>>>>        SCAN_PMD_MAPPED,
>>>>        SCAN_EXCEED_NONE_PTE,
>>>> @@ -941,6 +942,8 @@ static inline int check_pmd_state(pmd_t *pmd)
>>>>
>>>>        if (pmd_none(pmde))
>>>>            return SCAN_PMD_NONE;
>>>> +    if (is_pmd_migration_entry(pmde))
>>>> +        return SCAN_PMD_MIGRATION;
>>> With David's suggestions I guess this boils down to simply adding this line.
>> I think it should be
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 1aa7ca67c756..8a6ba5c8ba4d 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -941,10 +941,10 @@ static inline int check_pmd_state(pmd_t *pmd)
>>   
>>       if (pmd_none(pmde))
>>           return SCAN_PMD_NONE;
>> +    if (is_pmd_migration_entry(pmde) || pmd_trans_huge(pmde))
>> +        return SCAN_PMD_MAPPED;
>>       if (!pmd_present(pmde))
>>           return SCAN_PMD_NULL;
>> -    if (pmd_trans_huge(pmde))
>> -        return SCAN_PMD_MAPPED;
>>       if (pmd_bad(pmde))
>>           return SCAN_PMD_NULL;
>>       return SCAN_SUCCEED;
>>
>> Moving this line above since we don't want to exit prematurely
>> due to !pmd_present(pmde).
> Might be cleaner to just add the migration test separately before
> the pmd_present() check, without modifying the existing pmd_trans_huge() test.
>
> 	if (is_pmd_migration_entry(pmde))
> 		return SCAN_PMD_MAPPED;

Sounds good.

>>
>>> Could we add a quick comment to explain why here?
>> Sure.
>>
>>> Thanks!
>>>
>>>>        if (!pmd_present(pmde))
>>>>            return SCAN_PMD_NULL;
>>>>        if (pmd_trans_huge(pmde))
>>>> @@ -1502,9 +1505,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>>            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>>>>            return SCAN_VMA_CHECK;
>>>>
>>>> -    /* Fast check before locking page if already PMD-mapped */
>>>> +    /*
>>>> +     * Fast check before locking folio if already PMD-mapped, or if the
>>>> +     * folio is under migration
>>>> +     */
>>>>        result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>>>> -    if (result == SCAN_PMD_MAPPED)
>>>> +    if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
>>>>            return result;
>>>>
>>>>        /*
>>>> @@ -2716,6 +2722,7 @@ static int madvise_collapse_errno(enum scan_result r)
>>>>        case SCAN_PAGE_LRU:
>>>>        case SCAN_DEL_PAGE_LRU:
>>>>        case SCAN_PAGE_FILLED:
>>>> +    case SCAN_PMD_MIGRATION:
>>>>            return -EAGAIN;
>>>>        /*
>>>>         * Other: Trying again likely not to succeed / error intrinsic to
>>>> @@ -2802,6 +2809,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>>>>                goto handle_result;
>>>>            /* Whitelisted set of results where continuing OK */
>>>>            case SCAN_PMD_NULL:
>>>> +        case SCAN_PMD_MIGRATION:
>>>>            case SCAN_PTE_NON_PRESENT:
>>>>            case SCAN_PTE_UFFD_WP:
>>>>            case SCAN_PAGE_RO:
>>>> -- 
>>>> 2.30.2
>>>>



end of thread, other threads:[~2025-07-01  4:40 UTC | newest]

Thread overview: 11+ messages
2025-06-30  4:48 [PATCH] khugepaged: Reduce race probability between migration and khugepaged Dev Jain
2025-06-30  7:46 ` Baolin Wang
2025-06-30  7:55 ` Anshuman Khandual
2025-06-30  7:58   ` David Hildenbrand
2025-06-30  8:12   ` Dev Jain
2025-06-30  8:19     ` David Hildenbrand
2025-06-30  8:39       ` Dev Jain
2025-06-30 13:27 ` Lorenzo Stoakes
2025-06-30 14:30   ` Dev Jain
2025-07-01  4:30     ` Anshuman Khandual
2025-07-01  4:39       ` Dev Jain
