linux-mm.kvack.org archive mirror
* Re: [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch
@ 2025-10-28 13:27 zhangqilong
  0 siblings, 0 replies; 5+ messages in thread
From: zhangqilong @ 2025-10-28 13:27 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: akpm, david, Liam.Howlett, vbabka, rppt, surenb, mhocko, jannh,
	pfalcato, linux-mm, linux-kernel, Wangkefeng (OS Kernel Lab),
	Sunnanyong

> On Mon, Oct 27, 2025 at 10:03:15PM +0800, Zhang Qilong wrote:
> > In the current mremap_folio_pte_batch(), 1) pte_batch_hint() always returns
> > one PTE on non-ARM64 machines, which is not efficient. 2) Next,
> 
> Err... but there's basically no benefit for non-arm64 machines?

We expect it to have a benefit on non-arm64 machines as well. On non-arm64
machines, pte_batch_hint() always returns 1 even when a folio is mapped by
multiple PTEs.
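
For reference, the generic fallback of pte_batch_hint() in include/linux/pgtable.h
is roughly the following, so without an arch override there is no hint to start a
batch from:

	/* Generic fallback: no contiguous-PTE hint, the batch hint is always 1. */
	static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
	{
		return 1;
	}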

> 
> The key benefit is the mTHP side of things and making the underlying
> arch-specific code more efficient right?

Yes, we expect it to benefit the mTHP case, and not just on arm64.

> 
> And again you need to get numbers to demonstrate you don't regress non-arm64.

Yes, I will run a test on x86-64; non-contiguous folios should not cause a
regression. Thanks for the kind reminder.

> 
> > it needs to look up a folio in order to call folio_pte_batch().
> >
> > Due to the newly added can_pte_batch_count(), call it instead of
> > folio_pte_batch(), and rename mremap_folio_pte_batch() to
> > mremap_pte_batch().
> >
> > Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
> > ---
> >  mm/mremap.c | 16 +++-------------
> >  1 file changed, 3 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index bd7314898ec5..d11f93f1622f 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -169,27 +169,17 @@ static pte_t move_soft_dirty_pte(pte_t pte)
> >  		pte = pte_swp_mksoft_dirty(pte);
> >  #endif
> >  	return pte;
> >  }
> >
> > -static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> > +static int mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> >  		pte_t *ptep, pte_t pte, int max_nr)
> >  {
> > -	struct folio *folio;
> > -
> >  	if (max_nr == 1)
> >  		return 1;
> >
> > -	/* Avoid expensive folio lookup if we stand no chance of benefit. */
> > -	if (pte_batch_hint(ptep, pte) == 1)
> > -		return 1;
> 
> Why are we eliminating an easy exit here and instead always invoking the more
> involved function?
> 
> Again this has to be tested against non-arm architectures.
> 
> > -
> > -	folio = vm_normal_folio(vma, addr, pte);
> > -	if (!folio || !folio_test_large(folio))
> > -		return 1;
> > -
> > -	return folio_pte_batch(folio, ptep, pte, max_nr);
> > +	return can_pte_batch_count(vma, ptep, &pte, max_nr, 0);
> 
> This is very silly to have this function now just return another function + a trivial
> check that your function should be doing...
> 
> >  }
> >
> >  static int move_ptes(struct pagetable_move_control *pmc,
> >  		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
> >  {
> > @@ -278,11 +268,11 @@ static int move_ptes(struct pagetable_move_control *pmc,
> >  		 * make sure the physical page stays valid until
> >  		 * the TLB entry for the old mapping has been
> >  		 * flushed.
> >  		 */
> >  		if (pte_present(old_pte)) {
> > -			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
> > +			nr_ptes = mremap_pte_batch(vma, old_addr, old_ptep,
> >  							 old_pte, max_nr_ptes);
> >  			force_flush = true;
> >  		}
> >  		pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);
> >  		pte = move_pte(pte, old_addr, new_addr);
> > --
> > 2.43.0
> >




* Re: [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch
@ 2025-10-28 13:01 zhangqilong
  0 siblings, 0 replies; 5+ messages in thread
From: zhangqilong @ 2025-10-28 13:01 UTC (permalink / raw)
  To: David Hildenbrand, akpm, lorenzo.stoakes, Liam.Howlett, vbabka,
	rppt, surenb, mhocko, jannh, pfalcato
  Cc: linux-mm, linux-kernel, Wangkefeng (OS Kernel Lab), Sunnanyong

On 27.10.25 15:03, Zhang Qilong wrote:
> > In the current mremap_folio_pte_batch(), 1) pte_batch_hint() always returns
> > one PTE on non-ARM64 machines, which is not efficient. 2) Next, it needs to
> > look up a folio in order to call folio_pte_batch().
> >
> > Due to the newly added can_pte_batch_count(), call it instead of
> > folio_pte_batch(), and rename mremap_folio_pte_batch() to
> > mremap_pte_batch().
> >
> > Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
> > ---
> >   mm/mremap.c | 16 +++-------------
> >   1 file changed, 3 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index bd7314898ec5..d11f93f1622f 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -169,27 +169,17 @@ static pte_t move_soft_dirty_pte(pte_t pte)
> >   		pte = pte_swp_mksoft_dirty(pte);
> >   #endif
> >   	return pte;
> >   }
> >
> > -static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> > +static int mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> >   		pte_t *ptep, pte_t pte, int max_nr)
> >   {
> > -	struct folio *folio;
> > -
> >   	if (max_nr == 1)
> >   		return 1;
> >
> > -	/* Avoid expensive folio lookup if we stand no chance of benefit. */
> > -	if (pte_batch_hint(ptep, pte) == 1)
> > -		return 1;
> > -
> > -	folio = vm_normal_folio(vma, addr, pte);
> > -	if (!folio || !folio_test_large(folio))
> > -		return 1;
> > -
> > -	return folio_pte_batch(folio, ptep, pte, max_nr);
> > +	return can_pte_batch_count(vma, ptep, &pte, max_nr, 0);
> >   }
> >
> >   static int move_ptes(struct pagetable_move_control *pmc,
> >   		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
> >   {
> > @@ -278,11 +268,11 @@ static int move_ptes(struct pagetable_move_control *pmc,
> >   		 * make sure the physical page stays valid until
> >   		 * the TLB entry for the old mapping has been
> >   		 * flushed.
> >   		 */
> >   		if (pte_present(old_pte)) {
> > -			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
> > +			nr_ptes = mremap_pte_batch(vma, old_addr, old_ptep,
> >   							 old_pte, max_nr_ptes);
> >   			force_flush = true;
> >   		}
> >   		pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);
> 
> get_and_clear_ptes() documents: "Clear present PTEs that map consecutive
> pages of the same folio, collecting dirty/accessed bits."

Oh, good catch. My focus was solely on the implementations of get_and_clear_ptes()
and set_ptes() and on how they handle PTEs spanning multiple folios, and I missed this
comment. get_and_clear_ptes() collects the dirty/accessed bits across the batched range
and sets them again later.
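
For reference, the generic get_and_clear_full_ptes() fallback folds the bits of every
cleared PTE into the returned one, roughly like this simplified sketch, which is why a
batch crossing a folio boundary would merge bits from different folios:

	pte = ptep_get_and_clear_full(mm, addr, ptep, full);
	while (--nr) {
		ptep++;
		addr += PAGE_SIZE;
		tmp_pte = ptep_get_and_clear_full(mm, addr, ptep, full);
		/* dirty/accessed bits of later PTEs are folded into the first one */
		if (pte_dirty(tmp_pte))
			pte = pte_mkdirty(pte);
		if (pte_young(tmp_pte))
			pte = pte_mkyoung(pte);
	}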

> 
> And as can_pte_batch_count() will merge access/dirty bits, you would silently
> set ptes dirty/accessed that belong to other folios, which sounds very wrong.

Yeah, your analysis is very thorough. The access/dirty bits would be merged between
neighboring batched folios due to get_and_clear_ptes().

If callers do not want access/dirty bits merged across folios (i.e. no bits ignored),
they should call folio_pte_batch(), or the new interface with 'flags | FPB_RESPECT_DIRTY'
(the access bit is respected by default).

> 
> Staring at the code, I wonder if there is also a problem with the write bit, have
> to dig into that.

The write bit is handled similarly to the dirty bit. If callers do not want the write bit
merged across folios, they could call the new interface with 'flags | FPB_RESPECT_WRITE';
pte_same() then compares the write bit against the next neighboring folio's PTE and the
batch breaks if it differs.
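
A rough, hypothetical sketch of such a caller, assuming the last argument of the
proposed can_pte_batch_count() takes the existing FPB_* flags:

	/* Hypothetical: keep per-folio dirty/write state by respecting both bits,
	 * so the batch breaks as soon as a neighboring PTE differs in either. */
	nr_ptes = can_pte_batch_count(vma, ptep, &pte, max_nr,
				      FPB_RESPECT_DIRTY | FPB_RESPECT_WRITE);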

Thanks for the in-depth review.
 
> --
> Cheers
> 
> David / dhildenb
> 



* Re: [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch
  2025-10-27 14:03 ` [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch Zhang Qilong
  2025-10-27 19:41   ` David Hildenbrand
@ 2025-10-27 19:57   ` Lorenzo Stoakes
  1 sibling, 0 replies; 5+ messages in thread
From: Lorenzo Stoakes @ 2025-10-27 19:57 UTC (permalink / raw)
  To: Zhang Qilong
  Cc: akpm, david, Liam.Howlett, vbabka, rppt, surenb, mhocko, jannh,
	pfalcato, linux-mm, linux-kernel, wangkefeng.wang, sunnanyong

On Mon, Oct 27, 2025 at 10:03:15PM +0800, Zhang Qilong wrote:
> In the current mremap_folio_pte_batch(), 1) pte_batch_hint() always
> returns one PTE on non-ARM64 machines, which is not efficient. 2) Next,

Err... but there's basically no benefit for non-arm64 machines?

The key benefit is the mTHP side of things and making the underlying
arch-specific code more efficient right?

And again you need to get numbers to demonstrate you don't regress non-arm64.

> it needs to look up a folio in order to call folio_pte_batch().
>
> Due to the newly added can_pte_batch_count(), call it instead of
> folio_pte_batch(), and rename mremap_folio_pte_batch() to
> mremap_pte_batch().
>
> Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
> ---
>  mm/mremap.c | 16 +++-------------
>  1 file changed, 3 insertions(+), 13 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index bd7314898ec5..d11f93f1622f 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -169,27 +169,17 @@ static pte_t move_soft_dirty_pte(pte_t pte)
>  		pte = pte_swp_mksoft_dirty(pte);
>  #endif
>  	return pte;
>  }
>
> -static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> +static int mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
>  		pte_t *ptep, pte_t pte, int max_nr)
>  {
> -	struct folio *folio;
> -
>  	if (max_nr == 1)
>  		return 1;
>
> -	/* Avoid expensive folio lookup if we stand no chance of benefit. */
> -	if (pte_batch_hint(ptep, pte) == 1)
> -		return 1;

Why are we eliminating an easy exit here and instead always invoking the
more involved function?

Again this has to be tested against non-arm architectures.

> -
> -	folio = vm_normal_folio(vma, addr, pte);
> -	if (!folio || !folio_test_large(folio))
> -		return 1;
> -
> -	return folio_pte_batch(folio, ptep, pte, max_nr);
> +	return can_pte_batch_count(vma, ptep, &pte, max_nr, 0);

This is very silly to have this function now just return another function + a
trivial check that your function should be doing...

>  }
>
>  static int move_ptes(struct pagetable_move_control *pmc,
>  		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
>  {
> @@ -278,11 +268,11 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  		 * make sure the physical page stays valid until
>  		 * the TLB entry for the old mapping has been
>  		 * flushed.
>  		 */
>  		if (pte_present(old_pte)) {
> -			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
> +			nr_ptes = mremap_pte_batch(vma, old_addr, old_ptep,
>  							 old_pte, max_nr_ptes);
>  			force_flush = true;
>  		}
>  		pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);
>  		pte = move_pte(pte, old_addr, new_addr);
> --
> 2.43.0
>



* Re: [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch
  2025-10-27 14:03 ` [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch Zhang Qilong
@ 2025-10-27 19:41   ` David Hildenbrand
  2025-10-27 19:57   ` Lorenzo Stoakes
  1 sibling, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2025-10-27 19:41 UTC (permalink / raw)
  To: Zhang Qilong, akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, jannh, pfalcato
  Cc: linux-mm, linux-kernel, wangkefeng.wang, sunnanyong

On 27.10.25 15:03, Zhang Qilong wrote:
> In the current mremap_folio_pte_batch(), 1) pte_batch_hint() always
> returns one PTE on non-ARM64 machines, which is not efficient. 2) Next,
> it needs to look up a folio in order to call folio_pte_batch().
> 
> Due to the newly added can_pte_batch_count(), call it instead of
> folio_pte_batch(), and rename mremap_folio_pte_batch() to
> mremap_pte_batch().
> 
> Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
> ---
>   mm/mremap.c | 16 +++-------------
>   1 file changed, 3 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/mremap.c b/mm/mremap.c
> index bd7314898ec5..d11f93f1622f 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -169,27 +169,17 @@ static pte_t move_soft_dirty_pte(pte_t pte)
>   		pte = pte_swp_mksoft_dirty(pte);
>   #endif
>   	return pte;
>   }
>   
> -static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> +static int mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
>   		pte_t *ptep, pte_t pte, int max_nr)
>   {
> -	struct folio *folio;
> -
>   	if (max_nr == 1)
>   		return 1;
>   
> -	/* Avoid expensive folio lookup if we stand no chance of benefit. */
> -	if (pte_batch_hint(ptep, pte) == 1)
> -		return 1;
> -
> -	folio = vm_normal_folio(vma, addr, pte);
> -	if (!folio || !folio_test_large(folio))
> -		return 1;
> -
> -	return folio_pte_batch(folio, ptep, pte, max_nr);
> +	return can_pte_batch_count(vma, ptep, &pte, max_nr, 0);
>   }
>   
>   static int move_ptes(struct pagetable_move_control *pmc,
>   		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
>   {
> @@ -278,11 +268,11 @@ static int move_ptes(struct pagetable_move_control *pmc,
>   		 * make sure the physical page stays valid until
>   		 * the TLB entry for the old mapping has been
>   		 * flushed.
>   		 */
>   		if (pte_present(old_pte)) {
> -			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
> +			nr_ptes = mremap_pte_batch(vma, old_addr, old_ptep,
>   							 old_pte, max_nr_ptes);
>   			force_flush = true;
>   		}
>   		pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);

get_and_clear_ptes() documents: "Clear present PTEs that map consecutive 
pages of the same folio, collecting dirty/accessed bits."

And as can_pte_batch_count() will merge access/dirty bits, you would 
silently set ptes dirty/accessed that belong to other folios, which 
sounds very wrong.

Staring at the code, I wonder if there is also a problem with the write 
bit, have to dig into that.

-- 
Cheers

David / dhildenb




* [RFC PATCH 3/3] mm/mremap: Use can_pte_batch_count() instead of folio_pte_batch() for pte batch
  2025-10-27 14:03 [RFC PATCH 0/3] mm: PTEs batch optimization in mincore and mremap Zhang Qilong
@ 2025-10-27 14:03 ` Zhang Qilong
  2025-10-27 19:41   ` David Hildenbrand
  2025-10-27 19:57   ` Lorenzo Stoakes
  0 siblings, 2 replies; 5+ messages in thread
From: Zhang Qilong @ 2025-10-27 14:03 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, jannh, pfalcato
  Cc: linux-mm, linux-kernel, wangkefeng.wang, sunnanyong

In the current mremap_folio_pte_batch(), 1) pte_batch_hint() always
returns one PTE on non-ARM64 machines, which is not efficient. 2) Next,
it needs to look up a folio in order to call folio_pte_batch().

Due to the newly added can_pte_batch_count(), call it instead of
folio_pte_batch(), and rename mremap_folio_pte_batch() to
mremap_pte_batch().

Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
---
 mm/mremap.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index bd7314898ec5..d11f93f1622f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -169,27 +169,17 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 		pte = pte_swp_mksoft_dirty(pte);
 #endif
 	return pte;
 }
 
-static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
+static int mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
 		pte_t *ptep, pte_t pte, int max_nr)
 {
-	struct folio *folio;
-
 	if (max_nr == 1)
 		return 1;
 
-	/* Avoid expensive folio lookup if we stand no chance of benefit. */
-	if (pte_batch_hint(ptep, pte) == 1)
-		return 1;
-
-	folio = vm_normal_folio(vma, addr, pte);
-	if (!folio || !folio_test_large(folio))
-		return 1;
-
-	return folio_pte_batch(folio, ptep, pte, max_nr);
+	return can_pte_batch_count(vma, ptep, &pte, max_nr, 0);
 }
 
 static int move_ptes(struct pagetable_move_control *pmc,
 		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
 {
@@ -278,11 +268,11 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		 * make sure the physical page stays valid until
 		 * the TLB entry for the old mapping has been
 		 * flushed.
 		 */
 		if (pte_present(old_pte)) {
-			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
+			nr_ptes = mremap_pte_batch(vma, old_addr, old_ptep,
 							 old_pte, max_nr_ptes);
 			force_flush = true;
 		}
 		pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);
 		pte = move_pte(pte, old_addr, new_addr);
-- 
2.43.0



