linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	 Will Deacon <will@kernel.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	 David Hildenbrand <david@redhat.com>,
	 Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	 Vlastimil Babka <vbabka@suse.cz>,  Zi Yan <ziy@nvidia.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	 Yang Shi <yang@os.amperecomputing.com>,
	 "Christoph Lameter (Ampere)" <cl@gentwo.org>,
	 Dev Jain <dev.jain@arm.com>,  Barry Song <baohua@kernel.org>,
	 Anshuman Khandual <anshuman.khandual@arm.com>,
	Yicong Yang <yangyicong@hisilicon.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	 Kevin Brodsky <kevin.brodsky@arm.com>,
	 Yin Fengwei <fengwei_yin@linux.alibaba.com>,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH -v2 2/2] arm64, tlbflush: don't TLBI broadcast if page reused in write fault
Date: Thu, 16 Oct 2025 09:35:11 +0800	[thread overview]
Message-ID: <877bwvmxww.fsf@DESKTOP-5N7EMDA> (raw)
In-Reply-To: <9afcdd88-f8f9-4d2f-94d7-7c41b0a25ddf@arm.com> (Ryan Roberts's message of "Wed, 15 Oct 2025 16:28:31 +0100")

Hi, Ryan,

Thanks for the comments!

Ryan Roberts <ryan.roberts@arm.com> writes:

> On 13/10/2025 10:20, Huang Ying wrote:
>> A multi-threaded customer workload with a large memory footprint uses
>> fork()/exec() to run some external programs every few tens of seconds.
>> When running the workload on an arm64 server machine, a significant
>> fraction of the CPU cycles is observed to be spent in the TLB flushing
>> functions; this is not the case on an x86_64 server machine.  As a
>> result, the performance on arm64 is much worse than on x86_64.
>> 
>> While the workload is running, fork()/exec() write-protects all pages
>> in the parent process, so a subsequent memory write in the parent
>> process causes a write protection fault.  The page fault handler then
>> makes the PTE/PDE writable if the page can be reused, which is almost
>> always the case in this workload.  On arm64, to avoid write protection
>> faults on other CPUs, the page fault handler flushes the TLB globally
>> with a TLBI broadcast after changing the PTE/PDE.  However, this isn't
>> always necessary.  Firstly, it's safe to leave some stall
>
> nit: You keep using the word "stall" here and in the code. I think you mean "stale"?

OOPS, my poor English :-(
Yes, it should be "stale".  Thanks for pointing this out; I will fix it
in future versions.

>> read-only TLB entries as long as they are flushed eventually.
>> Secondly, it's quite possible that the original read-only PTE/PDEs
>> aren't cached in the remote TLBs at all if the memory footprint is
>> large.  In fact, on x86_64, the page fault handler doesn't flush the
>> remote TLBs in this situation, which benefits performance considerably.
>> 
>> To improve the performance on arm64, make the write protection fault
>> handler flush the TLB locally instead of globally via TLBI broadcast
>> after making the PTE/PDE writable.  If there are stall read-only TLB
>> entries on remote CPUs, the page fault handler on those CPUs will
>> regard the page fault as spurious and flush the stall TLB entries.
>> 
>> To test the patchset, usemem.c from vm-scalability
>> (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git)
>> was extended to support calling fork()/exec() periodically (merged).
>> To mimic the behavior of the customer workload, run usemem with 4
>> threads, accessing 100GB of memory and calling fork()/exec() every 40
>> seconds.  Test results show that with the patchset the usemem score
>> improves by ~40.6%, and the cycles% of the TLB flush functions drops
>> from ~50.5% to ~0.3% in the perf profile.
>> 
>> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Yang Shi <yang@os.amperecomputing.com>
>> Cc: "Christoph Lameter (Ampere)" <cl@gentwo.org>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Yicong Yang <yangyicong@hisilicon.com>
>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Cc: Kevin Brodsky <kevin.brodsky@arm.com>
>> Cc: Yin Fengwei <fengwei_yin@linux.alibaba.com>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> ---
>>  arch/arm64/include/asm/pgtable.h  | 14 +++++---
>>  arch/arm64/include/asm/tlbflush.h | 56 +++++++++++++++++++++++++++++++
>>  arch/arm64/mm/contpte.c           |  3 +-
>>  arch/arm64/mm/fault.c             |  2 +-
>>  4 files changed, 67 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index aa89c2e67ebc..35bae2e4bcfe 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -130,12 +130,16 @@ static inline void arch_leave_lazy_mmu_mode(void)
>>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>  
>>  /*
>> - * Outside of a few very special situations (e.g. hibernation), we always
>> - * use broadcast TLB invalidation instructions, therefore a spurious page
>> - * fault on one CPU which has been handled concurrently by another CPU
>> - * does not need to perform additional invalidation.
>> + * We use local TLB invalidation instruction when reusing page in
>> + * write protection fault handler to avoid TLBI broadcast in the hot
>> + * path.  This will cause spurious page faults if stall read-only TLB
>> + * entries exist.
>>   */
>> -#define flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
>> +#define flush_tlb_fix_spurious_fault(vma, address, ptep)	\
>> +	local_flush_tlb_page_nonotify(vma, address)
>> +
>> +#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp)	\
>> +	local_flush_tlb_page_nonotify(vma, address)
>>  
>>  /*
>>   * ZERO_PAGE is a global shared page that is always zero: used
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index 18a5dc0c9a54..651b31fd18bb 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -249,6 +249,18 @@ static inline unsigned long get_trans_granule(void)
>>   *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
>>   *		perform a non-hinted invalidation.
>>   *
>> + *	local_flush_tlb_page(vma, addr)
>> + *		Local variant of flush_tlb_page().  Stale TLB entries may
>> + *		remain in remote CPUs.
>> + *
>> + *	local_flush_tlb_page_nonotify(vma, addr)
>> + *		Same as local_flush_tlb_page() except MMU notifier will not be
>> + *		called.
>> + *
>> + *	local_flush_tlb_contpte_range(vma, start, end)
>> + *		Invalidate the virtual-address range '[start, end)' mapped with
>> + *		contpte on local CPU for the user address space corresponding
>> + *		to 'vma->mm'.  Stale TLB entries may remain in remote CPUs.
>>   *
>>   *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
>>   *	on top of these routines, since that is our interface to the mmu_gather
>> @@ -282,6 +294,33 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>>  }
>>  
>> +static inline void __local_flush_tlb_page_nonotify_nosync(
>> +	struct mm_struct *mm, unsigned long uaddr)
>> +{
>> +	unsigned long addr;
>> +
>> +	dsb(nshst);
>> +	addr = __TLBI_VADDR(uaddr, ASID(mm));
>> +	__tlbi(vale1, addr);
>> +	__tlbi_user(vale1, addr);
>> +}
>> +
>> +static inline void local_flush_tlb_page_nonotify(
>> +	struct vm_area_struct *vma, unsigned long uaddr)
>> +{
>> +	__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
>> +	dsb(nsh);
>> +}
>> +
>> +static inline void local_flush_tlb_page(struct vm_area_struct *vma,
>> +					unsigned long uaddr)
>> +{
>> +	__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
>> +	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, uaddr & PAGE_MASK,
>> +						(uaddr & PAGE_MASK) + PAGE_SIZE);
>> +	dsb(nsh);
>> +}
>> +
>>  static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
>>  					   unsigned long uaddr)
>>  {
>> @@ -472,6 +511,23 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>  	dsb(ish);
>>  }
>>  
>> +static inline void local_flush_tlb_contpte_range(struct vm_area_struct *vma,
>> +						 unsigned long start, unsigned long end)
>
> This would be clearer as an API if it was like this:
>
> static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
> 					unsigned long uaddr)
>
> i.e. the user doesn't set the range - it's implicitly CONT_PTE_SIZE starting at
> round_down(uaddr, PAGE_SIZE).

Sure.  Will do this.
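
Something like this minimal, untested sketch is what I have in mind for
the next version (the name and the exact rounding of the start address
are still open; it just specializes local_flush_tlb_contpte_range()
above to one contpte block):

static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
					   unsigned long uaddr)
{
	unsigned long start = round_down(uaddr, PAGE_SIZE);
	unsigned long asid;

	/*
	 * Only the local TLB is invalidated; any stale entries left in
	 * remote TLBs are fixed up via the spurious fault path.
	 */
	dsb(nshst);
	asid = ASID(vma->vm_mm);
	__flush_tlb_range_op(vale1, start, CONT_PTES, PAGE_SIZE, asid,
			     3, true, lpa2_is_enabled());
	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start,
						    start + CONT_PTE_SIZE);
	dsb(nsh);
}

contpte_ptep_set_access_flags() would then simply call
local_flush_tlb_contpte(vma, start_addr).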

> Thanks,
> Ryan
>
>> +{
>> +	unsigned long asid, pages;
>> +
>> +	start = round_down(start, PAGE_SIZE);
>> +	end = round_up(end, PAGE_SIZE);
>> +	pages = (end - start) >> PAGE_SHIFT;
>> +
>> +	dsb(nshst);
>> +	asid = ASID(vma->vm_mm);
>> +	__flush_tlb_range_op(vale1, start, pages, PAGE_SIZE, asid,
>> +			     3, true, lpa2_is_enabled());
>> +	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>> +	dsb(nsh);
>> +}
>> +
>>  static inline void flush_tlb_range(struct vm_area_struct *vma,
>>  				   unsigned long start, unsigned long end)
>>  {
>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>> index c0557945939c..0f9bbb7224dc 100644
>> --- a/arch/arm64/mm/contpte.c
>> +++ b/arch/arm64/mm/contpte.c
>> @@ -622,8 +622,7 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>>  			__ptep_set_access_flags(vma, addr, ptep, entry, 0);
>>  
>>  		if (dirty)
>> -			__flush_tlb_range(vma, start_addr, addr,
>> -							PAGE_SIZE, true, 3);
>> +			local_flush_tlb_contpte_range(vma, start_addr, addr);
>>  	} else {
>>  		__contpte_try_unfold(vma->vm_mm, addr, ptep, orig_pte);
>>  		__ptep_set_access_flags(vma, addr, ptep, entry, dirty);
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index d816ff44faff..22f54f5afe3f 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -235,7 +235,7 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
>>  
>>  	/* Invalidate a stale read-only entry */
>>  	if (dirty)
>> -		flush_tlb_page(vma, address);
>> +		local_flush_tlb_page(vma, address);
>>  	return 1;
>>  }
>>  
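
By the way, for anyone following the thread, the generic code that turns
a hit on such a stale local entry into a local-only fix-up is the
spurious fault path at the end of handle_pte_fault().  Roughly (a
simplified sketch, not the exact mm/memory.c code):

	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		update_mmu_cache_range(vmf, vmf->vma, vmf->address,
				       vmf->pte, 1);
	} else {
		/* Skip the spurious TLB flush for a retried page fault */
		if (vmf->flags & FAULT_FLAG_TRIED)
			goto unlock;
		/*
		 * The PTE was already up to date (e.g. already made
		 * writable on another CPU), so the fault is spurious and
		 * only the stale entry in the local TLB is flushed.
		 */
		if (vmf->flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vmf->vma, vmf->address,
						     vmf->pte);
	}

With patch 1/2, the same approach is applied to huge PMDs via
flush_tlb_fix_spurious_fault_pmd().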

---
Best Regards,
Huang, Ying


Thread overview: 33+ messages
2025-10-13  9:20 [PATCH -v2 0/2] arm, tlbflush: avoid " Huang Ying
2025-10-13  9:20 ` [PATCH -v2 1/2] mm: add spurious fault fixing support for huge pmd Huang Ying
2025-10-14 14:21   ` Lorenzo Stoakes
2025-10-14 14:38     ` David Hildenbrand
2025-10-14 14:49       ` Lorenzo Stoakes
2025-10-14 14:58         ` David Hildenbrand
2025-10-14 15:13           ` Lorenzo Stoakes
2025-10-15  8:43     ` Huang, Ying
2025-10-15 11:20       ` Lorenzo Stoakes
2025-10-15 12:23         ` David Hildenbrand
2025-10-16  2:22         ` Huang, Ying
2025-10-16  8:25           ` Lorenzo Stoakes
2025-10-16  8:59             ` David Hildenbrand
2025-10-16  9:12             ` Huang, Ying
2025-10-13  9:20 ` [PATCH -v2 2/2] arm64, tlbflush: don't TLBI broadcast if page reused in write fault Huang Ying
2025-10-15 15:28   ` Ryan Roberts
2025-10-16  1:35     ` Huang, Ying [this message]
2025-10-22  4:08   ` Barry Song
2025-10-22  7:31     ` Huang, Ying
2025-10-22  8:14       ` Barry Song
2025-10-22  9:02         ` Huang, Ying
2025-10-22  9:17           ` Barry Song
2025-10-22  9:30             ` Huang, Ying
2025-10-22  9:37               ` Barry Song
2025-10-22  9:46                 ` Huang, Ying
2025-10-22  9:55                   ` Barry Song
2025-10-22 10:22                     ` Barry Song
2025-10-22 10:34                     ` Huang, Ying
2025-10-22 10:52                       ` Barry Song
2025-10-23  1:22                         ` Huang, Ying
2025-10-23  5:39                           ` Barry Song
2025-10-23  6:15                             ` Huang, Ying
2025-10-23 10:18     ` Ryan Roberts
