* [PATCH -mm] arm64: tlbflush: Add some comments for TLB batched flushing
From: Yicong Yang @ 2023-07-29 13:14 UTC
To: catalin.marinas, akpm, linux-mm, linux-arm-kernel; +Cc: 21cnbao, yangyicong
From: Yicong Yang <yangyicong@hisilicon.com>
Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is
needed.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
---
arch/arm64/include/asm/tlbflush.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3456866c6a1d..2bad230b95b4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
__flush_tlb_page_nosync(mm, uaddr);
}
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBI issued by a DSB to avoid the race mentioned in
+ * flush_tlb_batched_pending().
+ */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
dsb(ish);
}
+/*
+ * To support batched TLB flushing when unmapping multiple pages, we only
+ * issue the TLBI for each page in arch_tlbbatch_add_pending() and wait
+ * for completion at the end in arch_tlbbatch_flush(). Since a TLBI has
+ * already been issued for each page, only a DSB is needed to synchronise
+ * their effect on the other CPUs.
+ *
+ * This saves the time spent waiting on a DSB, compared with issuing a
+ * TLBI;DSB sequence for each page.
+ */
static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
dsb(ish);
--
2.24.0
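[A minimal sketch of how a caller is expected to drive this pair of hooks when unmapping several pages. The driver function below is illustrative, not the actual mm/rmap.c code, but the arch_* prototypes match the hunk above:

/*
 * Illustrative only: unmap N pages with one broadcast TLBI per page
 * and a single DSB at the end, instead of a TLBI;DSB pair per page.
 */
static void unmap_pages_batched(struct mm_struct *mm,
				unsigned long *uaddrs, int n)
{
	struct arch_tlbflush_unmap_batch batch = {};
	int i;

	for (i = 0; i < n; i++)
		/* issues the TLBI for uaddrs[i] but does not wait for it */
		arch_tlbbatch_add_pending(&batch, mm, uaddrs[i]);

	/* a single dsb(ish) synchronises all the TLBIs issued above */
	arch_tlbbatch_flush(&batch);
}
]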
* Re: [PATCH -mm] arm64: tlbflush: Add some comments for TLB batched flushing
From: Catalin Marinas @ 2023-07-31 10:34 UTC
To: Yicong Yang; +Cc: akpm, linux-mm, linux-arm-kernel, 21cnbao, yangyicong
On Sat, Jul 29, 2023 at 09:14:48PM +0800, Yicong Yang wrote:
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 3456866c6a1d..2bad230b95b4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
> __flush_tlb_page_nosync(mm, uaddr);
> }
>
> +/*
> + * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
> + * synchronise all the TLBI issued by a DSB to avoid the race mentioned in
Nitpick: s/by a DSB/with a DSB/ as it somehow reads that the DSB issued
the TLBI.
Since the rest of the series went in via the mm tree, I assume Andrew
will pick this up as well.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks.
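[To make the race in the first comment concrete, a rough timeline; this is an illustrative reconstruction, and the comment above flush_tlb_batched_pending() in mm/rmap.c documents the actual race. dsb(ish) is the arm64 barrier macro emitting a DSB ISH instruction:

/*
 * Illustrative timeline of the race being guarded against:
 *
 *	CPU0 (batched reclaim)		CPU1 (munmap/mprotect/etc)
 *	----------------------		--------------------------
 *	arch_tlbbatch_add_pending()
 *	  broadcast TLBI, no DSB yet
 *					starts operating on the same
 *					PTEs; stale TLB entries may
 *					still be visible, so it calls
 *					flush_tlb_batched_pending()
 *					  arch_flush_tlb_batched_pending()
 *					    dsb(ish): synchronises the
 *					    TLBIs already issued
 */
]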
* Re: [PATCH -mm] arm64: tlbflush: Add some comments for TLB batched flushing
From: Alistair Popple @ 2023-08-01 1:10 UTC
To: Yicong Yang
Cc: catalin.marinas, akpm, linux-mm, linux-arm-kernel, 21cnbao, yangyicong
Yicong Yang <yangyicong@huawei.com> writes:
Thanks! I was reading this code the other day and it took me a while to
figure out what was going on. These comments would have been very
helpful and match my understanding, so:
Reviewed-by: Alistair Popple <apopple@nvidia.com>
> From: Yicong Yang <yangyicong@hisilicon.com>
>
> Add comments for arch_flush_tlb_batched_pending() and
> arch_tlbbatch_flush() to illustrate why only a DSB is
> needed.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> ---
> arch/arm64/include/asm/tlbflush.h | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 3456866c6a1d..2bad230b95b4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
> __flush_tlb_page_nosync(mm, uaddr);
> }
>
> +/*
> + * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
> + * synchronise all the TLBI issued by a DSB to avoid the race mentioned in
> + * flush_tlb_batched_pending().
> + */
> static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> {
> dsb(ish);
> }
>
> +/*
> + * To support batched TLB flushing when unmapping multiple pages, we only
> + * issue the TLBI for each page in arch_tlbbatch_add_pending() and wait
> + * for completion at the end in arch_tlbbatch_flush(). Since a TLBI has
> + * already been issued for each page, only a DSB is needed to synchronise
> + * their effect on the other CPUs.
> + *
> + * This saves the time spent waiting on a DSB, compared with issuing a
> + * TLBI;DSB sequence for each page.
> + */
> static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> {
> dsb(ish);