* [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 6:01 ` Qi Zheng
2024-12-24 18:26 ` Peter Zijlstra
2024-12-23 2:55 ` [PATCH 02/11] x86/mm: add X86_FEATURE_INVLPGB definition Rik van Riel
` (10 subsequent siblings)
11 siblings, 2 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Currently x86 usse CONFIG_MMU_GATHER_TABLE_FREE when using
paravirt, and not when running on bare metal.
There is no real good reason to do things differently for
each setup. Make them all the same.
Signed-off-by: Rik van Riel <riel@surriel.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
arch/x86/Kconfig | 2 +-
arch/x86/kernel/paravirt.c | 7 +------
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9d7bd0ae48c4..e8743f8c9fd0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,7 +274,7 @@ config X86
select HAVE_PCI
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
- select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
+ select MMU_GATHER_RCU_TABLE_FREE
select MMU_GATHER_MERGE_VMAS
select HAVE_POSIX_CPU_TIMERS_TASK_WORK
select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index fec381533555..2b78a6b466ed 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -59,11 +59,6 @@ void __init native_pv_lock_init(void)
static_branch_enable(&virt_spin_lock_key);
}
-static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
- tlb_remove_page(tlb, table);
-}
-
struct static_key paravirt_steal_enabled;
struct static_key paravirt_steal_rq_enabled;
@@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
.mmu.flush_tlb_kernel = native_flush_tlb_global,
.mmu.flush_tlb_one_user = native_flush_tlb_one_user,
.mmu.flush_tlb_multi = native_flush_tlb_multi,
- .mmu.tlb_remove_table = native_tlb_remove_table,
+ .mmu.tlb_remove_table = tlb_remove_table,
.mmu.exit_mmap = paravirt_nop,
.mmu.notify_page_enc_status_changed = paravirt_nop,
--
2.47.1
* Re: [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
2024-12-23 2:55 ` [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional Rik van Riel
@ 2024-12-23 6:01 ` Qi Zheng
2024-12-23 20:20 ` Rik van Riel
2024-12-24 18:26 ` Peter Zijlstra
1 sibling, 1 reply; 21+ messages in thread
From: Qi Zheng @ 2024-12-23 6:01 UTC (permalink / raw)
To: Rik van Riel
Cc: x86, linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm
Very happy to see this change!
On 2024/12/23 10:55, Rik van Riel wrote:
> Currently x86 usse CONFIG_MMU_GATHER_TABLE_FREE when using
^
use
> paravirt, and not when running on bare metal.
>
> There is no real good reason to do things differently for
> each setup. Make them all the same.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> ---
> arch/x86/Kconfig | 2 +-
> arch/x86/kernel/paravirt.c | 7 +------
> 2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 9d7bd0ae48c4..e8743f8c9fd0 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -274,7 +274,7 @@ config X86
> select HAVE_PCI
> select HAVE_PERF_REGS
> select HAVE_PERF_USER_STACK_DUMP
> - select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
> + select MMU_GATHER_RCU_TABLE_FREE
> select MMU_GATHER_MERGE_VMAS
> select HAVE_POSIX_CPU_TIMERS_TASK_WORK
> select HAVE_REGS_AND_STACK_ACCESS_API
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index fec381533555..2b78a6b466ed 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -59,11 +59,6 @@ void __init native_pv_lock_init(void)
> static_branch_enable(&virt_spin_lock_key);
> }
>
> -static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
> -{
> - tlb_remove_page(tlb, table);
> -}
> -
> struct static_key paravirt_steal_enabled;
> struct static_key paravirt_steal_rq_enabled;
>
> @@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
> .mmu.flush_tlb_kernel = native_flush_tlb_global,
> .mmu.flush_tlb_one_user = native_flush_tlb_one_user,
> .mmu.flush_tlb_multi = native_flush_tlb_multi,
> - .mmu.tlb_remove_table = native_tlb_remove_table,
> + .mmu.tlb_remove_table = tlb_remove_table,
>
> .mmu.exit_mmap = paravirt_nop,
> .mmu.notify_page_enc_status_changed = paravirt_nop,
It looks like this patch series is not based on the latest linux-next.
In addition to the above case, maybe the paravirt_tlb_remove_table()
in arch/x86/mm/pgtable.c also needs to be changed to tlb_remove_table()?
Thanks,
Qi
* Re: [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
2024-12-23 6:01 ` Qi Zheng
@ 2024-12-23 20:20 ` Rik van Riel
0 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 20:20 UTC (permalink / raw)
To: Qi Zheng
Cc: x86, linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm
On Mon, 2024-12-23 at 14:01 +0800, Qi Zheng wrote:
> Very happy to see this change!
>
>
> > @@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
> > .mmu.flush_tlb_kernel = native_flush_tlb_global,
> > .mmu.flush_tlb_one_user =
> > native_flush_tlb_one_user,
> > .mmu.flush_tlb_multi = native_flush_tlb_multi,
> > - .mmu.tlb_remove_table = native_tlb_remove_table,
> > + .mmu.tlb_remove_table = tlb_remove_table,
> >
> > .mmu.exit_mmap = paravirt_nop,
> > .mmu.notify_page_enc_status_changed = paravirt_nop,
>
> It looks like this patch series is not based on the latest linux-next.
>
That is correct. I based this on tip.git x86/mm, since
that seems like the most likely destination for this
code.
> In addition to the above case, maybe the paravirt_tlb_remove_table()
> in arch/x86/mm/pgtable.c also needs to be changed to
> tlb_remove_table()?
I'll get that in the next version.
Thank you for reviewing the patch!
--
All Rights Reversed.
* Re: [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional
2024-12-23 2:55 ` [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional Rik van Riel
2024-12-23 6:01 ` Qi Zheng
@ 2024-12-24 18:26 ` Peter Zijlstra
1 sibling, 0 replies; 21+ messages in thread
From: Peter Zijlstra @ 2024-12-24 18:26 UTC (permalink / raw)
To: Rik van Riel
Cc: x86, linux-kernel, kernel-team, dave.hansen, luto, tglx, mingo,
bp, hpa, akpm, linux-mm
On Sun, Dec 22, 2024 at 09:55:07PM -0500, Rik van Riel wrote:
> Currently x86 usse CONFIG_MMU_GATHER_TABLE_FREE when using
> paravirt, and not when running on bare metal.
>
> There is no real good reason to do things differently for
> each setup. Make them all the same.
More importantly, the changes you're proposing very much rely on this.
Without TLBi IPIs nothing serializes GUP-fast vs TLBi and this RCU-ish
table free scheme is required.
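As a rough illustration only (placeholder names, not the actual mm/mmu_gather.c
code): with MMU_GATHER_RCU_TABLE_FREE a page-table page is not handed back to
the allocator immediately, but only after an RCU grace period, which a GUP-fast
walker running with IRQs disabled cannot straddle:

struct deferred_table {
	struct rcu_head rcu;
	struct page *page;
};

static void free_table_rcu(struct rcu_head *head)
{
	struct deferred_table *dt = container_of(head, struct deferred_table, rcu);

	/* No lockless (GUP-fast style) walker can still see this page. */
	__free_page(dt->page);
	kfree(dt);
}

static void queue_table_free(struct page *table_page)
{
	struct deferred_table *dt = kmalloc(sizeof(*dt), GFP_ATOMIC);

	/* The real code has a synchronous fallback if this allocation fails. */
	if (dt) {
		dt->page = table_page;
		call_rcu(&dt->rcu, free_table_rcu);
	}
}

With IPI-based flushing the same guarantee comes for free, because the flush IPI
cannot be serviced until the walker re-enables interrupts.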
> Signed-off-by: Rik van Riel <riel@surriel.com>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> ---
> arch/x86/Kconfig | 2 +-
> arch/x86/kernel/paravirt.c | 7 +------
> 2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 9d7bd0ae48c4..e8743f8c9fd0 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -274,7 +274,7 @@ config X86
> select HAVE_PCI
> select HAVE_PERF_REGS
> select HAVE_PERF_USER_STACK_DUMP
> - select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
> + select MMU_GATHER_RCU_TABLE_FREE
> select MMU_GATHER_MERGE_VMAS
> select HAVE_POSIX_CPU_TIMERS_TASK_WORK
> select HAVE_REGS_AND_STACK_ACCESS_API
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index fec381533555..2b78a6b466ed 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -59,11 +59,6 @@ void __init native_pv_lock_init(void)
> static_branch_enable(&virt_spin_lock_key);
> }
>
> -static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
> -{
> - tlb_remove_page(tlb, table);
> -}
> -
> struct static_key paravirt_steal_enabled;
> struct static_key paravirt_steal_rq_enabled;
>
> @@ -191,7 +186,7 @@ struct paravirt_patch_template pv_ops = {
> .mmu.flush_tlb_kernel = native_flush_tlb_global,
> .mmu.flush_tlb_one_user = native_flush_tlb_one_user,
> .mmu.flush_tlb_multi = native_flush_tlb_multi,
> - .mmu.tlb_remove_table = native_tlb_remove_table,
> + .mmu.tlb_remove_table = tlb_remove_table,
>
> .mmu.exit_mmap = paravirt_nop,
> .mmu.notify_page_enc_status_changed = paravirt_nop,
> --
> 2.47.1
>
* [PATCH 02/11] x86/mm: add X86_FEATURE_INVLPGB definition.
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
2024-12-23 2:55 ` [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 03/11] x86/mm: get INVLPGB count max from CPUID Rik van Riel
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Add the INVLPGB CPUID definition, allowing the kernel to recognize
whether the CPU supports the INVLPGB instruction.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/cpufeatures.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 17b6590748c0..b7209d6c3a5f 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -338,6 +338,7 @@
#define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */
#define X86_FEATURE_IRPERF (13*32+ 1) /* "irperf" Instructions Retired Count */
#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* "xsaveerptr" Always save/restore FP error pointers */
+#define X86_FEATURE_INVLPGB (13*32+ 3) /* "invlpgb" INVLPGB instruction */
#define X86_FEATURE_RDPRU (13*32+ 4) /* "rdpru" Read processor register at user level */
#define X86_FEATURE_WBNOINVD (13*32+ 9) /* "wbnoinvd" WBNOINVD instruction */
#define X86_FEATURE_AMD_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */
--
2.47.1
* [PATCH 03/11] x86/mm: get INVLPGB count max from CPUID
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
2024-12-23 2:55 ` [PATCH 01/11] x86/mm: make MMU_GATHER_RCU_TABLE_FREE unconditional Rik van Riel
2024-12-23 2:55 ` [PATCH 02/11] x86/mm: add X86_FEATURE_INVLPGB definition Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-25 23:42 ` Nadav Amit
2024-12-23 2:55 ` [PATCH 04/11] x86/mm: add INVLPGB support code Rik van Riel
` (8 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
The CPU advertises the maximum number of pages that can be shot down
with one INVLPGB instruction in the CPUID data.
Save that information for later use.
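For illustration, the decode boils down to this (standalone sketch; the EDX
value below is a made-up example, the real one comes from CPUID leaf 0x80000008
as in the hunk further down):

#include <stdio.h>

int main(void)
{
	unsigned int edx = 0x0007;	/* example: EDX[15:0] holds (max count - 1) */
	unsigned int invlpgb_count_max = (edx & 0xffff) + 1;

	printf("INVLPGB can invalidate up to %u pages per instruction\n",
	       invlpgb_count_max);
	return 0;
}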
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/tlbflush.h | 1 +
arch/x86/kernel/cpu/amd.c | 8 ++++++++
arch/x86/kernel/setup.c | 4 ++++
3 files changed, 13 insertions(+)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa06e9e..7d1468a3967b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -182,6 +182,7 @@ static inline void cr4_init_shadow(void)
extern unsigned long mmu_cr4_features;
extern u32 *trampoline_cr4_features;
+extern u16 invlpgb_count_max;
extern void initialize_tlbstate_and_flush(void);
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 79d2e17f6582..226b8fc64bfc 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1135,6 +1135,14 @@ static void cpu_detect_tlb_amd(struct cpuinfo_x86 *c)
tlb_lli_2m[ENTRIES] = eax & mask;
tlb_lli_4m[ENTRIES] = tlb_lli_2m[ENTRIES] >> 1;
+
+ if (c->extended_cpuid_level < 0x80000008)
+ return;
+
+ cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
+
+ /* Max number of pages INVLPGB can invalidate in one shot */
+ invlpgb_count_max = (edx & 0xffff) + 1;
}
static const struct cpu_dev amd_cpu_dev = {
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f1fea506e20f..ef2b49edca25 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -138,6 +138,10 @@ __visible unsigned long mmu_cr4_features __ro_after_init;
__visible unsigned long mmu_cr4_features __ro_after_init = X86_CR4_PAE;
#endif
+#ifdef CONFIG_CPU_SUP_AMD
+u16 invlpgb_count_max;
+#endif
+
#ifdef CONFIG_IMA
static phys_addr_t ima_kexec_buffer_phys;
static size_t ima_kexec_buffer_size;
--
2.47.1
* Re: [PATCH 03/11] x86/mm: get INVLPGB count max from CPUID
2024-12-23 2:55 ` [PATCH 03/11] x86/mm: get INVLPGB count max from CPUID Rik van Riel
@ 2024-12-25 23:42 ` Nadav Amit
0 siblings, 0 replies; 21+ messages in thread
From: Nadav Amit @ 2024-12-25 23:42 UTC (permalink / raw)
To: Rik van Riel
Cc: the arch/x86 maintainers, Linux Kernel Mailing List, kernel-team,
Dave Hansen, luto, peterz, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H. Peter Anvin, Andrew Morton,
open list:MEMORY MANAGEMENT
> On 23 Dec 2024, at 4:55, Rik van Riel <riel@surriel.com> wrote:
>
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -138,6 +138,10 @@ __visible unsigned long mmu_cr4_features __ro_after_init;
> __visible unsigned long mmu_cr4_features __ro_after_init = X86_CR4_PAE;
> #endif
>
> +#ifdef CONFIG_CPU_SUP_AMD
> +u16 invlpgb_count_max;
Any reason not to mark it also as __ro_after_init?
* [PATCH 04/11] x86/mm: add INVLPGB support code
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (2 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 03/11] x86/mm: get INVLPGB count max from CPUID Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 05/11] x86/mm: use INVLPGB for kernel TLB flushes Rik van Riel
` (7 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Add invlpgb.h with the helper functions and definitions needed to use
broadcast TLB invalidation on AMD EPYC 3 and newer CPUs.
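As a usage sketch only (the wrapper below is hypothetical, built on the helpers
added in this patch; real callers also keep preemption disabled so the TLBSYNC
runs on the CPU that issued the INVLPGBs):

static void example_flush_pcid_range(unsigned long pcid, unsigned long addr, int nr)
{
	/* Weakly ordered broadcast invalidation, completes asynchronously. */
	invlpgb_flush_user_nr(pcid, addr, nr, false);

	/* Wait for all INVLPGBs issued by this CPU to finish. */
	tlbsync();
}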
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/invlpgb.h | 93 +++++++++++++++++++++++++++++++++
arch/x86/include/asm/tlbflush.h | 1 +
2 files changed, 94 insertions(+)
create mode 100644 arch/x86/include/asm/invlpgb.h
diff --git a/arch/x86/include/asm/invlpgb.h b/arch/x86/include/asm/invlpgb.h
new file mode 100644
index 000000000000..862775897a54
--- /dev/null
+++ b/arch/x86/include/asm/invlpgb.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_INVLPGB
+#define _ASM_X86_INVLPGB
+
+#include <vdso/bits.h>
+
+/*
+ * INVLPGB does broadcast TLB invalidation across all the CPUs in the system.
+ *
+ * The INVLPGB instruction is weakly ordered, and a batch of invalidations can
+ * be done in a parallel fashion.
+ *
+ * TLBSYNC is used to ensure that pending INVLPGB invalidations initiated from
+ * this CPU have completed.
+ */
+static inline void __invlpgb(unsigned long asid, unsigned long pcid, unsigned long addr,
+ int extra_count, bool pmd_stride, unsigned long flags)
+{
+ u64 rax = addr | flags;
+ u32 ecx = (pmd_stride << 31) | extra_count;
+ u32 edx = (pcid << 16) | asid;
+
+ asm volatile("invlpgb" : : "a" (rax), "c" (ecx), "d" (edx));
+}
+
+/*
+ * INVLPGB can be targeted by virtual address, PCID, ASID, or any combination
+ * of the three. For example:
+ * - INVLPGB_VA | INVLPGB_INCLUDE_GLOBAL: invalidate all TLB entries at the address
+ * - INVLPGB_PCID: invalidate all TLB entries matching the PCID
+ *
+ * The first can be used to invalidate (kernel) mappings at a particular
+ * address across all processes.
+ *
+ * The latter invalidates all TLB entries matching a PCID.
+ */
+#define INVLPGB_VA BIT(0)
+#define INVLPGB_PCID BIT(1)
+#define INVLPGB_ASID BIT(2)
+#define INVLPGB_INCLUDE_GLOBAL BIT(3)
+#define INVLPGB_FINAL_ONLY BIT(4)
+#define INVLPGB_INCLUDE_NESTED BIT(5)
+
+/* Flush all mappings for a given pcid and addr, not including globals. */
+static inline void invlpgb_flush_user(unsigned long pcid,
+ unsigned long addr)
+{
+ __invlpgb(0, pcid, addr, 0, 0, INVLPGB_PCID | INVLPGB_VA);
+}
+
+static inline void invlpgb_flush_user_nr(unsigned long pcid, unsigned long addr,
+ int nr, bool pmd_stride)
+{
+ __invlpgb(0, pcid, addr, nr - 1, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
+}
+
+/* Flush all mappings for a given ASID, not including globals. */
+static inline void invlpgb_flush_single_asid(unsigned long asid)
+{
+ __invlpgb(asid, 0, 0, 0, 0, INVLPGB_ASID);
+}
+
+/* Flush all mappings for a given PCID, not including globals. */
+static inline void invlpgb_flush_single_pcid(unsigned long pcid)
+{
+ __invlpgb(0, pcid, 0, 0, 0, INVLPGB_PCID);
+}
+
+/* Flush all mappings, including globals, for all PCIDs. */
+static inline void invlpgb_flush_all(void)
+{
+ __invlpgb(0, 0, 0, 0, 0, INVLPGB_INCLUDE_GLOBAL);
+}
+
+/* Flush addr, including globals, for all PCIDs. */
+static inline void invlpgb_flush_addr(unsigned long addr, int nr)
+{
+ __invlpgb(0, 0, addr, nr - 1, 0, INVLPGB_INCLUDE_GLOBAL);
+}
+
+/* Flush all mappings for all PCIDs except globals. */
+static inline void invlpgb_flush_all_nonglobals(void)
+{
+ __invlpgb(0, 0, 0, 0, 0, 0);
+}
+
+/* Wait for INVLPGB originated by this CPU to complete. */
+static inline void tlbsync(void)
+{
+ asm volatile("tlbsync");
+}
+
+#endif /* _ASM_X86_INVLPGB */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 7d1468a3967b..20074f17fbcd 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -10,6 +10,7 @@
#include <asm/cpufeature.h>
#include <asm/special_insns.h>
#include <asm/smp.h>
+#include <asm/invlpgb.h>
#include <asm/invpcid.h>
#include <asm/pti.h>
#include <asm/processor-flags.h>
--
2.47.1
* [PATCH 05/11] x86/mm: use INVLPGB for kernel TLB flushes
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (3 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 04/11] x86/mm: add INVLPGB support code Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 06/11] x86/tlb: use INVLPGB in flush_tlb_all Rik van Riel
` (6 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Use broadcast TLB invalidation for kernel addresses when available.
This stops us from having to send IPIs for kernel TLB flushes.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/mm/tlb.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6cf881a942bb..29207dc5b807 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1077,6 +1077,32 @@ void flush_tlb_all(void)
on_each_cpu(do_flush_tlb_all, NULL, 1);
}
+static void broadcast_kernel_range_flush(unsigned long start, unsigned long end)
+{
+ unsigned long addr;
+ unsigned long maxnr = invlpgb_count_max;
+ unsigned long threshold = tlb_single_page_flush_ceiling * maxnr;
+
+ /*
+ * TLBSYNC only waits for flushes originating on the same CPU.
+ * Disabling migration allows us to wait on all flushes.
+ */
+ guard(preempt)();
+
+ if (end == TLB_FLUSH_ALL ||
+ (end - start) > threshold << PAGE_SHIFT) {
+ invlpgb_flush_all();
+ } else {
+ unsigned long nr;
+ for (addr = start; addr < end; addr += nr << PAGE_SHIFT) {
+ nr = min((end - addr) >> PAGE_SHIFT, maxnr);
+ invlpgb_flush_addr(addr, nr);
+ }
+ }
+
+ tlbsync();
+}
+
static void do_kernel_range_flush(void *info)
{
struct flush_tlb_info *f = info;
@@ -1089,6 +1115,11 @@ static void do_kernel_range_flush(void *info)
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
+ if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+ broadcast_kernel_range_flush(start, end);
+ return;
+ }
+
/* Balance as user space task's flush, a bit conservative */
if (end == TLB_FLUSH_ALL ||
(end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
--
2.47.1
* [PATCH 06/11] x86/tlb: use INVLPGB in flush_tlb_all
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (4 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 05/11] x86/mm: use INVLPGB for kernel TLB flushes Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 07/11] x86/mm: use broadcast TLB flushing for page reclaim TLB flushing Rik van Riel
` (5 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
The flush_tlb_all() function is not used a whole lot, but we might
as well use broadcast TLB flushing there, too.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/mm/tlb.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 29207dc5b807..266d5174fc7b 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1074,6 +1074,12 @@ static void do_flush_tlb_all(void *info)
void flush_tlb_all(void)
{
count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
+ if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+ guard(preempt)();
+ invlpgb_flush_all();
+ tlbsync();
+ return;
+ }
on_each_cpu(do_flush_tlb_all, NULL, 1);
}
--
2.47.1
* [PATCH 07/11] x86/mm: use broadcast TLB flushing for page reclaim TLB flushing
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (5 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 06/11] x86/tlb: use INVLPGB in flush_tlb_all Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes Rik van Riel
` (4 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
In the page reclaim code, we only track the CPU(s) where the TLB needs
to be flushed, rather than all the individual mappings that may be getting
invalidated.
Use broadcast TLB flushing when that is available.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/mm/tlb.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 266d5174fc7b..64f1679c37e1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1310,8 +1310,16 @@ EXPORT_SYMBOL_GPL(__flush_tlb_all);
void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
struct flush_tlb_info *info;
+ int cpu;
+
+ if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+ guard(preempt)();
+ invlpgb_flush_all_nonglobals();
+ tlbsync();
+ return;
+ }
- int cpu = get_cpu();
+ cpu = get_cpu();
info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
TLB_GENERATION_INVALID);
--
2.47.1
* [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (6 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 07/11] x86/mm: use broadcast TLB flushing for page reclaim TLB flushing Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-25 23:22 ` Nadav Amit
2024-12-25 23:32 ` Nadav Amit
2024-12-23 2:55 ` [PATCH 09/11] x86,tlb: do targeted broadcast flushing from tlbbatch code Rik van Riel
` (3 subsequent siblings)
11 siblings, 2 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Use broadcast TLB invalidation, using the INVLPGB instruction, on AMD EPYC 3
and newer CPUs.
In order to not exhaust PCID space, and keep TLB flushes local for single
threaded processes, we only hand out broadcast ASIDs to processes active on
3 or more CPUs, and gradually increase the threshold as broadcast ASID space
is depleted.
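A userspace model of that scaling, for experimentation only (the
MAX_ASID_AVAILABLE value of 2048 is an assumption taken from the kPCID/uPCID
split described in this patch; the in-kernel version is
meets_broadcast_asid_threshold() below):

#include <stdbool.h>
#include <stdio.h>

#define MAX_ASID_AVAILABLE	2048	/* assumed size of the PCID space */
#define HALFFULL_THRESHOLD	8

static bool meets_threshold(int avail, int active_threads)
{
	int threshold = HALFFULL_THRESHOLD;

	if (!avail)
		return false;

	/* Small processes can just use IPI TLB flushing. */
	if (active_threads < 3)
		return false;

	if (avail > MAX_ASID_AVAILABLE * 3 / 4) {
		threshold = HALFFULL_THRESHOLD / 4;
	} else if (avail > MAX_ASID_AVAILABLE / 2) {
		threshold = HALFFULL_THRESHOLD / 2;
	} else if (avail < MAX_ASID_AVAILABLE / 3) {
		do {
			avail *= 2;
			threshold *= 2;
		} while ((avail + threshold) < MAX_ASID_AVAILABLE / 2);
	}

	return active_threads > threshold;
}

int main(void)
{
	printf("%d\n", meets_threshold(2000, 4));	/* 1: ASID space nearly empty */
	printf("%d\n", meets_threshold(600, 10));	/* 0: threshold has doubled to 16 */
	printf("%d\n", meets_threshold(600, 20));	/* 1 */
	return 0;
}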
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/mmu.h | 6 +
arch/x86/include/asm/mmu_context.h | 12 ++
arch/x86/include/asm/tlbflush.h | 15 ++
arch/x86/mm/tlb.c | 313 ++++++++++++++++++++++++++++-
4 files changed, 337 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 3b496cdcb74b..a8e8dfa5a520 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -48,6 +48,12 @@ typedef struct {
unsigned long flags;
#endif
+#ifdef CONFIG_CPU_SUP_AMD
+ struct list_head broadcast_asid_list;
+ u16 broadcast_asid;
+ bool asid_transition;
+#endif
+
#ifdef CONFIG_ADDRESS_MASKING
/* Active LAM mode: X86_CR3_LAM_U48 or X86_CR3_LAM_U57 or 0 (disabled) */
unsigned long lam_cr3_mask;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 795fdd53bd0a..0dc446c427d2 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -139,6 +139,8 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
#define enter_lazy_tlb enter_lazy_tlb
extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
+extern void destroy_context_free_broadcast_asid(struct mm_struct *mm);
+
/*
* Init a new mm. Used on mm copies, like at fork()
* and on mm's that are brand-new, like at execve().
@@ -161,6 +163,13 @@ static inline int init_new_context(struct task_struct *tsk,
mm->context.execute_only_pkey = -1;
}
#endif
+
+#ifdef CONFIG_CPU_SUP_AMD
+ INIT_LIST_HEAD(&mm->context.broadcast_asid_list);
+ mm->context.broadcast_asid = 0;
+ mm->context.asid_transition = false;
+#endif
+
mm_reset_untag_mask(mm);
init_new_context_ldt(mm);
return 0;
@@ -170,6 +179,9 @@ static inline int init_new_context(struct task_struct *tsk,
static inline void destroy_context(struct mm_struct *mm)
{
destroy_context_ldt(mm);
+#ifdef CONFIG_CPU_SUP_AMD
+ destroy_context_free_broadcast_asid(mm);
+#endif
}
extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 20074f17fbcd..074f46b74b92 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -65,6 +65,21 @@ static inline void cr4_clear_bits(unsigned long mask)
*/
#define TLB_NR_DYN_ASIDS 6
+#ifdef CONFIG_CPU_SUP_AMD
+#define is_dyn_asid(asid) ((asid) < TLB_NR_DYN_ASIDS)
+#define is_broadcast_asid(asid) ((asid) >= TLB_NR_DYN_ASIDS)
+#define in_asid_transition(info) (info->mm && info->mm->context.asid_transition)
+#else
+#define is_dyn_asid(asid) true
+#define is_broadcast_asid(asid) false
+#define in_asid_transition(info) false
+
+static inline bool needs_broadcast_asid_reload(struct mm_struct *next, u16 prev_asid)
+{
+ return false;
+}
+#endif
+
struct tlb_context {
u64 ctx_id;
u64 tlb_gen;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 64f1679c37e1..29a64f8c4c94 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -74,13 +74,15 @@
* use different names for each of them:
*
* ASID - [0, TLB_NR_DYN_ASIDS-1]
- * the canonical identifier for an mm
+ * the canonical identifier for an mm, dynamically allocated on each CPU
+ * [TLB_NR_DYN_ASIDS, MAX_ASID_AVAILABLE-1]
+ * the canonical, global identifier for an mm, identical across all CPUs
*
- * kPCID - [1, TLB_NR_DYN_ASIDS]
+ * kPCID - [1, MAX_ASID_AVAILABLE]
* the value we write into the PCID part of CR3; corresponds to the
* ASID+1, because PCID 0 is special.
*
- * uPCID - [2048 + 1, 2048 + TLB_NR_DYN_ASIDS]
+ * uPCID - [2048 + 1, 2048 + MAX_ASID_AVAILABLE]
* for KPTI each mm has two address spaces and thus needs two
* PCID values, but we can still do with a single ASID denomination
* for each mm. Corresponds to kPCID + 2048.
@@ -225,6 +227,18 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
return;
}
+ /*
+ * TLB consistency for this ASID is maintained with INVLPGB;
+ * TLB flushes happen even while the process isn't running.
+ */
+#ifdef CONFIG_CPU_SUP_AMD
+ if (static_cpu_has(X86_FEATURE_INVLPGB) && next->context.broadcast_asid) {
+ *new_asid = next->context.broadcast_asid;
+ *need_flush = false;
+ return;
+ }
+#endif
+
if (this_cpu_read(cpu_tlbstate.invalidate_other))
clear_asid_other();
@@ -251,6 +265,248 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
*need_flush = true;
}
+#ifdef CONFIG_CPU_SUP_AMD
+/*
+ * Logic for AMD INVLPGB support.
+ */
+static DEFINE_RAW_SPINLOCK(broadcast_asid_lock);
+static u16 last_broadcast_asid = TLB_NR_DYN_ASIDS;
+static DECLARE_BITMAP(broadcast_asid_used, MAX_ASID_AVAILABLE) = { 0 };
+static LIST_HEAD(broadcast_asid_list);
+static int broadcast_asid_available = MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS - 1;
+
+static void reset_broadcast_asid_space(void)
+{
+ mm_context_t *context;
+
+ lockdep_assert_held(&broadcast_asid_lock);
+
+ /*
+ * Flush once when we wrap around the ASID space, so we won't need
+ * to flush every time we allocate an ASID for broadcast flushing.
+ */
+ invlpgb_flush_all_nonglobals();
+ tlbsync();
+
+ /*
+ * Leave the currently used broadcast ASIDs set in the bitmap, since
+ * those cannot be reused before the next wraparound and flush.
+ */
+ bitmap_clear(broadcast_asid_used, 0, MAX_ASID_AVAILABLE);
+ list_for_each_entry(context, &broadcast_asid_list, broadcast_asid_list)
+ __set_bit(context->broadcast_asid, broadcast_asid_used);
+
+ last_broadcast_asid = TLB_NR_DYN_ASIDS;
+}
+
+static u16 get_broadcast_asid(void)
+{
+ lockdep_assert_held(&broadcast_asid_lock);
+
+ do {
+ u16 start = last_broadcast_asid;
+ u16 asid = find_next_zero_bit(broadcast_asid_used, MAX_ASID_AVAILABLE, start);
+
+ if (asid >= MAX_ASID_AVAILABLE) {
+ reset_broadcast_asid_space();
+ continue;
+ }
+
+ /* Try claiming this broadcast ASID. */
+ if (!test_and_set_bit(asid, broadcast_asid_used)) {
+ last_broadcast_asid = asid;
+ return asid;
+ }
+ } while (1);
+}
+
+/*
+ * Returns true if the mm is transitioning from a CPU-local ASID to a broadcast
+ * (INVLPGB) ASID, or the other way around.
+ */
+static bool needs_broadcast_asid_reload(struct mm_struct *next, u16 prev_asid)
+{
+ u16 broadcast_asid = next->context.broadcast_asid;
+
+ if (broadcast_asid && prev_asid != broadcast_asid)
+ return true;
+
+ if (!broadcast_asid && is_broadcast_asid(prev_asid))
+ return true;
+
+ return false;
+}
+
+void destroy_context_free_broadcast_asid(struct mm_struct *mm)
+{
+ if (!mm->context.broadcast_asid)
+ return;
+
+ guard(raw_spinlock_irqsave)(&broadcast_asid_lock);
+ mm->context.broadcast_asid = 0;
+ list_del(&mm->context.broadcast_asid_list);
+ broadcast_asid_available++;
+}
+
+static int mm_active_cpus(struct mm_struct *mm)
+{
+ int count = 0;
+ int cpu;
+
+ for_each_cpu(cpu, mm_cpumask(mm)) {
+ /* Skip the CPUs that aren't really running this process. */
+ if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
+ continue;
+
+ if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
+ continue;
+
+ count++;
+ }
+ return count;
+}
+
+/*
+ * Assign a broadcast ASID to the current process, protecting against
+ * races between multiple threads in the process.
+ */
+static void use_broadcast_asid(struct mm_struct *mm)
+{
+ guard(raw_spinlock_irqsave)(&broadcast_asid_lock);
+
+ /* This process is already using broadcast TLB invalidation. */
+ if (mm->context.broadcast_asid)
+ return;
+
+ mm->context.broadcast_asid = get_broadcast_asid();
+ mm->context.asid_transition = true;
+ list_add(&mm->context.broadcast_asid_list, &broadcast_asid_list);
+ broadcast_asid_available--;
+}
+
+/*
+ * Figure out whether to assign a broadcast (global) ASID to a process.
+ * We vary the threshold by how empty or full broadcast ASID space is.
+ * 1/4 full: >= 4 active threads
+ * 1/2 full: >= 8 active threads
+ * 3/4 full: >= 16 active threads
+ * 7/8 full: >= 32 active threads
+ * etc
+ *
+ * This way we should never exhaust the broadcast ASID space, even on very
+ * large systems, and the processes with the largest number of active
+ * threads should be able to use broadcast TLB invalidation.
+ */
+#define HALFFULL_THRESHOLD 8
+static bool meets_broadcast_asid_threshold(struct mm_struct *mm)
+{
+ int avail = broadcast_asid_available;
+ int threshold = HALFFULL_THRESHOLD;
+ int mm_active_threads;
+
+ if (!avail)
+ return false;
+
+ mm_active_threads = mm_active_cpus(mm);
+
+ /* Small processes can just use IPI TLB flushing. */
+ if (mm_active_threads < 3)
+ return false;
+
+ if (avail > MAX_ASID_AVAILABLE * 3 / 4) {
+ threshold = HALFFULL_THRESHOLD / 4;
+ } else if (avail > MAX_ASID_AVAILABLE / 2) {
+ threshold = HALFFULL_THRESHOLD / 2;
+ } else if (avail < MAX_ASID_AVAILABLE / 3) {
+ do {
+ avail *= 2;
+ threshold *= 2;
+ } while ((avail + threshold) < MAX_ASID_AVAILABLE / 2);
+ }
+
+ return mm_active_threads > threshold;
+}
+
+static void count_tlb_flush(struct mm_struct *mm)
+{
+ if (!static_cpu_has(X86_FEATURE_INVLPGB))
+ return;
+
+ /* Check every once in a while. */
+ if ((current->pid & 0x1f) != (jiffies & 0x1f))
+ return;
+
+ if (meets_broadcast_asid_threshold(mm))
+ use_broadcast_asid(mm);
+}
+
+static void finish_asid_transition(struct flush_tlb_info *info)
+{
+ struct mm_struct *mm = info->mm;
+ int bc_asid = mm->context.broadcast_asid;
+ int cpu;
+
+ if (!mm->context.asid_transition)
+ return;
+
+ for_each_cpu(cpu, mm_cpumask(mm)) {
+ if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
+ continue;
+
+ /*
+ * If at least one CPU is not using the broadcast ASID yet,
+ * send a TLB flush IPI. The IPI should cause stragglers
+ * to transition soon.
+ */
+ if (per_cpu(cpu_tlbstate.loaded_mm_asid, cpu) != bc_asid) {
+ flush_tlb_multi(mm_cpumask(info->mm), info);
+ return;
+ }
+ }
+
+ /* All the CPUs running this process are using the broadcast ASID. */
+ mm->context.asid_transition = 0;
+}
+
+static void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+ bool pmd = info->stride_shift == PMD_SHIFT;
+ unsigned long maxnr = invlpgb_count_max;
+ unsigned long asid = info->mm->context.broadcast_asid;
+ unsigned long addr = info->start;
+ unsigned long nr;
+
+ /* Flushing multiple pages at once is not supported with 1GB pages. */
+ if (info->stride_shift > PMD_SHIFT)
+ maxnr = 1;
+
+ if (info->end == TLB_FLUSH_ALL) {
+ invlpgb_flush_single_pcid(kern_pcid(asid));
+ /* Do any CPUs supporting INVLPGB need PTI? */
+ if (static_cpu_has(X86_FEATURE_PTI))
+ invlpgb_flush_single_pcid(user_pcid(asid));
+ } else do {
+ /*
+ * Calculate how many pages can be flushed at once; if the
+ * remainder of the range is less than one page, flush one.
+ */
+ nr = min(maxnr, (info->end - addr) >> info->stride_shift);
+ nr = max(nr, 1);
+
+ invlpgb_flush_user_nr(kern_pcid(asid), addr, nr, pmd);
+ /* Do any CPUs supporting INVLPGB need PTI? */
+ if (static_cpu_has(X86_FEATURE_PTI))
+ invlpgb_flush_user_nr(user_pcid(asid), addr, nr, pmd);
+ addr += nr << info->stride_shift;
+ } while (addr < info->end);
+
+ finish_asid_transition(info);
+
+ /* Wait for the INVLPGBs kicked off above to finish. */
+ tlbsync();
+}
+#endif /* CONFIG_CPU_SUP_AMD */
+
/*
* Given an ASID, flush the corresponding user ASID. We can delay this
* until the next time we switch to it.
@@ -556,8 +812,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
*/
if (prev == next) {
/* Not actually switching mm's */
- VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
- next->context.ctx_id);
+ if (is_dyn_asid(prev_asid))
+ VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
+ next->context.ctx_id);
/*
* If this races with another thread that enables lam, 'new_lam'
@@ -573,6 +830,23 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
!cpumask_test_cpu(cpu, mm_cpumask(next))))
cpumask_set_cpu(cpu, mm_cpumask(next));
+ /*
+ * Check if the current mm is transitioning to a new ASID.
+ */
+ if (needs_broadcast_asid_reload(next, prev_asid)) {
+ next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+
+ choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+ goto reload_tlb;
+ }
+
+ /*
+ * Broadcast TLB invalidation keeps this PCID up to date
+ * all the time.
+ */
+ if (is_broadcast_asid(prev_asid))
+ return;
+
/*
* If the CPU is not in lazy TLB mode, we are just switching
* from one thread in a process to another thread in the same
@@ -626,8 +900,10 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
barrier();
}
+reload_tlb:
new_lam = mm_lam_cr3_mask(next);
if (need_flush) {
+ VM_BUG_ON(is_broadcast_asid(new_asid));
this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
load_new_mm_cr3(next->pgd, new_asid, new_lam, true);
@@ -746,7 +1022,7 @@ static void flush_tlb_func(void *info)
const struct flush_tlb_info *f = info;
struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
- u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+ u64 local_tlb_gen;
bool local = smp_processor_id() == f->initiating_cpu;
unsigned long nr_invalidate = 0;
u64 mm_tlb_gen;
@@ -769,6 +1045,16 @@ static void flush_tlb_func(void *info)
if (unlikely(loaded_mm == &init_mm))
return;
+ /* Reload the ASID if transitioning into or out of a broadcast ASID */
+ if (needs_broadcast_asid_reload(loaded_mm, loaded_mm_asid)) {
+ switch_mm_irqs_off(NULL, loaded_mm, NULL);
+ loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+ }
+
+ /* Broadcast ASIDs are always kept up to date with INVLPGB. */
+ if (is_broadcast_asid(loaded_mm_asid))
+ return;
+
VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].ctx_id) !=
loaded_mm->context.ctx_id);
@@ -786,6 +1072,8 @@ static void flush_tlb_func(void *info)
return;
}
+ local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
+
if (unlikely(f->new_tlb_gen != TLB_GENERATION_INVALID &&
f->new_tlb_gen <= local_tlb_gen)) {
/*
@@ -953,7 +1241,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
* up on the new contents of what used to be page tables, while
* doing a speculative memory access.
*/
- if (info->freed_tables)
+ if (info->freed_tables || in_asid_transition(info))
on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
else
on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
@@ -1026,14 +1314,18 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
bool freed_tables)
{
struct flush_tlb_info *info;
+ unsigned long threshold = tlb_single_page_flush_ceiling;
u64 new_tlb_gen;
int cpu;
+ if (static_cpu_has(X86_FEATURE_INVLPGB))
+ threshold *= invlpgb_count_max;
+
cpu = get_cpu();
/* Should we flush just the requested range? */
if ((end == TLB_FLUSH_ALL) ||
- ((end - start) >> stride_shift) > tlb_single_page_flush_ceiling) {
+ ((end - start) >> stride_shift) > threshold) {
start = 0;
end = TLB_FLUSH_ALL;
}
@@ -1049,9 +1341,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
* a local TLB flush is needed. Optimize this use-case by calling
* flush_tlb_func_local() directly in this case.
*/
- if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+ if (IS_ENABLED(CONFIG_CPU_SUP_AMD) && mm->context.broadcast_asid) {
+ broadcast_tlb_flush(info);
+ } else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
info->trim_cpumask = should_trim_cpumask(mm);
flush_tlb_multi(mm_cpumask(mm), info);
+ count_tlb_flush(mm);
} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
lockdep_assert_irqs_enabled();
local_irq_disable();
--
2.47.1
* Re: [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
2024-12-23 2:55 ` [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes Rik van Riel
@ 2024-12-25 23:22 ` Nadav Amit
2024-12-25 23:32 ` Nadav Amit
1 sibling, 0 replies; 21+ messages in thread
From: Nadav Amit @ 2024-12-25 23:22 UTC (permalink / raw)
To: Rik van Riel
Cc: the arch/x86 maintainers, Linux Kernel Mailing List, kernel-team,
Dave Hansen, luto, peterz, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H. Peter Anvin, Andrew Morton,
open list:MEMORY MANAGEMENT
On 23 Dec 2024, at 4:55, Rik van Riel <riel@surriel.com> wrote:
> +static int mm_active_cpus(struct mm_struct *mm)
> +{
> + int count = 0;
> + int cpu;
> +
> + for_each_cpu(cpu, mm_cpumask(mm)) {
> + /* Skip the CPUs that aren't really running this process. */
> + if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
> + continue;
> +
> + if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
> + continue;
> +
> + count++;
> + }
> + return count;
> +}
Since you are only interested in checking whether the number of "mm active
CPUs" is greater than a certain threshold, don't you want to add some
checks for early termination? That would avoid cpu_tlbstate cachelines
bouncing back and forth.
For instance, by running cpumask_weight() first, if the weight is lower than
the threshold, no need to loop. Similarly, if inside the loop the threshold
has already been crossed, no need for more iterations.
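Something like this untested sketch of that idea, reusing the names from the
patch (the helper itself and its threshold argument are hypothetical):

static bool mm_has_enough_active_cpus(struct mm_struct *mm, int threshold)
{
	int count = 0;
	int cpu;

	/* Cheap upper bound first: the cpumask can only overcount. */
	if (cpumask_weight(mm_cpumask(mm)) <= threshold)
		return false;

	for_each_cpu(cpu, mm_cpumask(mm)) {
		/* Skip the CPUs that aren't really running this process. */
		if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
			continue;

		if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
			continue;

		/* Stop pulling in remote cachelines once the answer is known. */
		if (++count > threshold)
			return true;
	}

	return false;
}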
* Re: [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
2024-12-23 2:55 ` [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes Rik van Riel
2024-12-25 23:22 ` Nadav Amit
@ 2024-12-25 23:32 ` Nadav Amit
1 sibling, 0 replies; 21+ messages in thread
From: Nadav Amit @ 2024-12-25 23:32 UTC (permalink / raw)
To: Rik van Riel
Cc: the arch/x86 maintainers, Linux Kernel Mailing List, kernel-team,
Dave Hansen, luto, peterz, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, H. Peter Anvin, Andrew Morton,
open list:MEMORY MANAGEMENT
>
>
> On 23 Dec 2024, at 4:55, Rik van Riel <riel@surriel.com> wrote:
>
> @@ -1049,9 +1341,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> * a local TLB flush is needed. Optimize this use-case by calling
> * flush_tlb_func_local() directly in this case.
> */
> - if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
> + if (IS_ENABLED(CONFIG_CPU_SUP_AMD) && mm->context.broadcast_asid) {
> + broadcast_tlb_flush(info);
>
I think broadcast_asid is defined within an ifdef, so the IS_ENABLED() here
would not save you from having to use ifdef.
* [PATCH 09/11] x86,tlb: do targeted broadcast flushing from tlbbatch code
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (7 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 08/11] x86/mm: enable broadcast TLB invalidation for multi-threaded processes Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 10/11] x86/mm: enable AMD translation cache extensions Rik van Riel
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Instead of doing a system-wide TLB flush from arch_tlbbatch_flush,
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending.
This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.
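To illustrate the resulting contract between the two hooks (the caller below is
a made-up example, not a quote of mm code): each arch_tlbbatch_add_pending()
call may issue an asynchronous INVLPGB, taking migrate_disable() on the first
one, and the final arch_tlbbatch_flush() must then run on the same CPU so its
TLBSYNC waits for those flushes.

static void example_batch_flush(struct arch_tlbflush_unmap_batch *batch,
				struct mm_struct *mm,
				unsigned long addr1, unsigned long addr2)
{
	/* May fire async INVLPGBs instead of widening batch->cpumask. */
	arch_tlbbatch_add_pending(batch, mm, addr1);
	arch_tlbbatch_add_pending(batch, mm, addr2);

	/* TLBSYNC and migrate_enable() happen in here when INVLPGB was used. */
	arch_tlbbatch_flush(batch);
}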
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/tlbbatch.h | 1 +
arch/x86/include/asm/tlbflush.h | 12 +++------
arch/x86/mm/tlb.c | 48 ++++++++++++++++++++++++++-------
3 files changed, 42 insertions(+), 19 deletions(-)
diff --git a/arch/x86/include/asm/tlbbatch.h b/arch/x86/include/asm/tlbbatch.h
index 1ad56eb3e8a8..f9a17edf63ad 100644
--- a/arch/x86/include/asm/tlbbatch.h
+++ b/arch/x86/include/asm/tlbbatch.h
@@ -10,6 +10,7 @@ struct arch_tlbflush_unmap_batch {
* the PFNs being flushed..
*/
struct cpumask cpumask;
+ bool used_invlpgb;
};
#endif /* _ARCH_X86_TLBBATCH_H */
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 074f46b74b92..71d094841356 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -295,21 +295,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
return atomic64_inc_return(&mm->context.tlb_gen);
}
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
- struct mm_struct *mm,
- unsigned long uaddr)
-{
- inc_mm_tlb_gen(mm);
- cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
- mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
flush_tlb_mm(mm);
}
extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+ struct mm_struct *mm,
+ unsigned long uaddr);
static inline bool pte_flags_need_flush(unsigned long oldflags,
unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 29a64f8c4c94..c5459516a72e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1605,16 +1605,7 @@ EXPORT_SYMBOL_GPL(__flush_tlb_all);
void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
struct flush_tlb_info *info;
- int cpu;
-
- if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
- guard(preempt)();
- invlpgb_flush_all_nonglobals();
- tlbsync();
- return;
- }
-
- cpu = get_cpu();
+ int cpu = get_cpu();
info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
TLB_GENERATION_INVALID);
@@ -1632,12 +1623,49 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
local_irq_enable();
}
+ /*
+ * If we issued (asynchronous) INVLPGB flushes, wait for them here.
+ * The cpumask above contains only CPUs that were running tasks
+ * not using broadcast TLB flushing.
+ */
+ if (cpu_feature_enabled(X86_FEATURE_INVLPGB) && batch->used_invlpgb) {
+ tlbsync();
+ migrate_enable();
+ batch->used_invlpgb = false;
+ }
+
cpumask_clear(&batch->cpumask);
put_flush_tlb_info();
put_cpu();
}
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+ struct mm_struct *mm,
+ unsigned long uaddr)
+{
+ if (static_cpu_has(X86_FEATURE_INVLPGB) && mm->context.broadcast_asid) {
+ u16 asid = mm->context.broadcast_asid;
+ /*
+ * Queue up an asynchronous invalidation. The corresponding
+ * TLBSYNC is done in arch_tlbbatch_flush(), and must be done
+ * on the same CPU.
+ */
+ if (!batch->used_invlpgb) {
+ batch->used_invlpgb = true;
+ migrate_disable();
+ }
+ invlpgb_flush_user_nr(kern_pcid(asid), uaddr, 1, 0);
+ /* Do any CPUs supporting INVLPGB need PTI? */
+ if (static_cpu_has(X86_FEATURE_PTI))
+ invlpgb_flush_user_nr(user_pcid(asid), uaddr, 1, 0);
+ } else {
+ inc_mm_tlb_gen(mm);
+ cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+ }
+ mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
/*
* Blindly accessing user memory from NMI context can be dangerous
* if we're in the middle of switching the current user task or
--
2.47.1
* [PATCH 10/11] x86/mm: enable AMD translation cache extensions
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (8 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 09/11] x86,tlb: do targeted broadcast flushing from tlbbatch code Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-23 2:55 ` [PATCH 11/11] x86/mm: only invalidate final translations with INVLPGB Rik van Riel
2024-12-24 18:08 ` [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Michael Kelley
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
With AMD TCE (translation cache extensions) only the intermediate mappings
that cover the address range zapped by INVLPG / INVLPGB get invalidated,
rather than all intermediate mappings getting zapped at every TLB invalidation.
This can help reduce the TLB miss rate, by keeping more intermediate
mappings in the cache.
From the AMD manual:
Translation Cache Extension (TCE) Bit. Bit 15, read/write. Setting this bit
to 1 changes how the INVLPG, INVLPGB, and INVPCID instructions operate on
TLB entries. When this bit is 0, these instructions remove the target PTE
from the TLB as well as all upper-level table entries that are cached
in the TLB, whether or not they are associated with the target PTE.
When this bit is set, these instructions will remove the target PTE and
only those upper-level entries that lead to the target PTE in
the page table hierarchy, leaving unrelated upper-level entries intact.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/kernel/cpu/amd.c | 8 ++++++++
arch/x86/mm/tlb.c | 10 +++++++---
2 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 226b8fc64bfc..4dc42705aaca 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1143,6 +1143,14 @@ static void cpu_detect_tlb_amd(struct cpuinfo_x86 *c)
/* Max number of pages INVLPGB can invalidate in one shot */
invlpgb_count_max = (edx & 0xffff) + 1;
+
+ /* If supported, enable translation cache extensions (TCE) */
+ cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
+ if (ecx & BIT(17)) {
+ u64 msr = native_read_msr(MSR_EFER);
+ msr |= BIT(15);
+ wrmsrl(MSR_EFER, msr);
+ }
}
static const struct cpu_dev amd_cpu_dev = {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index c5459516a72e..f1e2358616e5 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -480,7 +480,7 @@ static void broadcast_tlb_flush(struct flush_tlb_info *info)
if (info->stride_shift > PMD_SHIFT)
maxnr = 1;
- if (info->end == TLB_FLUSH_ALL) {
+ if (info->end == TLB_FLUSH_ALL || info->freed_tables) {
invlpgb_flush_single_pcid(kern_pcid(asid));
/* Do any CPUs supporting INVLPGB need PTI? */
if (static_cpu_has(X86_FEATURE_PTI))
@@ -1113,7 +1113,7 @@ static void flush_tlb_func(void *info)
*
* The only question is whether to do a full or partial flush.
*
- * We do a partial flush if requested and two extra conditions
+ * We do a partial flush if requested and three extra conditions
* are met:
*
* 1. f->new_tlb_gen == local_tlb_gen + 1. We have an invariant that
@@ -1140,10 +1140,14 @@ static void flush_tlb_func(void *info)
* date. By doing a full flush instead, we can increase
* local_tlb_gen all the way to mm_tlb_gen and we can probably
* avoid another flush in the very near future.
+ *
+ * 3. No page tables were freed. If page tables were freed, a full
+ * flush ensures intermediate translations in the TLB get flushed.
*/
if (f->end != TLB_FLUSH_ALL &&
f->new_tlb_gen == local_tlb_gen + 1 &&
- f->new_tlb_gen == mm_tlb_gen) {
+ f->new_tlb_gen == mm_tlb_gen &&
+ !f->freed_tables) {
/* Partial flush */
unsigned long addr = f->start;
--
2.47.1
* [PATCH 11/11] x86/mm: only invalidate final translations with INVLPGB
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (9 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 10/11] x86/mm: enable AMD translation cache extensions Rik van Riel
@ 2024-12-23 2:55 ` Rik van Riel
2024-12-24 18:08 ` [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Michael Kelley
11 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2024-12-23 2:55 UTC (permalink / raw)
To: x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm, Rik van Riel
Use the INVLPGB_FINAL_ONLY flag when invalidating mappings with INVLPGB.
This way only leaf mappings get removed from the TLB, leaving intermediate
translations cached.
On the (rare) occasions where we free page tables we do a full flush,
ensuring intermediate translations get flushed from the TLB.
Signed-off-by: Rik van Riel <riel@surriel.com>
---
arch/x86/include/asm/invlpgb.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/invlpgb.h b/arch/x86/include/asm/invlpgb.h
index 862775897a54..2669ebfffe81 100644
--- a/arch/x86/include/asm/invlpgb.h
+++ b/arch/x86/include/asm/invlpgb.h
@@ -51,7 +51,7 @@ static inline void invlpgb_flush_user(unsigned long pcid,
static inline void invlpgb_flush_user_nr(unsigned long pcid, unsigned long addr,
int nr, bool pmd_stride)
{
- __invlpgb(0, pcid, addr, nr - 1, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
+ __invlpgb(0, pcid, addr, nr - 1, pmd_stride, INVLPGB_PCID | INVLPGB_VA | INVLPGB_FINAL_ONLY);
}
/* Flush all mappings for a given ASID, not including globals. */
--
2.47.1
* RE: [RFC PATCH v2 00/11] AMD broadcast TLB invalidation
2024-12-23 2:55 [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Rik van Riel
` (10 preceding siblings ...)
2024-12-23 2:55 ` [PATCH 11/11] x86/mm: only invalidate final translations with INVLPGB Rik van Riel
@ 2024-12-24 18:08 ` Michael Kelley
2024-12-25 14:48 ` Rik van Riel
11 siblings, 1 reply; 21+ messages in thread
From: Michael Kelley @ 2024-12-24 18:08 UTC (permalink / raw)
To: riel, x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm
From: riel@surriel.com <riel@surriel.com> Sent: Sunday, December 22, 2024 6:55 PM
>
> Add support for broadcast TLB invalidation using AMD's INVLPGB instruction.
> This allows the kernel to invalidate TLB entries on remote CPUs without
> needing to send IPIs, without having to wait for remote CPUs to handle
> those interrupts, and with less interruption to what was running on
> those CPUs.
>
> Because x86 PCID space is limited, and there are some very large
> systems out there, broadcast TLB invalidation is only used for
> processes that are active on 3 or more CPUs, with the threshold
> being gradually increased the more the PCID space gets exhausted.
Rik --
What is this patch set's expectation about INVLPGB and TLBSYNC
availability and usage in a VM? I see that INVLPGB and TLBSYNC
behavior in a VM is spec'ed in the AMD Programmer's Manual, but
I wonder about their impact in a multi-tenant host like in a public
cloud environment. And given what this patch set does in assigning
global ASIDs, should X86_FEATURE_INVLPGB be disabled if
running in a VM where the hypervisor for whatever reason has
enabled INVLPGB/TLBSYNC in its VMs?
My knowledge of the details here is pretty limited, so my
question may just reflect my ignorance. But it would be good
for the code comments and/or commit messages to include
explicit statements about what is expected in a VM.
Michael
>
> Combined with the removal of unnecessary lru_add_drain calls
> (see https://lkml.org/lkml/2024/12/19/1388) this results in a
> nice performance boost for the will-it-scale tlb_flush2_threads
> test on an AMD Milan system with 36 cores:
>
> - vanilla kernel: 527k loops/second
> - lru_add_drain removal: 731k loops/second
> - only INVLPGB: 527k loops/second
> - lru_add_drain + INVLPGB: 1157k loops/second
>
> Profiling with only the INVLPGB changes showed while
> TLB invalidation went down from 40% of the total CPU
> time to only around 4% of CPU time, the contention
> simply moved to the LRU lock.
>
> Fixing both at the same time about doubles the
> number of iterations per second from this case.
>
> v2:
> - Apply suggestions by Peter and Borislav (thank you!)
> - Fix bug in arch_tlbbatch_flush, where we need to do both
> the TLBSYNC, and flush the CPUs that are in the cpumask.
> - Some updates to comments and changelogs based on questions.
>
* Re: [RFC PATCH v2 00/11] AMD broadcast TLB invalidation
2024-12-24 18:08 ` [RFC PATCH v2 00/11] AMD broadcast TLB invalidation Michael Kelley
@ 2024-12-25 14:48 ` Rik van Riel
2025-01-10 19:29 ` Tom Lendacky
0 siblings, 1 reply; 21+ messages in thread
From: Rik van Riel @ 2024-12-25 14:48 UTC (permalink / raw)
To: Michael Kelley, x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm
On Tue, 2024-12-24 at 18:08 +0000, Michael Kelley wrote:
> From: riel@surriel.com <riel@surriel.com> Sent: Sunday, December 22,
> 2024 6:55 PM
>
> >
> > Add support for broadcast TLB invalidation using AMD's INVLPGB
> > instruction.
>
> > This allows the kernel to invalidate TLB entries on remote CPUs
> > without
> > needing to send IPIs, without having to wait for remote CPUs to
> > handle
> > those interrupts, and with less interruption to what was running on
> > those CPUs.
> >
> > Because x86 PCID space is limited, and there are some very large
> > systems out there, broadcast TLB invalidation is only used for
> > processes that are active on 3 or more CPUs, with the threshold
> > being gradually increased the more the PCID space gets exhausted.
>
> Rik --
>
> What is this patch set's expectation about INVLPGB and TLBSYNC
> availability and usage in a VM? I see that INVLPGB and TLBSYNC
> behavior in a VM is spec'ed in the AMD Programmer's Manual, but
> I wonder about their impact in a multi-tenant host like in a public
> cloud environment. And given what this patch set does in assigning
> global ASIDs, should X86_FEATURE_INVLPGB be disabled if
> running in a VM where the hypervisor for whatever reason has
> enabled INVLPGB/TLBSYNC in its VMs?
>
This patch series enables bare metal INVLPGB functionality.
Virtual machines should probably not expose the INVLPGB
CPUID feature bit to guests, since virtual machine
invalidation seems to work differently than bare metal
invalidation.
For one, the ASID seems to actually mean something in
SVM context, while trying to use the ASID in bare metal
blows up :)
--
All Rights Reversed.
* Re: [RFC PATCH v2 00/11] AMD broadcast TLB invalidation
2024-12-25 14:48 ` Rik van Riel
@ 2025-01-10 19:29 ` Tom Lendacky
0 siblings, 0 replies; 21+ messages in thread
From: Tom Lendacky @ 2025-01-10 19:29 UTC (permalink / raw)
To: Rik van Riel, Michael Kelley, x86
Cc: linux-kernel, kernel-team, dave.hansen, luto, peterz, tglx,
mingo, bp, hpa, akpm, linux-mm
On 12/25/24 08:48, Rik van Riel wrote:
> On Tue, 2024-12-24 at 18:08 +0000, Michael Kelley wrote:
>> From: riel@surriel.com <riel@surriel.com> Sent: Sunday, December 22,
>> 2024 6:55 PM
>>
>>>
>>> Add support for broadcast TLB invalidation using AMD's INVLPGB
>>> instruction.
>>
>>> This allows the kernel to invalidate TLB entries on remote CPUs
>>> without
>>> needing to send IPIs, without having to wait for remote CPUs to
>>> handle
>>> those interrupts, and with less interruption to what was running on
>>> those CPUs.
>>>
>>> Because x86 PCID space is limited, and there are some very large
>>> systems out there, broadcast TLB invalidation is only used for
>>> processes that are active on 3 or more CPUs, with the threshold
>>> being gradually increased the more the PCID space gets exhausted.
>>
>> Rik --
>>
>> What is this patch set's expectation about INVLPGB and TLBSYNC
>> availability and usage in a VM? I see that INVLPGB and TLBSYNC
>> behavior in a VM is spec'ed in the AMD Programmer's Manual, but
>> I wonder about their impact in a multi-tenant host like in a public
>> cloud environment. And given what this patch set does in assigning
>> global ASIDs, should X86_FEATURE_INVLPGB be disabled if
>> running in a VM where the hypervisor for whatever reason has
>> enabled INVLPGB/TLBSYNC in its VMs?
>>
> This patch series enables bare metal INVLPGB functionality.
>
> Virtual machines should probably not expose the INVLPGB
> CPUID feature bit to guests, since virtual machine
> invalidation seems to work differently than bare metal
> invalidation.
>
> For one, the ASID seems to actually mean something in
> SVM context, while trying to use the ASID in bare metal
> blows up :)
Note that global ASIDs (relative to VMs) are different from the broadcast
ASIDs being used here. IIUC, the broadcast ASIDs here get translated to a
PCID value (kern_pcid(asid) or user_pcid(asid) in patch #9).
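Roughly, following the kPCID/uPCID layout spelled out in patch 8's comment
block (simplified sketch, not the exact helpers in arch/x86/mm/tlb.c):

/* kPCID: ASID + 1, because PCID 0 is special. */
static inline unsigned long example_kern_pcid(u16 asid)
{
	return asid + 1;
}

/* uPCID: the PTI user copy of the address space lives at kPCID + 2048. */
static inline unsigned long example_user_pcid(u16 asid)
{
	return example_kern_pcid(asid) + 2048;
}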
Thanks,
Tom
>
>