From: Borislav Petkov <bp@alien8.de>
To: Rik van Riel <riel@surriel.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	peterz@infradead.org, dave.hansen@linux.intel.com,
	zhengqi.arch@bytedance.com, nadav.amit@gmail.com,
	thomas.lendacky@amd.com, kernel-team@meta.com,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	jackmanb@google.com, jannh@google.com, mhklinux@outlook.com,
	andrew.cooper3@citrix.com, Manali.Shukla@amd.com,
	mingo@kernel.org
Subject: Re: [PATCH v14 10/13] x86/mm: enable broadcast TLB invalidation for multi-threaded processes
Date: Mon, 3 Mar 2025 11:57:00 +0100
Message-ID: <20250303105700.GAZ8WK_Bkq_r6lBNVc@fat_crate.local>
In-Reply-To: <20250226030129.530345-11-riel@surriel.com>

On Tue, Feb 25, 2025 at 10:00:45PM -0500, Rik van Riel wrote:
> +/*
> + * x86 has 4k ASIDs (2k when compiled with KPTI), but the largest
> + * x86 systems have over 8k CPUs. Because of this potential ASID
> + * shortage, global ASIDs are handed out to processes that have
> + * frequent TLB flushes and are active on 4 or more CPUs simultaneously.
> + */
> +static void consider_global_asid(struct mm_struct *mm)
> +{
> +	if (!static_cpu_has(X86_FEATURE_INVLPGB))
> +		return;
> +
> +	/* Check every once in a while. */
> +	if ((current->pid & 0x1f) != (jiffies & 0x1f))
> +		return;

Uff, this looks funky.
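
If I read it right, the low five bits of the flushing task's PID are
compared against the low five bits of jiffies, so the check passes for
only one jiffies value out of every 32 per task - a cheap, lockless
rate limiter in front of the CPU-count check below. A standalone sketch
of the idea (not kernel code, the names are made up):

	#include <stdbool.h>

	/*
	 * Proceed only when the low 5 bits of the caller's id line up
	 * with the low 5 bits of a free-running tick counter, i.e. on
	 * roughly 1 out of every 32 calls, assuming call times are
	 * uncorrelated with the tick.
	 */
	static bool time_to_check(unsigned int pid, unsigned long ticks)
	{
		return (pid & 0x1f) == (ticks & 0x1f);
	}

Everything else returns early, before the cpumask check below.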

> +
> +	if (!READ_ONCE(global_asid_available))
> +		return;

use_global_asid() will do that check for us already and it'll even warn.
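
Something along these lines in patch 07 - paraphrasing from memory, not
quoting, so treat it as a sketch:

	/* in use_global_asid(), under global_asid_lock: */
	if (!global_asid_available) {
		VM_WARN_ONCE(1, "Ran out of global ASIDs\n");
		return;
	}

which is why the unlocked early check above is gone from the final
version below.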

> +
> +	/*
> +	 * Assign a global ASID if the process is active on
> +	 * 4 or more CPUs simultaneously.
> +	 */
> +	if (mm_active_cpus_exceeds(mm, 3))
> +		use_global_asid(mm);
> +}
> +
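
Also, the magic 3 vs the "4 or more CPUs" in the comment works out only
because mm_active_cpus_exceeds() is, going by its name, a
strictly-greater-than test - an assumption on my side, and perhaps
worth a comment at the call site:

	/* true when the mm is active on more than 3, i.e. 4+ CPUs */
	if (mm_active_cpus_exceeds(mm, 3))
		use_global_asid(mm);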

...

> +static void broadcast_tlb_flush(struct flush_tlb_info *info)
> +{
> +	bool pmd = info->stride_shift == PMD_SHIFT;
> +	unsigned long asid = mm_global_asid(info->mm);
> +	unsigned long addr = info->start;
> +
> +	/*
> +	 * TLB flushes with INVLPGB are kicked off asynchronously.
> +	 * The inc_mm_tlb_gen() guarantees page table updates are done
> +	 * before these TLB flushes happen.
> +	 */
> +	if (info->end == TLB_FLUSH_ALL) {
> +		invlpgb_flush_single_pcid_nosync(kern_pcid(asid));
> +		/* Do any CPUs supporting INVLPGB need PTI? */

I hope not. :)

However, I think one can force-enable PTI on AMD so yeah, let's keep that.
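
IOW, booting with

	pti=on

on the kernel command line force-enables page table isolation even on
CPUs which are not affected by Meltdown, and that includes the
INVLPGB-capable AMD parts, so the user PCID flush above is reachable in
practice.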

...

Final result:

From: Rik van Riel <riel@surriel.com>
Date: Tue, 25 Feb 2025 22:00:45 -0500
Subject: [PATCH] x86/mm: Enable broadcast TLB invalidation for multi-threaded
 processes

There is not enough room in the 12-bit ASID address space to hand out
broadcast ASIDs to every process. Only hand out broadcast ASIDs to processes
when they are observed to be simultaneously running on 4 or more CPUs.

This also allows single-threaded processes to continue using the cheaper,
local TLB invalidation instructions instead of INVLPGB.

Due to the structure of flush_tlb_mm_range(), the INVLPGB flushing is done
in a generically named broadcast_tlb_flush() function, which can later also
be used for Intel RAR.

Combined with the removal of unnecessary lru_add_drain() calls (see
https://lore.kernel.org/r/20241219153253.3da9e8aa@fangorn), this results in
a nice performance boost for the will-it-scale tlb_flush2_threads test on an
AMD Milan system with 36 cores:

  - vanilla kernel:           527k loops/second
  - lru_add_drain removal:    731k loops/second
  - only INVLPGB:             527k loops/second
  - lru_add_drain + INVLPGB: 1157k loops/second

Profiling with only the INVLPGB changes showed that, while TLB invalidation
went down from 40% of the total CPU time to only around 4%, the contention
simply moved to the LRU lock.

Fixing both at the same time about doubles the number of iterations per
second for this case.

Comparing will-it-scale tlb_flush2_threads with several different numbers of
threads on a 72 CPU AMD Milan shows similar results. Each number is the
total number of loops per second across all the threads:

  threads	tip		INVLPGB

  1		315k		304k
  2		423k		424k
  4		644k		1032k
  8		652k		1267k
  16		737k		1368k
  32		759k		1199k
  64		636k		1094k
  72		609k		993k

1- and 2-thread performance is similar with and without INVLPGB because
INVLPGB is only used for processes active on 4 or more CPUs simultaneously.

Each number is the median across 5 runs.

Some numbers closer to real-world performance can be found at Phoronix,
thanks to Michael:

https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits

  [ bp:
   - Massage
   - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi
   - :%s/\<clear_asid_transition\>/mm_clear_asid_transition/cgi
   ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Nadav Amit <nadav.amit@gmail.com>
Link: https://lore.kernel.org/r/20250226030129.530345-11-riel@surriel.com
---
 arch/x86/include/asm/tlbflush.h |   5 ++
 arch/x86/mm/tlb.c               | 104 +++++++++++++++++++++++++++++++-
 2 files changed, 108 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e6c3be06dd21..8c21030269ff 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -280,6 +280,11 @@ static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
 	smp_store_release(&mm->context.global_asid, asid);
 }
 
+static inline void mm_clear_asid_transition(struct mm_struct *mm)
+{
+	WRITE_ONCE(mm->context.asid_transition, false);
+}
+
 static inline bool mm_in_asid_transition(struct mm_struct *mm)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b5681e6f2333..0efd99053c09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -430,6 +430,105 @@ static bool mm_needs_global_asid(struct mm_struct *mm, u16 asid)
 	return false;
 }
 
+/*
+ * x86 has 4k ASIDs (2k when compiled with KPTI), but the largest x86
+ * systems have over 8k CPUs. Because of this potential ASID shortage,
+ * global ASIDs are handed out to processes that have frequent TLB
+ * flushes and are active on 4 or more CPUs simultaneously.
+ */
+static void consider_global_asid(struct mm_struct *mm)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return;
+
+	/* Check every once in a while. */
+	if ((current->pid & 0x1f) != (jiffies & 0x1f))
+		return;
+
+	/*
+	 * Assign a global ASID if the process is active on
+	 * 4 or more CPUs simultaneously.
+	 */
+	if (mm_active_cpus_exceeds(mm, 3))
+		use_global_asid(mm);
+}
+
+static void finish_asid_transition(struct flush_tlb_info *info)
+{
+	struct mm_struct *mm = info->mm;
+	int bc_asid = mm_global_asid(mm);
+	int cpu;
+
+	if (!mm_in_asid_transition(mm))
+		return;
+
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		/*
+		 * The remote CPU is context switching. Wait for that to
+		 * finish, to catch the unlikely case of it switching to
+		 * the target mm with an out of date ASID.
+		 */
+		while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == LOADED_MM_SWITCHING)
+			cpu_relax();
+
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
+			continue;
+
+		/*
+		 * If at least one CPU is not using the global ASID yet,
+		 * send a TLB flush IPI. The IPI should cause stragglers
+		 * to transition soon.
+		 *
+		 * This can race with the CPU switching to another task;
+		 * that results in a (harmless) extra IPI.
+		 */
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) != bc_asid) {
+			flush_tlb_multi(mm_cpumask(info->mm), info);
+			return;
+		}
+	}
+
+	/* All the CPUs running this process are using the global ASID. */
+	mm_clear_asid_transition(mm);
+}
+
+static void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+	bool pmd = info->stride_shift == PMD_SHIFT;
+	unsigned long asid = mm_global_asid(info->mm);
+	unsigned long addr = info->start;
+
+	/*
+	 * TLB flushes with INVLPGB are kicked off asynchronously.
+	 * The inc_mm_tlb_gen() guarantees page table updates are done
+	 * before these TLB flushes happen.
+	 */
+	if (info->end == TLB_FLUSH_ALL) {
+		invlpgb_flush_single_pcid_nosync(kern_pcid(asid));
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (cpu_feature_enabled(X86_FEATURE_PTI))
+			invlpgb_flush_single_pcid_nosync(user_pcid(asid));
+	} else do {
+		unsigned long nr = 1;
+
+		if (info->stride_shift <= PMD_SHIFT) {
+			nr = (info->end - addr) >> info->stride_shift;
+			nr = clamp_val(nr, 1, invlpgb_count_max);
+		}
+
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd);
+		if (cpu_feature_enabled(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd);
+
+		addr += nr << info->stride_shift;
+	} while (addr < info->end);
+
+	finish_asid_transition(info);
+
+	/* Wait for the INVLPGBs kicked off above to finish. */
+	__tlbsync();
+}
+
 /*
  * Given an ASID, flush the corresponding user ASID.  We can delay this
  * until the next time we switch to it.
@@ -1260,9 +1359,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+	if (mm_global_asid(mm)) {
+		broadcast_tlb_flush(info);
+	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
 		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
+		consider_global_asid(mm);
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-- 
2.43.0

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

