From: Borislav Petkov <bp@kernel.org>
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Borislav Petkov
Subject: [PATCH v15 09/11] x86/mm: Enable broadcast TLB invalidation for multi-threaded processes
Date: Tue, 4 Mar 2025 14:58:14 +0100
Message-ID: <20250304135816.12356-10-bp@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>
References: <20250304135816.12356-1-bp@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rik van Riel

There is not enough room in the 12-bit ASID address space to hand out
broadcast ASIDs to every process. Only hand out broadcast ASIDs to
processes when they are observed to be simultaneously running on 4 or
more CPUs.

This also allows single-threaded processes to continue using the
cheaper, local TLB invalidation instructions like INVLPG.

Due to the structure of flush_tlb_mm_range(), the INVLPGB flushing is
done in a generically named broadcast_tlb_flush() function which can
later also be used for Intel RAR.
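As a quick illustration of that policy (a sketch only, mirroring the
consider_global_asid() and flush_tlb_mm_range() changes further down in
this patch, simplified to the two-way decision; it is not additional
code):

	/*
	 * Sketch: how a flush ends up either broadcast or IPI-based.
	 * Simplified from the flush_tlb_mm_range() hunk below.
	 */
	if (mm_global_asid(mm)) {
		/* The process already has a global ASID: flush with INVLPGB. */
		broadcast_tlb_flush(info);
	} else {
		/* Otherwise flush with IPIs as before ... */
		flush_tlb_multi(mm_cpumask(mm), info);
		/*
		 * ... and occasionally check whether the process is now
		 * active on 4 or more CPUs and should get a global ASID.
		 */
		consider_global_asid(mm);
	}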
Combined with the removal of unnecessary lru_add_drain() calls (see
https://lore.kernel.org/r/20241219153253.3da9e8aa@fangorn) this results
in a nice performance boost for the will-it-scale tlb_flush2_threads
test on an AMD Milan system with 36 cores:

 - vanilla kernel:           527k loops/second
 - lru_add_drain removal:    731k loops/second
 - only INVLPGB:             527k loops/second
 - lru_add_drain + INVLPGB: 1157k loops/second

Profiling with only the INVLPGB changes showed that, while TLB
invalidation went down from 40% of the total CPU time to only around
4%, the contention simply moved to the LRU lock. Fixing both at the
same time roughly doubles the number of iterations per second for this
case.

Comparing will-it-scale tlb_flush2_threads with several different
numbers of threads on a 72 CPU AMD Milan shows similar results. The
numbers represent the total number of loops per second across all the
threads:

  threads     tip     INVLPGB
        1    315k        304k
        2    423k        424k
        4    644k       1032k
        8    652k       1267k
       16    737k       1368k
       32    759k       1199k
       64    636k       1094k
       72    609k        993k

1 and 2 thread performance is similar with and without INVLPGB, because
INVLPGB is only used on processes using 4 or more CPUs simultaneously.
The numbers are the median across 5 runs.

Some numbers closer to real world performance can be found at Phoronix,
thanks to Michael:

  https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits

[ bp:
   - Massage
   - :%s/\/cpu_feature_enabled/cgi
   - :%s/\/mm_clear_asid_transition/cgi
   - Fold in a 0day bot fix:
     https://lore.kernel.org/oe-kbuild-all/202503040000.GtiWUsBm-lkp@intel.com ]

Signed-off-by: Rik van Riel
Signed-off-by: Borislav Petkov (AMD)
Reviewed-by: Nadav Amit
Link: https://lore.kernel.org/r/20250226030129.530345-11-riel@surriel.com
---
 arch/x86/include/asm/tlbflush.h |   6 ++
 arch/x86/mm/tlb.c               | 104 +++++++++++++++++++++++++++++++-
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e6c3be06dd21..7cad283d502d 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -280,6 +280,11 @@ static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid)
 	smp_store_release(&mm->context.global_asid, asid);
 }
 
+static inline void mm_clear_asid_transition(struct mm_struct *mm)
+{
+	WRITE_ONCE(mm->context.asid_transition, false);
+}
+
 static inline bool mm_in_asid_transition(struct mm_struct *mm)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
@@ -291,6 +296,7 @@ static inline bool mm_in_asid_transition(struct mm_struct *mm)
 static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; }
 static inline void mm_init_global_asid(struct mm_struct *mm) { }
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) { }
+static inline void mm_clear_asid_transition(struct mm_struct *mm) { }
 static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
 
 #endif /* CONFIG_BROADCAST_TLB_FLUSH */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b5681e6f2333..0efd99053c09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -430,6 +430,105 @@ static bool mm_needs_global_asid(struct mm_struct *mm, u16 asid)
 	return false;
 }
 
+/*
+ * x86 has 4k ASIDs (2k when compiled with KPTI), but the largest x86
+ * systems have over 8k CPUs. Because of this potential ASID shortage,
+ * global ASIDs are handed out to processes that have frequent TLB
+ * flushes and are active on 4 or more CPUs simultaneously.
+ */
+static void consider_global_asid(struct mm_struct *mm)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return;
+
+	/* Check every once in a while. */
+	if ((current->pid & 0x1f) != (jiffies & 0x1f))
+		return;
+
+	/*
+	 * Assign a global ASID if the process is active on
+	 * 4 or more CPUs simultaneously.
+	 */
+	if (mm_active_cpus_exceeds(mm, 3))
+		use_global_asid(mm);
+}
+
+static void finish_asid_transition(struct flush_tlb_info *info)
+{
+	struct mm_struct *mm = info->mm;
+	int bc_asid = mm_global_asid(mm);
+	int cpu;
+
+	if (!mm_in_asid_transition(mm))
+		return;
+
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		/*
+		 * The remote CPU is context switching. Wait for that to
+		 * finish, to catch the unlikely case of it switching to
+		 * the target mm with an out of date ASID.
+		 */
+		while (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == LOADED_MM_SWITCHING)
+			cpu_relax();
+
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) != mm)
+			continue;
+
+		/*
+		 * If at least one CPU is not using the global ASID yet,
+		 * send a TLB flush IPI. The IPI should cause stragglers
+		 * to transition soon.
+		 *
+		 * This can race with the CPU switching to another task;
+		 * that results in a (harmless) extra IPI.
+		 */
+		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm_asid, cpu)) != bc_asid) {
+			flush_tlb_multi(mm_cpumask(info->mm), info);
+			return;
+		}
+	}
+
+	/* All the CPUs running this process are using the global ASID. */
+	mm_clear_asid_transition(mm);
+}
+
+static void broadcast_tlb_flush(struct flush_tlb_info *info)
+{
+	bool pmd = info->stride_shift == PMD_SHIFT;
+	unsigned long asid = mm_global_asid(info->mm);
+	unsigned long addr = info->start;
+
+	/*
+	 * TLB flushes with INVLPGB are kicked off asynchronously.
+	 * The inc_mm_tlb_gen() guarantees page table updates are done
+	 * before these TLB flushes happen.
+	 */
+	if (info->end == TLB_FLUSH_ALL) {
+		invlpgb_flush_single_pcid_nosync(kern_pcid(asid));
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (cpu_feature_enabled(X86_FEATURE_PTI))
+			invlpgb_flush_single_pcid_nosync(user_pcid(asid));
+	} else do {
+		unsigned long nr = 1;
+
+		if (info->stride_shift <= PMD_SHIFT) {
+			nr = (info->end - addr) >> info->stride_shift;
+			nr = clamp_val(nr, 1, invlpgb_count_max);
+		}
+
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), addr, nr, pmd);
+		if (cpu_feature_enabled(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), addr, nr, pmd);
+
+		addr += nr << info->stride_shift;
+	} while (addr < info->end);
+
+	finish_asid_transition(info);
+
+	/* Wait for the INVLPGBs kicked off above to finish. */
+	__tlbsync();
+}
+
 /*
  * Given an ASID, flush the corresponding user ASID. We can delay this
  * until the next time we switch to it.
@@ -1260,9 +1359,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
+	if (mm_global_asid(mm)) {
+		broadcast_tlb_flush(info);
+	} else if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
 		info->trim_cpumask = should_trim_cpumask(mm);
 		flush_tlb_multi(mm_cpumask(mm), info);
+		consider_global_asid(mm);
 	} else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
 		lockdep_assert_irqs_enabled();
 		local_irq_disable();
-- 
2.43.0