From: Borislav Petkov <bp@kernel.org>
To: riel@surriel.com
Cc: Manali.Shukla@amd.com, akpm@linux-foundation.org, andrew.cooper3@citrix.com, jackmanb@google.com, jannh@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhklinux@outlook.com, nadav.amit@gmail.com, thomas.lendacky@amd.com, x86@kernel.org, zhengqi.arch@bytedance.com, Borislav Petkov
Subject: [PATCH v15 10/11] x86/mm: Do targeted broadcast flushing from tlbbatch code
Date: Tue, 4 Mar 2025 14:58:15 +0100
Message-ID: <20250304135816.12356-11-bp@kernel.org>
In-Reply-To: <20250304135816.12356-1-bp@kernel.org>
References: <20250304135816.12356-1-bp@kernel.org>

From: Rik van Riel <riel@surriel.com>

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush(),
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending().

This also allows us to avoid adding the CPUs of processes using broadcast
flushing to the batch->cpumask, and will hopefully further reduce TLB
flushing from the reclaim and compaction paths.
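To make the new flow easy to follow, here is a minimal, stand-alone C
sketch of the pattern this patch moves to. The helpers
queue_targeted_flush() and wait_for_flushes() are invented stand-ins for
illustration only; the kernel side uses INVLPGB broadcasts plus the
per-CPU cpu_tlbstate.need_tlbsync flag consumed by tlbsync():

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for cpu_tlbstate.need_tlbsync; per-CPU in the kernel. */
static bool need_sync;

static void queue_targeted_flush(unsigned long addr)
{
        /* Fire off an asynchronous, targeted invalidation and
         * remember that a sync is owed before the batch is done. */
        printf("async flush of %#lx queued\n", addr);
        need_sync = true;
}

static void wait_for_flushes(void)
{
        /* One wait covers every flush queued since the last sync,
         * mirroring the single tlbsync() in arch_tlbbatch_flush(). */
        if (need_sync) {
                printf("tlbsync: waiting for outstanding flushes\n");
                need_sync = false;
        }
}

int main(void)
{
        queue_targeted_flush(0x1000);
        queue_targeted_flush(0x2000);
        wait_for_flushes();     /* instead of one system-wide flush */
        return 0;
}

The point of the pattern is that any number of queued broadcasts is
amortized by a single wait at the end, instead of paying for a full
system-wide flush up front.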
  [ bp: - Massage
        - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi
        - merge in improvements from dhansen ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250226030129.530345-12-riel@surriel.com
---
 arch/x86/include/asm/tlb.h      | 10 ++--
 arch/x86/include/asm/tlbflush.h | 27 ++++++----
 arch/x86/mm/tlb.c               | 88 +++++++++++++++++++++++++++++++--
 3 files changed, 108 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 8ffcae7beb55..e8561a846754 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -108,9 +108,9 @@ static inline void __tlbsync(void) { }
 /* The implied mode when all bits are clear: */
 #define INVLPGB_MODE_ALL_NONGLOBALS    0UL
 
-static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
-                                                unsigned long addr,
-                                                u16 nr, bool stride)
+static inline void __invlpgb_flush_user_nr_nosync(unsigned long pcid,
+                                                  unsigned long addr,
+                                                  u16 nr, bool stride)
 {
        enum addr_stride str = stride ? PMD_STRIDE : PTE_STRIDE;
        u8 flags = INVLPGB_FLAG_PCID | INVLPGB_FLAG_VA;
@@ -119,7 +119,7 @@ static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
 }
 
 /* Flush all mappings for a given PCID, not including globals. */
-static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+static inline void __invlpgb_flush_single_pcid_nosync(unsigned long pcid)
 {
        __invlpgb(0, pcid, 0, 1, PTE_STRIDE, INVLPGB_FLAG_PCID);
 }
@@ -139,7 +139,7 @@ static inline void invlpgb_flush_all(void)
 }
 
 /* Flush addr, including globals, for all PCIDs. */
-static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+static inline void __invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
 {
        __invlpgb(0, 0, addr, nr, PTE_STRIDE, INVLPGB_FLAG_INCLUDE_GLOBAL);
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 7cad283d502d..214d912ac148 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -105,6 +105,9 @@ struct tlb_state {
         * need to be invalidated.
         */
        bool invalidate_other;
+#ifdef CONFIG_BROADCAST_TLB_FLUSH
+       bool need_tlbsync;
+#endif
 
 #ifdef CONFIG_ADDRESS_MASKING
        /*
@@ -292,12 +295,24 @@ static inline bool mm_in_asid_transition(struct mm_struct *mm)
        return mm && READ_ONCE(mm->context.asid_transition);
 }
+
+static inline bool cpu_need_tlbsync(void)
+{
+       return this_cpu_read(cpu_tlbstate.need_tlbsync);
+}
+
+static inline void cpu_set_tlbsync(bool state)
+{
+       this_cpu_write(cpu_tlbstate.need_tlbsync, state);
+}
 #else
 static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; }
 static inline void mm_init_global_asid(struct mm_struct *mm) { }
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) { }
 static inline void mm_clear_asid_transition(struct mm_struct *mm) { }
 static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
+static inline bool cpu_need_tlbsync(void) { return false; }
+static inline void cpu_set_tlbsync(bool state) { }
 #endif /* CONFIG_BROADCAST_TLB_FLUSH */
 
 #ifdef CONFIG_PARAVIRT
@@ -347,21 +362,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
        return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-                                            struct mm_struct *mm,
-                                            unsigned long uaddr)
-{
-       inc_mm_tlb_gen(mm);
-       cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-       mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
        flush_tlb_mm(mm);
 }
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+                                     struct mm_struct *mm,
+                                     unsigned long uaddr);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
                                        unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0efd99053c09..61065975c139 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -492,6 +492,37 @@ static void finish_asid_transition(struct flush_tlb_info *info)
        mm_clear_asid_transition(mm);
 }
 
+static inline void tlbsync(void)
+{
+       if (cpu_need_tlbsync()) {
+               __tlbsync();
+               cpu_set_tlbsync(false);
+       }
+}
+
+static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
+                                               unsigned long addr,
+                                               u16 nr, bool pmd_stride)
+{
+       __invlpgb_flush_user_nr_nosync(pcid, addr, nr, pmd_stride);
+       if (!cpu_need_tlbsync())
+               cpu_set_tlbsync(true);
+}
+
+static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+{
+       __invlpgb_flush_single_pcid_nosync(pcid);
+       if (!cpu_need_tlbsync())
+               cpu_set_tlbsync(true);
+}
+
+static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+{
+       __invlpgb_flush_addr_nosync(addr, nr);
+       if (!cpu_need_tlbsync())
+               cpu_set_tlbsync(true);
+}
+
 static void broadcast_tlb_flush(struct flush_tlb_info *info)
 {
        bool pmd = info->stride_shift == PMD_SHIFT;
@@ -790,6 +821,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
        if (IS_ENABLED(CONFIG_PROVE_LOCKING))
                WARN_ON_ONCE(!irqs_disabled());
 
+       tlbsync();
+
        /*
         * Verify that CR3 is what we think it is.  This will catch
         * hypothetical buggy code that directly switches to swapper_pg_dir
@@ -966,6 +999,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
  */
 void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
+       tlbsync();
+
        if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
                return;
@@ -1633,9 +1668,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
         * a local TLB flush is needed. Optimize this use-case by calling
         * flush_tlb_func_local() directly in this case.
         */
-       if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-               invlpgb_flush_all_nonglobals();
-       } else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+       if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
                flush_tlb_multi(&batch->cpumask, info);
        } else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
                lockdep_assert_irqs_enabled();
@@ -1644,12 +1677,61 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
                local_irq_enable();
        }
 
+       /*
+        * Wait for outstanding INVLPGB flushes. batch->cpumask will
+        * be empty when the batch was handled completely by INVLPGB.
+        * Note that mm_in_asid_transition() mm's may use INVLPGB and
+        * the flush_tlb_multi() IPIs at the same time.
+        */
+       tlbsync();
+
        cpumask_clear(&batch->cpumask);
 
        put_flush_tlb_info();
        put_cpu();
 }
 
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+                              struct mm_struct *mm, unsigned long uaddr)
+{
+       u16 global_asid = mm_global_asid(mm);
+
+       if (global_asid) {
+               /*
+                * Global ASIDs can be flushed with INVLPGB. Flush
+                * now instead of batching them for later. A later
+                * tlbsync() is required to ensure these completed.
+                */
+               invlpgb_flush_user_nr_nosync(kern_pcid(global_asid), uaddr, 1, false);
+               /* Do any CPUs supporting INVLPGB need PTI? */
+               if (cpu_feature_enabled(X86_FEATURE_PTI))
+                       invlpgb_flush_user_nr_nosync(user_pcid(global_asid), uaddr, 1, false);
+
+               /*
+                * Some CPUs might still be using a local ASID for this
+                * process, and require IPIs, while others are using the
+                * global ASID.
+                *
+                * In this corner case, both broadcast TLB invalidation
+                * and IPIs need to be sent. The IPIs will help
+                * stragglers transition to the broadcast ASID.
+                */
+               if (mm_in_asid_transition(mm))
+                       global_asid = 0;
+       }
+
+       if (!global_asid) {
+               /*
+                * Mark the mm and the CPU so that
+                * the TLB gets flushed later.
+                */
+               inc_mm_tlb_gen(mm);
+               cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+       }
+
+       mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
-- 
2.43.0
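For review purposes, the invariant the new wrappers maintain can be
modeled in a few lines of stand-alone C. This is a toy model with
invented names, not kernel code; in the kernel the waiting is done by
the TLBSYNC instruction, and the flag is the per-CPU
cpu_tlbstate.need_tlbsync. The rule: a CPU that issued any *_nosync
broadcast flush owes a tlbsync() before it finishes the batch or
switches to another mm, which is why tlbsync() calls were added to
switch_mm_irqs_off() and enter_lazy_tlb():

#include <assert.h>
#include <stdbool.h>

struct cpu_state {
        bool need_tlbsync;      /* models cpu_tlbstate.need_tlbsync */
};

static void flush_nosync(struct cpu_state *cpu)
{
        /* Issue a broadcast invalidation without waiting for it. */
        cpu->need_tlbsync = true;
}

static void tlbsync(struct cpu_state *cpu)
{
        if (cpu->need_tlbsync) {
                /* TLBSYNC: wait until all our broadcasts completed. */
                cpu->need_tlbsync = false;
        }
}

static void switch_mm(struct cpu_state *cpu)
{
        /* Mirrors the tlbsync() added to switch_mm_irqs_off() and
         * enter_lazy_tlb(): never leave with flushes in flight. */
        tlbsync(cpu);
        assert(!cpu->need_tlbsync);
}

int main(void)
{
        struct cpu_state cpu = { 0 };

        flush_nosync(&cpu);
        switch_mm(&cpu);        /* sync happens before the switch */
        return 0;
}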