Date: Mon, 3 Mar 2025 12:46:18 +0100
From: Borislav Petkov <bp@alien8.de>
To: Rik van Riel <riel@surriel.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, peterz@infradead.org,
	dave.hansen@linux.intel.com, zhengqi.arch@bytedance.com,
	nadav.amit@gmail.com, thomas.lendacky@amd.com, kernel-team@meta.com,
	linux-mm@kvack.org, akpm@linux-foundation.org, jackmanb@google.com,
	jannh@google.com, mhklinux@outlook.com, andrew.cooper3@citrix.com,
	Manali.Shukla@amd.com, mingo@kernel.org
Subject: Re: [PATCH v14 11/13] x86/mm: do targeted broadcast flushing from tlbbatch code
Message-ID: <20250303114618.GBZ8WWihMDjf-oy8P0@fat_crate.local>
References: <20250226030129.530345-1-riel@surriel.com>
 <20250226030129.530345-12-riel@surriel.com>
In-Reply-To: <20250226030129.530345-12-riel@surriel.com>

On Tue, Feb 25, 2025 at 10:00:46PM -0500, Rik van Riel wrote:
> +static inline bool cpu_need_tlbsync(void)
> +{
> +	return this_cpu_read(cpu_tlbstate.need_tlbsync);
> +}
> +
> +static inline void cpu_write_tlbsync(bool state)

That thing feels better like "cpu_set_tlbsync" in the code...

> +{
> +	this_cpu_write(cpu_tlbstate.need_tlbsync, state);
> +}
> #else
> static inline u16 mm_global_asid(struct mm_struct *mm)
> {

...

> +static inline void tlbsync(void)
> +{
> +	if (!cpu_need_tlbsync())
> +		return;
> +	__tlbsync();
> +	cpu_write_tlbsync(false);
> +}

Easier to parse visually:

static inline void tlbsync(void)
{
	if (cpu_need_tlbsync()) {
		__tlbsync();
		cpu_write_tlbsync(false);
	}
}

Final:

From: Rik van Riel <riel@surriel.com>
Date: Tue, 25 Feb 2025 22:00:46 -0500
Subject: [PATCH] x86/mm: Do targeted broadcast flushing from tlbbatch code

Instead of doing a system-wide TLB flush from arch_tlbbatch_flush(),
queue up asynchronous, targeted flushes from arch_tlbbatch_add_pending().

This also makes it possible to avoid adding the CPUs of processes using
broadcast flushing to the batch->cpumask, and will hopefully further
reduce TLB flushing from the reclaim and compaction paths.

  [ bp: - Massage
        - :%s/\/cpu_feature_enabled/cgi ]

Signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250226030129.530345-12-riel@surriel.com
---
 arch/x86/include/asm/tlb.h      | 12 ++---
 arch/x86/include/asm/tlbflush.h | 27 +++++++----
 arch/x86/mm/tlb.c               | 79 +++++++++++++++++++++++++++++++--
 3 files changed, 100 insertions(+), 18 deletions(-)
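The core of the change, before the diff itself: INVLPGB broadcasts are
issued without waiting for them, the CPU merely records that a TLBSYNC is
owed, and the actual wait is deferred until the flush has to be complete
(before this CPU switches to another mm, or at the end of a flush batch).
A minimal, self-contained sketch of that pattern follows; issue_invlpgb()
and wait_tlbsync() are hypothetical stand-ins for the instruction
wrappers, not the kernel API:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the INVLPGB/TLBSYNC instruction wrappers. */
static void issue_invlpgb(unsigned long pcid, unsigned long addr)
{
	printf("INVLPGB pcid=%lu addr=%#lx (asynchronous)\n", pcid, addr);
}

static void wait_tlbsync(void)
{
	printf("TLBSYNC: wait for all pending broadcasts\n");
}

/* Per-CPU in the real code: cpu_tlbstate.need_tlbsync. */
static bool need_tlbsync;

/* Queue a targeted broadcast flush without waiting for completion. */
static void flush_page_nosync(unsigned long pcid, unsigned long addr)
{
	issue_invlpgb(pcid, addr);
	need_tlbsync = true;	/* a sync is now owed */
}

/* Deferred wait: run before switching mms or when a batch finishes. */
static void tlbsync(void)
{
	if (need_tlbsync) {
		wait_tlbsync();
		need_tlbsync = false;
	}
}

int main(void)
{
	flush_page_nosync(5, 0x1000);
	flush_page_nosync(5, 0x2000);
	tlbsync();	/* one wait covers both queued flushes */
	return 0;
}

For processes using a global ASID, a whole batch of queued flushes then
costs a single TLBSYNC wait on the issuing CPU, instead of IPIs to every
CPU in the batch cpumask.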
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 04f2c6f4cee3..b5c2005725cf 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -102,16 +102,16 @@ static inline void __tlbsync(void) { }
 #define INVLPGB_FINAL_ONLY		BIT(4)
 #define INVLPGB_INCLUDE_NESTED		BIT(5)
 
-static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
-						unsigned long addr,
-						u16 nr,
-						bool pmd_stride)
+static inline void __invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						  unsigned long addr,
+						  u16 nr,
+						  bool pmd_stride)
 {
 	__invlpgb(0, pcid, addr, nr, pmd_stride, INVLPGB_PCID | INVLPGB_VA);
 }
 
 /* Flush all mappings for a given PCID, not including globals. */
-static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+static inline void __invlpgb_flush_single_pcid_nosync(unsigned long pcid)
 {
 	__invlpgb(0, pcid, 0, 1, 0, INVLPGB_PCID);
 }
@@ -131,7 +131,7 @@ static inline void invlpgb_flush_all(void)
 }
 
 /* Flush addr, including globals, for all PCIDs. */
-static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+static inline void __invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
 {
 	__invlpgb(0, 0, addr, nr, 0, INVLPGB_INCLUDE_GLOBAL);
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 8c21030269ff..cbdb86d58301 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -105,6 +105,9 @@ struct tlb_state {
 	 * need to be invalidated.
 	 */
 	bool invalidate_other;
+#ifdef CONFIG_BROADCAST_TLB_FLUSH
+	bool need_tlbsync;
+#endif
 
 #ifdef CONFIG_ADDRESS_MASKING
 	/*
@@ -292,11 +295,23 @@ static inline bool mm_in_asid_transition(struct mm_struct *mm)
 	return mm && READ_ONCE(mm->context.asid_transition);
 }
+
+static inline bool cpu_need_tlbsync(void)
+{
+	return this_cpu_read(cpu_tlbstate.need_tlbsync);
+}
+
+static inline void cpu_set_tlbsync(bool state)
+{
+	this_cpu_write(cpu_tlbstate.need_tlbsync, state);
+}
 #else
 static inline u16 mm_global_asid(struct mm_struct *mm) { return 0; }
 static inline void mm_init_global_asid(struct mm_struct *mm) { }
 static inline void mm_assign_global_asid(struct mm_struct *mm, u16 asid) { }
 static inline bool mm_in_asid_transition(struct mm_struct *mm) { return false; }
+static inline bool cpu_need_tlbsync(void) { return false; }
+static inline void cpu_set_tlbsync(bool state) { }
 #endif /* CONFIG_BROADCAST_TLB_FLUSH */
 
 #ifdef CONFIG_PARAVIRT
@@ -346,21 +361,15 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 	return atomic64_inc_return(&mm->context.tlb_gen);
 }
 
-static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
-{
-	inc_mm_tlb_gen(mm);
-	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
-	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
-}
-
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	flush_tlb_mm(mm);
 }
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
+extern void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+				      struct mm_struct *mm,
+				      unsigned long uaddr);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0efd99053c09..83ba6876adbf 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -492,6 +492,37 @@ static void finish_asid_transition(struct flush_tlb_info *info)
 	mm_clear_asid_transition(mm);
 }
 
+static inline void tlbsync(void)
+{
+	if (cpu_need_tlbsync()) {
+		__tlbsync();
+		cpu_set_tlbsync(false);
+	}
+}
+
+static inline void invlpgb_flush_user_nr_nosync(unsigned long pcid,
+						unsigned long addr,
+						u16 nr, bool pmd_stride)
+{
+	__invlpgb_flush_user_nr_nosync(pcid, addr, nr, pmd_stride);
+	if (!cpu_need_tlbsync())
+		cpu_set_tlbsync(true);
+}
+
+static inline void invlpgb_flush_single_pcid_nosync(unsigned long pcid)
+{
+	__invlpgb_flush_single_pcid_nosync(pcid);
+	if (!cpu_need_tlbsync())
+		cpu_set_tlbsync(true);
+}
+
+static inline void invlpgb_flush_addr_nosync(unsigned long addr, u16 nr)
+{
+	__invlpgb_flush_addr_nosync(addr, nr);
+	if (!cpu_need_tlbsync())
+		cpu_set_tlbsync(true);
+}
+
 static void broadcast_tlb_flush(struct flush_tlb_info *info)
 {
 	bool pmd = info->stride_shift == PMD_SHIFT;
@@ -790,6 +821,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	if (IS_ENABLED(CONFIG_PROVE_LOCKING))
 		WARN_ON_ONCE(!irqs_disabled());
 
+	tlbsync();
+
 	/*
 	 * Verify that CR3 is what we think it is.  This will catch
 	 * hypothetical buggy code that directly switches to swapper_pg_dir
@@ -966,6 +999,8 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
  */
 void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
+	tlbsync();
+
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
@@ -1633,9 +1668,7 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	 * a local TLB flush is needed. Optimize this use-case by calling
 	 * flush_tlb_func_local() directly in this case.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
-		invlpgb_flush_all_nonglobals();
-	} else if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
+	if (cpumask_any_but(&batch->cpumask, cpu) < nr_cpu_ids) {
 		flush_tlb_multi(&batch->cpumask, info);
 	} else if (cpumask_test_cpu(cpu, &batch->cpumask)) {
 		lockdep_assert_irqs_enabled();
@@ -1644,12 +1677,52 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
+	/*
+	 * If (asynchronous) INVLPGB flushes were issued, wait for them here.
+	 * The cpumask above contains only CPUs that were running tasks
+	 * not using broadcast TLB flushing.
+	 */
+	tlbsync();
+
 	cpumask_clear(&batch->cpumask);
 
 	put_flush_tlb_info();
 	put_cpu();
 }
 
+void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+			       struct mm_struct *mm,
+			       unsigned long uaddr)
+{
+	u16 asid = mm_global_asid(mm);
+
+	if (asid) {
+		invlpgb_flush_user_nr_nosync(kern_pcid(asid), uaddr, 1, false);
+		/* Do any CPUs supporting INVLPGB need PTI? */
+		if (cpu_feature_enabled(X86_FEATURE_PTI))
+			invlpgb_flush_user_nr_nosync(user_pcid(asid), uaddr, 1, false);
+
+		/*
+		 * Some CPUs might still be using a local ASID for this
+		 * process, and require IPIs, while others are using the
+		 * global ASID.
+		 *
+		 * In this corner case, both broadcast TLB invalidation
+		 * and IPIs need to be sent. The IPIs will help
+		 * stragglers transition to the broadcast ASID.
+		 */
+		if (mm_in_asid_transition(mm))
+			asid = 0;
+	}
+
+	if (!asid) {
+		inc_mm_tlb_gen(mm);
+		cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	}
+
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
+}
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
-- 
2.43.0

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette