Date: Mon, 6 Jan 2025 10:40:26 -0800
Subject: Re: [PATCH 09/12] x86/mm: enable broadcast TLB invalidation for
 multi-threaded processes
To: Rik van Riel, x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
 dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
 tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
 akpm@linux-foundation.org, nadav.amit@gmail.com,
 zhengqi.arch@bytedance.com, linux-mm@kvack.org
References: <20241230175550.4046587-1-riel@surriel.com>
 <20241230175550.4046587-10-riel@surriel.com>
From: Dave Hansen
In-Reply-To: <20241230175550.4046587-10-riel@surriel.com>

On 12/30/24 09:53, Rik van Riel wrote:
...
> +#ifdef CONFIG_CPU_SUP_AMD
> +	struct list_head broadcast_asid_list;
> +	u16 broadcast_asid;
> +	bool asid_transition;
> +#endif

Could we either do:

	config X86_TLB_FLUSH_BROADCAST_HW
		bool
		depends on CONFIG_CPU_SUP_AMD

or even

	#define X86_TLB_FLUSH_BROADCAST_HW CONFIG_CPU_SUP_AMD

for the whole series, please?  There are a non-trivial number of #ifdefs
here and it would be nice to know what they're for, logically.

This is a completely selfish request because Intel has a similar feature
and we're surely going to give this approach a try on Intel CPUs too.

Second, is there something that prevents you from defining a new
MM_CONTEXT_* flag instead of a new bool?  It might save bloating the
context by a few words.
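For instance, something like this next to the existing MM_CONTEXT_* bits
(the bit number and helper name here are only illustrative, not a claim
about which bit is actually free):

	/* mm is transitioning to or from a broadcast (INVLPGB) ASID */
	#define MM_CONTEXT_BROADCAST_ASID_TRANSITION	4

	static inline bool mm_in_asid_transition(struct mm_struct *mm)
	{
		return test_bit(MM_CONTEXT_BROADCAST_ASID_TRANSITION,
				&mm->context.flags);
	}

That reuses the existing mm->context.flags word instead of adding a bool.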
> #ifdef CONFIG_ADDRESS_MASKING
> 	/* Active LAM mode: X86_CR3_LAM_U48 or X86_CR3_LAM_U57 or 0 (disabled) */
> 	unsigned long lam_cr3_mask;
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index 795fdd53bd0a..0dc446c427d2 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -139,6 +139,8 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
>  #define enter_lazy_tlb enter_lazy_tlb
>  extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
>  
> +extern void destroy_context_free_broadcast_asid(struct mm_struct *mm);
> +
>  /*
>   * Init a new mm. Used on mm copies, like at fork()
>   * and on mm's that are brand-new, like at execve().
> @@ -161,6 +163,13 @@ static inline int init_new_context(struct task_struct *tsk,
>  		mm->context.execute_only_pkey = -1;
>  	}
>  #endif
> +
> +#ifdef CONFIG_CPU_SUP_AMD
> +	INIT_LIST_HEAD(&mm->context.broadcast_asid_list);
> +	mm->context.broadcast_asid = 0;
> +	mm->context.asid_transition = false;
> +#endif

We've been inconsistent about it, but I think I'd prefer that this had a:

	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
		...
	}

wrapper as opposed to CONFIG_CPU_SUP_AMD.  It might save dirtying a
cacheline on all the CPUs that don't care.  cpu_feature_enabled() would
also function the same as the #ifdef.

>  	mm_reset_untag_mask(mm);
>  	init_new_context_ldt(mm);
>  	return 0;
> @@ -170,6 +179,9 @@ static inline int init_new_context(struct task_struct *tsk,
>  static inline void destroy_context(struct mm_struct *mm)
>  {
>  	destroy_context_ldt(mm);
> +#ifdef CONFIG_CPU_SUP_AMD
> +	destroy_context_free_broadcast_asid(mm);
> +#endif
>  }
>  
>  extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 20074f17fbcd..5e9956af98d1 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -65,6 +65,23 @@ static inline void cr4_clear_bits(unsigned long mask)
>   */
>  #define TLB_NR_DYN_ASIDS	6
>  
> +#ifdef CONFIG_CPU_SUP_AMD
> +#define is_dyn_asid(asid)		(asid) < TLB_NR_DYN_ASIDS
> +#define is_broadcast_asid(asid)		(asid) >= TLB_NR_DYN_ASIDS
> +#define in_asid_transition(info)	(info->mm && info->mm->context.asid_transition)
> +#define mm_broadcast_asid(mm)		(mm->context.broadcast_asid)
> +#else
> +#define is_dyn_asid(asid)		true
> +#define is_broadcast_asid(asid)		false
> +#define in_asid_transition(info)	false
> +#define mm_broadcast_asid(mm)		0

I think it was said elsewhere, but I also prefer static inlines for these
instead of macros.  The type checking that you get from the compiler in
_both_ compile configurations is much more valuable than brevity.
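Roughly like this, for instance (the argument types are my guesses from
how these get used, with 'info' presumably being a struct flush_tlb_info
pointer):

	static inline bool is_dyn_asid(u16 asid)
	{
		return asid < TLB_NR_DYN_ASIDS;
	}

	static inline bool is_broadcast_asid(u16 asid)
	{
		return asid >= TLB_NR_DYN_ASIDS;
	}

	static inline bool in_asid_transition(const struct flush_tlb_info *info)
	{
		return info->mm && info->mm->context.asid_transition;
	}

	static inline u16 mm_broadcast_asid(struct mm_struct *mm)
	{
		return mm->context.broadcast_asid;
	}

plus trivial stubs returning true/false/0 for the other configuration so
that both configurations get the same type checking.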
...

> +	/*
> +	 * TLB consistency for this ASID is maintained with INVLPGB;
> +	 * TLB flushes happen even while the process isn't running.
> +	 */

I'm not sure this comment helps much.  The thing that matters here is
that a broadcast ASID is assigned from a global namespace and not from a
per-cpu namespace.

> +#ifdef CONFIG_CPU_SUP_AMD
> +	if (static_cpu_has(X86_FEATURE_INVLPGB) && mm_broadcast_asid(next)) {
> +		*new_asid = mm_broadcast_asid(next);
> +		*need_flush = false;
> +		return;
> +	}
> +#endif
> +
>  	if (this_cpu_read(cpu_tlbstate.invalidate_other))
>  		clear_asid_other();
>  
> @@ -251,6 +265,245 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
>  	*need_flush = true;
>  }
>  
> +#ifdef CONFIG_CPU_SUP_AMD
> +/*
> + * Logic for AMD INVLPGB support.
> + */

This comment is another indication that this shouldn't all be crammed
under CONFIG_CPU_SUP_AMD.

> +static DEFINE_RAW_SPINLOCK(broadcast_asid_lock);
> +static u16 last_broadcast_asid = TLB_NR_DYN_ASIDS;
> +static DECLARE_BITMAP(broadcast_asid_used, MAX_ASID_AVAILABLE) = { 0 };

I'm debating whether this should be a bitmap for "broadcast" ASIDs alone
or for all ASIDs.

> +static LIST_HEAD(broadcast_asid_list);
> +static int broadcast_asid_available = MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS - 1;
> +
> +static void reset_broadcast_asid_space(void)
> +{
> +	mm_context_t *context;
> +
> +	lockdep_assert_held(&broadcast_asid_lock);
> +
> +	/*
> +	 * Flush once when we wrap around the ASID space, so we won't need
> +	 * to flush every time we allocate an ASID for boradcast flushing.

	                                               ^ broadcast

> +	 */
> +	invlpgb_flush_all_nonglobals();
> +	tlbsync();
> +
> +	/*
> +	 * Leave the currently used broadcast ASIDs set in the bitmap, since
> +	 * those cannot be reused before the next wraparound and flush..
> +	 */
> +	bitmap_clear(broadcast_asid_used, 0, MAX_ASID_AVAILABLE);
> +	list_for_each_entry(context, &broadcast_asid_list, broadcast_asid_list)
> +		__set_bit(context->broadcast_asid, broadcast_asid_used);
> +
> +	last_broadcast_asid = TLB_NR_DYN_ASIDS;
> +}

'TLB_NR_DYN_ASIDS' is special here.  Could it please be made more clear
what it means *logically*?

> +static u16 get_broadcast_asid(void)
> +{
> +	lockdep_assert_held(&broadcast_asid_lock);
> +
> +	do {
> +		u16 start = last_broadcast_asid;
> +		u16 asid = find_next_zero_bit(broadcast_asid_used, MAX_ASID_AVAILABLE, start);
> +
> +		if (asid >= MAX_ASID_AVAILABLE) {
> +			reset_broadcast_asid_space();
> +			continue;
> +		}
> +
> +		/* Try claiming this broadcast ASID. */
> +		if (!test_and_set_bit(asid, broadcast_asid_used)) {
> +			last_broadcast_asid = asid;
> +			return asid;
> +		}
> +	} while (1);
> +}

I think it was said elsewhere, but the "try" logic doesn't make a lot of
sense to me when it's all protected by a global lock.

> +/*
> + * Returns true if the mm is transitioning from a CPU-local ASID to a broadcast
> + * (INVLPGB) ASID, or the other way around.
> + */
> +static bool needs_broadcast_asid_reload(struct mm_struct *next, u16 prev_asid)
> +{
> +	u16 broadcast_asid = mm_broadcast_asid(next);
> +
> +	if (broadcast_asid && prev_asid != broadcast_asid)
> +		return true;
> +
> +	if (!broadcast_asid && is_broadcast_asid(prev_asid))
> +		return true;
> +
> +	return false;
> +}
> +
> +void destroy_context_free_broadcast_asid(struct mm_struct *mm)
> +{
> +	if (!mm->context.broadcast_asid)
> +		return;
> +
> +	guard(raw_spinlock_irqsave)(&broadcast_asid_lock);
> +	mm->context.broadcast_asid = 0;
> +	list_del(&mm->context.broadcast_asid_list);
> +	broadcast_asid_available++;
> +}
> +
> +static bool mm_active_cpus_exceeds(struct mm_struct *mm, int threshold)
> +{

This function is pretty important.  It's kinda missing a comment about
its theory of operation.

> +	int count = 0;
> +	int cpu;
> +
> +	if (cpumask_weight(mm_cpumask(mm)) <= threshold)
> +		return false;

There's a lot of potential redundancy between this check and the one
below.  I assume this sequence was designed for performance: first, do a
cheap, one-stop-shopping check on mm_cpumask().  If it looks ok, then go
marauding around in a bunch of per_cpu() cachelines in a much more
expensive but precise search.  Could we spell some of that out
explicitly, please?
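To be concrete, something along these lines is what I'm after (the
comment wording is mine and only annotates the part of the function
quoted here):

	/*
	 * Cheap first pass that only touches mm_cpumask() itself.  The
	 * mask can over-count because it may still contain CPUs that are
	 * no longer running this mm, so if even the over-count is at or
	 * below the threshold, the answer is definitely no.
	 */
	if (cpumask_weight(mm_cpumask(mm)) <= threshold)
		return false;

	/*
	 * Expensive but precise second pass: poke at each CPU's
	 * cpu_tlbstate and only count the CPUs actually running this mm.
	 */
	for_each_cpu(cpu, mm_cpumask(mm)) {
		...
	}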
> +	for_each_cpu(cpu, mm_cpumask(mm)) {
> +		/* Skip the CPUs that aren't really running this process. */
> +		if (per_cpu(cpu_tlbstate.loaded_mm, cpu) != mm)
> +			continue;

This is the only place I know of where 'cpu_tlbstate' is read from a
non-local CPU.  This is fundamentally racy as hell and needs some heavy
commenting about why this raciness is OK.
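If the justification is the one I'm guessing at, a comment along these
lines above the read would help (my wording, and a guess at the actual
invariant, so please fix it up if the reasoning is different):

	/*
	 * Reading a remote CPU's loaded_mm is inherently racy: that CPU
	 * can switch to another mm the instant after we look.  That is
	 * tolerable here because the count only feeds a heuristic for
	 * when to move this mm to a broadcast ASID; a stale answer makes
	 * the transition happen a bit earlier or later, it cannot cause
	 * a TLB flush to be missed.
	 */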