From: Alexandre Ghiti <alex@ghiti.fr>
Date: Sat, 30 Dec 2023 19:26:11 +0100
Subject: Re: [PATCH 2/2] riscv: tlb: avoid tlb flushing if fullmm == 1
To: Jisheng Zhang, Will Deacon, Aneesh Kumar K. V, Andrew Morton, Nick Piggin,
 Peter Zijlstra, Catalin Marinas, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Arnd Bergmann
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org
References: <20231228084642.1765-1-jszhang@kernel.org>
 <20231228084642.1765-3-jszhang@kernel.org>
In-Reply-To: <20231228084642.1765-3-jszhang@kernel.org>
V" , Andrew Morton , Nick Piggin , Peter Zijlstra , Catalin Marinas , Paul Walmsley , Palmer Dabbelt , Albert Ou , Arnd Bergmann Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org References: <20231228084642.1765-1-jszhang@kernel.org> <20231228084642.1765-3-jszhang@kernel.org> Content-Language: en-US From: Alexandre Ghiti In-Reply-To: <20231228084642.1765-3-jszhang@kernel.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-GND-Sasl: alex@ghiti.fr X-Rspamd-Queue-Id: 0229B140022 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: whkcxhd863qei98aq7bnkutoetg46b46 X-HE-Tag: 1703960779-653508 X-HE-Meta: U2FsdGVkX1/N7G/IPyph8Jzz6bMV2Mi9WlpnIUfIV7Wf0BQmQqu6o/VqWPntyXKcXZLOaVP11xvBqcq0Fb76sB8cTnl/f/hG+Ybj+sfYYSdKdp+Q68nj/MQluj49Yv2+Xmjf7cuF3oDamshGnEl9quVlDRehL0Jbr0Gnvp2/GEB6m9ASFCq3trFuhToO+BNoKGu8zOTVcyFBN3qr4ikrC7RaMuGFaIUM1RjsPZu2GstUT2ubCiNo/eFWn+38d+5B7/m4eP9EYi81+bXiD5gxhOTOjKTU6hxh+tg111md2zcRwbHIQkq5LIeGTrMFEXz4EkTVB+ZojY7kJ+JzubEAKF3Odgh5g9VF/YaXQ1za2XDpD5df+OARjtJg8LJkSKKesUDLAMJKQd+yOTX4K+k0SWbyQCcws62WWXcPl7gdraSIC4rt5PGSwUyGD7uDjwg6pjSf9oL8e852OjCRv0qdlUUryNaFtP0hovLpV28HhQMjAx1Bc86p/oSRQ8381/WHm5hRzbkQnEpSoS+1WF18NMYvtq2vz6hQeMWRpxEmwvU3eTtzOXdBx8xPvIOpNaA7Qnv54tQ49SM9CXdqRQOsldK10AVq7FGJPYv7NPj0PIHp8nHnhDO0QNZWGkdwsIpDR4Lbv7f8jQSw7hIjQvOZFqS0Jj9tMyyWtZXnM2lk7yOS0dhurqk/8UiwUS89adlogYDx4BUIbsjwUY0Ezk8aKZBi9QTdY2Cg7eJ7OSXHU6wEN8WyUbi/Gw4hBXEyuynYPUHb0VyFF9LmYAhoAtpaoYZFq1CTfuPVltB976tBilio292LC31MLKqYm6nZfGMbkCVebwH6IBHQAFkSQlQmYl/tP4BNRpVO/RbPv1bIo7BI+S55DDlci+bLUHnlKeIfCRsoqv1EnHUWIQn6YWX6rk6PTL6MxGBKe84nQryGmxoBG9siUX+bMxHF8KkJ+Ak58Otc8qaP5yqqc6+rqbD oUZMUnfD 0kGRL1qeeidEaQbjy8hLRTTAT1DKP7FHP/GzSOz/CsrRyAJiQ2wrsBeX1YzP/caTJCM3hnKre6O8fF31EmVfpNXLjWx/YO2QW/kwhfxeVGkw1bmc3lZ1bCOdFEUS2Wf1NmgTuklJejRJ19lVeu/0RYtsoBA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Hi Jisheng, On 28/12/2023 09:46, Jisheng Zhang wrote: > The mmu_gather code sets fullmm=1 when tearing down the entire address > space for an mm_struct on exit or execve. So if the underlying platform > supports ASID, the tlb flushing can be avoided because the ASID > allocator will never re-allocate a dirty ASID. > > Use the performance of Process creation in unixbench on T-HEAD TH1520 > platform is improved by about 4%. > > Signed-off-by: Jisheng Zhang > --- > arch/riscv/include/asm/tlb.h | 9 +++++++++ > 1 file changed, 9 insertions(+) > > diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h > index 1eb5682b2af6..35f3c214332e 100644 > --- a/arch/riscv/include/asm/tlb.h > +++ b/arch/riscv/include/asm/tlb.h > @@ -12,10 +12,19 @@ static void tlb_flush(struct mmu_gather *tlb); > > #define tlb_flush tlb_flush > #include > +#include > > static inline void tlb_flush(struct mmu_gather *tlb) > { > #ifdef CONFIG_MMU > + /* > + * If ASID is supported, the ASID allocator will either invalidate the > + * ASID or mark it as used. So we can avoid TLB invalidation when > + * pulling down a full mm. > + */ Given the number of bits are limited for the ASID, at some point we'll reuse previously allocated ASID so the ASID allocator must make sure to invalidate the entries when reusing an ASID: can you point where this is done? 

Thanks,

Alex

> +	if (static_branch_likely(&use_asid_allocator) && tlb->fullmm)
> +		return;
> +
>  	if (tlb->fullmm || tlb->need_flush_all)
>  		flush_tlb_mm(tlb->mm);
>  	else