From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54116C47074 for ; Thu, 4 Jan 2024 13:00:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 74AA76B02FB; Thu, 4 Jan 2024 08:00:51 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 6F9D78D0073; Thu, 4 Jan 2024 08:00:51 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5C1148D006C; Thu, 4 Jan 2024 08:00:51 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 4C6716B02FB for ; Thu, 4 Jan 2024 08:00:51 -0500 (EST) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 24620160500 for ; Thu, 4 Jan 2024 13:00:51 +0000 (UTC) X-FDA: 81641638302.19.B983EC0 Received: from relay4-d.mail.gandi.net (relay4-d.mail.gandi.net [217.70.183.196]) by imf28.hostedemail.com (Postfix) with ESMTP id F0EF1C0032 for ; Thu, 4 Jan 2024 13:00:48 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=none; dmarc=none; spf=pass (imf28.hostedemail.com: domain of alex@ghiti.fr designates 217.70.183.196 as permitted sender) smtp.mailfrom=alex@ghiti.fr ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1704373249; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=crMN6klWskcEAZPwkQIyellJ5Jw4znZBfIWKYAuKBcY=; b=GvAmQLTUC9gbGft26rjas9adPSQr14dhwK84U1z87PkUzURs7Sdumk634ega9N5BFlQPE8 ukPA7YWvpHX+o6OJaQ/bfNK7lHOtzFmtiHtS0yhWBqKEK+hBA/D4Idv1MNu5+zvri3Plvm tTKhdBxYK9yDTkX+G1D/4HT+enJl1as= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=none; dmarc=none; spf=pass (imf28.hostedemail.com: domain of alex@ghiti.fr designates 217.70.183.196 as permitted sender) smtp.mailfrom=alex@ghiti.fr ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1704373249; a=rsa-sha256; cv=none; b=nIxLtkf3Qp7jL51NJvsugAMMHx8KU9w4u4uaCJm2aVGB9i3DhQmFnVGV6aA4IZBEj5K0xh n8eywkSfOBF9aUMFgb0NJzpHk/We6xuGBL8iQv4ziL3+ZY2N2y4OVSlQXr+/U70QzxvIQC Ac2vpRsrgl5LW0PCOw2PvA66DV3M55k= Received: by mail.gandi.net (Postfix) with ESMTPSA id F3D03E0002; Thu, 4 Jan 2024 13:00:44 +0000 (UTC) Message-ID: <6685594f-5a50-4e4d-b6ec-2834e9f8624f@ghiti.fr> Date: Thu, 4 Jan 2024 14:00:44 +0100 MIME-Version: 1.0 User-Agent: Mozilla Thunderbird Subject: Re: [PATCH 2/2] riscv: tlb: avoid tlb flushing if fullmm == 1 Content-Language: en-US To: Jisheng Zhang Cc: Will Deacon , "Aneesh Kumar K . V" , Andrew Morton , Nick Piggin , Peter Zijlstra , Catalin Marinas , Paul Walmsley , Palmer Dabbelt , Albert Ou , Arnd Bergmann , linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org References: <20231228084642.1765-1-jszhang@kernel.org> <20231228084642.1765-3-jszhang@kernel.org> From: Alexandre Ghiti In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-GND-Sasl: alex@ghiti.fr X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: F0EF1C0032 X-Stat-Signature: fj96x59jopc56dwsd1i7ofxpxxcbebqr X-Rspam-User: X-HE-Tag: 1704373248-587925 X-HE-Meta: U2FsdGVkX18m58BsMYrIC6CAi2mg0YscrlatdnmrxI9wWacYNO7+uHrKIpyeF7ruytxvjdmk6fuN/aMQnqtjEt4OiozzQPm0sMPaR0/EvDao2WEaPZ1Fv3PW1xxOF4MCeaj0hNMQHRPs94PNNTl6kGRu6ZefZ9XEjCyLfdmoSmsRT66MdXmeMRFdP3dmoMyDgceJDe3H/wgUZPG4z31Xl7c0SM8Xp2hd/OaIng4wkoX4RVQ3A+Ir7yXdFmF3WuKxhVrNQSLUdvGntNGiq9VATxHHGFdz6HZpVH0SPfKa6G1dlvSg2LIYeFsZWGwXYSVdhJKWhEF8H4oIU8xfLxfztHYvvnuH+NUO+m3MTQDlFJB7d12Nmks+WPJt22sSU9VPkgKy98oTq9PE/s2VUT5tpt6DeoRv3EeRLtCyWMPw7+LduRgzFkYHrcyqk8gd5QVbmsoS/G4r9vS6gS7WeOhq0d02EPXzz3DAmIBNo2pp+e2bC2er8wvXYFosBx4QY4AEdl5PTzR30xzERQO2aiMVYt6OWgC6ib4MdM34kk1okytKv7CklVrYRU3XCDJxC+xogVmSyZ0CTdVXAqlbXr0fb4OPtOhglSuD0PLHrbeXgL351Yk04q6rwNWB+/RLQqS5qnSY3Pvl24pz17Hc5SrszeH1j5vLKgzSOYyJ6H5tBH/Unh13WQlVaRev8W6JzcXPslqVUZKSZDxWnhNWEDqrHgYOOE12/6sjvUToPVIsBQObSbNfMpNhhQEaDfMEN8viIbeizIq81I7TghSmAfaLgvmRudCGmQvdGEYbGR6N7HzyC8wG2rRsBvlJ0izZVHXn1qYxlUhC0Vet5yZMW/rdnTGDfyc1pK3mRNLCJ30JJiSIkg1h3jPM1ruF27UwD+iUdN15QpD2bEZjQOug2SdezRopV50pOdOqdq6wikj6MhlzsDy5C4QqWap2rhCZVLezTLvwWlQ4KkzAJIiRsG/ HOTxc84d ryrpxG+ngrTzUaVqFWlN2eUM0ZHKmiyB3QyJFJhnfKwm6RT1JQZmVaP+9v5aRi64vp5AHxiR41dyHOZf2xWnMhG/GBSQ5Q7Jfo6PtTa5ZHfHzp+AsAtzfPsfG1dm54JsK3gOyWprHB1HBCS9lU0P+/lN6kH33g+f+cWBMvW2tWrY7q25tu4bfsP+WkkDqulHoX6sEpEQ037IQDXEj5ts9U9I7GSNoTBE1VIEMBk5ll02l8rfu/76S81581x0ohdn56aTBszsejBCTA1Q= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 02/01/2024 04:12, Jisheng Zhang wrote: > On Sat, Dec 30, 2023 at 07:26:11PM +0100, Alexandre Ghiti wrote: >> Hi Jisheng, > Hi Alex, > >> On 28/12/2023 09:46, Jisheng Zhang wrote: >>> The mmu_gather code sets fullmm=1 when tearing down the entire address >>> space for an mm_struct on exit or execve. So if the underlying platform >>> supports ASID, the tlb flushing can be avoided because the ASID >>> allocator will never re-allocate a dirty ASID. >>> >>> Use the performance of Process creation in unixbench on T-HEAD TH1520 >>> platform is improved by about 4%. >>> >>> Signed-off-by: Jisheng Zhang >>> --- >>> arch/riscv/include/asm/tlb.h | 9 +++++++++ >>> 1 file changed, 9 insertions(+) >>> >>> diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h >>> index 1eb5682b2af6..35f3c214332e 100644 >>> --- a/arch/riscv/include/asm/tlb.h >>> +++ b/arch/riscv/include/asm/tlb.h >>> @@ -12,10 +12,19 @@ static void tlb_flush(struct mmu_gather *tlb); >>> #define tlb_flush tlb_flush >>> #include >>> +#include >>> static inline void tlb_flush(struct mmu_gather *tlb) >>> { >>> #ifdef CONFIG_MMU >>> + /* >>> + * If ASID is supported, the ASID allocator will either invalidate the >>> + * ASID or mark it as used. So we can avoid TLB invalidation when >>> + * pulling down a full mm. >>> + */ >> >> Given the number of bits are limited for the ASID, at some point we'll reuse >> previously allocated ASID so the ASID allocator must make sure to invalidate >> the entries when reusing an ASID: can you point where this is done? > Per my understanding of the code, the path would be > set_mm_asid() > __new_context() > __flush_context() // set context_tlb_flush_pending > if (need_flush_tlb) > local_flush_tlb_all() Ok thanks, so feel free to add: Reviewed-by: Alexandre Ghiti Thanks! Alex > > Thanks > >>> + if (static_branch_likely(&use_asid_allocator) && tlb->fullmm) >>> + return; >>> + >>> if (tlb->fullmm || tlb->need_flush_all) >>> flush_tlb_mm(tlb->mm); >>> else > _______________________________________________ > linux-riscv mailing list > linux-riscv@lists.infradead.org > http://lists.infradead.org/mailman/listinfo/linux-riscv