From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 9 Jun 2025 12:01:28 +0100
Subject: Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
To: Lorenzo Stoakes
Cc: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 David Hildenbrand, Rik van Riel, "Liam R. Howlett", Vlastimil Babka,
 Harry Yoo, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-mm@kvack.org
References: <20250609103132.447370-1-ryan.roberts@arm.com>
 <48375ad8-7461-446e-9002-8d326fba137b@lucifer.local>
In-Reply-To: <48375ad8-7461-446e-9002-8d326fba137b@lucifer.local>

On 09/06/2025 11:45, Lorenzo Stoakes wrote:
> On Mon, Jun 09, 2025 at 11:31:30AM +0100, Ryan Roberts wrote:
>> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
>> TLB entry remains valid"), all arches that use tlbbatch for reclaim
>> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
>> flush_tlb_mm().
>>
>> So let's simplify by removing the unnecessary abstraction and doing the
>> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
>> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
>> arch_flush_tlb_batched_pending()").
>>
>> Suggested-by: Will Deacon
>> Signed-off-by: Ryan Roberts
>
> Thanks, love to see an arch_*() helper go :)
>
> Reviewed-by: Lorenzo Stoakes

Thanks!
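For anyone skimming the archive later: the net effect is that the generic
helper in mm/rmap.c now just issues the flush itself. A rough sketch of
the end state (abbreviated from the mm/rmap.c hunk below; treat the exact
bookkeeping as approximate):

void flush_tlb_batched_pending(struct mm_struct *mm)
{
	int batch = atomic_read(&mm->tlb_flush_batched);
	int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

	if (pending != flushed) {
		/* Was arch_flush_tlb_batched_pending(mm); identical on all
		 * three implementing arches. */
		flush_tlb_mm(mm);
		/*
		 * If new flushes became pending while flushing, leave
		 * mm->tlb_flush_batched as is, to avoid losing them.
		 */
	}
}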
>
> Couple points below.
>
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 11 -----------
>>  arch/riscv/include/asm/tlbflush.h |  1 -
>>  arch/riscv/mm/tlbflush.c          |  5 -----
>>  arch/x86/include/asm/tlbflush.h   |  5 -----
>>  mm/rmap.c                         |  2 +-
>>  5 files changed, 1 insertion(+), 23 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index aa9efee17277..18a5dc0c9a54 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
>>  	return true;
>>  }
>>
>> -/*
>> - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
>> - * all the previously issued TLBIs targeting mm have completed. But since we
>> - * can be executing on a remote CPU, a DSB cannot guarantee this like it can
>> - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
>> - */
>
> Hm, are we losing information here? I guess it's hard to know where to
> put this though.

The generic version of this comment exists above flush_tlb_batched_pending()
in rmap.c. The arm64-specific description of why we need to flush the whole
mm is captured in commit 4b634918384c ("arm64/mm: Close theoretical race
where stale TLB entry remains valid"), although I accept that may not be the
first place someone looks. I don't think we should be defining arch_ helpers
just to provide a hook for some arch-specific comments, though.
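To save the next reader a grep, that generic comment reads roughly as
follows (paraphrased from memory, not a verbatim quote; the authoritative
text is the block above flush_tlb_batched_pending() in mm/rmap.c):

/*
 * Reclaim unmaps pages under the PTL but does not flush the TLB before
 * releasing the PTL when flushes are batched. A parallel operation such
 * as mprotect or munmap can therefore race with the pending flush and
 * access data via a stale TLB entry. Rather than tracking every mm with
 * a pending batch, track whether batching occurred at all and, if so,
 * flush here; the cost is one extra flush per reclaim cycle, paid by
 * the first operation at risk.
 */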
>
>> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  /*
>>   * To support TLB batched flush for multiple pages unmapping, we only send
>>   * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
>> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
>> index 1a20dd746a49..eed0abc40514 100644
>> --- a/arch/riscv/include/asm/tlbflush.h
>> +++ b/arch/riscv/include/asm/tlbflush.h
>> @@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
>>  bool arch_tlbbatch_should_defer(struct mm_struct *mm);
>>  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>>  		struct mm_struct *mm, unsigned long start, unsigned long end);
>> -void arch_flush_tlb_batched_pending(struct mm_struct *mm);
>>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>>
>>  extern unsigned long tlb_flush_all_threshold;
>> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
>> index e737ba7949b1..8404530ec00f 100644
>> --- a/arch/riscv/mm/tlbflush.c
>> +++ b/arch/riscv/mm/tlbflush.c
>> @@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>>  }
>>
>> -void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>>  {
>>  	__flush_tlb_range(NULL, &batch->cpumask,
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index e9b81876ebe4..00daedfefc1b 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>>  }
>>
>> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>>
>>  static inline bool pte_flags_need_flush(unsigned long oldflags,
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index fb63d9256f09..fd160ddaa980 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>>
>>  	if (pending != flushed) {
>> -		arch_flush_tlb_batched_pending(mm);
>> +		flush_tlb_mm(mm);
>
> I see that CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is only implemented
> in riscv (if !nommu), x86 and arm64, and therefore we are only going to
> invoke this for those arches, which previously did the same anyway, so
> this is safe.

It's also the way it used to be done before arm64 joined the party and
thought it could optimize by just issuing a DSB. I have since discovered
that the DSB approach is buggy, so arm64 has now fallen back to
flush_tlb_mm(), and the reason for the original introduction of
arch_flush_tlb_batched_pending() has gone.

Thanks,
Ryan

>
> Kinda wish we could avoid this ugly #ifdef #else #endif pattern here in
> mm/rmap.c, but it's probably necessary in this case.
>
>>  		/*
>>  		 * If the new TLB flushing is pending during flushing, leave
>>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
>> --
>> 2.43.0
>>
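P.S. For the archaeologically inclined: the "optimize with a DSB" version
that 4b634918384c removed from arm64 looked roughly like the below (from
memory, so approximate):

static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	/*
	 * Orders prior TLBIs issued by *this* CPU, but cannot guarantee
	 * that TLBIs issued from a remote CPU have completed - which is
	 * why it had to become a full flush_tlb_mm().
	 */
	dsb(ish);
}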