From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
	David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
	"Liam R. Howlett", Vlastimil Babka, Harry Yoo
Howlett" , Vlastimil Babka , Harry Yoo Cc: Ryan Roberts , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org Subject: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper Date: Mon, 9 Jun 2025 11:31:30 +0100 Message-ID: <20250609103132.447370-1-ryan.roberts@arm.com> X-Mailer: git-send-email 2.43.0 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 059E21A0004 X-Stat-Signature: cwmueegy8bedeb4mm8oyzdaaxnirdo6y X-Rspam-User: X-HE-Tag: 1749465106-33260 X-HE-Meta: U2FsdGVkX19wm8itOWIEOLGEuphqf6B+o/nDicQcgHlbtgfOoTPvzU5mBq9Wlcov0pl4J1lyhaos4nYBhJD9fr01w4FF9AGKAvlaGq/D76mbvahei+kX6Nt8r3ihk7id70wG0ryMH8/8Ee6XeO8LZdxrZA8r9rNn2GScvxPfbw1oIHBY+BOz8mxgUeZt054DKW7k9teDugq0ftpzXFKdxDV8WmQF2XPWPoItsE1xt5kHvKvoo/KsQ3p966CAVK2Cqi1+FuU8J71CM2n9BIR99rMqSWvYspK/FtIzex3reNsHptYbDw8Y4s4+XZYy9QKzOr/J04BBC+PTqaw8PAc5qGTuCtA87FnDpTp+6wPC9vMmm1ZPmhqY/pEtjtLCRFQubkmPaKS0yDquLn/VVWTN15e4uee18FaZQLKZziak/pIkpSsQ1YWIYG3tNLRM0xRSh/q3SnVRukitzVCbNDulIchmtTH3miSLErQ5lsLCtRyKxch6A8y8OLcCj9Qg2fbk5KRkWTcaRVoaasWDgt9y27xq1b7aSCZYuV91VgEaOU0fSsSz3zwxFJQNfXhsGG7IJIgN3cFwKljnoLSFy2+N1tHYnkfjAm0RV817PX0mbCqymq7GZc8vLmZN+v/RI57NAlngg0A5kdBH0+tFivdTqy38nhRiaxvQ4+2cfdZAFnYS7yiDHs55UWfZdjsr3gRNVYasj6IZ4L87o7n9CGHMlXdwamAsSc5Y9AZcfW1vyZ29/bcE1e1XwBuSVI+mXA/X7elJ2oUwI+cp1ZdBNgCnCMAnUNN+8w51lA60RJVgEaOP+DNn7ztODj7CeBFWwdUl2nOyjnM4Im1GCPPUMz+GzxGxDTYXTwUXNZthQ4Xl5EA+oSEEkgsTO1tqkdvvUGqKVuu9tfaSX4yHWmbSxgDlKZdOgVAm0I23Wkn2SUUacB4qErw3w5cx9xrWG8hyr8fj9BPaudsOjp0VRH0Eg+M OXvpMMVs bLtpxbOT3FszUpys9X23N+T7bLZMa5usivdgZ1A+H8klTBjpPwZQM5JxLoht1Cdp4/ZIqJkiXTXLCkYZFCxCxG5vdEnv9QORJerUnwYEvQ+mO2ttNP3MS4+CWqSCG1KCZTwaIqRaAGuCsik/iLLjy6GWxf3T1d0Wpp6GmSTnrc08OCgL98mZdU2Zp6fRUxAfOFmbG1HHXx/nEpK78fo0edHf45FOnulgXw6hJCMxEOZlifOnZiyRx2z3BOrR71i7OiQbFeiMPa0EmlXD0TwYB7bamlQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale TLB entry remains valid"), all arches that use tlbbatch for reclaim (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a flush_tlb_mm(). So let's simplify by removing the unnecessary abstraction and doing the flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively reverts commit db6c1f6f236d ("mm/tlbbatch: introduce arch_flush_tlb_batched_pending()"). Suggested-by: Will Deacon Signed-off-by: Ryan Roberts --- arch/arm64/include/asm/tlbflush.h | 11 ----------- arch/riscv/include/asm/tlbflush.h | 1 - arch/riscv/mm/tlbflush.c | 5 ----- arch/x86/include/asm/tlbflush.h | 5 ----- mm/rmap.c | 2 +- 5 files changed, 1 insertion(+), 23 deletions(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index aa9efee17277..18a5dc0c9a54 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm) return true; } -/* - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure - * all the previously issued TLBIs targeting mm have completed. But since we - * can be executing on a remote CPU, a DSB cannot guarantee this like it can - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm. 
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index aa9efee17277..18a5dc0c9a54 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 	return true;
 }
 
-/*
- * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
- * all the previously issued TLBIs targeting mm have completed. But since we
- * can be executing on a remote CPU, a DSB cannot guarantee this like it can
- * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
- */
-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 /*
  * To support TLB batched flush for multiple pages unmapping, we only send
  * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 1a20dd746a49..eed0abc40514 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 		struct mm_struct *mm, unsigned long start, unsigned long end);
-void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 extern unsigned long tlb_flush_all_threshold;
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e737ba7949b1..8404530ec00f 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
-void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(NULL, &batch->cpumask,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e9b81876ebe4..00daedfefc1b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }
 
-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 static inline bool pte_flags_need_flush(unsigned long oldflags,
diff --git a/mm/rmap.c b/mm/rmap.c
index fb63d9256f09..fd160ddaa980 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
 
 	if (pending != flushed) {
-		arch_flush_tlb_batched_pending(mm);
+		flush_tlb_mm(mm);
 		/*
 		 * If the new TLB flushing is pending during flushing, leave
 		 * mm->tlb_flush_batched as is, to avoid losing flushing.
-- 
2.43.0