From: Ryan Roberts <ryan.roberts@arm.com>
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Alexei Starovoitov , Andrey Ryabinin Cc: Ryan Roberts , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, virtualization@lists.linux.dev, xen-devel@lists.xenproject.org, linux-mm@kvack.org Subject: [RFC PATCH v1 4/6] mm: Introduce arch_in_lazy_mmu_mode() Date: Fri, 30 May 2025 15:04:42 +0100 Message-ID: <20250530140446.2387131-5-ryan.roberts@arm.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250530140446.2387131-1-ryan.roberts@arm.com> References: <20250530140446.2387131-1-ryan.roberts@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Rspamd-Queue-Id: 2718C140008 X-Stat-Signature: a7eqcub1g1ztxf4grk1aatkzn9zdfi9s X-Rspam-User: X-Rspamd-Server: rspam04 X-HE-Tag: 1748613926-705663 X-HE-Meta: U2FsdGVkX1/EHBxVmUXvy8G9lLgCWlVPlQ8B+WbDG7/2GO9vJtzsWqdikfmzcLtGncoUBPAaYnmP/7vPrX4ERHk6aNGp8YgApd84XLoj4maOXmoHMOfUN92s4AHGlLs9JW+n4IhPu62yhLIdeo4KUheOBLX2wCisl422MJOEYRo9Aqs1Ood6vmySJH8BlThImJEKFXKIuhSDZwX4FU/IjE19HbRIxrcq1p0MC4PfIwuFJzEzvSJd2daBBMzyw3vJwYaYbBGya5jDlTBvTazrTz/O7XEJeTadMgXvKaNOMMouWPSxPRHmkUBE4lnqLqUdtLmHosHxcYdZbP0QLAdXt1Gq3jn572+JXk5PKvcEJzen/HXrB2CBu0LPECz3IneWncvtbF75doMBGWhIS+rtsJ6VjCxN0e2htQgIra+ru3rutvOpSt+AVOAGfqIc49ZSlIO9jGLKmil2cD6CfuhrHtC4TOP391kHJtvudnketaBNkGWQYOb0/MNakvUQiWH0Mp69DuaJbst8IqO7TSVh0UfQoZJIK/+T+Eu5oK0Xm0bA31BW4Q4u/vssk7M+i+QoaIdgacuhlmg+VDtg83RRcgSBD/nnl17EbTQ8j3JkShPOhtjbuuRZ0XgCRrDxsbBF+2UPMGuK0LwaSjNi37pOHwUtu8wC2nLc6hDtI3ekzaVx+EV1/u5YpnHPdV0FqA3ftMBPQpCOHUFjRkSwNN0PiZFaHSGoYVSfHb5N2dlLuYm+P3Q0XhjrcFtwHsh+AHwoMMcVTIODbFIxfpQ+gASXeViHDZe9x5qZg1Qks52FYq5Vqjm1cQMnB/vwL0996WjEUeabE9yXU1zPGJzrdpCNmXI4UohlBmbdOq49PM0OLcAWsdpi32JWddw06eTzAbbwQAsqozRHn9aqe+OWeRU4xpGVqdLT1ffnXC8Bxfis8HcbLcEqM7n/N1YuS0bAfPBiZk9PJWTIW4ixyCxl0c3 IOgOysD3 p0GlDEGBaq5olILI0zDKY/zkUA1W0UMHVdK0TMQ47xcRL6XskXWCjX3YXPfRL//ShTPLZjXgSJTzn0Qo7yw9QasvHHjvxRsc+sKgRNyHascvzsvSTq/B1gZdeGJCL3u6anIKcE5V+sSgPyHLSx3d3pTiF0sQhmK2f9GmC7ZvCs2jF1JcwIJ26JPPVQ0gZLUHtUG1SYot+FcYDSaehVWXrjH20E/t/v12iVEUa9trIa2zIb6taeTplwOgFvqx+W0Ia4K1R6/K/Xvgh6Cz2JqTfG6sHfQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Introduce new arch_in_lazy_mmu_mode() API, which returns true if the calling context is currently in lazy mmu mode or false otherwise. Each arch that supports lazy mmu mode must provide an implementation of this API. The API will shortly be used to prevent accidental lazy mmu mode nesting when performing an allocation, and will additionally be used to ensure pte modification vs tlb flushing order does not get inadvertantly swapped. 
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h                  |  8 ++++++++
 .../powerpc/include/asm/book3s/64/tlbflush-hash.h | 15 +++++++++++++++
 arch/sparc/include/asm/tlbflush_64.h              |  1 +
 arch/sparc/mm/tlb.c                               | 12 ++++++++++++
 arch/x86/include/asm/paravirt.h                   |  5 +++++
 arch/x86/include/asm/paravirt_types.h             |  1 +
 arch/x86/kernel/paravirt.c                        |  6 ++++++
 arch/x86/xen/mmu_pv.c                             |  6 ++++++
 include/linux/pgtable.h                           |  1 +
 9 files changed, 55 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 5285757ee0c1..add75dee49f5 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -119,6 +119,14 @@ static inline void arch_leave_lazy_mmu_mode(void)
 	clear_thread_flag(TIF_LAZY_MMU);
 }
 
+static inline bool arch_in_lazy_mmu_mode(void)
+{
+	if (in_interrupt())
+		return false;
+
+	return test_thread_flag(TIF_LAZY_MMU);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 146287d9580f..4123a9da32cc 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -57,6 +57,21 @@ static inline void arch_leave_lazy_mmu_mode(void)
 
 #define arch_flush_lazy_mmu_mode() do {} while (0)
 
+static inline bool arch_in_lazy_mmu_mode(void)
+{
+	struct ppc64_tlb_batch *batch;
+	bool active;
+
+	if (radix_enabled())
+		return false;
+
+	batch = get_cpu_ptr(&ppc64_tlb_batch);
+	active = batch->active;
+	put_cpu_ptr(&ppc64_tlb_batch);
+
+	return active;
+}
+
 extern void hash__tlbiel_all(unsigned int action);
 
 extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize,
diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index 8b8cdaa69272..204bc957df9e 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -45,6 +45,7 @@ void flush_tlb_pending(void);
 void arch_enter_lazy_mmu_mode(void);
 void arch_leave_lazy_mmu_mode(void);
 #define arch_flush_lazy_mmu_mode() do {} while (0)
+bool arch_in_lazy_mmu_mode(void);
 
 /* Local cpu only.  */
 void __flush_tlb_all(void);
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index a35ddcca5e76..83ab4ba4f4fb 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -69,6 +69,18 @@ void arch_leave_lazy_mmu_mode(void)
 	preempt_enable();
 }
 
+bool arch_in_lazy_mmu_mode(void)
+{
+	struct tlb_batch *tb;
+	bool active;
+
+	tb = get_cpu_ptr(&tlb_batch);
+	active = tb->active;
+	put_cpu_ptr(&tlb_batch);
+
+	return active;
+}
+
 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
 			      bool exec, unsigned int hugepage_shift)
 {
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b5e59a7ba0d0..c7ea3ccb8a41 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -542,6 +542,11 @@ static inline void arch_flush_lazy_mmu_mode(void)
 	PVOP_VCALL0(mmu.lazy_mode.flush);
 }
 
+static inline bool arch_in_lazy_mmu_mode(void)
+{
+	return PVOP_CALL0(bool, mmu.lazy_mode.in);
+}
+
 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 				phys_addr_t phys, pgprot_t flags)
 {
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 37a8627d8277..41001ca9d010 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -46,6 +46,7 @@ struct pv_lazy_ops {
 	void (*enter)(void);
 	void (*leave)(void);
 	void (*flush)(void);
+	bool (*in)(void);
 } __no_randomize_layout;
 #endif
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..9af1a04a47fd 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -106,6 +106,11 @@ static noinstr void pv_native_set_debugreg(int regno, unsigned long val)
 {
 	native_set_debugreg(regno, val);
 }
+
+static noinstr bool paravirt_retfalse(void)
+{
+	return false;
+}
 #endif
 
 struct pv_info pv_info = {
@@ -228,6 +233,7 @@ struct paravirt_patch_template pv_ops = {
 		.enter = paravirt_nop,
 		.leave = paravirt_nop,
 		.flush = paravirt_nop,
+		.in = paravirt_retfalse,
 	},
 
 	.mmu.set_fixmap = native_set_fixmap,
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 2a4a8deaf612..74f7a8537911 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2147,6 +2147,11 @@ static void xen_flush_lazy_mmu(void)
 	preempt_enable();
 }
 
+static bool xen_in_lazy_mmu(void)
+{
+	return xen_get_lazy_mode() == XEN_LAZY_MMU;
+}
+
 static void __init xen_post_allocator_init(void)
 {
 	pv_ops.mmu.set_pte = xen_set_pte;
@@ -2230,6 +2235,7 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
 		.enter = xen_enter_lazy_mmu,
 		.leave = xen_leave_lazy_mmu,
 		.flush = xen_flush_lazy_mmu,
+		.in = xen_in_lazy_mmu,
 	},
 
 	.set_fixmap = xen_set_fixmap,
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index b50447ef1c92..580d9971f435 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -235,6 +235,7 @@ static inline int pmd_dirty(pmd_t pmd)
 #define arch_enter_lazy_mmu_mode()	do {} while (0)
 #define arch_leave_lazy_mmu_mode()	do {} while (0)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
+#define arch_in_lazy_mmu_mode()		false
 #endif
 
 #ifndef pte_batch_hint
-- 
2.43.0