linux-mm.kvack.org archive mirror
From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
	Kevin Brodsky <kevin.brodsky@arm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Andreas Larsson <andreas@gaisler.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	Jann Horn <jannh@google.com>, Juergen Gross <jgross@suse.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Madhavan Srinivasan <maddy@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ryan Roberts <ryan.roberts@arm.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vlastimil Babka <vbabka@suse.cz>, Will Deacon <will@kernel.org>,
	Yeoreum Yun <yeoreum.yun@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
	xen-devel@lists.xenproject.org, x86@kernel.org
Subject: [PATCH v3 01/13] powerpc/64s: Do not re-activate batched TLB flush
Date: Wed, 15 Oct 2025 09:27:15 +0100	[thread overview]
Message-ID: <20251015082727.2395128-2-kevin.brodsky@arm.com> (raw)
In-Reply-To: <20251015082727.2395128-1-kevin.brodsky@arm.com>

From: Alexander Gordeev <agordeev@linux.ibm.com>

Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
lazy mmu mode"), a task cannot be preempted while in lazy MMU mode.
The batch re-activation code is therefore never called; remove it.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
This patch was originally posted as part of [1]; the series was not
taken but this patch remains relevant.

[1] https://lore.kernel.org/all/cover.1749747752.git.agordeev@linux.ibm.com/
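
For context, here is a simplified sketch of why the re-activation path is
dead code. This is not the literal kernel source; it paraphrases the hash-MMU
enter/leave helpers as they stand after b9ef323ea168, with names
(ppc64_tlb_batch, __flush_tlb_pending) taken from the code this patch touches:

```c
/* Entering lazy MMU mode disables preemption before activating the batch: */
static inline void arch_enter_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	preempt_disable();			/* no context switch from here... */
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	batch->active = 1;
}

static inline void arch_leave_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	if (batch->index)
		__flush_tlb_pending(batch);
	batch->active = 0;
	preempt_enable();			/* ...until here */
}
```

Since __switch_to() can only run between these two points when preemption is
enabled, it can never observe batch->active != 0, so the _TLF_LAZY_MMU
save/restore removed below could never trigger.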
---
 arch/powerpc/include/asm/thread_info.h |  2 --
 arch/powerpc/kernel/process.c          | 25 -------------------------
 2 files changed, 27 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index b0f200aba2b3..97f35f9b1a96 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -154,12 +154,10 @@ void arch_setup_new_exec(void);
 /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
 #define TLF_NAPPING		0	/* idle thread enabled NAP mode */
 #define TLF_SLEEPING		1	/* suspend code enabled SLEEP mode */
-#define TLF_LAZY_MMU		3	/* tlb_batch is active */
 #define TLF_RUNLATCH		4	/* Is the runlatch enabled? */
 
 #define _TLF_NAPPING		(1 << TLF_NAPPING)
 #define _TLF_SLEEPING		(1 << TLF_SLEEPING)
-#define _TLF_LAZY_MMU		(1 << TLF_LAZY_MMU)
 #define _TLF_RUNLATCH		(1 << TLF_RUNLATCH)
 
 #ifndef __ASSEMBLER__
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index eb23966ac0a9..9237dcbeee4a 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1281,9 +1281,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 {
 	struct thread_struct *new_thread, *old_thread;
 	struct task_struct *last;
-#ifdef CONFIG_PPC_64S_HASH_MMU
-	struct ppc64_tlb_batch *batch;
-#endif
 
 	new_thread = &new->thread;
 	old_thread = &current->thread;
@@ -1291,14 +1288,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	WARN_ON(!irqs_disabled());
 
 #ifdef CONFIG_PPC_64S_HASH_MMU
-	batch = this_cpu_ptr(&ppc64_tlb_batch);
-	if (batch->active) {
-		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
-		if (batch->index)
-			__flush_tlb_pending(batch);
-		batch->active = 0;
-	}
-
 	/*
 	 * On POWER9 the copy-paste buffer can only paste into
 	 * foreign real addresses, so unprivileged processes can not
@@ -1369,20 +1358,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 
 #ifdef CONFIG_PPC_BOOK3S_64
-#ifdef CONFIG_PPC_64S_HASH_MMU
-	/*
-	 * This applies to a process that was context switched while inside
-	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
-	 * deactivated above, before _switch(). This will never be the case
-	 * for new tasks.
-	 */
-	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
-		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
-		batch = this_cpu_ptr(&ppc64_tlb_batch);
-		batch->active = 1;
-	}
-#endif
-
 	/*
 	 * Math facilities are masked out of the child MSR in copy_thread.
 	 * A new task does not need to restore_math because it will
-- 
2.47.0




Thread overview: 58+ messages
2025-10-15  8:27 [PATCH v3 00/13] Nesting support for lazy MMU mode Kevin Brodsky
2025-10-15  8:27 ` Kevin Brodsky [this message]
2025-10-15  8:27 ` [PATCH v3 02/13] x86/xen: simplify flush_lazy_mmu() Kevin Brodsky
2025-10-15 16:52   ` Dave Hansen
2025-10-16  7:32     ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 03/13] powerpc/mm: implement arch_flush_lazy_mmu_mode() Kevin Brodsky
2025-10-23 19:36   ` David Hildenbrand
2025-10-24 12:09     ` Kevin Brodsky
2025-10-24 14:42       ` David Hildenbrand
2025-10-24 14:54         ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 04/13] sparc/mm: " Kevin Brodsky
2025-10-23 19:37   ` David Hildenbrand
2025-10-15  8:27 ` [PATCH v3 05/13] mm: introduce CONFIG_ARCH_LAZY_MMU Kevin Brodsky
2025-10-18  9:52   ` Mike Rapoport
2025-10-20 10:37     ` Kevin Brodsky
2025-10-23 19:38       ` David Hildenbrand
2025-10-15  8:27 ` [PATCH v3 06/13] mm: introduce generic lazy_mmu helpers Kevin Brodsky
2025-10-17 15:54   ` Alexander Gordeev
2025-10-20 10:32     ` Kevin Brodsky
2025-10-23 19:52   ` David Hildenbrand
2025-10-24 12:13     ` Kevin Brodsky
2025-10-24 13:27       ` David Hildenbrand
2025-10-24 14:32         ` Kevin Brodsky
2025-10-27 16:24           ` David Hildenbrand
2025-10-28 10:34             ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 07/13] mm: enable lazy_mmu sections to nest Kevin Brodsky
2025-10-23 20:00   ` David Hildenbrand
2025-10-24 12:16     ` Kevin Brodsky
2025-10-24 13:23       ` David Hildenbrand
2025-10-24 14:33         ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 08/13] arm64: mm: replace TIF_LAZY_MMU with in_lazy_mmu_mode() Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 09/13] powerpc/mm: replace batch->active " Kevin Brodsky
2025-10-23 20:02   ` David Hildenbrand
2025-10-24 12:16     ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 10/13] sparc/mm: " Kevin Brodsky
2025-10-23 20:03   ` David Hildenbrand
2025-10-15  8:27 ` [PATCH v3 11/13] x86/xen: use lazy_mmu_state when context-switching Kevin Brodsky
2025-10-23 20:06   ` David Hildenbrand
2025-10-24 14:47     ` David Woodhouse
2025-10-24 14:51       ` David Hildenbrand
2025-10-24 15:13         ` David Woodhouse
2025-10-24 15:16           ` David Hildenbrand
2025-10-24 15:38           ` John Paul Adrian Glaubitz
2025-10-24 15:47             ` David Hildenbrand
2025-10-24 15:51               ` John Paul Adrian Glaubitz
2025-10-27 12:38                 ` David Hildenbrand
2025-10-24 22:52         ` Demi Marie Obenour
2025-10-27 12:29           ` David Hildenbrand
2025-10-27 13:32             ` Kevin Brodsky
2025-10-24 15:05       ` Kevin Brodsky
2025-10-24 15:17         ` David Woodhouse
2025-10-27 13:38           ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 12/13] mm: bail out of lazy_mmu_mode_* in interrupt context Kevin Brodsky
2025-10-23 20:08   ` David Hildenbrand
2025-10-24 12:17     ` Kevin Brodsky
2025-10-15  8:27 ` [PATCH v3 13/13] mm: introduce arch_wants_lazy_mmu_mode() Kevin Brodsky
2025-10-23 20:10   ` David Hildenbrand
2025-10-24 12:17     ` Kevin Brodsky
