[PATCH] arm64/mm: Disable barrier batching in interrupt contexts
From: Ryan Roberts @ 2025-05-12 10:22 UTC
  To: Catalin Marinas, Will Deacon, Pasha Tatashin, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, David Hildenbrand,
	Matthew Wilcox (Oracle),
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky
  Cc: Ryan Roberts, linux-arm-kernel, linux-mm, linux-kernel,
	syzbot+5c0d9392e042f41d45c5

Commit 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel
mappings") enabled arm64 kernels to track "lazy mmu mode" using TIF
flags in order to defer barriers until exiting the mode. At the same
time, it added warnings to check that pte manipulations were never
performed in interrupt context, because the tracking implementation
could not deal with nesting.
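
For reference, the batching logic at the heart of this looks roughly as
follows; a simplified sketch reconstructed from the hunk below, not the
verbatim header code:

	static inline void queue_pte_barriers(void)
	{
		unsigned long flags = read_thread_flags();

		if (flags & BIT(TIF_LAZY_MMU)) {
			/* Lazy mmu mode: defer the dsb/isb and just
			 * record that barriers are pending. */
			if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
				set_thread_flag(TIF_LAZY_MMU_PENDING);
		} else {
			/* Not in lazy mmu mode: emit barriers now. */
			emit_pte_barriers();
		}
	}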

But it turns out that some debug features (e.g. KFENCE, DEBUG_PAGEALLOC)
do manipulate ptes in softirq context, which triggered the warnings.
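
For example, one plausible KFENCE path (illustrative; the exact chain in
the syzbot report may differ) is freeing a KFENCE-guarded object from an
RCU callback, which runs in softirq context:

	kfree()				/* from an RCU callback => softirq */
	  __kfence_free()
	    kfence_protect()		/* re-protect the guard page */
	      set_memory_valid()
	        apply_to_page_range()	/* enters lazy mmu mode... */
	          __set_pte()		/* ...and updates a pte, all with
					 * in_interrupt() == true */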

So let's take the simplest and safest route and disable the batching
optimization in interrupt contexts. This makes these users no worse off
than prior to the optimization. Additionally, the known offenders are
debug features that only manipulate a single PTE, so there is no
performance gain anyway.

There may be some obscure case of encrypted/decrypted DMA where
dma_free_coherent() is called from interrupt context, but again, such a
caller is no worse off than prior to the commit.

Some options for supporting nesting were considered, but there is a
difficult-to-solve problem if any code manipulates ptes within interrupt
context but *outside of* a lazy mmu region. If such a case exists, the
code would expect its updates to take effect immediately, but because
the interrupted task context may already be in lazy mmu mode, the
updates would be deferred, which could cause incorrect behaviour. This
problem is avoided by ensuring that updates made within interrupt
context are always immediate.
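
Concretely, the interleaving that would go wrong looks like this
(illustrative):

	/* task context */
	arch_enter_lazy_mmu_mode();	/* TIF_LAZY_MMU now set */
	__set_pte(...);			/* barriers deferred, as intended */

		/* interrupt arrives; handler updates a pte *outside*
		 * any lazy mmu region: */
		__set_pte(...);		/* caller expects this to take
					 * effect immediately, but with
					 * TIF_LAZY_MMU still set the
					 * barriers would wrongly be
					 * deferred */

	arch_leave_lazy_mmu_mode();	/* deferred barriers emitted here */

With this patch, queue_pte_barriers() emits the barriers immediately
whenever in_interrupt() is true, so an interrupt-time update can never
be left pending.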

Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings")
Reported-by: syzbot+5c0d9392e042f41d45c5@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-arm-kernel/681f2a09.050a0220.f2294.0006.GAE@google.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---

Hi Will,

I've tested before and after with KFENCE enabled and it solves the issue. I've
also run all the mm-selftests, which continue to pass.

Catalin suggested that a Fixes patch targeting the SHA as it is in for-next/mm
was the preferred approach, but shout if you want something different. I'm hoping
that with this fix we can still make it for this cycle, subject to not finding
any more issues.

Thanks,
Ryan


 arch/arm64/include/asm/pgtable.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ab4a1b19e596..e65083ec35cb 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -64,7 +64,11 @@ static inline void queue_pte_barriers(void)
 {
 	unsigned long flags;

-	VM_WARN_ON(in_interrupt());
+	if (in_interrupt()) {
+		emit_pte_barriers();
+		return;
+	}
+
 	flags = read_thread_flags();

 	if (flags & BIT(TIF_LAZY_MMU)) {
@@ -79,7 +83,9 @@ static inline void queue_pte_barriers(void)
 #define  __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	VM_WARN_ON(in_interrupt());
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));

 	set_thread_flag(TIF_LAZY_MMU);
@@ -87,12 +93,18 @@ static inline void arch_enter_lazy_mmu_mode(void)

 static inline void arch_flush_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return;
+
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
 }

 static inline void arch_leave_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return;
+
 	arch_flush_lazy_mmu_mode();
 	clear_thread_flag(TIF_LAZY_MMU);
 }
--
2.43.0


