* [PATCH v1 0/4] Fix lazy mmu mode
@ 2025-03-02 14:55 Ryan Roberts
  2025-03-02 14:55 ` [PATCH v1 1/4] mm: Fix lazy mmu docs and usage Ryan Roberts
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Ryan Roberts @ 2025-03-02 14:55 UTC (permalink / raw)
  To: Andrew Morton, David S. Miller, Andreas Larsson, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: Ryan Roberts, linux-mm, sparclinux, xen-devel, linux-kernel

Hi All,

I'm planning to implement lazy mmu mode for arm64 to optimize vmalloc. As part
of that, I will extend lazy mmu mode to cover kernel mappings in vmalloc table
walkers. While lazy mmu mode is already used for kernel mappings in a few
places, this will extend its use significantly.

Having reviewed the existing lazy mmu implementations in powerpc, sparc and x86,
it looks like there are a bunch of bugs, some of which may be more likely to
trigger once I extend the use of lazy mmu. So this series attempts to clarify
the requirements and fix all the bugs in advance of that series. See patch #1
commit log for all the details.
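
For reference, the usage pattern the clarified spec expects is a single,
non-nested region wrapping a batch of pte updates. A minimal sketch (the
loop shape and variable names are illustrative only, not lifted from any
particular caller):

	arch_enter_lazy_mmu_mode();
	for (; nr-- > 0; ptep++, pte = pte_next_pfn(pte))
		set_pte(ptep, pte);	/* arch may defer/batch this */
	arch_leave_lazy_mmu_mode();	/* deferred updates become visible */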

Note that I have only been able to compile test these changes so appreciate any
help in testing.

Applies on Friday's mm-unstable (5f089a9aa987), as I assume this would be
preferred via that tree.

Thanks,
Ryan

Ryan Roberts (4):
  mm: Fix lazy mmu docs and usage
  sparc/mm: Disable preemption in lazy mmu mode
  sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes
  Revert "x86/xen: allow nesting of same lazy mode"

 arch/sparc/include/asm/pgtable_64.h   |  2 --
 arch/sparc/mm/tlb.c                   |  5 ++++-
 arch/x86/include/asm/xen/hypervisor.h | 15 ++-------------
 arch/x86/xen/enlighten_pv.c           |  1 -
 fs/proc/task_mmu.c                    | 11 ++++-------
 include/linux/pgtable.h               | 14 ++++++++------
 6 files changed, 18 insertions(+), 30 deletions(-)

--
2.43.0




* [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-02 14:55 [PATCH v1 0/4] Fix lazy mmu mode Ryan Roberts
@ 2025-03-02 14:55 ` Ryan Roberts
  2025-03-03  8:49   ` David Hildenbrand
  2025-03-02 14:55 ` [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode Ryan Roberts
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Ryan Roberts @ 2025-03-02 14:55 UTC (permalink / raw)
  To: Andrew Morton, David S. Miller, Andreas Larsson, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: Ryan Roberts, linux-mm, sparclinux, xen-devel, linux-kernel

The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
are a bit of a mess (to put it politely). There are a number of issues
related to nesting of lazy mmu regions and confusion over whether the
task, when in a lazy mmu region, is preemptible or not. Fix all the
issues relating to the core-mm. Follow up commits will fix the
arch-specific implementations. 3 arches implement lazy mmu: powerpc,
sparc and x86.

When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
expected that lazy mmu regions would never nest and that the appropriate
page table lock(s) would be held while in the region, thus ensuring the
region is non-preemptible. Additionally lazy mmu regions were only used
during manipulation of user mappings.

Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
updates") started invoking the lazy mmu mode in apply_to_pte_range(),
which is used for both user and kernel mappings. For kernel mappings the
region is no longer protected by any lock so there is no longer any
guarantee about non-preemptibility. Additionally, for RT configs,
holding the PTL only implies no CPU migration; it doesn't prevent
preemption.
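
To illustrate, apply_to_pte_range() takes the PTL only for user mms; its
shape is roughly as follows (a simplified sketch, not a verbatim copy of
mm/memory.c):

	if (mm == &init_mm)
		pte = pte_offset_kernel(pmd, addr);	/* no PTL taken */
	else
		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	mapped_pte = pte;

	arch_enter_lazy_mmu_mode();
	do {
		err = fn(pte++, addr, data);	/* may update ptes */
	} while (addr += PAGE_SIZE, addr != end);
	arch_leave_lazy_mmu_mode();

	if (mm != &init_mm)
		pte_unmap_unlock(mapped_pte, ptl);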

Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
set_ptes(), used by x86. So after this commit, lazy mmu regions can be
nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
page table range API") and commit 9fee28baa601 ("powerpc: implement the
new page table range API") did the same for the sparc and powerpc
set_ptes() overrides.

powerpc couldn't deal with preemption so avoids it in commit
b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
which explicitly disables preemption for the whole region in its
implementation. x86 can support preemption (or at least it could until
it tried to add support for nesting; more on this below). Sparc looks to be
totally broken in the face of preemption, as far as I can tell.

powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
which removes the lazy mmu calls from its implementation of set_ptes().
x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
nesting of same lazy mode") but as far as I can tell, this breaks its
support for preemption.

In short, it's all a mess; the semantics for
arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
result the implementations all have different expectations, sticking
plasters and bugs.

arm64 is aiming to start using these hooks, so let's clean everything up
before adding an arm64 implementation. Update the documentation to state
that lazy mmu regions can never be nested, must not be called in
interrupt context and preemption may or may not be enabled for the
duration of the region.
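
For example, an implementation that keeps its lazy state in per-cpu data
must, under this spec, pin the task itself, roughly like this (a
hypothetical sketch; lazy_active is a made-up per-cpu flag, see the sparc
patch later in this series for a real example):

	static DEFINE_PER_CPU(int, lazy_active);

	void arch_enter_lazy_mmu_mode(void)
	{
		preempt_disable();		/* protect per-cpu state */
		__this_cpu_write(lazy_active, 1);
	}

	void arch_leave_lazy_mmu_mode(void)
	{
		/* flush any pending batched updates here */
		__this_cpu_write(lazy_active, 0);
		preempt_enable();
	}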

Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
called in pagemap_scan_pmd_entry() to follow the normal pattern of
holding the ptl for user space mappings. As a result the scope is
reduced to only the pte table, but that's where most of the performance
win is. While I believe there wasn't technically a bug here, the
original scope made it easier to accidentally nest or, worse,
accidentally call something like kmap() which would expect an immediate
mode pte modification but it would end up deferred.

arch-specific fixes to conform to the new spec will follow this one.

These issues were spotted by code review and I have no evidence of
issues being reported in the wild.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 fs/proc/task_mmu.c      | 11 ++++-------
 include/linux/pgtable.h | 14 ++++++++------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c17615e21a5d..b0f189815512 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2459,22 +2459,19 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 	spinlock_t *ptl;
 	int ret;
 
-	arch_enter_lazy_mmu_mode();
-
 	ret = pagemap_scan_thp_entry(pmd, start, end, walk);
-	if (ret != -ENOENT) {
-		arch_leave_lazy_mmu_mode();
+	if (ret != -ENOENT)
 		return ret;
-	}
 
 	ret = 0;
 	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
 	if (!pte) {
-		arch_leave_lazy_mmu_mode();
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
 
+	arch_enter_lazy_mmu_mode();
+
 	if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !p->vec_out) {
 		/* Fast path for performing exclusive WP */
 		for (addr = start; addr != end; pte++, addr += PAGE_SIZE) {
@@ -2543,8 +2540,8 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 	if (flush_end)
 		flush_tlb_range(vma, start, addr);
 
-	pte_unmap_unlock(start_pte, ptl);
 	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(start_pte, ptl);
 
 	cond_resched();
 	return ret;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 94d267d02372..787c632ee2c9 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -222,10 +222,14 @@ static inline int pmd_dirty(pmd_t pmd)
  * hazard could result in the direct mode hypervisor case, since the actual
  * write to the page tables may not yet have taken place, so reads though
  * a raw PTE pointer after it has been modified are not guaranteed to be
- * up to date.  This mode can only be entered and left under the protection of
- * the page table locks for all page tables which may be modified.  In the UP
- * case, this is required so that preemption is disabled, and in the SMP case,
- * it must synchronize the delayed page table writes properly on other CPUs.
+ * up to date.
+ *
+ * In the general case, no lock is guaranteed to be held between entry and exit
+ * of the lazy mode. So the implementation must assume preemption may be enabled
+ * and cpu migration is possible; it must take steps to be robust against this.
+ * (In practice, for user PTE updates, the appropriate page table lock(s) are
+ * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
+ * and the mode cannot be used in interrupt context.
  */
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 #define arch_enter_lazy_mmu_mode()	do {} while (0)
@@ -287,7 +291,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 {
 	page_table_check_ptes_set(mm, ptep, pte, nr);
 
-	arch_enter_lazy_mmu_mode();
 	for (;;) {
 		set_pte(ptep, pte);
 		if (--nr == 0)
@@ -295,7 +298,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		ptep++;
 		pte = pte_next_pfn(pte);
 	}
-	arch_leave_lazy_mmu_mode();
 }
 #endif
 #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
-- 
2.43.0




* [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode
  2025-03-02 14:55 [PATCH v1 0/4] Fix lazy mmu mode Ryan Roberts
  2025-03-02 14:55 ` [PATCH v1 1/4] mm: Fix lazy mmu docs and usage Ryan Roberts
@ 2025-03-02 14:55 ` Ryan Roberts
  2025-03-03  8:51   ` David Hildenbrand
  2025-03-03 13:39   ` Andreas Larsson
  2025-03-02 14:55 ` [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes Ryan Roberts
  2025-03-02 14:55 ` [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode" Ryan Roberts
  3 siblings, 2 replies; 17+ messages in thread
From: Ryan Roberts @ 2025-03-02 14:55 UTC (permalink / raw)
  To: Andrew Morton, David S. Miller, Andreas Larsson, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: Ryan Roberts, linux-mm, sparclinux, xen-devel, linux-kernel

Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with
lazy updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode()
to be called without holding a page table lock (for the kernel mappings
case), and therefore it is possible that preemption may occur while in
the lazy mmu mode. The Sparc lazy mmu implementation is not robust to
preemption since it stores the lazy mode state in a per-cpu structure
and does not attempt to manage that state on task switch.

Powerpc had the same issue and fixed it by explicitly disabling
preemption in arch_enter_lazy_mmu_mode() and re-enabling in
arch_leave_lazy_mmu_mode(). See commit b9ef323ea168 ("powerpc/64s:
Disable preemption in hash lazy mmu mode").

Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
same way here.
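
Concretely, without pinning, an interleaving like the following is
possible (a hypothetical schedule for illustration, not an observed
trace):

	/* task starts out on CPU 0 */
	arch_enter_lazy_mmu_mode();	/* CPU 0: tlb_batch.active = 1 */
	/* preempted; scheduler migrates the task to CPU 1 */
	set_ptes(...);			/* batches via CPU 1's tlb_batch,
					   whose active flag was never set */
	arch_leave_lazy_mmu_mode();	/* clears CPU 1's state; CPU 0's
					   active flag is left dangling */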

Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/sparc/mm/tlb.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 8648a50afe88..a35ddcca5e76 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -52,8 +52,10 @@ void flush_tlb_pending(void)
 
 void arch_enter_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
+	struct tlb_batch *tb;
 
+	preempt_disable();
+	tb = this_cpu_ptr(&tlb_batch);
 	tb->active = 1;
 }
 
@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
 	if (tb->tlb_nr)
 		flush_tlb_pending();
 	tb->active = 0;
+	preempt_enable();
 }
 
 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
-- 
2.43.0




* [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes
  2025-03-02 14:55 [PATCH v1 0/4] Fix lazy mmu mode Ryan Roberts
  2025-03-02 14:55 ` [PATCH v1 1/4] mm: Fix lazy mmu docs and usage Ryan Roberts
  2025-03-02 14:55 ` [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode Ryan Roberts
@ 2025-03-02 14:55 ` Ryan Roberts
  2025-03-03  8:52   ` David Hildenbrand
  2025-03-03 13:39   ` Andreas Larsson
  2025-03-02 14:55 ` [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode" Ryan Roberts
  3 siblings, 2 replies; 17+ messages in thread
From: Ryan Roberts @ 2025-03-02 14:55 UTC (permalink / raw)
  To: Andrew Morton, David S. Miller, Andreas Larsson, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: Ryan Roberts, linux-mm, sparclinux, xen-devel, linux-kernel

With commit 1a10a44dfc1d ("sparc64: implement the new page table range
API") set_ptes was added to the sparc architecture. The implementation
included calls to arch_enter/leave_lazy_mmu().

The patch removes the usage of arch_enter/leave_lazy_mmu() since this
implies nesting of lazy mmu regions which is not supported. Without this
fix, lazy mmu mode is effectively disabled because we exit the mode
after the first set_ptes:

remap_pte_range()
  -> arch_enter_lazy_mmu()
  -> set_ptes()
      -> arch_enter_lazy_mmu()
      -> arch_leave_lazy_mmu()
  -> arch_leave_lazy_mmu()

Powerpc suffered the same problem and fixed it in a corresponding way
with commit 47b8def9358c ("powerpc/mm: Avoid calling
arch_enter/leave_lazy_mmu() in set_ptes").

Fixes: 1a10a44dfc1d ("sparc64: implement the new page table range API")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/sparc/include/asm/pgtable_64.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 2b7f358762c1..dc28f2c4eee3 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -936,7 +936,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	arch_enter_lazy_mmu_mode();
 	for (;;) {
 		__set_pte_at(mm, addr, ptep, pte, 0);
 		if (--nr == 0)
@@ -945,7 +944,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		pte_val(pte) += PAGE_SIZE;
 		addr += PAGE_SIZE;
 	}
-	arch_leave_lazy_mmu_mode();
 }
 #define set_ptes set_ptes
 
-- 
2.43.0




* [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode"
  2025-03-02 14:55 [PATCH v1 0/4] Fix lazy mmu mode Ryan Roberts
                   ` (2 preceding siblings ...)
  2025-03-02 14:55 ` [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes Ryan Roberts
@ 2025-03-02 14:55 ` Ryan Roberts
  2025-03-03 11:52   ` David Hildenbrand
  3 siblings, 1 reply; 17+ messages in thread
From: Ryan Roberts @ 2025-03-02 14:55 UTC (permalink / raw)
  To: Andrew Morton, David S. Miller, Andreas Larsson, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: Ryan Roberts, linux-mm, sparclinux, xen-devel, linux-kernel

Commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode") was
added as a solution for a core-mm code change where
arch_[enter|leave]_lazy_mmu_mode() started to be called in a nested
manner; see commit bcc6cc832573 ("mm: add default definition of
set_ptes()").

However, now that we have fixed the API to avoid nesting, we no longer
need this capability in the x86 implementation.

Additionally, from code review, I don't believe the fix was ever robust
in the case of preemption occurring while in the nested lazy mode. The
implementation usually deals with preemption by calling
arch_leave_lazy_mmu_mode() from xen_start_context_switch() for the
outgoing task if we are in the lazy mmu mode. Then in
xen_end_context_switch(), it restarts the lazy mode by calling
arch_enter_lazy_mmu_mode() for an incoming task that was in the lazy
mode when it was switched out. But arch_leave_lazy_mmu_mode() will only
unwind a single level of nesting. If we are in the double nest, then
it's not fully unwound and per-cpu variables are left in a bad state.
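
Spelled out (an illustrative sequence, not an observed trace):

	enter_lazy(XEN_LAZY_MMU);	/* xen_lazy_mode = XEN_LAZY_MMU */
	enter_lazy(XEN_LAZY_MMU);	/* same mode: xen_lazy_nesting = 1 */
	/* preempted: xen_start_context_switch() runs for this task */
	leave_lazy(XEN_LAZY_MMU);	/* nesting drops 1 -> 0, but
					   xen_lazy_mode is left as
					   XEN_LAZY_MMU on this cpu */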

So the correct solution is to remove the possibility of nesting from the
higher level (which has now been done) and remove this x86-specific
solution.

Fixes: 49147beb0ccb ("x86/xen: allow nesting of same lazy mode")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/x86/include/asm/xen/hypervisor.h | 15 ++-------------
 arch/x86/xen/enlighten_pv.c           |  1 -
 2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
index a9088250770f..bd0fc69a10a7 100644
--- a/arch/x86/include/asm/xen/hypervisor.h
+++ b/arch/x86/include/asm/xen/hypervisor.h
@@ -72,18 +72,10 @@ enum xen_lazy_mode {
 };
 
 DECLARE_PER_CPU(enum xen_lazy_mode, xen_lazy_mode);
-DECLARE_PER_CPU(unsigned int, xen_lazy_nesting);
 
 static inline void enter_lazy(enum xen_lazy_mode mode)
 {
-	enum xen_lazy_mode old_mode = this_cpu_read(xen_lazy_mode);
-
-	if (mode == old_mode) {
-		this_cpu_inc(xen_lazy_nesting);
-		return;
-	}
-
-	BUG_ON(old_mode != XEN_LAZY_NONE);
+	BUG_ON(this_cpu_read(xen_lazy_mode) != XEN_LAZY_NONE);
 
 	this_cpu_write(xen_lazy_mode, mode);
 }
@@ -92,10 +84,7 @@ static inline void leave_lazy(enum xen_lazy_mode mode)
 {
 	BUG_ON(this_cpu_read(xen_lazy_mode) != mode);
 
-	if (this_cpu_read(xen_lazy_nesting) == 0)
-		this_cpu_write(xen_lazy_mode, XEN_LAZY_NONE);
-	else
-		this_cpu_dec(xen_lazy_nesting);
+	this_cpu_write(xen_lazy_mode, XEN_LAZY_NONE);
 }
 
 enum xen_lazy_mode xen_get_lazy_mode(void);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 5e57835e999d..919e4df9380b 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -99,7 +99,6 @@ struct tls_descs {
 };
 
 DEFINE_PER_CPU(enum xen_lazy_mode, xen_lazy_mode) = XEN_LAZY_NONE;
-DEFINE_PER_CPU(unsigned int, xen_lazy_nesting);
 
 enum xen_lazy_mode xen_get_lazy_mode(void)
 {
-- 
2.43.0




* Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-02 14:55 ` [PATCH v1 1/4] mm: Fix lazy mmu docs and usage Ryan Roberts
@ 2025-03-03  8:49   ` David Hildenbrand
  2025-03-03  8:52     ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03  8:49 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 02.03.25 15:55, Ryan Roberts wrote:
> The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
> are a bit of a mess (to put it politely). There are a number of issues
> related to nesting of lazy mmu regions and confusion over whether the
> task, when in a lazy mmu region, is preemptible or not. Fix all the
> issues relating to the core-mm. Follow up commits will fix the
> arch-specific implementations. 3 arches implement lazy mmu: powerpc,
> sparc and x86.
> 
> When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
> 6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
> expected that lazy mmu regions would never nest and that the appropriate
> page table lock(s) would be held while in the region, thus ensuring the
> region is non-preemptible. Additionally lazy mmu regions were only used
> during manipulation of user mappings.
> 
> Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
> updates") started invoking the lazy mmu mode in apply_to_pte_range(),
> which is used for both user and kernel mappings. For kernel mappings the
> region is no longer protected by any lock so there is no longer any
> guarantee about non-preemptibility. Additionally, for RT configs,
> holding the PTL only implies no CPU migration; it doesn't prevent
> preemption.
> 
> Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
> arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
> set_ptes(), used by x86. So after this commit, lazy mmu regions can be
> nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
> page table range API") and commit 9fee28baa601 ("powerpc: implement the
> new page table range API") did the same for the sparc and powerpc
> set_ptes() overrides.
> 
> powerpc couldn't deal with preemption so avoids it in commit
> b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
> which explicitly disables preemption for the whole region in its
> implementation. x86 can support preemption (or at least it could until
> it tried to add support for nesting; more on this below). Sparc looks to be
> totally broken in the face of preemption, as far as I can tell.
> 
> powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
> ("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
> which removes the lazy mmu calls from its implementation of set_ptes().
> x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
> nesting of same lazy mode") but as far as I can tell, this breaks its
> support for preemption.
> 
> In short, it's all a mess; the semantics for
> arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
> result the implementations all have different expectations, sticking
> plasters and bugs.
> 
> arm64 is aiming to start using these hooks, so let's clean everything up
> before adding an arm64 implementation. Update the documentation to state
> that lazy mmu regions can never be nested, must not be called in
> interrupt context and preemption may or may not be enabled for the
> duration of the region.
> 
> Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
> called in pagemap_scan_pmd_entry() to follow the normal pattern of
> holding the ptl for user space mappings. As a result the scope is
> reduced to only the pte table, but that's where most of the performance
> win is. While I believe there wasn't technically a bug here, the
> original scope made it easier to accidentally nest or, worse,
> accidentally call something like kmap() which would expect an immediate
> mode pte modification but it would end up deferred.
> 
> arch-specific fixes to conform to the new spec will follow this one.
> 
> These issues were spotted by code review and I have no evidence of
> issues being reported in the wild.
> 

All looking good to me!

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode
  2025-03-02 14:55 ` [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode Ryan Roberts
@ 2025-03-03  8:51   ` David Hildenbrand
  2025-03-03 13:39   ` Andreas Larsson
  1 sibling, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03  8:51 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 02.03.25 15:55, Ryan Roberts wrote:
> Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with
> lazy updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode()
> to be called without holding a page table lock (for the kernel mappings
> case), and therefore it is possible that preemption may occur while in
> the lazy mmu mode. The Sparc lazy mmu implementation is not robust to
> preemption since it stores the lazy mode state in a per-cpu structure
> and does not attempt to manage that state on task switch.
> 
> Powerpc had the same issue and fixed it by explicitly disabling
> preemption in arch_enter_lazy_mmu_mode() and re-enabling in
> arch_leave_lazy_mmu_mode(). See commit b9ef323ea168 ("powerpc/64s:
> Disable preemption in hash lazy mmu mode").
> 
> Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
> same way here.
> 
> Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>   arch/sparc/mm/tlb.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
> index 8648a50afe88..a35ddcca5e76 100644
> --- a/arch/sparc/mm/tlb.c
> +++ b/arch/sparc/mm/tlb.c
> @@ -52,8 +52,10 @@ void flush_tlb_pending(void)
>   
>   void arch_enter_lazy_mmu_mode(void)
>   {
> -	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
> +	struct tlb_batch *tb;
>   
> +	preempt_disable();
> +	tb = this_cpu_ptr(&tlb_batch);
>   	tb->active = 1;
>   }
>   
> @@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
>   	if (tb->tlb_nr)
>   		flush_tlb_pending();
>   	tb->active = 0;
> +	preempt_enable();
>   }
>   
>   static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes
  2025-03-02 14:55 ` [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes Ryan Roberts
@ 2025-03-03  8:52   ` David Hildenbrand
  2025-03-03 13:39   ` Andreas Larsson
  1 sibling, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03  8:52 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 02.03.25 15:55, Ryan Roberts wrote:
> With commit 1a10a44dfc1d ("sparc64: implement the new page table range
> API") set_ptes was added to the sparc architecture. The implementation
> included calls to arch_enter/leave_lazy_mmu().
> 
> The patch removes the usage of arch_enter/leave_lazy_mmu() since this
> implies nesting of lazy mmu regions which is not supported. Without this
> fix, lazy mmu mode is effectively disabled because we exit the mode
> after the first set_ptes:
> 
> remap_pte_range()
>    -> arch_enter_lazy_mmu()
>    -> set_ptes()
>        -> arch_enter_lazy_mmu()
>        -> arch_leave_lazy_mmu()
>    -> arch_leave_lazy_mmu()
> 
> Powerpc suffered the same problem and fixed it in a corresponding way
> with commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes").
> 
> Fixes: 1a10a44dfc1d ("sparc64: implement the new page table range API")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>   arch/sparc/include/asm/pgtable_64.h | 2 --
>   1 file changed, 2 deletions(-)
> 
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 2b7f358762c1..dc28f2c4eee3 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -936,7 +936,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>   static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>   		pte_t *ptep, pte_t pte, unsigned int nr)
>   {
> -	arch_enter_lazy_mmu_mode();
>   	for (;;) {
>   		__set_pte_at(mm, addr, ptep, pte, 0);
>   		if (--nr == 0)
> @@ -945,7 +944,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>   		pte_val(pte) += PAGE_SIZE;
>   		addr += PAGE_SIZE;
>   	}
> -	arch_leave_lazy_mmu_mode();
>   }
>   #define set_ptes set_ptes
>   

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-03  8:49   ` David Hildenbrand
@ 2025-03-03  8:52     ` David Hildenbrand
  2025-03-03 10:22       ` Ryan Roberts
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03  8:52 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 03.03.25 09:49, David Hildenbrand wrote:
> On 02.03.25 15:55, Ryan Roberts wrote:
>> The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
>> are a bit of a mess (to put it politely). There are a number of issues
>> related to nesting of lazy mmu regions and confusion over whether the
>> task, when in a lazy mmu region, is preemptible or not. Fix all the
>> issues relating to the core-mm. Follow up commits will fix the
>> arch-specific implementations. 3 arches implement lazy mmu: powerpc,
>> sparc and x86.
>>
>> When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
>> 6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
>> expected that lazy mmu regions would never nest and that the appropriate
>> page table lock(s) would be held while in the region, thus ensuring the
>> region is non-preemptible. Additionally lazy mmu regions were only used
>> during manipulation of user mappings.
>>
>> Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
>> updates") started invoking the lazy mmu mode in apply_to_pte_range(),
>> which is used for both user and kernel mappings. For kernel mappings the
>> region is no longer protected by any lock so there is no longer any
>> guarantee about non-preemptibility. Additionally, for RT configs,
>> holding the PTL only implies no CPU migration; it doesn't prevent
>> preemption.
>>
>> Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
>> arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
>> set_ptes(), used by x86. So after this commit, lazy mmu regions can be
>> nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
>> page table range API") and commit 9fee28baa601 ("powerpc: implement the
>> new page table range API") did the same for the sparc and powerpc
>> set_ptes() overrides.
>>
>> powerpc couldn't deal with preemption so avoids it in commit
>> b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
>> which explicitly disables preemption for the whole region in its
>> implementation. x86 can support preemption (or at least it could until
>> it tried to add support for nesting; more on this below). Sparc looks to be
>> totally broken in the face of preemption, as far as I can tell.
>>
>> powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
>> ("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
>> which removes the lazy mmu calls from its implementation of set_ptes().
>> x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
>> nesting of same lazy mode") but as far as I can tell, this breaks its
>> support for preemption.
>>
>> In short, it's all a mess; the semantics for
>> arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
>> result the implementations all have different expectations, sticking
>> plasters and bugs.
>>
>> arm64 is aiming to start using these hooks, so let's clean everything up
>> before adding an arm64 implementation. Update the documentation to state
>> that lazy mmu regions can never be nested, must not be called in
>> interrupt context and preemption may or may not be enabled for the
>> duration of the region.
>>
>> Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
>> called in pagemap_scan_pmd_entry() to follow the normal pattern of
>> holding the ptl for user space mappings. As a result the scope is
>> reduced to only the pte table, but that's where most of the performance
>> win is. While I believe there wasn't technically a bug here, the
>> original scope made it easier to accidentally nest or, worse,
>> accidentally call something like kmap() which would expect an immediate
>> mode pte modification but it would end up deferred.
>>
>> arch-specific fixes to conform to the new spec will follow this one.
>>
>> These issues were spotted by code review and I have no evidence of
>> issues being reported in the wild.
>>
> 
> All looking good to me!
> 
> Acked-by: David Hildenbrand <david@redhat.com>
> 

... but I do wonder if the set_ptes change should be split from the 
pagemap change.

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-03  8:52     ` David Hildenbrand
@ 2025-03-03 10:22       ` Ryan Roberts
  2025-03-03 10:30         ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: Ryan Roberts @ 2025-03-03 10:22 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton, David S. Miller,
	Andreas Larsson, Juergen Gross, Boris Ostrovsky, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 03/03/2025 08:52, David Hildenbrand wrote:
> On 03.03.25 09:49, David Hildenbrand wrote:
>> On 02.03.25 15:55, Ryan Roberts wrote:
>>> The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
>>> are a bit of a mess (to put it politely). There are a number of issues
>>> related to nesting of lazy mmu regions and confusion over whether the
>>> task, when in a lazy mmu region, is preemptible or not. Fix all the
>>> issues relating to the core-mm. Follow up commits will fix the
>>> arch-specific implementations. 3 arches implement lazy mmu: powerpc,
>>> sparc and x86.
>>>
>>> When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
>>> 6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
>>> expected that lazy mmu regions would never nest and that the appropriate
>>> page table lock(s) would be held while in the region, thus ensuring the
>>> region is non-preemptible. Additionally lazy mmu regions were only used
>>> during manipulation of user mappings.
>>>
>>> Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
>>> updates") started invoking the lazy mmu mode in apply_to_pte_range(),
>>> which is used for both user and kernel mappings. For kernel mappings the
>>> region is no longer protected by any lock so there is no longer any
>>> guarantee about non-preemptibility. Additionally, for RT configs,
>>> holding the PTL only implies no CPU migration; it doesn't prevent
>>> preemption.
>>>
>>> Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
>>> arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
>>> set_ptes(), used by x86. So after this commit, lazy mmu regions can be
>>> nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
>>> page table range API") and commit 9fee28baa601 ("powerpc: implement the
>>> new page table range API") did the same for the sparc and powerpc
>>> set_ptes() overrides.
>>>
>>> powerpc couldn't deal with preemption so avoids it in commit
>>> b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
>>> which explicitly disables preemption for the whole region in its
>>> implementation. x86 can support preemption (or at least it could until
>>> it tried to add support for nesting; more on this below). Sparc looks to be
>>> totally broken in the face of preemption, as far as I can tell.
>>>
>>> powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
>>> ("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
>>> which removes the lazy mmu calls from its implementation of set_ptes().
>>> x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
>>> nesting of same lazy mode") but as far as I can tell, this breaks its
>>> support for preemption.
>>>
>>> In short, it's all a mess; the semantics for
>>> arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
>>> result the implementations all have different expectations, sticking
>>> plasters and bugs.
>>>
>>> arm64 is aiming to start using these hooks, so let's clean everything up
>>> before adding an arm64 implementation. Update the documentation to state
>>> that lazy mmu regions can never be nested, must not be called in
>>> interrupt context and preemption may or may not be enabled for the
>>> duration of the region.
>>>
>>> Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
>>> called in pagemap_scan_pmd_entry() to follow the normal pattern of
>>> holding the ptl for user space mappings. As a result the scope is
>>> reduced to only the pte table, but that's where most of the performance
>>> win is. While I believe there wasn't technically a bug here, the
>>> original scope made it easier to accidentally nest or, worse,
>>> accidentally call something like kmap() which would expect an immediate
>>> mode pte modification but it would end up deferred.
>>>
>>> arch-specific fixes to conform to the new spec will follow this one.
>>>
>>> These issues were spotted by code review and I have no evidence of
>>> issues being reported in the wild.
>>>
>>
>> All looking good to me!
>>
>> Acked-by: David Hildenbrand <david@redhat.com>
>>
> 
> ... but I do wonder if the set_ptes change should be split from the pagemap change.

So set_ptes + docs changes in one patch, and pagemap change in another? I can do
that.

I didn't actually cc stable on these, I'm wondering if I should do that? Perhaps
for all patches except the pagemap change?

Thanks for the quick review!



* Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-03 10:22       ` Ryan Roberts
@ 2025-03-03 10:30         ` David Hildenbrand
  2025-03-03 12:49           ` Andreas Larsson
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03 10:30 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 03.03.25 11:22, Ryan Roberts wrote:
> On 03/03/2025 08:52, David Hildenbrand wrote:
>> On 03.03.25 09:49, David Hildenbrand wrote:
>>> On 02.03.25 15:55, Ryan Roberts wrote:
>>>> The docs, implementations and use of arch_[enter|leave]_lazy_mmu_mode()
>>>> are a bit of a mess (to put it politely). There are a number of issues
>>>> related to nesting of lazy mmu regions and confusion over whether the
>>>> task, when in a lazy mmu region, is preemptible or not. Fix all the
>>>> issues relating to the core-mm. Follow up commits will fix the
>>>> arch-specific implementations. 3 arches implement lazy mmu: powerpc,
>>>> sparc and x86.
>>>>
>>>> When arch_[enter|leave]_lazy_mmu_mode() was first introduced by commit
>>>> 6606c3e0da53 ("[PATCH] paravirt: lazy mmu mode hooks.patch"), it was
>>>> expected that lazy mmu regions would never nest and that the appropriate
>>>> page table lock(s) would be held while in the region, thus ensuring the
>>>> region is non-preemptible. Additionally lazy mmu regions were only used
>>>> during manipulation of user mappings.
>>>>
>>>> Commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
>>>> updates") started invoking the lazy mmu mode in apply_to_pte_range(),
>>>> which is used for both user and kernel mappings. For kernel mappings the
>>>> region is no longer protected by any lock so there is no longer any
>>>> guarantee about non-preemptibility. Additionally, for RT configs,
>>>> holding the PTL only implies no CPU migration; it doesn't prevent
>>>> preemption.
>>>>
>>>> Commit bcc6cc832573 ("mm: add default definition of set_ptes()") added
>>>> arch_[enter|leave]_lazy_mmu_mode() to the default implementation of
>>>> set_ptes(), used by x86. So after this commit, lazy mmu regions can be
>>>> nested. Additionally commit 1a10a44dfc1d ("sparc64: implement the new
>>>> page table range API") and commit 9fee28baa601 ("powerpc: implement the
>>>> new page table range API") did the same for the sparc and powerpc
>>>> set_ptes() overrides.
>>>>
>>>> powerpc couldn't deal with preemption so avoids it in commit
>>>> b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu mode"),
>>>> which explicitly disables preemption for the whole region in its
>>>> implementation. x86 can support preemption (or at least it could until
>>>> it tried to add support for nesting; more on this below). Sparc looks to be
>>>> totally broken in the face of preemption, as far as I can tell.
>>>>
>>>> powerpc can't deal with nesting, so avoids it in commit 47b8def9358c
>>>> ("powerpc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes"),
>>>> which removes the lazy mmu calls from its implementation of set_ptes().
>>>> x86 attempted to support nesting in commit 49147beb0ccb ("x86/xen: allow
>>>> nesting of same lazy mode") but as far as I can tell, this breaks its
>>>> support for preemption.
>>>>
>>>> In short, it's all a mess; the semantics for
>>>> arch_[enter|leave]_lazy_mmu_mode() are not clearly defined and as a
>>>> result the implementations all have different expectations, sticking
>>>> plasters and bugs.
>>>>
>>>> arm64 is aiming to start using these hooks, so let's clean everything up
>>>> before adding an arm64 implementation. Update the documentation to state
>>>> that lazy mmu regions can never be nested, must not be called in
>>>> interrupt context and preemption may or may not be enabled for the
>>>> duration of the region.
>>>>
>>>> Additionally, update the way arch_[enter|leave]_lazy_mmu_mode() is
>>>> called in pagemap_scan_pmd_entry() to follow the normal pattern of
>>>> holding the ptl for user space mappings. As a result the scope is
>>>> reduced to only the pte table, but that's where most of the performance
>>>> win is. While I believe there wasn't technically a bug here, the
>>>> original scope made it easier to accidentally nest or, worse,
>>>> accidentally call something like kmap() which would expect an immediate
>>>> mode pte modification but it would end up deferred.
>>>>
>>>> arch-specific fixes to conform to the new spec will follow this one.
>>>>
>>>> These issues were spotted by code review and I have no evidence of
>>>> issues being reported in the wild.
>>>>
>>>
>>> All looking good to me!
>>>
>>> Acked-by: David Hildenbrand <david@redhat.com>
>>>
>>
>> ... but I do wonder if the set_ptes change should be split from the pagemap change.
> 
> So set_ptes + docs changes in one patch, and pagemap change in another? I can do
> that.

Yes.

> 
> I didn't actually cc stable on these, I'm wondering if I should do that? Perhaps
> for all patches except the pagemap change?

That would make sense to me. CC stable likely doesn't hurt here. 
(although I wonder if anybody cares about stable on sparc :))

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode"
  2025-03-02 14:55 ` [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode" Ryan Roberts
@ 2025-03-03 11:52   ` David Hildenbrand
  2025-03-03 12:33     ` Ryan Roberts
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03 11:52 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 02.03.25 15:55, Ryan Roberts wrote:
> Commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode") was
> added as a solution for a core-mm code change where
> arch_[enter|leave]_lazy_mmu_mode() started to be called in a nested
> manner; see commit bcc6cc832573 ("mm: add default definition of
> set_ptes()").
> 
> However, now that we have fixed the API to avoid nesting, we no longer
> need this capability in the x86 implementation.
> 
> Additionally, from code review, I don't believe the fix was ever robust
> in the case of preemption occurring while in the nested lazy mode. The
> implementation usually deals with preemption by calling
> arch_leave_lazy_mmu_mode() from xen_start_context_switch() for the
> outgoing task if we are in the lazy mmu mode. Then in
> xen_end_context_switch(), it restarts the lazy mode by calling
> arch_enter_lazy_mmu_mode() for an incoming task that was in the lazy
> mode when it was switched out. But arch_leave_lazy_mmu_mode() will only
> unwind a single level of nesting. If we are in the double nest, then
> it's not fully unwound and per-cpu variables are left in a bad state.
> 
> So the correct solution is to remove the possibility of nesting from the
> higher level (which has now been done) and remove this x86-specific
> solution.
> 
> Fixes: 49147beb0ccb ("x86/xen: allow nesting of same lazy mode")

Does this patch here deserve this tag? IIUC, it's rather a cleanup now 
that it was properly fixed elsewhere.

> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode"
  2025-03-03 11:52   ` David Hildenbrand
@ 2025-03-03 12:33     ` Ryan Roberts
  2025-03-03 12:57       ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: Ryan Roberts @ 2025-03-03 12:33 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton, David S. Miller,
	Andreas Larsson, Juergen Gross, Boris Ostrovsky, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 03/03/2025 11:52, David Hildenbrand wrote:
> On 02.03.25 15:55, Ryan Roberts wrote:
>> Commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode") was
>> added as a solution for a core-mm code change where
>> arch_[enter|leave]_lazy_mmu_mode() started to be called in a nested
>> manner; see commit bcc6cc832573 ("mm: add default definition of
>> set_ptes()").
>>
>> However, now that we have fixed the API to avoid nesting, we no longer
>> need this capability in the x86 implementation.
>>
>> Additionally, from code review, I don't believe the fix was ever robust
>> in the case of preemption occurring while in the nested lazy mode. The
>> implementation usually deals with preemption by calling
>> arch_leave_lazy_mmu_mode() from xen_start_context_switch() for the
>> outgoing task if we are in the lazy mmu mode. Then in
>> xen_end_context_switch(), it restarts the lazy mode by calling
>> arch_enter_lazy_mmu_mode() for an incoming task that was in the lazy
>> mode when it was switched out. But arch_leave_lazy_mmu_mode() will only
>> unwind a single level of nesting. If we are in the double nest, then
>> it's not fully unwound and per-cpu variables are left in a bad state.
>>
>> So the correct solution is to remove the possibility of nesting from the
>> higher level (which has now been done) and remove this x86-specific
>> solution.
>>
>> Fixes: 49147beb0ccb ("x86/xen: allow nesting of same lazy mode")
> 
> Does this patch here deserve this tag? IIUC, it's rather a cleanup now that it
> was properly fixed elsewhere.

Now that nesting is not possible, yes it is just a cleanup. But when nesting was
possible, as far as I can tell it was buggy, as per my description. So it's a
bug that won't ever trigger once the other fixes are applied. Happy to
remove the Fixes and then not include it for stable for v2. That's probably
simplest.

> 
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> 
> Acked-by: David Hildenbrand <david@redhat.com>
> 




* Re: [PATCH v1 1/4] mm: Fix lazy mmu docs and usage
  2025-03-03 10:30         ` David Hildenbrand
@ 2025-03-03 12:49           ` Andreas Larsson
  0 siblings, 0 replies; 17+ messages in thread
From: Andreas Larsson @ 2025-03-03 12:49 UTC (permalink / raw)
  To: David Hildenbrand, Ryan Roberts, Andrew Morton, David S. Miller,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 2025-03-03 11:30, David Hildenbrand wrote:
> On 03.03.25 11:22, Ryan Roberts wrote:
> [snip]
>>
>> I didn't actually cc stable on these, I'm wondering if I should do that? Perhaps
>> for all patches except the pagemap change?
> 
> That would make sense to me. CC stable likely doesn't hurt here. (although I wonder if anybody cares about stable on sparc :))

Yes, stable is important for sparc just as well as for other architectures.

Cheers,
Andreas



* Re: [PATCH v1 4/4] Revert "x86/xen: allow nesting of same lazy mode"
  2025-03-03 12:33     ` Ryan Roberts
@ 2025-03-03 12:57       ` David Hildenbrand
  0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2025-03-03 12:57 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Andreas Larsson,
	Juergen Gross, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin,
	Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 03.03.25 13:33, Ryan Roberts wrote:
> On 03/03/2025 11:52, David Hildenbrand wrote:
>> On 02.03.25 15:55, Ryan Roberts wrote:
>>> Commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode") was
>>> added as a solution for a core-mm code change where
>>> arch_[enter|leave]_lazy_mmu_mode() started to be called in a nested
>>> manner; see commit bcc6cc832573 ("mm: add default definition of
>>> set_ptes()").
>>>
>>> However, now that we have fixed the API to avoid nesting, we no longer
>>> need this capability in the x86 implementation.
>>>
>>> Additionally, from code review, I don't believe the fix was ever robust
>>> in the case of preemption occurring while in the nested lazy mode. The
>>> implementation usually deals with preemption by calling
>>> arch_leave_lazy_mmu_mode() from xen_start_context_switch() for the
>>> outgoing task if we are in the lazy mmu mode. Then in
>>> xen_end_context_switch(), it restarts the lazy mode by calling
>>> arch_enter_lazy_mmu_mode() for an incoming task that was in the lazy
>>> mode when it was switched out. But arch_leave_lazy_mmu_mode() will only
>>> unwind a single level of nesting. If we are in the double nest, then
>>> it's not fully unwound and per-cpu variables are left in a bad state.
>>>
>>> So the correct solution is to remove the possibility of nesting from the
>>> higher level (which has now been done) and remove this x86-specific
>>> solution.
>>>
>>> Fixes: 49147beb0ccb ("x86/xen: allow nesting of same lazy mode")
>>
>> Does this patch here deserve this tag? IIUC, it's rather a cleanup now that it
>> was properly fixed elsewhere.
> 
> Now that nesting is not possible, yes it is just a cleanup. But when nesting was
> possible, as far as I can tell it was buggy, as per my description.

Right, I understood that part.

> So it's a
> bug that won't ever trigger once the other fixes are applied. Happy to
> remove the Fixes and then not include it for stable for v2. That's probably
> simplest.

I was just curious, because it sounded like the actual fix was the other 
patch. Whatever you think is best :)

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode
  2025-03-02 14:55 ` [PATCH v1 2/4] sparc/mm: Disable preemption in lazy mmu mode Ryan Roberts
  2025-03-03  8:51   ` David Hildenbrand
@ 2025-03-03 13:39   ` Andreas Larsson
  1 sibling, 0 replies; 17+ messages in thread
From: Andreas Larsson @ 2025-03-03 13:39 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 2025-03-02 15:55, Ryan Roberts wrote:
> Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with
> lazy updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode()
> to be called without holding a page table lock (for the kernel mappings
> case), and therefore it is possible that preemption may occur while in
> the lazy mmu mode. The Sparc lazy mmu implementation is not robust to
> preemption since it stores the lazy mode state in a per-cpu structure
> and does not attempt to manage that state on task switch.
> 
> Powerpc had the same issue and fixed it by explicitly disabling
> preemption in arch_enter_lazy_mmu_mode() and re-enabling in
> arch_leave_lazy_mmu_mode(). See commit b9ef323ea168 ("powerpc/64s:
> Disable preemption in hash lazy mmu mode").
> 
> Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
> same way here.
> 
> Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  arch/sparc/mm/tlb.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
> index 8648a50afe88..a35ddcca5e76 100644
> --- a/arch/sparc/mm/tlb.c
> +++ b/arch/sparc/mm/tlb.c
> @@ -52,8 +52,10 @@ void flush_tlb_pending(void)
>  
>  void arch_enter_lazy_mmu_mode(void)
>  {
> -	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
> +	struct tlb_batch *tb;
>  
> +	preempt_disable();
> +	tb = this_cpu_ptr(&tlb_batch);
>  	tb->active = 1;
>  }
>  
> @@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
>  	if (tb->tlb_nr)
>  		flush_tlb_pending();
>  	tb->active = 0;
> +	preempt_enable();
>  }
>  
>  static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,

Acked-by: Andreas Larsson <andreas@gaisler.com>

Thanks,
Andreas



* Re: [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes
  2025-03-02 14:55 ` [PATCH v1 3/4] sparc/mm: Avoid calling arch_enter/leave_lazy_mmu() in set_ptes Ryan Roberts
  2025-03-03  8:52   ` David Hildenbrand
@ 2025-03-03 13:39   ` Andreas Larsson
  1 sibling, 0 replies; 17+ messages in thread
From: Andreas Larsson @ 2025-03-03 13:39 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, David S. Miller, Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Matthew Wilcox (Oracle),
	Catalin Marinas
  Cc: linux-mm, sparclinux, xen-devel, linux-kernel

On 2025-03-02 15:55, Ryan Roberts wrote:
> With commit 1a10a44dfc1d ("sparc64: implement the new page table range
> API") set_ptes was added to the sparc architecture. The implementation
> included calls to arch_enter/leave_lazy_mmu().
> 
> The patch removes the usage of arch_enter/leave_lazy_mmu() since this
> implies nesting of lazy mmu regions which is not supported. Without this
> fix, lazy mmu mode is effectively disabled because we exit the mode
> after the first set_ptes:
> 
> remap_pte_range()
>   -> arch_enter_lazy_mmu()
>   -> set_ptes()
>       -> arch_enter_lazy_mmu()
>       -> arch_leave_lazy_mmu()
>   -> arch_leave_lazy_mmu()
> 
> Powerpc suffered the same problem and fixed it in a corresponding way
> with commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes").
> 
> Fixes: 1a10a44dfc1d ("sparc64: implement the new page table range API")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  arch/sparc/include/asm/pgtable_64.h | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 2b7f358762c1..dc28f2c4eee3 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -936,7 +936,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		pte_t *ptep, pte_t pte, unsigned int nr)
>  {
> -	arch_enter_lazy_mmu_mode();
>  	for (;;) {
>  		__set_pte_at(mm, addr, ptep, pte, 0);
>  		if (--nr == 0)
> @@ -945,7 +944,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		pte_val(pte) += PAGE_SIZE;
>  		addr += PAGE_SIZE;
>  	}
> -	arch_leave_lazy_mmu_mode();
>  }
>  #define set_ptes set_ptes

Acked-by: Andreas Larsson <andreas@gaisler.com>

Thanks,
Andreas


