* [PATCH] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
@ 2025-01-21 20:09 Roman Gushchin
From: Roman Gushchin @ 2025-01-21 20:09 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Andrew Morton, Roman Gushchin, Jann Horn,
Peter Zijlstra, Will Deacon, Aneesh Kumar K.V, Nick Piggin,
Hugh Dickins, linux-arch
Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
added a forced TLB flush to tlb_end_vma(), which is required to avoid a
race between munmap() and unmap_mapping_range(). However, it added some
overhead to other paths where tlb_end_vma() is used but vmas are not
removed, e.g. madvise(MADV_DONTNEED).
Fix this by moving the TLB flush out of tlb_end_vma() into
free_pgtables(), similar to the stable version of the original
commit: see stable commit 895428ee124a ("mm: Force TLB flush
for PFNMAP mappings before unlink_file_vma()").
Note that if tlb->fullmm is set, no flush is required, as the whole
mm is about to be destroyed.
Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
---
include/asm-generic/tlb.h | 16 ++++------------
mm/memory.c | 7 +++++++
2 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 709830274b75..411daa96f57a 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -549,22 +549,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
- if (tlb->fullmm)
+ if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
return;
/*
- * VM_PFNMAP is more fragile because the core mm will not track the
- * page mapcount -- there might not be page-frames for these PFNs after
- * all. Force flush TLBs for such ranges to avoid munmap() vs
- * unmap_mapping_range() races.
+ * Do a TLB flush and reset the range at VMA boundaries; this avoids
+ * the ranges growing with the unused space between consecutive VMAs.
*/
- if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
- /*
- * Do a TLB flush and reset the range at VMA boundaries; this avoids
- * the ranges growing with the unused space between consecutive VMAs.
- */
- tlb_flush_mmu_tlbonly(tlb);
- }
+ tlb_flush_mmu_tlbonly(tlb);
}
/*
diff --git a/mm/memory.c b/mm/memory.c
index 398c031be9ba..2071415f68dd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
{
struct unlink_vma_file_batch vb;
+ /*
+ * Ensure we have no stale TLB entries by the time this mapping is
+ * removed from the rmap.
+ */
+ if (tlb->vma_pfn && !tlb->fullmm)
+ tlb_flush_mmu(tlb);
+
do {
unsigned long addr = vma->vm_start;
struct vm_area_struct *next;
--
2.48.0.rc2.279.g1de40edade-goog
* Re: [PATCH] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
From: Peter Zijlstra @ 2025-01-22 10:06 UTC (permalink / raw)
To: Roman Gushchin
Cc: linux-kernel, linux-mm, Andrew Morton, Jann Horn, Will Deacon,
Aneesh Kumar K.V, Nick Piggin, Hugh Dickins, linux-arch
On Tue, Jan 21, 2025 at 08:09:29PM +0000, Roman Gushchin wrote:
> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced TLB flush to tlb_end_vma(), which is required to avoid a
> race between munmap() and unmap_mapping_range(). However, it added some
> overhead to other paths where tlb_end_vma() is used but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).
>
> Fix this by moving the TLB flush out of tlb_end_vma() into
> free_pgtables(), similar to the stable version of the original
> commit: see stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").
>
> Note that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..2071415f68dd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> {
> struct unlink_vma_file_batch vb;
>
> + /*
> + * Ensure we have no stale TLB entries by the time this mapping is
> + * removed from the rmap.
This comment would be ever so much better if it would explain *why* this
is important.
> + */
> + if (tlb->vma_pfn && !tlb->fullmm)
> + tlb_flush_mmu(tlb);
I can't say I particularly like accessing vma_pfn outside of the
mmu_gather code, but I'm also not sure it's worth it to add a special
helper.
I do worry this makes the whole thing a little more fragile though.
* Re: [PATCH] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
From: Hugh Dickins @ 2025-01-22 10:34 UTC (permalink / raw)
To: Roman Gushchin
Cc: linux-kernel, linux-mm, Andrew Morton, Jann Horn, Peter Zijlstra,
Will Deacon, Aneesh Kumar K.V, Nick Piggin, Hugh Dickins,
linux-arch
On Tue, 21 Jan 2025, Roman Gushchin wrote:
> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced TLB flush to tlb_end_vma(),
Yes, I think that was a poor way of fixing the bug in question.
> which is required to avoid a
> race between munmap() and unmap_mapping_range(). However, it added some
> overhead to other paths where tlb_end_vma() is used but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).
Right.
>
> Fix this by moving the TLB flush out of tlb_end_vma() into
> free_pgtables(), similar to the stable version of the original
> commit: see stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").
Something like this patch would be a good improvement,
but not this version of the patch.
Because the mmu_gather may be gathering across many vmas,
but tlb_start_vma(), well, its "tlb_update_vma_flags()", says
tlb->vma_pfn = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
so a following vma may reset vma_pfn too soon: more care is needed.
But probably vma_pfn should be reset to 0 somewhere, to avoid an
extra TLB flush in free_pgtables() when it has already been done.
Perhaps vma_pfn should follow the same pattern of initialization,
setting and clearing as cleared_ptes etc, instead of following
vma_huge and vma_exec. Perhaps, but it is something different,
and I've not yet checked enough to be sure: tlb.h is still a maze
too twisty for me.
Hugh (after power outage interrupted reply)
>
> Note that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Nick Piggin <npiggin@gmail.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: linux-arch@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
> include/asm-generic/tlb.h | 16 ++++------------
> mm/memory.c | 7 +++++++
> 2 files changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 709830274b75..411daa96f57a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -549,22 +549,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
>
> static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> {
> - if (tlb->fullmm)
> + if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> return;
>
> /*
> - * VM_PFNMAP is more fragile because the core mm will not track the
> - * page mapcount -- there might not be page-frames for these PFNs after
> - * all. Force flush TLBs for such ranges to avoid munmap() vs
> - * unmap_mapping_range() races.
> + * Do a TLB flush and reset the range at VMA boundaries; this avoids
> + * the ranges growing with the unused space between consecutive VMAs.
> */
> - if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> - /*
> - * Do a TLB flush and reset the range at VMA boundaries; this avoids
> - * the ranges growing with the unused space between consecutive VMAs.
> - */
> - tlb_flush_mmu_tlbonly(tlb);
> - }
> + tlb_flush_mmu_tlbonly(tlb);
> }
>
> /*
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..2071415f68dd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> {
> struct unlink_vma_file_batch vb;
>
> + /*
> + * Ensure we have no stale TLB entries by the time this mapping is
> + * removed from the rmap.
> + */
> + if (tlb->vma_pfn && !tlb->fullmm)
> + tlb_flush_mmu(tlb);
> +
> do {
> unsigned long addr = vma->vm_start;
> struct vm_area_struct *next;
> --
> 2.48.0.rc2.279.g1de40edade-goog