From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Jann Horn <jannh@google.com>,
Peter Zijlstra <peterz@infradead.org>,
Will Deacon <will@kernel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
Nick Piggin <npiggin@gmail.com>, Hugh Dickins <hughd@google.com>,
linux-arch@vger.kernel.org
Subject: Re: [PATCH v3] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
Date: Thu, 23 Jan 2025 14:42:56 -0500
Message-ID: <5bpibh7qkrcggyqsrathszfqrjckyaqspdons6cfkkyse4ub4b@2iu4sibbirxf>
In-Reply-To: <20250123164358.2384447-1-roman.gushchin@linux.dev>
* Roman Gushchin <roman.gushchin@linux.dev> [250123 11:44]:
> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced TLB flush to tlb_end_vma(), which is required to avoid
> a race between munmap() and unmap_mapping_range(). However, it also
> added overhead to paths where tlb_end_vma() is used but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).
>
> Fix this by moving the TLB flush out of tlb_end_vma() into
> free_pgtables(), similar to what the stable backport of the original
> commit did: see stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").
>
> Note that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
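
IIUC, the race being closed here is roughly this (a simplified
timeline, not the exact code paths):

	CPU 0: munmap()                    CPU 1: unmap_mapping_range()
	  clears PTEs, defers TLB flush
	  unlinks vma from the rmap
	                                   rmap walk finds no vma, caller
	                                   assumes the pfn is unmapped
	  ...stale TLB entries may still exist...

which is why the forced flush has to happen before the vma is removed
from the rmap.
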
>
> ---
Hugh didn't mean for you to add a '---' here; he meant for you to move
the version info below the '---' that follows the tags (between the Cc
list and the diffstat), so that it stays out of the git history.
You can find examples on the ML; see the sketch below.
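
That is, the layout should look something like this (illustrative, not
your exact tags):

	Suggested-by: ...
	Signed-off-by: ...
	Cc: ...
	---
	v3: - ...
	v2: - ...

	 include/asm-generic/tlb.h | 49 ++++++...

git am drops everything between the '---' and the diff, so the version
notes never land in the commit.
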
>
> v3:
> - added initialization of vma_pfn in __tlb_reset_range() (by Hugh D.)
>
> v2:
> - moved vma_pfn flag handling into tlb.h (by Peter Z.)
> - added comments (by Peter Z.)
> - fixed the vma_pfn flag setting (by Hugh D.)
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Link: https://lore.kernel.org/...  <- please add a Link: tag to the relevant discussion here
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Nick Piggin <npiggin@gmail.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: linux-arch@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
v3...
v2...   <- i.e., the version info goes here, after this '---'
> include/asm-generic/tlb.h | 49 ++++++++++++++++++++++++++++-----------
> mm/memory.c | 7 ++++++
> 2 files changed, 42 insertions(+), 14 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 709830274b75..cdc95b69b91d 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -380,8 +380,16 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
> tlb->cleared_pmds = 0;
> tlb->cleared_puds = 0;
> tlb->cleared_p4ds = 0;
> +
> + /*
> + * vma_pfn can only be set in tlb_start_vma(), so let's
> + * initialize it here. Also a tlb flush issued by
> + * tlb_flush_mmu_pfnmap() will cancel the vma_pfn state,
> + * so that unnecessary subsequent flushes are avoided.
> + */
> + tlb->vma_pfn = 0;
> /*
> - * Do not reset mmu_gather::vma_* fields here, we do not
> + * Do not reset other mmu_gather::vma_* fields here, we do not
> * call into tlb_start_vma() again to set them if there is an
> * intermediate flush.
> */
> @@ -449,7 +457,14 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
> */
> tlb->vma_huge = is_vm_hugetlb_page(vma);
> tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
> - tlb->vma_pfn = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
> +
> + /*
> + * vma_pfn is checked and cleared by tlb_flush_mmu_pfnmap()
> +	 * for a set of vmas, so it should be set if at least one vma
> + * has VM_PFNMAP or VM_MIXEDMAP flags set.
> + */
> + if (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))
> + tlb->vma_pfn = 1;
> }
>
> static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
> @@ -466,6 +481,20 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
> __tlb_reset_range(tlb);
> }
>
> +static inline void tlb_flush_mmu_pfnmap(struct mmu_gather *tlb)
> +{
> + /*
> + * VM_PFNMAP and VM_MIXEDMAP maps are fragile because the core mm
> + * doesn't track the page mapcount -- there might not be page-frames
> + * for these PFNs after all. Force flush TLBs for such ranges to avoid
> + * munmap() vs unmap_mapping_range() races.
> + * Ensure we have no stale TLB entries by the time this mapping is
> + * removed from the rmap.
> + */
> + if (unlikely(!tlb->fullmm && tlb->vma_pfn))
> + tlb_flush_mmu_tlbonly(tlb);
> +}
> +
> static inline void tlb_remove_page_size(struct mmu_gather *tlb,
> struct page *page, int page_size)
> {
> @@ -549,22 +578,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
>
> static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> {
> - if (tlb->fullmm)
> + if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> return;
>
> /*
> - * VM_PFNMAP is more fragile because the core mm will not track the
> - * page mapcount -- there might not be page-frames for these PFNs after
> - * all. Force flush TLBs for such ranges to avoid munmap() vs
> - * unmap_mapping_range() races.
> + * Do a TLB flush and reset the range at VMA boundaries; this avoids
> + * the ranges growing with the unused space between consecutive VMAs.
> */
> - if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> - /*
> - * Do a TLB flush and reset the range at VMA boundaries; this avoids
> - * the ranges growing with the unused space between consecutive VMAs.
> - */
> - tlb_flush_mmu_tlbonly(tlb);
> - }
> + tlb_flush_mmu_tlbonly(tlb);
> }
>
> /*
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..c2a9effb2e32 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> {
> struct unlink_vma_file_batch vb;
>
> + /*
> + * VM_PFNMAP and VM_MIXEDMAP maps require a special handling here:
> + * force flush TLBs for such ranges to avoid munmap() vs
> + * unmap_mapping_range() races.
> + */
> + tlb_flush_mmu_pfnmap(tlb);
> +
> do {
> unsigned long addr = vma->vm_start;
> struct vm_area_struct *next;
> --
> 2.48.1.262.g85cc9f2d1e-goog
>
>
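Also, for anyone following along, the resulting exit path after this
patch is roughly (a simplified sketch of the call order, not the exact
code):

	unmap_region()
		unmap_vmas(&tlb, ...);        /* clear PTEs; tlb_end_vma() no longer forces a flush */
		free_pgtables(&tlb, ...);
			tlb_flush_mmu_pfnmap(&tlb);   /* forced flush for VM_PFNMAP/VM_MIXEDMAP */
			/* unlink vmas from the rmap, free page tables */
		tlb_finish_mmu(&tlb);         /* final flush for everything else */

so the forced flush still happens before the vmas leave the rmap, which
is what the original fix relied on.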