From: "David Hildenbrand (Red Hat)" <david@kernel.org>
To: Harry Yoo <harry.yoo@oracle.com>
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, Will Deacon <will@kernel.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Nick Piggin <npiggin@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Arnd Bergmann <arnd@arndb.de>,
	Muchun Song <muchun.song@linux.dev>,
	Oscar Salvador <osalvador@suse.de>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>, Jann Horn <jannh@google.com>,
	Pedro Falcato <pfalcato@suse.de>, Rik van Riel <riel@surriel.com>,
	Laurence Oberman <loberman@redhat.com>,
	Prakash Sangappa <prakash.sangappa@oracle.com>,
	Nadav Amit <nadav.amit@gmail.com>,
	stable@vger.kernel.org, Ryan Roberts <ryan.roberts@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>
Subject: Re: [PATCH v2 4/4] mm/hugetlb: fix excessive IPI broadcasts when unsharing PMD tables using mmu_gather
Date: Sun, 21 Dec 2025 13:24:44 +0100
Message-ID: <e78f5457-43fb-4656-ad53-bfda72936ef5@kernel.org>
In-Reply-To: <3d9ce821-a39d-4164-a225-fcbe790ea951@kernel.org>

On 12/19/25 14:59, David Hildenbrand (Red Hat) wrote:
> On 12/19/25 14:52, David Hildenbrand (Red Hat) wrote:
>> On 12/19/25 13:37, Harry Yoo wrote:
>>> On Fri, Dec 12, 2025 at 08:10:19AM +0100, David Hildenbrand (Red Hat) wrote:
>>>> As reported, ever since commit 1013af4f585f ("mm/hugetlb: fix
>>>> huge_pmd_unshare() vs GUP-fast race") we can end up in some situations
>>>> where we perform so many IPI broadcasts when unsharing hugetlb PMD page
>>>> tables that it severely regresses some workloads.
>>>>
>>>> In particular, when we fork()+exit(), or when we munmap() a large
>>>> area backed by many shared PMD tables, we perform one IPI broadcast per
>>>> unshared PMD table.
>>>>
>>>
>>> [...snip...]
>>>
>>>> Fixes: 1013af4f585f ("mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race")
>>>> Reported-by: "Uschakow, Stanislav" <suschako@amazon.de>
>>>> Closes: https://lore.kernel.org/all/4d3878531c76479d9f8ca9789dc6485d@amazon.de/
>>>> Tested-by: Laurence Oberman <loberman@redhat.com>
>>>> Cc: <stable@vger.kernel.org>
>>>> Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
>>>> ---
>>>>     include/asm-generic/tlb.h |  74 ++++++++++++++++++++++-
>>>>     include/linux/hugetlb.h   |  19 +++---
>>>>     mm/hugetlb.c              | 121 ++++++++++++++++++++++----------------
>>>>     mm/mmu_gather.c           |   7 +++
>>>>     mm/mprotect.c             |   2 +-
>>>>     mm/rmap.c                 |  25 +++++---
>>>>     6 files changed, 179 insertions(+), 69 deletions(-)
>>>>
>>>> @@ -6522,22 +6511,16 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>>>>     				pte = huge_pte_clear_uffd_wp(pte);
>>>>     			huge_ptep_modify_prot_commit(vma, address, ptep, old_pte, pte);
>>>>     			pages++;
>>>> +			tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
>>>>     		}
>>>>     
>>>>     next:
>>>>     		spin_unlock(ptl);
>>>>     		cond_resched();
>>>>     	}
>>>> -	/*
>>>> -	 * There is nothing protecting a previously-shared page table that we
>>>> -	 * unshared through huge_pmd_unshare() from getting freed after we
>>>> -	 * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
>>>> -	 * succeeded, flush the range corresponding to the pud.
>>>> -	 */
>>>> -	if (shared_pmd)
>>>> -		flush_hugetlb_tlb_range(vma, range.start, range.end);
>>>> -	else
>>>> -		flush_hugetlb_tlb_range(vma, start, end);
>>>> +
>>>> +	tlb_flush_mmu_tlbonly(tlb);
>>>> +	huge_pmd_unshare_flush(tlb, vma);
>>>
>>> Shouldn't we teach mmu_gather that it has to call
>>
>> I hope not :) In the worst case we could keep the
>> flush_hugetlb_tlb_range() in the !shared case. Suboptimal, but I am
>> sick and tired of dealing with this hugetlb mess.
>>
>>
>> Let me CC Ryan and Catalin for the arm64 pieces and Christophe on the
>> ppc pieces: See [1] where we convert away from some
>> flush_hugetlb_tlb_range() users to operate on mmu_gather using
>> * tlb_remove_huge_tlb_entry() for mremap() and mprotect(). Before we
>>      would only use it in __unmap_hugepage_range().
>> * tlb_flush_pmd_range() for unsharing of shared PMD tables. We already
>>      used that in one call path.
> 
> To clarify, powerpc does not select ARCH_WANT_HUGE_PMD_SHARE, so the
> second change does not apply to ppc.
> 

Okay, the existing hugetlb mmu_gather integration is hell on earth.

I *think* that, to get everything right (i.e., work around all the hacks we have), we might have to do a

	tlb_change_page_size(tlb, sz);
	tlb_start_vma(tlb, vma);

before adding anything to the tlb, plus a tlb_end_vma(tlb, vma) afterwards if we
don't immediately call tlb_finish_mmu() anyway.
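
IOW, roughly this calling sequence (just a sketch; h/mm/vma/ptep/address
stand in for whatever the concrete caller has at hand):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);
	tlb_change_page_size(&tlb, huge_page_size(h));
	tlb_start_vma(&tlb, vma);

	/* ... walk the range and queue the affected entries ... */
	tlb_remove_huge_tlb_entry(h, &tlb, ptep, address);

	tlb_end_vma(&tlb, vma);		/* only if not finishing right away */
	tlb_finish_mmu(&tlb);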

tlb_change_page_size() will set page_size accordingly (as required for
ppc IIUC).
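
If I remember correctly (so take this with a grain of salt), it
essentially boils down to:

static inline void tlb_change_page_size(struct mmu_gather *tlb,
					unsigned int page_size)
{
#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
	/* mixing page sizes in one batch forces an intermediate flush */
	if (tlb->page_size && tlb->page_size != page_size) {
		if (!tlb->fullmm && !tlb->need_flush_all)
			tlb_flush_mmu(tlb);
	}

	tlb->page_size = page_size;
#endif
}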

tlb_start_vma()->tlb_update_vma_flags() will set tlb->vma_huge for ...
some very good reason I am sure.
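
IIRC the actual reason is the generic tlb_flush() fallback, which fakes
up a vma so that flush_tlb_range() implementations that special-case
VM_HUGETLB behave; roughly:

	struct vm_area_struct vma = {
		.vm_mm = tlb->mm,
		.vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
			    (tlb->vma_huge ? VM_HUGETLB : 0),
	};

	flush_tlb_range(&vma, tlb->start, tlb->end);

Skipping tlb_start_vma() would then mean flushing with VM_HUGETLB
cleared on the archs that care.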


So something like the following might do the trick:

 From b0b854c2f91ce0931e1462774c92015183fb5b52 Mon Sep 17 00:00:00 2001
From: "David Hildenbrand (Red Hat)" <david@kernel.org>
Date: Sun, 21 Dec 2025 12:57:43 +0100
Subject: [PATCH] tmp

Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
---
  mm/hugetlb.c | 12 +++++++++++-
  mm/rmap.c    |  4 ++++
  2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7fef0b94b5d1e..14521210181c9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5113,6 +5113,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
  	/* Prevent race with file truncation */
  	hugetlb_vma_lock_write(vma);
  	i_mmap_lock_write(mapping);
+
+	tlb_change_page_size(&tlb, sz);
+	tlb_start_vma(&tlb, vma);
  	for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
  		src_pte = hugetlb_walk(vma, old_addr, sz);
  		if (!src_pte) {
@@ -5128,13 +5131,13 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
  			new_addr |= last_addr_mask;
  			continue;
  		}
-		tlb_remove_huge_tlb_entry(h, &tlb, src_pte, old_addr);
  
  		dst_pte = huge_pte_alloc(mm, new_vma, new_addr, sz);
  		if (!dst_pte)
  			break;
  
  		move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte, sz);
+		tlb_remove_huge_tlb_entry(h, &tlb, src_pte, old_addr);
  	}
  
  	tlb_flush_mmu_tlbonly(&tlb);
@@ -6416,6 +6419,8 @@ long hugetlb_change_protection(struct mmu_gather *tlb, struct vm_area_struct *vm
  
  	BUG_ON(address >= end);
  	flush_cache_range(vma, range.start, range.end);
+	tlb_change_page_size(tlb, psize);
+	tlb_start_vma(tlb, vma);
  
  	mmu_notifier_invalidate_range_start(&range);
  	hugetlb_vma_lock_write(vma);
@@ -6532,6 +6537,8 @@ long hugetlb_change_protection(struct mmu_gather *tlb, struct vm_area_struct *vm
  	hugetlb_vma_unlock_write(vma);
  	mmu_notifier_invalidate_range_end(&range);
  
+	tlb_end_vma(tlb, vma);
+
  	return pages > 0 ? (pages << h->order) : pages;
  }
  
@@ -7259,6 +7266,9 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
  	} else {
  		i_mmap_assert_write_locked(vma->vm_file->f_mapping);
  	}
+
+	tlb_change_page_size(&tlb, sz);
+	tlb_start_vma(&tlb, vma);
  	for (address = start; address < end; address += PUD_SIZE) {
  		ptep = hugetlb_walk(vma, address, sz);
  		if (!ptep)
diff --git a/mm/rmap.c b/mm/rmap.c
index d6799afe11147..27210bc6fb489 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2015,6 +2015,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
  					goto walk_abort;
  
  				tlb_gather_mmu(&tlb, mm);
+				tlb_change_page_size(&tlb, huge_page_size(hstate_vma(vma)));
+				tlb_start_vma(&tlb, vma);
  				if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) {
  					hugetlb_vma_unlock_write(vma);
  					huge_pmd_unshare_flush(&tlb, vma);
@@ -2413,6 +2415,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
  				}
  
  				tlb_gather_mmu(&tlb, mm);
+				tlb_change_page_size(&tlb, huge_page_size(hstate_vma(vma)));
+				tlb_start_vma(&tlb, vma);
  				if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) {
  					hugetlb_vma_unlock_write(vma);
  					huge_pmd_unshare_flush(&tlb, vma);
-- 
2.52.0



But now I'm staring at it and wonder whether we should just defer the TLB flushing changes
to a later point and only focus on the IPI flushes. Doing only that with mmu_gather
looks *really* weird, and I don't want to introduce some other mechanism just for that
batching purpose.

Hm ...

-- 
Cheers

David



Thread overview: 18+ messages
2025-12-12  7:10 [PATCH v2 0/4] mm/hugetlb: fixes for PMD table sharing (incl. using mmu_gather) David Hildenbrand (Red Hat)
2025-12-12  7:10 ` [PATCH v2 1/4] mm/hugetlb: fix hugetlb_pmd_shared() David Hildenbrand (Red Hat)
2025-12-12  7:10 ` [PATCH v2 2/4] mm/hugetlb: fix two comments related to huge_pmd_unshare() David Hildenbrand (Red Hat)
2025-12-19  4:44   ` Harry Yoo
2025-12-19  6:11     ` David Hildenbrand (Red Hat)
2025-12-19 11:20       ` Harry Yoo
2025-12-19 14:13         ` David Hildenbrand (Red Hat)
2025-12-19 21:37           ` Nadav Amit
2025-12-21  9:26             ` David Hildenbrand (Red Hat)
2025-12-12  7:10 ` [PATCH v2 3/4] mm/rmap: " David Hildenbrand (Red Hat)
2025-12-12  7:10 ` [PATCH v2 4/4] mm/hugetlb: fix excessive IPI broadcasts when unsharing PMD tables using mmu_gather David Hildenbrand (Red Hat)
2025-12-16 10:47   ` Lorenzo Stoakes
2025-12-19 12:37   ` Harry Yoo
2025-12-19 13:52     ` David Hildenbrand (Red Hat)
2025-12-19 13:59       ` David Hildenbrand (Red Hat)
2025-12-21 12:24         ` David Hildenbrand (Red Hat) [this message]
2025-12-22  2:09           ` Harry Yoo
2025-12-22 10:10             ` David Hildenbrand (Red Hat)
