From: Jann Horn <jannh@google.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: David Hildenbrand <david@redhat.com>,
	"Uschakow, Stanislav" <suschako@amazon.de>,
	 "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"trix@redhat.com" <trix@redhat.com>,
	 "ndesaulniers@google.com" <ndesaulniers@google.com>,
	"nathan@kernel.org" <nathan@kernel.org>,
	 "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"muchun.song@linux.dev" <muchun.song@linux.dev>,
	 "mike.kravetz@oracle.com" <mike.kravetz@oracle.com>,
	 "liam.howlett@oracle.com" <liam.howlett@oracle.com>,
	"osalvador@suse.de" <osalvador@suse.de>,
	 "vbabka@suse.cz" <vbabka@suse.cz>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: Bug: Performance regression in 1013af4f585f: mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race
Date: Mon, 20 Oct 2025 17:33:22 +0200
Message-ID: <CAG48ez3paQTctuAO1bXWarzvRK33kyLjHbQ6zsQLTWya8Y1=dQ@mail.gmail.com>
In-Reply-To: <81d096fb-f2c2-4b26-ab1b-486001ee2cac@lucifer.local>

On Mon, Oct 20, 2025 at 5:01 PM Lorenzo Stoakes
<lorenzo.stoakes@oracle.com> wrote:
> On Thu, Oct 16, 2025 at 08:44:57PM +0200, Jann Horn wrote:
> > On Thu, Oct 9, 2025 at 9:40 AM David Hildenbrand <david@redhat.com> wrote:
> > > On 01.09.25 12:58, Jann Horn wrote:
> > > > Hi!
> > > >
> > > > On Fri, Aug 29, 2025 at 4:30 PM Uschakow, Stanislav <suschako@amazon.de> wrote:
> > > >> We have observed a huge latency increase using `fork()` after ingesting the CVE-2025-38085 fix, which leads to commit `1013af4f585f: mm/hugetlb: fix huge_pmd_unshare() vs GUP-fast race`. On large machines with 1.5TB of memory and 196 cores, mmapping 1.2TB of shared memory and forking dozens or hundreds of times, we see an increase in execution times by a factor of 4. The reproducer is at the end of the email.
> > > >
> > > > Yeah, every 1G virtual address range you unshare on unmap will do an
> > > > extra synchronous IPI broadcast to all CPU cores, so it's not very
> > > > surprising that doing this would be a bit slow on a machine with 196
> > > > cores.
> > > >
> > > >> My observation/assumption is:
> > > >>
> > > >> each child touches 100 random pages and despawns
> > > >> on each despawn `huge_pmd_unshare()` is called
> > > >> each call to `huge_pmd_unshare()` synchronizes all threads using `tlb_remove_table_sync_one()`, leading to the regression
> > > >
> > > > Yeah, makes sense that that'd be slow.
> > > >
> > > > There are probably several ways this could be optimized - like maybe
> > > > changing tlb_remove_table_sync_one() to rely on the MM's cpumask
> > > > (though that would require thinking about whether this interacts with
> > > > remote MM access somehow), or batching the refcount drops for hugetlb
> > > > shared page tables through something like struct mmu_gather, or doing
> > > > something special for the unmap path, or changing the semantics of
> > > > hugetlb page tables such that they can never turn into normal page
> > > > tables again. However, I'm not planning to work on optimizing this.
> > >
> > > I'm currently looking at the fix and what sticks out is "Fix it with an
> > > explicit broadcast IPI through tlb_remove_table_sync_one()".
> > >
> > > (I don't understand how the page table can be used for "normal,
> > > non-hugetlb". I could only see how it is used for the remaining user for
> > > hugetlb stuff, but that's a different question)
> >
> > If I remember correctly:
> > When a hugetlb shared page table drops to refcount 1, it turns into a
> > normal page table. If you then afterwards split the hugetlb VMA, unmap
> > one half of it, and place a new unrelated VMA in its place, the same
> > page table will be reused for PTEs of this new unrelated VMA.
> >
> > So the scenario would be:
> >
> > 1. Initially, we have a hugetlb shared page table covering 1G of
> > address space which maps hugetlb 2M pages, which is used by two
> > hugetlb VMAs in different processes (processes P1 and P2).
> > 2. A thread in P2 begins a gup_fast() walk in the hugetlb region, and
> > walks down through the PUD entry that points to the shared page table,
> > then when it reaches the loop in gup_fast_pmd_range() gets interrupted
> > for a while by an NMI or preempted by the hypervisor or something.
> > 3. P2 removes its VMA, and the hugetlb shared page table effectively
> > becomes a normal page table in P1.
>
> This is a bit confusing: are we talking about two threads in P2 on different CPUs?
>
> P2/T1 on CPU A is doing the gup_fast() walk,
> P2/T2 on CPU B is simultaneously 'removing' this VMA?

Ah, yes.

> Because surely the interrupts being disabled on CPU A means that ordinary
> preemption won't happen, right?

Yeah.

> By remove what do you mean? Unmap? But won't this result in a TLB flush synced
> by IPI that is stalled by P2's CPU having interrupts disabled?

The case I had in mind is munmap(). This is only an issue on platforms
where TLB flushes can be done without IPI. That includes:

 - KVM guests on x86 (where TLB flush IPIs can be elided if the target
vCPU has been preempted by the host, in which case the host promises
to do a TLB flush on guest re-entry)
 - modern AMD CPUs with INVLPGB
 - arm64

That is the whole point of tlb_remove_table_sync_one() - it forces an
IPI on architectures where TLB flush doesn't guarantee an IPI.

(The config option "CONFIG_MMU_GATHER_RCU_TABLE_FREE", which is only
needed on architectures that don't guarantee that an IPI is involved
in TLB flushing, is set on the major architectures nowadays -
unconditionally on x86 and arm64, and in SMP builds of 32-bit arm.)
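
(Concretely, tlb_remove_table_sync_one() is just a synchronous dummy
IPI broadcast; a minimal sketch of what mm/mmu_gather.c does:)

    static void tlb_remove_table_smp_sync(void *arg)
    {
            /* Simply deliver the interrupt */
    }

    void tlb_remove_table_sync_one(void)
    {
            /*
             * Blocks until every other CPU has taken the IPI, which
             * means nobody is still inside an interrupts-off
             * gup_fast() walk that started before this point.
             */
            smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
    }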

> Or is it removed in the sense of hugetlb? As in something that invokes
> huge_pmd_unshare()?

I think that could also trigger it, though I wasn't thinking of that case.

> But I guess this doesn't matter as the page table teardown will succeed, just
> the final tlb_finish_mmu() will stall.
>
> And I guess GUP fast is trying to protect against the clear down by checking pmd
> != *pmdp.

The pmd recheck is done because of THP, IIRC because THP can deposit
and reuse page tables without following the normal page table life
cycle.
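
(The deposit machinery I mean, very roughly: a PTE table is
preallocated at THP fault time, stashed under the huge PMD, and
pulled back out and reused when the huge mapping is split again:)

    pgtable = pte_alloc_one(mm);
    pgtable_trans_huge_deposit(mm, pmd, pgtable);   /* at THP fault */
    ...
    pgtable = pgtable_trans_huge_withdraw(mm, pmd); /* at THP split */
    pmd_populate(mm, pmd, pgtable);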

> > 4. Then P1 splits the hugetlb VMA in the middle (at a 2M boundary),
> > leaving two VMAs VMA1 and VMA2.
> > 5. P1 unmaps VMA1, and creates a new VMA (VMA3) in its place, for
> > example an anonymous private VMA.
>
> Hmm, can it though?
>
> P1's mmap write lock will be held, and the VMA lock will be held too for VMA1.
>
> In vms_complete_munmap_vmas(), vms_clear_ptes() will stall on tlb_finish_mmu()
> for IPI-synced architectures, and in that case the unmap won't finish and the
> mmap write lock won't be released, so nobody can map a new VMA yet, can they?

Yeah, I think it can't happen on configurations that always use IPI
for TLB synchronization. My patch also doesn't change anything on
those architectures - tlb_remove_table_sync_one() is a no-op on
architectures without CONFIG_MMU_GATHER_RCU_TABLE_FREE.
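
(See include/asm-generic/tlb.h; roughly:)

    #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
    void tlb_remove_table_sync_one(void);
    #else
    /* IPI-based TLB flushes already provide the synchronization */
    static inline void tlb_remove_table_sync_one(void) { }
    #endif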

> > 6. P1 populates VMA3 with page table entries.
>
> ofc this requires the mmap/vma write lock above to be released first.
>
> > 7. The gup_fast() walk in P2 continues, and gup_fast_pmd_range() now
> > uses the new PMD/PTE entries created for VMA3.
> >
> > > How does the fix work when an architecture does not issue IPIs for TLB
> > > shootdown? To handle gup-fast on these architectures, we use RCU.
> >
> > gup-fast disables interrupts, which synchronizes against both RCU and IPI.
> >
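(To elaborate on that: the whole lockless walk runs under
local_irq_save(), so a synchronous IPI broadcast can't complete while
any CPU is still in the middle of such a walk. Roughly, from
mm/gup.c - the function names have shifted around between versions:)

    local_irq_save(flags);
    gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
    local_irq_restore(flags);
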
> > > So I'm wondering whether we use RCU somehow.
> > >
> > > But note that in gup_fast_pte_range(), we are validating whether the PMD
> > > changed:
> > >
> > > if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
> > >      unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
> > >         gup_put_folio(folio, 1, flags);
> > >         goto pte_unmap;
> > > }
> > >
> > >
> > > So in case the page table got reused in the meantime, we should just
> > > back off and be fine, right?
> >
> > The shared page table is mapped with a PUD entry, and we don't check
> > whether the PUD entry changed here.
>
> Could we simply put a PUD check in there sensibly?

Uuuh... maybe? But I'm not sure there is a way to express the safety
rules after that change any more nicely than we can express the
current ones; it feels like we're just tacking on an increasing
number of special cases. As I understand it, the current rules are
something like:

Freeing a page table needs RCU delay or IPI to synchronize against
gup_fast(). Randomly moving page tables to different locations (which
khugepaged does) is specially allowed only for PTE tables, thanks to
the PMD entry recheck. mremap() is kind of a weird case because it
can also move PMD tables without locking, but that's fine because
nothing in the region covered by the source virtual address range can
be part of a VMA other than the VMA being moved, so userspace has no
legitimate reason to access it.
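
To be explicit about what that suggestion would look like (purely
hypothetical, and it would mean plumbing the PUD value and pointer
down into gup_fast_pte_range(), which doesn't see them today):

    /* hypothetical: also recheck the PUD we walked through, so a
     * shared-table-turned-normal-table reuse at the PUD level is
     * caught the same way the PMD-level reuse is */
    if (unlikely(pud_val(pud) != pud_val(*pudp)) ||
        unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
        unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
            gup_put_folio(folio, 1, flags);
            goto pte_unmap;
    }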

