From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Pedro Falcato <pfalcato@suse.de>
Cc: Dev Jain <dev.jain@arm.com>, Luke Yang <luyang@redhat.com>,
	surenb@google.com, jhladky@redhat.com, akpm@linux-foundation.org,
	Liam.Howlett@oracle.com, willy@infradead.org, vbabka@suse.cz,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [REGRESSION] mm/mprotect: 2x+ slowdown for >=400KiB regions since PTE batching (cac1db8c3aad)
Date: Thu, 19 Feb 2026 16:29:58 +0100
Message-ID: <eb66a3ff-a214-46b0-85c4-b3fb6eac7efd@kernel.org>
In-Reply-To: <rtaao2lmzbmyugjeqdwhnacztjfgijjcax6itgst557qhqsnkr@iibocfiibsfh>

On 2/19/26 16:00, Pedro Falcato wrote:
> On Thu, Feb 19, 2026 at 02:02:42PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/19/26 13:15, Pedro Falcato wrote:
>>>
>>> I don't know, perhaps there isn't a will-it-scale test for this. That's
>>> alright. Even the standard will-it-scale and stress-ng tests people use
>>> to detect regressions usually have glaring problems and are insanely
>>> microbenchey.
>>
>> My theory is that most heavy mprotect() users (like JITs), meaning those calling it
>> frequently enough that it would really hit performance, work on very small ranges
>> (e.g., a single page), where all the other overhead (syscall, TLB flush) dominates.
>>
>> That's why I was wondering which use cases exist that behave similarly to the reproducer.
>>
>>>
>>>
>>> Sure, but pte-mapped 2M folios are almost a worst case (why not a PMD at that
>>> point...)
>>
>> Well, 1M and all the way down will similarly benefit. 2M is just always the extreme case.
>>
>>>
>>>
>>> I suspect it's not that huge of a deal. Worst case you can always provide a
>>> software PTE_CONT bit that would e.g. be set when mapping a large folio. Or
>>> perhaps "if this pte has a PFN, and the next pte has PFN + 1, then we're
>>> probably in a large folio, thus do the proper batching stuff". I think that
>>> could satisfy everyone. There are heuristics we can use, and perhaps
>>> pte_batch_hint() does not need to be that simple and useless in the !arm64
>>> case then. I'll try to look into a cromulent solution for everyone.
>>
>> Software bits are generally -ENOSPC, but maybe we are lucky on some architectures.
>>
>> We'd run into issues similar to aarch64's when shattering contiguity etc., so
>> there is quite some complexity to it that might not be worth it.
>>
>>>
>>> (shower thought: do we always get wins when batching large folios, or do these
>>> need to be of a significant order to get wins?)
>>
>> For mprotect(), I don't know. For fork() and unmap() batching there was always a
>> win, even with order-2 folios. (I never measured order-1, because order-1 folios
>> don't apply to anonymous memory.)
>>
>> I assume for mprotect() it depends on whether we really needed the folio in the
>> first place, or whether it's just not required, like for mremap().
>>
>>>
>>> But personally I would err on the side of small folios, like we did for mremap()
>>> a few months back.
>>
>> The following (completely untested) might make most people happy by looking up
>> the folio only if (a) it is required or (b) the architecture indicates a large folio.
>>
>> I assume for some large folio use cases it might perform worse than before. But for
>> the write-upgrade case with large anon folios the performance improvement should remain.
>>
>> Not sure whether some regression would remain for which we'd have to special-case
>> the implementation with a separate path for nr_ptes == 1.
>>
>> Maybe you had something similar already:
>>
>>
>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>> index c0571445bef7..0b3856ad728e 100644
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -211,6 +211,25 @@ static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
>>          commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte, ptent, nr_ptes, tlb);
>>   }
>> +static bool mprotect_wants_folio_for_pte(unsigned long cp_flags, pte_t *ptep,
>> +               pte_t pte, unsigned long max_nr_ptes)
>> +{
>> +       /* NUMA hinting needs to decide whether working on the folio is ok. */
>> +       if (cp_flags & MM_CP_PROT_NUMA)
>> +               return true;
>> +
>> +       /* We want the folio for possible write-upgrade. */
>> +       if (!pte_write(pte) && (cp_flags & MM_CP_TRY_CHANGE_WRITABLE))
>> +               return true;
>> +
>> +       /* There is nothing to batch. */
>> +       if (max_nr_ptes == 1)
>> +               return false;
>> +
>> +       /* For guaranteed large folios it's usually a win. */
>> +       return pte_batch_hint(ptep, pte) > 1;
>> +}
>> +
>>   static long change_pte_range(struct mmu_gather *tlb,
>>                  struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
>>                  unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>> @@ -241,16 +260,18 @@ static long change_pte_range(struct mmu_gather *tlb,
>>                          const fpb_t flags = FPB_RESPECT_SOFT_DIRTY | FPB_RESPECT_WRITE;
>>                          int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>>                          struct folio *folio = NULL;
>> -                       struct page *page;
>> +                       struct page *page = NULL;
>>                          pte_t ptent;
>>                          /* Already in the desired state. */
>>                          if (prot_numa && pte_protnone(oldpte))
>>                                  continue;
>> -                       page = vm_normal_page(vma, addr, oldpte);
>> -                       if (page)
>> -                               folio = page_folio(page);
>> +                       if (mprotect_wants_folio_for_pte(cp_flags, pte, oldpte, max_nr_ptes)) {
>> +                               page = vm_normal_page(vma, addr, oldpte);
>> +                               if (page)
>> +                                       folio = page_folio(page);
>> +                       }
>>                          /*
>>                           * Avoid trapping faults against the zero or KSM
>>
> 
> Yes, this is a better version than what I had; I'll take this hunk if you don't mind :)

Not at all, thanks for working on this.

> Note that it still doesn't handle large folios on !contpte architectures, which
> is partly the issue. 

It should when we really need the folio (write-upgrade, NUMA faults). So 
I guess the benchmark with THP will still show the benefit (as it does 
the write upgrade).

> I suspect some sort of PTE lookahead might work well in
> practice, aside from the issues where e.g. two order-0 folios that are
> contiguous in memory are separately mapped.
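
For concreteness, such a lookahead could be a minimal, untested sketch
along these lines (the helper name is made up; ptep_get(), pte_present()
and pte_pfn() are the usual primitives):

/*
 * Hypothetical helper sketching the lookahead idea: guess that we are
 * inside a large folio when the next PTE maps the next PFN. This has
 * exactly the false positive mentioned above: two separately mapped
 * order-0 folios that happen to be physically contiguous.
 */
static inline bool pte_maybe_large_folio(pte_t *ptep, pte_t pte,
					 unsigned int max_nr_ptes)
{
	pte_t next;

	/* Nothing left in this range to look ahead at. */
	if (max_nr_ptes == 1)
		return false;

	next = ptep_get(ptep + 1);
	return pte_present(next) && pte_pfn(next) == pte_pfn(pte) + 1;
}
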
> 
> Though perhaps inlining vm_normal_folio() might also be interesting and side-step
> most of the issue. I'll play around with that.


I'd assume that it could also help fork()/munmap() etc. For common 
architectures with vmemmap, vm_normal_page() boils down to very little code.
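
Heavily simplified sketch (arguments trimmed; the real vm_normal_page()
in mm/memory.c also handles VM_PFNMAP, VM_MIXEDMAP and the zero-page
corner cases), the fast path on architectures with pte_special() is
roughly:

static struct page *vm_normal_page_sketch(pte_t pte)
{
	/* Special mappings (zero page, PFN maps, ...) have no "normal" page. */
	if (pte_special(pte))
		return NULL;
	/* With vmemmap, pfn_to_page() is just "vmemmap + pfn". */
	return pfn_to_page(pte_pfn(pte));
}
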

-- 
Cheers,

David

