From: Dev Jain <dev.jain@arm.com>
To: Pedro Falcato <pfalcato@suse.de>,
"David Hildenbrand (Arm)" <david@kernel.org>
Cc: Luke Yang <luyang@redhat.com>,
surenb@google.com, jhladky@redhat.com, akpm@linux-foundation.org,
Liam.Howlett@oracle.com, willy@infradead.org, vbabka@suse.cz,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [REGRESSION] mm/mprotect: 2x+ slowdown for >=400KiB regions since PTE batching (cac1db8c3aad)
Date: Fri, 20 Feb 2026 09:42:39 +0530
Message-ID: <484434ed-6300-450a-aacd-3e370e1c167c@arm.com>
In-Reply-To: <rtaao2lmzbmyugjeqdwhnacztjfgijjcax6itgst557qhqsnkr@iibocfiibsfh>
On 19/02/26 8:30 pm, Pedro Falcato wrote:
> On Thu, Feb 19, 2026 at 02:02:42PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/19/26 13:15, Pedro Falcato wrote:
>>> On Wed, Feb 18, 2026 at 01:24:28PM +0100, David Hildenbrand (Arm) wrote:
>>>> On 2/18/26 12:58, Pedro Falcato wrote:
>>>>> I don't understand what you're looking for. An mprotect-based workload? Those
>>>>> obviously don't really exist, apart from something like a JIT engine cranking
>>>>> out a lot of mprotect() calls in an aggressive fashion. Or perhaps the
>>>>> mprotect() tricks that our DB friends like to use sometimes (discussed in
>>>>> $OTHER_CONTEXTS), though those generally involve hugepages.
>>>>>
>>>> Anything besides a homemade micro-benchmark that highlights why we should
>>>> care about this exact fast and repeated sequence of events.
>>>>
>>>> I'm surprised that such a "large regression" does not show up in any other
>>>> non-home-made benchmark that people/bots are running. That's really what I
>>>> am questioning.
>>> I don't know, perhaps there isn't a will-it-scale test for this. That's
>>> alright. Even the standard will-it-scale and stress-ng tests people use
>>> to detect regressions usually have glaring problems and are insanely
>>> microbenchey.
>> My theory is that most heavy (high-frequency, where it would really hit performance)
>> mprotect users (like JITs) perform mprotect on very small ranges (e.g., a single page),
>> where all the other overhead (syscall, TLB flush) dominates.
>>
>> That's why I was wondering which use cases exist that behave similarly to the reproducer.
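
For illustration, the pattern I'd expect from such users is a single-page W^X
flip along these lines (purely hypothetical userspace code, name and all),
where the per-call overhead swamps the per-PTE work:

#include <string.h>
#include <sys/mman.h>

/* Hypothetical JIT-style emission: one mprotect() per emitted page. */
static void *emit_code_page(const void *code, size_t len)
{
	void *page;

	if (len > 4096)
		return NULL;
	page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return NULL;
	memcpy(page, code, len);
	/* Flip a single page from RW to RX: nothing for PTE batching to chew on. */
	if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
		return NULL;
	return page;
}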
>>
>>>> That said, I'm all for optimizing it if there is a real problem
>>>> there.
>>>>
>>>>> I don't see how this can justify large performance regressions in a system
>>>>> call, for something every-architecture-not-named-arm64 does not have.
>>>> Take a look at the reported performance improvements on AMD with large
>>>> folios.
>>> Sure, but pte-mapped 2M folios are almost a worst case (why not a PMD at that
>>> point...)
>> Well, 1M and all the way down will similarly benefit. 2M is just always the extreme case.
>>
>>>> The issue really is that small folios don't perform well, on any
>>>> architecture. But to detect large vs. small folios we need the ... folio.
>>>>
>>>> So once we optimize for small folios (== don't try to detect large folios)
>>>> we'll degrade large folios.
>>> I suspect it's not that huge of a deal. Worst case you can always provide a
>>> software PTE_CONT bit that would, e.g., be set when mapping a large folio. Or
>>> perhaps "if this pte has a PFN, and the next pte has PFN + 1, then we're
>>> probably in a large folio, thus do the proper batching stuff". I think that
>>> could satisfy everyone. There are heuristics we can use, and perhaps
>>> pte_batch_hint() does not need to be that simple and useless in the !arm64
>>> case then. I'll try to look into a cromulent solution for everyone.
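
FWIW, a rough (completely untested) sketch of that lookahead, using only generic
helpers and a made-up name, could look like:

/*
 * Hypothetical helper: guess that we are inside a large folio when the
 * next PTE maps PFN + 1. Cheap on !arm64 (one extra ptep_get()), but it
 * cannot distinguish a large folio from two order-0 folios that happen
 * to sit on physically contiguous pages (see further down the thread).
 */
static inline bool pte_maybe_large_folio(pte_t *ptep, pte_t pte, int max_nr_ptes)
{
	pte_t next;

	if (max_nr_ptes < 2)
		return false;
	next = ptep_get(ptep + 1);
	return pte_present(next) && pte_pfn(next) == pte_pfn(pte) + 1;
}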
>> Software bits are generally -ENOSPC, but maybe we are lucky on some architectures.
>>
>> We'd run into similar issues as aarch64 does when shattering contiguity etc., so
>> there is quite some complexity to it that might not be worth it.
>>
>>> (shower thought: do we always get wins when batching large folios, or do these
>>> need to be of a significant order to get wins?)
>> For mprotect(), I don't know. For fork() and unmap() batching there was always a
>> win even with order-2 folios. (never measured order-1, because they don't apply to
>> anonymous memory)
>>
>> I assume for mprotect() it depends on whether we really needed the folio before, or
>> whether it's just not required, as is the case for mremap().
>>
>>> But personally I would err on the side of small folios, like we did for mremap()
>>> a few months back.
>> The following (completely untested) might make most people happy by looking up
>> the folio only if (a) it is required or (b) the architecture indicates that there is a large folio.
>>
>> I assume for some large folio use cases it might perform worse than before. But for
>> the write-upgrade case with large anon folios the performance improvement should remain.
>>
>> Not sure if some regression would remain for which we'd have to special-case the implementation
>> to take a separate path for nr_ptes == 1.
>>
>> Maybe you had something similar already:
>>
>>
>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>> index c0571445bef7..0b3856ad728e 100644
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -211,6 +211,25 @@ static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
>>  		commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte, ptent, nr_ptes, tlb);
>>  }
>>  
>> +static bool mprotect_wants_folio_for_pte(unsigned long cp_flags, pte_t *ptep,
>> +		pte_t pte, unsigned long max_nr_ptes)
>> +{
>> +	/* NUMA hinting needs to decide whether working on the folio is ok. */
>> +	if (cp_flags & MM_CP_PROT_NUMA)
>> +		return true;
>> +
>> +	/* We want the folio for possible write-upgrade. */
>> +	if (!pte_write(pte) && (cp_flags & MM_CP_TRY_CHANGE_WRITABLE))
>> +		return true;
>> +
>> +	/* There is nothing to batch. */
>> +	if (max_nr_ptes == 1)
>> +		return false;
>> +
>> +	/* For guaranteed large folios it's usually a win. */
>> +	return pte_batch_hint(ptep, pte) > 1;
>> +}
>> +
>>  static long change_pte_range(struct mmu_gather *tlb,
>>  		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
>>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>> @@ -241,16 +260,18 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  			const fpb_t flags = FPB_RESPECT_SOFT_DIRTY | FPB_RESPECT_WRITE;
>>  			int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>>  			struct folio *folio = NULL;
>> -			struct page *page;
>> +			struct page *page = NULL;
>>  			pte_t ptent;
>>  
>>  			/* Already in the desired state. */
>>  			if (prot_numa && pte_protnone(oldpte))
>>  				continue;
>>  
>> -			page = vm_normal_page(vma, addr, oldpte);
>> -			if (page)
>> -				folio = page_folio(page);
>> +			if (mprotect_wants_folio_for_pte(cp_flags, pte, oldpte, max_nr_ptes)) {
>> +				page = vm_normal_page(vma, addr, oldpte);
>> +				if (page)
>> +					folio = page_folio(page);
>> +			}
>>  
>>  			/*
>>  			 * Avoid trapping faults against the zero or KSM
> Yes, this is a better version than what I had; I'll take this hunk if you don't mind :)
> Note that it still doesn't handle large folios on !contpte architectures, which
> is partly the issue. I suspect some sort of PTE lookahead might work well in
> practice, aside from cases where, e.g., two order-0 folios that are
> contiguous in memory are mapped separately.
>
> Though perhaps inlining vm_normal_folio() might also be interesting and side-step
> most of the issue. I'll play around with that.
Indeed, this is one option.
You can also experiment with
https://lore.kernel.org/all/20250506050056.59250-3-dev.jain@arm.com/
which approximates the presence of a large folio when the pfns are contiguous.
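
As for inlining vm_normal_folio(): today it is a small out-of-line wrapper
around vm_normal_page(), roughly (a sketch, not the verbatim kernel code):

static inline struct folio *vm_normal_folio(struct vm_area_struct *vma,
					    unsigned long addr, pte_t pte)
{
	struct page *page = vm_normal_page(vma, addr, pte);

	return page ? page_folio(page) : NULL;
}

so making it header-inline should at least shave the extra function call off
the per-PTE path.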
Thread overview: 23+ messages
2026-02-13 15:08 Luke Yang
2026-02-13 15:47 ` David Hildenbrand (Arm)
2026-02-13 16:24 ` Pedro Falcato
2026-02-13 17:16 ` Suren Baghdasaryan
2026-02-13 17:26 ` David Hildenbrand (Arm)
2026-02-16 10:12 ` Dev Jain
2026-02-16 14:56 ` Pedro Falcato
2026-02-17 17:43 ` Luke Yang
2026-02-17 18:08 ` Pedro Falcato
2026-02-18 5:01 ` Dev Jain
2026-02-18 10:06 ` Pedro Falcato
2026-02-18 10:38 ` Dev Jain
2026-02-18 10:46 ` David Hildenbrand (Arm)
2026-02-18 11:58 ` Pedro Falcato
2026-02-18 12:24 ` David Hildenbrand (Arm)
2026-02-19 12:15 ` Pedro Falcato
2026-02-19 13:02 ` David Hildenbrand (Arm)
2026-02-19 15:00 ` Pedro Falcato
2026-02-19 15:29 ` David Hildenbrand (Arm)
2026-02-20 4:12 ` Dev Jain [this message]
2026-02-18 11:52 ` Pedro Falcato
2026-02-18 4:50 ` Dev Jain
2026-02-18 13:29 ` David Hildenbrand (Arm)