From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
To: Pedro Falcato <pfalcato@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Jann Horn <jannh@google.com>,
David Hildenbrand <david@kernel.org>,
Dev Jain <dev.jain@arm.com>, Luke Yang <luyang@redhat.com>,
jhladky@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/4] mm/mprotect: move softleaf code out of the main function
Date: Thu, 19 Mar 2026 19:06:35 +0000 [thread overview]
Message-ID: <7217c8b1-1cc2-40c5-a513-7cede7cc1a73@lucifer.local>
In-Reply-To: <20260319183108.1105090-3-pfalcato@suse.de>
On Thu, Mar 19, 2026 at 06:31:06PM +0000, Pedro Falcato wrote:
> Move softleaf change_pte_range code into a separate function. This makes
> the change_pte_range() function (or where it inlines) a good bit
> smaller. Plus it lessens cognitive load when reading through the
> function.
>
> Signed-off-by: Pedro Falcato <pfalcato@suse.de>
Honestly I like this as a refactoring, the noinline notwithstanding, so:
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> ---
> mm/mprotect.c | 128 +++++++++++++++++++++++++++-----------------------
> 1 file changed, 68 insertions(+), 60 deletions(-)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 1bd0d4aa07c2..8d4fa38a8a26 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -211,6 +211,73 @@ static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
> commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte, ptent, nr_ptes, tlb);
> }
>
> +static noinline long change_pte_softleaf(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *pte, pte_t oldpte, unsigned long cp_flags)
> +{
> + bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
> + bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
> + softleaf_t entry = softleaf_from_pte(oldpte);
> + pte_t newpte;
> +
> + if (softleaf_is_migration_write(entry)) {
> + const struct folio *folio = softleaf_to_folio(entry);
> +
> + /*
> + * A protection check is difficult so
> + * just be safe and disable write
> + */
> + if (folio_test_anon(folio))
> + entry = make_readable_exclusive_migration_entry(
> + swp_offset(entry));
> + else
> + entry = make_readable_migration_entry(swp_offset(entry));
> + newpte = swp_entry_to_pte(entry);
> + if (pte_swp_soft_dirty(oldpte))
> + newpte = pte_swp_mksoft_dirty(newpte);
> + } else if (softleaf_is_device_private_write(entry)) {
> + /*
> + * We do not preserve soft-dirtiness. See
> + * copy_nonpresent_pte() for explanation.
> + */
> + entry = make_readable_device_private_entry(
> + swp_offset(entry));
> + newpte = swp_entry_to_pte(entry);
> + if (pte_swp_uffd_wp(oldpte))
> + newpte = pte_swp_mkuffd_wp(newpte);
> + } else if (softleaf_is_marker(entry)) {
> + /*
> + * Ignore error swap entries unconditionally,
> + * because any access should sigbus/sigsegv
> + * anyway.
> + */
> + if (softleaf_is_poison_marker(entry) ||
> + softleaf_is_guard_marker(entry))
> + return 0;
This just continues in the original:
if (softleaf_is_poison_marker(entry) ||
softleaf_is_guard_marker(entry))
continue;
So this is correct.
> + /*
> + * If this is uffd-wp pte marker and we'd like
> + * to unprotect it, drop it; the next page
> + * fault will trigger without uffd trapping.
> + */
> + if (uffd_wp_resolve) {
> + pte_clear(vma->vm_mm, addr, pte);
> + return 1;
This increments pages and continues in the original:
if (uffd_wp_resolve) {
pte_clear(vma->vm_mm, addr, pte);
pages++;
}
continue;
So this is correct.
> + }
> + } else {
> + newpte = oldpte;
> + }
> +
> + if (uffd_wp)
> + newpte = pte_swp_mkuffd_wp(newpte);
> + else if (uffd_wp_resolve)
> + newpte = pte_swp_clear_uffd_wp(newpte);
> +
> + if (!pte_same(oldpte, newpte)) {
> + set_pte_at(vma->vm_mm, addr, pte, newpte);
> + return 1;
This increments pages and is at the end of the loop in the original:
if (!pte_same(oldpte, newpte)) {
set_pte_at(vma->vm_mm, addr, pte, newpte);
pages++;
}
So this is correct.
> + }
> + return 0;
> +}
> +
> static long change_pte_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
> unsigned long end, pgprot_t newprot, unsigned long cp_flags)
> @@ -317,66 +384,7 @@ static long change_pte_range(struct mmu_gather *tlb,
> pages++;
> }
> } else {
> - softleaf_t entry = softleaf_from_pte(oldpte);
> - pte_t newpte;
> -
> - if (softleaf_is_migration_write(entry)) {
> - const struct folio *folio = softleaf_to_folio(entry);
> -
> - /*
> - * A protection check is difficult so
> - * just be safe and disable write
> - */
> - if (folio_test_anon(folio))
> - entry = make_readable_exclusive_migration_entry(
> - swp_offset(entry));
> - else
> - entry = make_readable_migration_entry(swp_offset(entry));
> - newpte = swp_entry_to_pte(entry);
> - if (pte_swp_soft_dirty(oldpte))
> - newpte = pte_swp_mksoft_dirty(newpte);
> - } else if (softleaf_is_device_private_write(entry)) {
> - /*
> - * We do not preserve soft-dirtiness. See
> - * copy_nonpresent_pte() for explanation.
> - */
> - entry = make_readable_device_private_entry(
> - swp_offset(entry));
> - newpte = swp_entry_to_pte(entry);
> - if (pte_swp_uffd_wp(oldpte))
> - newpte = pte_swp_mkuffd_wp(newpte);
> - } else if (softleaf_is_marker(entry)) {
> - /*
> - * Ignore error swap entries unconditionally,
> - * because any access should sigbus/sigsegv
> - * anyway.
> - */
> - if (softleaf_is_poison_marker(entry) ||
> - softleaf_is_guard_marker(entry))
> - continue;
> - /*
> - * If this is uffd-wp pte marker and we'd like
> - * to unprotect it, drop it; the next page
> - * fault will trigger without uffd trapping.
> - */
> - if (uffd_wp_resolve) {
> - pte_clear(vma->vm_mm, addr, pte);
> - pages++;
> - }
> - continue;
> - } else {
> - newpte = oldpte;
> - }
> -
> - if (uffd_wp)
> - newpte = pte_swp_mkuffd_wp(newpte);
> - else if (uffd_wp_resolve)
> - newpte = pte_swp_clear_uffd_wp(newpte);
> -
> - if (!pte_same(oldpte, newpte)) {
> - set_pte_at(vma->vm_mm, addr, pte, newpte);
> - pages++;
> - }
> + pages += change_pte_softleaf(vma, addr, pte, oldpte, cp_flags);
> }
> } while (pte += nr_ptes, addr += nr_ptes * PAGE_SIZE, addr != end);
> lazy_mmu_mode_disable();
> --
> 2.53.0
>