From: Dev Jain <dev.jain@arm.com>
To: David Hildenbrand <david@redhat.com>, akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, willy@infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
will@kernel.org, Liam.Howlett@oracle.com,
lorenzo.stoakes@oracle.com, vbabka@suse.cz, jannh@google.com,
anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com,
ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com,
quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
hughd@google.com, yang@os.amperecomputing.com, ziy@nvidia.com
Subject: Re: [PATCH v3 1/5] mm: Optimize mprotect() by batch-skipping PTEs
Date: Thu, 22 May 2025 11:13:52 +0530
Message-ID: <76afb4f3-79b5-4126-b408-614bb6202526@arm.com>
In-Reply-To: <912757c0-8a75-4307-a0bd-8755f6135b5a@redhat.com>
On 21/05/25 5:36 pm, David Hildenbrand wrote:
> On 19.05.25 09:48, Dev Jain wrote:
>
> Please highlight in the subject that this is only about
> MM_CP_PROT_NUMA handling.
Sure.
>
>> In case of prot_numa, there are various cases in which we can skip to
>> the next iteration. Since the skip condition is based on the folio and
>> not the PTEs, we can skip a PTE batch.
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>>  mm/mprotect.c | 36 +++++++++++++++++++++++++++++-------
>>  1 file changed, 29 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>> index 88608d0dc2c2..1ee160ed0b14 100644
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -83,6 +83,18 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>  	return pte_dirty(pte);
>>  }
>>
>> +static int mprotect_batch(struct folio *folio, unsigned long addr, pte_t *ptep,
>> +		pte_t pte, int max_nr_ptes)
>> +{
>> +	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +
>> +	if (!folio_test_large(folio) || (max_nr_ptes == 1))
>> +		return 1;
>> +
>> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr_ptes, flags,
>> +			NULL, NULL, NULL);
>> +}
>> +
>>  static long change_pte_range(struct mmu_gather *tlb,
>>  		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
>>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>> @@ -94,6 +106,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
>>  	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
>>  	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
>> +	int nr_ptes;
>>
>>  	tlb_change_page_size(tlb, PAGE_SIZE);
>>  	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> @@ -108,8 +121,10 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  	flush_tlb_batched_pending(vma->vm_mm);
>>  	arch_enter_lazy_mmu_mode();
>>  	do {
>> +		nr_ptes = 1;
>>  		oldpte = ptep_get(pte);
>>  		if (pte_present(oldpte)) {
>> +			int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
>>  			pte_t ptent;
>>
>>  			/*
>> @@ -126,15 +141,18 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  					continue;
>>
>>  				folio = vm_normal_folio(vma, addr, oldpte);
>> -				if (!folio || folio_is_zone_device(folio) ||
>> -				    folio_test_ksm(folio))
>> +				if (!folio)
>>  					continue;
>>
>> +				if (folio_is_zone_device(folio) ||
>> +				    folio_test_ksm(folio))
>> +					goto skip_batch;
>> +
>>  				/* Also skip shared copy-on-write pages */
>>  				if (is_cow_mapping(vma->vm_flags) &&
>>  				    (folio_maybe_dma_pinned(folio) ||
>>  				     folio_maybe_mapped_shared(folio)))
>> -					continue;
>> +					goto skip_batch;
>>
>>  				/*
>>  				 * While migration can move some dirty pages,
>> @@ -143,7 +161,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  				 */
>>  				if (folio_is_file_lru(folio) &&
>>  				    folio_test_dirty(folio))
>> -					continue;
>> +					goto skip_batch;
>>
>>  				/*
>>  				 * Don't mess with PTEs if page is already on the node
>> @@ -151,7 +169,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  				 */
>>  				nid = folio_nid(folio);
>>  				if (target_node == nid)
>> -					continue;
>> +					goto skip_batch;
>>
>>  				toptier = node_is_toptier(nid);
>>  				/*
>> @@ -159,8 +177,12 @@ static long change_pte_range(struct mmu_gather *tlb,
>>  				 * Skip scanning top tier node if normal numa
>>  				 * balancing is disabled
>>  				 */
>>  				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
>> -				    toptier)
>> +				    toptier) {
>> +skip_batch:
>> +					nr_ptes = mprotect_batch(folio, addr, pte,
>> +							oldpte, max_nr_ptes);
>>  					continue;
>> +				}
>
>
> I suggest
>
> a) not burying that skip_batch label in another if condition
>
> b) looking into factoring out prot_numa handling into a separate
> function first.
>
>
> Likely we want something like
>
> if (prot_numa) {
> 	nr_ptes = prot_numa_pte_range_skip_ptes(vma, addr, oldpte);
> 	if (nr_ptes)
> 		continue;
> }
>
> ... likely with a better function name,
I want to be able to reuse the folio from vm_normal_folio(), and we also
need nr_ptes to know how much to skip, so if there is no objection to
passing int *nr_ptes or struct folio **foliop to this new function, then
I'll carry on with your suggestion :)
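
To make this concrete, here is a rough, untested sketch of what I mean.
prot_numa_skip_ptes() is only a placeholder name, and I have carried over
just the checks visible in this patch (so e.g. the access-time handling
for memory tiering is left out here):

static int prot_numa_skip_ptes(struct vm_area_struct *vma, unsigned long addr,
		pte_t *ptep, pte_t oldpte, int target_node, int max_nr_ptes,
		struct folio **foliop)
{
	struct folio *folio;
	int nid;

	/* Avoid TLB flush if possible */
	if (pte_protnone(oldpte))
		return 1;

	folio = vm_normal_folio(vma, addr, oldpte);
	if (!folio)
		return 1;

	if (folio_is_zone_device(folio) || folio_test_ksm(folio))
		goto skip_batch;

	/* Also skip shared copy-on-write pages */
	if (is_cow_mapping(vma->vm_flags) &&
	    (folio_maybe_dma_pinned(folio) ||
	     folio_maybe_mapped_shared(folio)))
		goto skip_batch;

	/* Migration cannot move all dirty pages, so skip dirty file pages */
	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
		goto skip_batch;

	/* Don't mess with PTEs if page is already on the target node */
	nid = folio_nid(folio);
	if (target_node == nid)
		goto skip_batch;

	/* Skip scanning top tier node if normal numa balancing is disabled */
	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
	    node_is_toptier(nid))
		goto skip_batch;

	/* Caller will change this PTE: hand the folio back for reuse */
	*foliop = folio;
	return 0;

skip_batch:
	return mprotect_batch(folio, addr, ptep, oldpte, max_nr_ptes);
}

with the call site in change_pte_range() then looking like your snippet:

	if (prot_numa) {
		nr_ptes = prot_numa_skip_ptes(vma, addr, pte, oldpte,
				target_node, max_nr_ptes, &folio);
		if (nr_ptes)
			continue;
	}

A return value of 0 means "don't skip, change this PTE"; a non-zero
return is the number of PTEs covered by the folio batch to skip.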