From: Ryan Roberts <ryan.roberts@arm.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Dev Jain <dev.jain@arm.com>,
	akpm@linux-foundation.org, david@redhat.com, willy@infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	catalin.marinas@arm.com, will@kernel.org,
	Liam.Howlett@oracle.com, vbabka@suse.cz, jannh@google.com,
	anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com,
	ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com,
	quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
	yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
	hughd@google.com, yang@os.amperecomputing.com, ziy@nvidia.com
Subject: Re: [PATCH v4 3/4] mm: Optimize mprotect() by PTE-batching
Date: Tue, 1 Jul 2025 09:30:51 +0100	[thread overview]
Message-ID: <3776ac82-4cd5-41e7-9b96-137a1ae6f473@arm.com> (raw)
In-Reply-To: <5900e978-6349-4a3d-a384-758889b678a0@lucifer.local>

On 01/07/2025 09:15, Lorenzo Stoakes wrote:
> On Tue, Jul 01, 2025 at 09:03:27AM +0100, Ryan Roberts wrote:
>>>
>>> ////// end of Lorenzo's suggestion //////
>>>
>>> You can obviously modify this to change other stuff like whether you feed back
>>> the PAE or not in the private case for use in your code.
>>
>> This suggestion for this part of the problem looks much cleaner!
> 
> Thanks :)
> 
>>
>> Sorry; this whole struct tristate thing was my idea. I never really liked it but
>> I was more focussed on trying to illustrate the big picture flow that I thought
>> would work well with a batch and sub-batches (which it seems below that you
>> hate... but let's talk about that down there).
> 
> Yeah, this is fiddly stuff so I get it as a sort of pseudocode, but as code
> obviously I've made my feelings known haha.
> 
> It seems that we can apply the fundamental underlying idea without needing to do
> it this way at any rate so we should be good.
> 
>>>>
>>>>  static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
>>>> -		pte_t *ptep, pte_t pte, int max_nr_ptes)
>>>> +		pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t switch_off_flags)
>>>
>>> This last parameter is pretty horrible. It's a negative mask so now you're
>>> passing 'ignore soft dirty' to the function meaning 'don't ignore it'. This is
>>> just planting land mines.
>>>
>>> Obviously David's flag changes will also alter all this.
>>>
>>> Just add a boolean re: soft dirty.
>>
>> Dev had a boolean for this in the last round. I've seen various functions expand
>> over time with increasing numbers of bool flags. So I asked to convert to a
>> flags parameter and just pass in the flags we need. Then it's a bit more future
>> proof and self documenting. (For the record I dislike the "switch_off_flags"
>> approach taken here).
> 
> Yeah, but we can change this when it needs to be changed. When it comes to
> internal non-uAPI stuff I don't think we need to be too worried about
> future-proofing like this, at least not just yet.
> 
> Do not fear the future churn... ;)
> 
> I mean I guess David's new flags will make this less egregious anyway.
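
For concreteness, the contrast here is roughly between these two shapes
(illustrative signatures only; variant (b) is not the actual patch):

/*
 * (a) as posted: a negative mask, so passing FPB_IGNORE_SOFT_DIRTY
 * actually means "do NOT ignore soft dirty".
 */
static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t switch_off_flags);

/*
 * (b) positive mask: the caller passes exactly the fpb_t flags it wants
 * applied, e.g. FPB_IGNORE_SOFT_DIRTY when it really does want soft dirty
 * ignored.
 */
static int mprotect_folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *ptep, pte_t pte, int max_nr_ptes, fpb_t flags);

Either way, the intent should read directly off the call site.
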
> 
>>>>  			oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);
>>>>  			ptent = pte_modify(oldpte, newprot);
>>>>
>>>> @@ -227,15 +294,39 @@ static long change_pte_range(struct mmu_gather *tlb,
>>>>  			 * example, if a PTE is already dirty and no other
>>>>  			 * COW or special handling is required.
>>>>  			 */
>>>> -			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>>> -			    !pte_write(ptent) &&
>>>> -			    can_change_pte_writable(vma, addr, ptent))
>>>> -				ptent = pte_mkwrite(ptent, vma);
>>>> -
>>>> -			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr_ptes);
>>>> -			if (pte_needs_flush(oldpte, ptent))
>>>> -				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
>>>> -			pages++;
>>>> +			set_write = (cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
>>>> +				    !pte_write(ptent);
>>>> +			if (set_write)
>>>> +				set_write = maybe_change_pte_writable(vma, addr, ptent, folio);
>>>> +
>>>> +			while (nr_ptes) {
>>>> +				if (set_write == TRI_MAYBE) {
>>>> +					sub_nr_ptes = anon_exclusive_batch(folio,
>>>> +						pgidx, nr_ptes, &sub_set_write);
>>>> +				} else {
>>>> +					sub_nr_ptes = nr_ptes;
>>>> +					sub_set_write = (set_write == TRI_TRUE);
>>>> +				}
>>>> +
>>>> +				if (sub_set_write)
>>>> +					newpte = pte_mkwrite(ptent, vma);
>>>> +				else
>>>> +					newpte = ptent;
>>>> +
>>>> +				modify_prot_commit_ptes(vma, addr, pte, oldpte,
>>>> +							newpte, sub_nr_ptes);
>>>> +				if (pte_needs_flush(oldpte, newpte))
>>>> +					tlb_flush_pte_range(tlb, addr,
>>>> +						sub_nr_ptes * PAGE_SIZE);
>>>> +
>>>> +				addr += sub_nr_ptes * PAGE_SIZE;
>>>> +				pte += sub_nr_ptes;
>>>> +				oldpte = pte_advance_pfn(oldpte, sub_nr_ptes);
>>>> +				ptent = pte_advance_pfn(ptent, sub_nr_ptes);
>>>> +				nr_ptes -= sub_nr_ptes;
>>>> +				pages += sub_nr_ptes;
>>>> +				pgidx += sub_nr_ptes;
>>>> +			}
>>>
>>> I hate hate hate having this loop here, let's abstract this please.
>>>
>>> I mean I think we can just use mprotect_folio_pte_batch() no? It's not
>>> abstracting much here, and we can just do all this handling there. Maybe have to
>>> pass in a bunch more params, but it saves us having to do all this.
>>
>> In an ideal world we would flatten and just have mprotect_folio_pte_batch()
>> return the batch size considering all the relevant PTE bits AND the
>> AnonExclusive bit on the pages. IIRC one of Dev's earlier versions modified the
>> core folio_pte_batch() function to also look at the AnonExclusive bit, but I
>> really disliked changing that core function (I think others did too?).
> 
> Yeah let's not change the core function.
> 
> My suggestion is to have mprotect_folio_pte_batch() do this.
> 
>>
>> So barring that approach, we are really only left with the batch and sub-batch
>> approach - although, yes, it could be abstracted more. We could maintain a
>> context struct that persists across all calls to mprotect_folio_pte_batch() and
>> it can use that to keep its state to remember if we are in the middle of a
>> sub-batch and decide either to call folio_pte_batch() to get a new batch, or
>> call anon_exclusive_batch() to get the next sub-batch within the current batch.
>> But that started to feel overly abstracted to me.
> 
> Having this nested batch/sub-batch loop really feels worse. You just get lost in
> the complexity here very easily.
> 
> But I'm also not sure we need to maintain _that_ much state?
> 
> We're already looping over all of the PTEs here, so abstracting _the entire
> loop_ and all the sub-batch stuff to another function (sensibly
> mprotect_folio_pte_batch(), I think) so that it handles this for you makes a
> ton of sense.

So effectively turn mprotect_folio_pte_batch() into an iterator; have a struct
and a function to init the struct for the number of ptes we want to iterate
over, then a per-iteration function that progressively returns batches?

Then we just have a simple loop here that gets the next batch and processes it?
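
Something like this, perhaps? Purely a sketch: the struct and function names
are made up, but it reuses the tristate and anon_exclusive_batch() pieces
from the patch:

struct prot_batch_state {
	struct folio *folio;
	int pgidx;		/* page index within the folio */
	int nr_ptes;		/* PTEs left in the current batch */
	int tri_set_write;	/* TRI_TRUE / TRI_FALSE / TRI_MAYBE */
};

static void prot_batch_init(struct prot_batch_state *state,
		struct folio *folio, int pgidx, int nr_ptes, int tri_set_write)
{
	state->folio = folio;
	state->pgidx = pgidx;
	state->nr_ptes = nr_ptes;
	state->tri_set_write = tri_set_write;
}

/* Returns the size of the next sub-batch, or 0 once the batch is consumed. */
static int prot_batch_next(struct prot_batch_state *state, bool *set_write)
{
	int nr;

	if (!state->nr_ptes)
		return 0;

	if (state->tri_set_write == TRI_MAYBE) {
		/* Next run of pages with a uniform AnonExclusive state. */
		nr = anon_exclusive_batch(state->folio, state->pgidx,
					  state->nr_ptes, set_write);
	} else {
		nr = state->nr_ptes;
		*set_write = (state->tri_set_write == TRI_TRUE);
	}

	state->pgidx += nr;
	state->nr_ptes -= nr;
	return nr;
}

change_pte_range() would then keep a single flat loop:

	while ((sub_nr_ptes = prot_batch_next(&state, &sub_set_write))) {
		/* mkwrite if sub_set_write, then commit, flush, advance. */
	}

That keeps all the sub-batch state in one place without the nested loop.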

> 
>>
>> This loop approach, as written, felt more straightforward for the reader to
>> understand (i.e. the least-worst option). Is the context approach what you are
>> suggesting or do you have something else in mind?
>>
> 
> See above.
> 
>>>
>>> Alternatively, we could add a new wrapper function, but yeah definitely not
>>> this.
>>>
>>> Also the C programming language asks... etc etc. ;)
>>>
>>> Overall since you always end up processing folio_nr_pages(folio) you can just
>>> have the batch function or a wrapper return this and do updates as necessary
>>> here on that basis, and leave the 'sub' batching to that function.
>>
>> Sorry I don't understand this statement - could you clarify? Especially the bit
>> about "always ... processing folio_nr_pages(folio)"; I don't think we do. In
>> various corner cases the size of the folio has no relationship to the way the
>> PTEs are mapped.
> 
> Right yeah I put this badly. Obviously you can have all sorts of fun with large
> folios partially mapped, page-table splits, etc. etc.
> 
> I should have said 'always process nr_ptes'.
> 
> The idea is to abstract this sub-batch stuff to another function, fundamentally.
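
Right. Or even flatter than the iterator above: fold the sub-batch logic
straight into mprotect_folio_pte_batch() so each call returns the next
uniform run. A hypothetical shape (not the actual patch; it just reuses the
tristate and anon_exclusive_batch() pieces):

/*
 * Return the number of PTEs that can be processed with a single write
 * decision; *set_write says whether to mkwrite them.
 */
static int mprotect_folio_pte_batch(struct folio *folio, int pgidx,
		int nr_ptes, int tri_set_write, bool *set_write)
{
	if (tri_set_write != TRI_MAYBE) {
		*set_write = (tri_set_write == TRI_TRUE);
		return nr_ptes;
	}
	return anon_exclusive_batch(folio, pgidx, nr_ptes, set_write);
}

The caller then just advances by the returned count each time round.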



Thread overview: 61+ messages
2025-06-28 11:34 [PATCH v4 0/4] Optimize mprotect() for large folios Dev Jain
2025-06-28 11:34 ` [PATCH v4 1/4] mm: Optimize mprotect() for MM_CP_PROT_NUMA by batch-skipping PTEs Dev Jain
2025-06-30  9:42   ` Ryan Roberts
2025-06-30  9:49     ` Dev Jain
2025-06-30  9:55       ` Ryan Roberts
2025-06-30 10:05         ` Dev Jain
2025-06-30 11:25   ` Lorenzo Stoakes
2025-06-30 11:39     ` Ryan Roberts
2025-06-30 11:53       ` Lorenzo Stoakes
2025-06-30 11:40     ` Dev Jain
2025-06-30 11:51       ` Lorenzo Stoakes
2025-06-30 11:56         ` Dev Jain
2025-07-02  9:37   ` Lorenzo Stoakes
2025-07-02 15:01     ` Dev Jain
2025-07-02 15:37       ` Lorenzo Stoakes
2025-06-28 11:34 ` [PATCH v4 2/4] mm: Add batched versions of ptep_modify_prot_start/commit Dev Jain
2025-06-30 10:10   ` Ryan Roberts
2025-06-30 10:17     ` Dev Jain
2025-06-30 10:35       ` Ryan Roberts
2025-06-30 10:42         ` Dev Jain
2025-06-30 12:57   ` Lorenzo Stoakes
2025-07-01  4:44     ` Dev Jain
2025-07-01  7:33       ` Ryan Roberts
2025-07-01  8:06         ` Lorenzo Stoakes
2025-07-01  8:23           ` Ryan Roberts
2025-07-01  8:34             ` Lorenzo Stoakes
2025-06-28 11:34 ` [PATCH v4 3/4] mm: Optimize mprotect() by PTE-batching Dev Jain
2025-06-28 12:39   ` Dev Jain
2025-06-30 10:31   ` Ryan Roberts
2025-06-30 11:21     ` Dev Jain
2025-06-30 11:47       ` Dev Jain
2025-06-30 11:50       ` Ryan Roberts
2025-06-30 11:53         ` Dev Jain
2025-07-01  5:47     ` Dev Jain
2025-07-01  7:39       ` Ryan Roberts
2025-06-30 12:52   ` Lorenzo Stoakes
2025-07-01  5:30     ` Dev Jain
     [not found]     ` <ec2c3f60-43e9-47d9-9058-49d608845200@arm.com>
2025-07-01  8:06       ` Dev Jain
2025-07-01  8:24         ` Ryan Roberts
2025-07-01  8:15       ` Lorenzo Stoakes
2025-07-01  8:30         ` Ryan Roberts [this message]
2025-07-01  8:51           ` Lorenzo Stoakes
2025-07-01  9:53             ` Ryan Roberts
2025-07-01 10:21               ` Lorenzo Stoakes
2025-07-01 11:31                 ` Ryan Roberts
2025-07-01 13:40                   ` Lorenzo Stoakes
2025-07-02 10:32                     ` Lorenzo Stoakes
2025-07-02 15:03                       ` Dev Jain
2025-07-02 15:22                         ` Lorenzo Stoakes
2025-07-03 12:59                           ` David Hildenbrand
2025-06-28 11:34 ` [PATCH v4 4/4] arm64: Add batched versions of ptep_modify_prot_start/commit Dev Jain
2025-06-30 10:43   ` Ryan Roberts
2025-06-29 23:05 ` [PATCH v4 0/4] Optimize mprotect() for large folios Andrew Morton
2025-06-30  3:33   ` Dev Jain
2025-06-30 10:45     ` Ryan Roberts
2025-06-30 11:22       ` Dev Jain
2025-06-30 11:17 ` Lorenzo Stoakes
2025-06-30 11:25   ` Dev Jain
2025-06-30 11:27 ` Lorenzo Stoakes
2025-06-30 11:43   ` Dev Jain
2025-07-01  0:08     ` Andrew Morton
