linux-mm.kvack.org archive mirror
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	akpm@linux-foundation.org, david@kernel.org,
	catalin.marinas@arm.com, will@kernel.org
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
	mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com,
	jannh@google.com, willy@infradead.org, baohua@kernel.org,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/3] arm64: mm: support batch clearing of the young flag for large folios
Date: Fri, 19 Dec 2025 09:00:34 +0800
Message-ID: <9c6393e1-fc88-46bf-9a3e-b28b8cda1c75@linux.alibaba.com>
In-Reply-To: <d3b40df8-e5cf-42aa-8205-de624024fad1@arm.com>



On 2025/12/18 20:20, Ryan Roberts wrote:
> On 18/12/2025 07:15, Baolin Wang wrote:
>>
>>
>> On 2025/12/17 23:43, Ryan Roberts wrote:
>>> Sorry I'm a bit late to the party...
>>
>> Never mind. It's not late and comments are always welcome :)
>>
>>> On 11/12/2025 08:16, Baolin Wang wrote:
>>>> Currently, contpte_ptep_test_and_clear_young() and contpte_ptep_clear_flush_young()
>>>> only clear the young flag and flush TLBs for PTEs within the contiguous range.
>>>> To support batch PTE operations for other sized large folios in the following
>>>> patches, add a new parameter to specify the number of PTEs.
>>>>
>>>> While we are at it, rename the functions to maintain consistency with other
>>>> contpte_*() functions.
>>>>
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> ---
>>>>    arch/arm64/include/asm/pgtable.h | 12 ++++-----
>>>>    arch/arm64/mm/contpte.c          | 44 ++++++++++++++++++++++----------
>>>>    2 files changed, 37 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 0944e296dd4a..e03034683156 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -1679,10 +1679,10 @@ extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
>>>>    extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>>>>                    unsigned long addr, pte_t *ptep,
>>>>                    unsigned int nr, int full);
>>>> -extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> -                unsigned long addr, pte_t *ptep);
>>>> -extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> -                unsigned long addr, pte_t *ptep);
>>>> +extern int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
>>>> +                unsigned long addr, pte_t *ptep, unsigned int nr);
>>>> +extern int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
>>>> +                unsigned long addr, pte_t *ptep, unsigned int nr);
>>>
>>> The "contpte_" functions are intended to be private to the arm64 arch and should
>>> be exposed via the generic APIs. But I don't see any generic batched API for
>>> this, so you're only actually able to pass CONT_PTES as nr. Perhaps you're
>>> planning to define "test_and_clear_young_ptes()" and "clear_flush_young_ptes()"
>>> in later patches?
>>
>> Right. This is a preparation patch, and will be used in patch 2.
>>
>>>>    extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>>>>                    pte_t *ptep, unsigned int nr);
>>>>    extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>>>> @@ -1854,7 +1854,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>>        if (likely(!pte_valid_cont(orig_pte)))
>>>>            return __ptep_test_and_clear_young(vma, addr, ptep);
>>>> -    return contpte_ptep_test_and_clear_young(vma, addr, ptep);
>>>> +    return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);
>>>>    }
>>>>      #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
>>>> @@ -1866,7 +1866,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>>>>        if (likely(!pte_valid_cont(orig_pte)))
>>>>            return __ptep_clear_flush_young(vma, addr, ptep);
>>>> -    return contpte_ptep_clear_flush_young(vma, addr, ptep);
>>>> +    return contpte_clear_flush_young_ptes(vma, addr, ptep, CONT_PTES);
>>>>    }
>>>>      #define wrprotect_ptes wrprotect_ptes
>>>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>>>> index c0557945939c..19b122441be3 100644
>>>> --- a/arch/arm64/mm/contpte.c
>>>> +++ b/arch/arm64/mm/contpte.c
>>>> @@ -488,8 +488,9 @@ pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(contpte_get_and_clear_full_ptes);
>>>> -int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>> -                    unsigned long addr, pte_t *ptep)
>>>> +int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
>>>> +                    unsigned long addr, pte_t *ptep,
>>>> +                    unsigned int nr)
>>>>    {
>>>>        /*
>>>>         * ptep_clear_flush_young() technically requires us to clear the access
>>>> @@ -500,39 +501,56 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>>         * having to unfold.
>>>>         */
>>>> +    unsigned long start = addr;
>>>
>>> Personally I wouldn't bother defining start - just reuse addr. You're
>>> incrementing start in the below loop, so it's more appropriate to call it addr
>>> anyway.
>>
>> OK.
>>
>>>> +    unsigned long end = start + nr * PAGE_SIZE;
>>>>        int young = 0;
>>>>        int i;
>>>> -    ptep = contpte_align_down(ptep);
>>>> -    addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>>>> +    if (pte_cont(__ptep_get(ptep + nr - 1)))
>>>> +        end = ALIGN(end, CONT_PTE_SIZE);
>>>> -    for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE)
>>>> -        young |= __ptep_test_and_clear_young(vma, addr, ptep);
>>>> +    if (pte_cont(__ptep_get(ptep))) {
>>>> +        start = ALIGN_DOWN(start, CONT_PTE_SIZE);
>>>> +        ptep = contpte_align_down(ptep);
>>>> +    }
>>>> +
>>>> +    nr = (end - start) / PAGE_SIZE;
>>>> +    for (i = 0; i < nr; i++, ptep++, start += PAGE_SIZE)
>>>
>>> Given you're now defining end, perhaps we don't need nr?
>>>
>>>      for (; addr != end; ptep++, addr += PAGE_SIZE)
>>>          young |= __ptep_test_and_clear_young(vma, addr, ptep);
>>
>> Yes, good point.
>>
>>>> +        young |= __ptep_test_and_clear_young(vma, start, ptep);
>>>>          return young;
>>>>    }
>>>> -EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
>>>> +EXPORT_SYMBOL_GPL(contpte_test_and_clear_young_ptes);
>>>> -int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>>>> -                    unsigned long addr, pte_t *ptep)
>>>> +int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
>>>> +                unsigned long addr, pte_t *ptep,
>>>> +                unsigned int nr)
>>>>    {
>>>>        int young;
>>>> -    young = contpte_ptep_test_and_clear_young(vma, addr, ptep);
>>>> +    young = contpte_test_and_clear_young_ptes(vma, addr, ptep, nr);
>>>>          if (young) {
>>>> +        unsigned long start = addr;
>>>> +        unsigned long end = start + nr * PAGE_SIZE;
>>>> +
>>>> +        if (pte_cont(__ptep_get(ptep + nr - 1)))
>>>> +            end = ALIGN(end, CONT_PTE_SIZE);
>>>> +
>>>> +        if (pte_cont(__ptep_get(ptep)))
>>>> +            start = ALIGN_DOWN(start, CONT_PTE_SIZE);
>>>> +
>>>
>>> We now have this pattern of expanding contpte blocks up and down in 3 places.
>>> Perhaps create a helper?
>>
>> Sounds reasonable. How about the following helper?
>>
>> static pte_t *contpte_align_addr_ptep(unsigned long *start, unsigned long *end,
>>                                          pte_t *ptep, unsigned int nr)
>> {
>>          unsigned long end_addr = *start + nr * PAGE_SIZE;
>>
>>          if (pte_cont(__ptep_get(ptep + nr - 1)))
> 
> I think this is safe but calling it out to check; you're not checking that the
> pte is valid, so theoretically you could have a swap-entry here with whatever
> overlays the contiguous bit set. So then you would incorrectly extend.
> 
> But I think it is safe because the expectation is that core-mm has already
> checked that the whole range is present?

Yes. They must be present PTEs that map consecutive pages of the same 
large folio within a single VMA and a single page table. I will add some 
comments to make this clear.

>>                  *end = ALIGN(end_addr, CONT_PTE_SIZE);
>>
>>          if (pte_cont(__ptep_get(ptep))) {
>>                  *start = ALIGN_DOWN(*start, CONT_PTE_SIZE);
>>                  ptep = contpte_align_down(ptep);
>>          }
>>
>>          return ptep;
>> }
> 
> Looks good.

Thanks for reviewing.



Thread overview: 35+ messages
2025-12-11  8:16 [PATCH v2 0/3] support batch checking of references and unmapping " Baolin Wang
2025-12-11  8:16 ` [PATCH v2 1/3] arm64: mm: support batch clearing of the young flag " Baolin Wang
2025-12-15 11:36   ` Lorenzo Stoakes
2025-12-16  3:32     ` Baolin Wang
2025-12-16 11:11       ` Lorenzo Stoakes
2025-12-17  3:53         ` Baolin Wang
2025-12-17 14:50           ` Lorenzo Stoakes
2025-12-17 16:06             ` Ryan Roberts
2025-12-18  7:56               ` Baolin Wang
2025-12-17 15:43   ` Ryan Roberts
2025-12-18  7:15     ` Baolin Wang
2025-12-18 12:20       ` Ryan Roberts
2025-12-19  1:00         ` Baolin Wang [this message]
2025-12-11  8:16 ` [PATCH v2 2/3] mm: rmap: support batched checks of the references " Baolin Wang
2025-12-15 12:22   ` Lorenzo Stoakes
2025-12-16  3:47     ` Baolin Wang
2025-12-17  6:23   ` Dev Jain
2025-12-17  6:44     ` Baolin Wang
2025-12-17  6:49   ` Dev Jain
2025-12-17  7:09     ` Baolin Wang
2025-12-17  7:23       ` Dev Jain
2025-12-17 16:39   ` Ryan Roberts
2025-12-18  7:47     ` Baolin Wang
2025-12-18 12:08       ` Ryan Roberts
2025-12-19  0:56         ` Baolin Wang
2025-12-11  8:16 ` [PATCH v2 3/3] mm: rmap: support batched unmapping for file " Baolin Wang
2025-12-11 12:36   ` Barry Song
2025-12-15 12:38   ` Lorenzo Stoakes
2025-12-16  5:48     ` Baolin Wang
2025-12-16  6:13       ` Barry Song
2025-12-16  6:22         ` Baolin Wang
2025-12-16 10:54           ` Lorenzo Stoakes
2025-12-17  3:11             ` Baolin Wang
2025-12-17 14:28               ` Lorenzo Stoakes
2025-12-16 10:53       ` Lorenzo Stoakes
