From: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
To: David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Xu Xin <xu.xin16@zte.com.cn>,
	Chengming Zhou <chengming.zhou@linux.dev>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] ksm: use range-walk function to jump over holes in scan_get_next_rmap_item
Date: Tue, 14 Oct 2025 18:57:54 -0300
Message-ID: <be137610-65a7-4402-86d8-3d169e3ac064@gmail.com>
In-Reply-To: <77b69bcb-6df0-4c3a-bb7c-a003fd51d292@redhat.com>



On 10/14/25 12:59, David Hildenbrand wrote:
> On 14.10.25 17:11, Pedro Demarchi Gomes wrote:
>> Currently, scan_get_next_rmap_item() walks every page address in a VMA
>> to locate mergeable pages. This becomes highly inefficient when scanning
>> large virtual memory areas that contain mostly unmapped regions.
>>
>> This patch replaces the per-address lookup with a range walk using
>> walk_page_range(). The range walker allows KSM to skip over entire
>> unmapped holes in a VMA, avoiding unnecessary lookups.
>> This problem was previously discussed in [1].
>>
>> Changes since v1 [2]:
>> - Use pmd_entry to walk page range
>> - Use cond_resched inside pmd_entry()
>> - walk_page_range returns page+folio
>>
>> [1] https://lore.kernel.org/linux-mm/423de7a3-1c62-4e72-8e79-19a6413e420c@redhat.com/
>> [2] https://lore.kernel.org/linux-mm/20251014055828.124522-1-pedrodemargomes@gmail.com/
>>
>> Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
>> ---
> 
> [...]
> 
>> +
>> +static int ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
>> +                unsigned long end, struct mm_walk *walk)
>> +{
>> +    struct mm_struct *mm = walk->mm;
>> +    struct vm_area_struct *vma = walk->vma;
>> +    struct ksm_walk_private *private = (struct ksm_walk_private *) walk->private;
>> +    struct folio *folio;
>> +    pte_t *start_pte, *pte, ptent;
>> +    spinlock_t *ptl;
>> +    int ret = 0;
>> +
>> +    start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
>> +    if (!start_pte) {
>> +        ksm_scan.address = end;
>> +        return 0;
>> +    }
> 
> Please take more time to understand the details. If there is a THP there,
> you actually have to find the relevant page.
> 

Ok
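
For v3 I am thinking of handling the PMD-mapped THP case in
ksm_pmd_entry() before taking the PTE table lock, along these lines
(an untested sketch; note that vm_normal_page_pmd() returns the head
page, so the offset of addr within the folio still has to be applied):

	/* A PMD-mapped THP has no PTE table; handle the folio directly. */
	ptl = pmd_trans_huge_lock(pmd, vma);
	if (ptl) {
		struct page *page = vm_normal_page_pmd(vma, addr, pmdp_get(pmd));

		if (page) {
			/* Find the relevant page for this address. */
			page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
			folio = page_folio(page);
			if (!folio_is_zone_device(folio) &&
			    folio_test_anon(folio)) {
				folio_get(folio);
				private->page = page;
				private->folio = folio;
				private->vma = vma;
				ret = 1;
			}
		}
		spin_unlock(ptl);
		return ret;
	}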

>> +
>> +    for (; addr < end; pte++, addr += PAGE_SIZE) {
>> +        ptent = ptep_get(pte);
>> +        struct page *page = vm_normal_page(vma, addr, ptent);
>> +        ksm_scan.address = addr;
> 
> Updating that value from within the walker is a bit nasty. I wonder if you
> should rather make the function return the address of the found page as well.
> 
> In the caller, if we don't find any page, there is no need to update the
> address from this function I guess. We iterated the complete MM space in 
> that case.
> 

Ok
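
Concretely, I will add an address field to the walk's private data, set
it together with page/folio, and only touch ksm_scan.address in the
caller when a page was actually found. Roughly (a sketch; the field
name is just a placeholder):

	struct ksm_walk_private {
		struct page *page;
		struct folio *folio;
		struct vm_area_struct *vma;
		unsigned long address;	/* address of the found page */
	};

and in the caller:

	if (walk_private.page)
		ksm_scan.address = walk_private.address;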

>> +
>> +        if (ksm_test_exit(mm)) {
>> +            ret = 1;
>> +            break;
>> +        }
>> +
>> +        if (!page)
>> +            continue;
>> +
>> +        folio = page_folio(page);
>> +        if (folio_is_zone_device(folio) || !folio_test_anon(folio))
>> +            continue;
>> +
>> +        ret = 1;
>> +        folio_get(folio);
>> +        private->page = page;
>> +        private->folio = folio;
>> +        private->vma = vma;
>> +        break;
>> +    }
>> +    pte_unmap_unlock(start_pte, ptl);
>> +
>> +    cond_resched();
>> +    return ret;
>> +}
>> +
>> +struct mm_walk_ops walk_ops = {
>> +    .pmd_entry = ksm_pmd_entry,
>> +    .test_walk = ksm_walk_test,
>> +    .walk_lock = PGWALK_RDLOCK,
>> +};
>> +
>>   static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>   {
>>       struct mm_struct *mm;
>>       struct ksm_mm_slot *mm_slot;
>>       struct mm_slot *slot;
>> -    struct vm_area_struct *vma;
>>       struct ksm_rmap_item *rmap_item;
>> -    struct vma_iterator vmi;
>>       int nid;
>>       if (list_empty(&ksm_mm_head.slot.mm_node))
>> @@ -2527,64 +2595,40 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>       slot = &mm_slot->slot;
>>       mm = slot->mm;
>> -    vma_iter_init(&vmi, mm, ksm_scan.address);
>>       mmap_read_lock(mm);
>>       if (ksm_test_exit(mm))
>>           goto no_vmas;
>> -    for_each_vma(vmi, vma) {
>> -        if (!(vma->vm_flags & VM_MERGEABLE))
>> -            continue;
>> -        if (ksm_scan.address < vma->vm_start)
>> -            ksm_scan.address = vma->vm_start;
>> -        if (!vma->anon_vma)
>> -            ksm_scan.address = vma->vm_end;
>> -
>> -        while (ksm_scan.address < vma->vm_end) {
>> -            struct page *tmp_page = NULL;
>> -            struct folio_walk fw;
>> -            struct folio *folio;
>> +get_page:
>> +    struct ksm_walk_private walk_private = {
>> +        .page = NULL,
>> +        .folio = NULL,
>> +        .vma = NULL
>> +    };
>> -            if (ksm_test_exit(mm))
>> -                break;
>> +    walk_page_range(mm, ksm_scan.address, -1, &walk_ops, (void *) &walk_private);
>> +    if (walk_private.page) {
>> +        flush_anon_page(walk_private.vma, walk_private.page, ksm_scan.address);
>> +        flush_dcache_page(walk_private.page);
> 
> Keep working on the folio please.
> 

Ok
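
I take it flush_dcache_folio() is the folio-level call wanted here. I
do not see a folio variant of flush_anon_page(), so I would keep that
one on the page, i.e. (a sketch, using the address field from above):

	flush_anon_page(walk_private.vma, walk_private.page,
			walk_private.address);
	flush_dcache_folio(walk_private.folio);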

>> +        rmap_item = get_next_rmap_item(mm_slot,
>> +            ksm_scan.rmap_list, ksm_scan.address);
>> +        if (rmap_item) {
>> +            ksm_scan.rmap_list =
>> +                    &rmap_item->rmap_list;
>> -            folio = folio_walk_start(&fw, vma, ksm_scan.address, 0);
>> -            if (folio) {
>> -                if (!folio_is_zone_device(folio) &&
>> -                     folio_test_anon(folio)) {
>> -                    folio_get(folio);
>> -                    tmp_page = fw.page;
>> -                }
>> -                folio_walk_end(&fw, vma);
>> +            ksm_scan.address += PAGE_SIZE;
>> +            if (should_skip_rmap_item(walk_private.folio, rmap_item)) {
>> +                folio_put(walk_private.folio);
>> +                goto get_page;
> 
> Can you make that a while() loop to avoid the label?
> 

Ok, I will make these corrections and send a v3. Thanks!
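
For reference, the loop shape I have in mind for v3 (an untested sketch
that folds in the other changes above):

	while (true) {
		struct ksm_walk_private walk_private = {};

		walk_page_range(mm, ksm_scan.address, -1, &walk_ops,
				&walk_private);
		if (!walk_private.page)
			break;

		flush_anon_page(walk_private.vma, walk_private.page,
				walk_private.address);
		flush_dcache_folio(walk_private.folio);

		rmap_item = get_next_rmap_item(mm_slot, ksm_scan.rmap_list,
					       walk_private.address);
		if (rmap_item) {
			ksm_scan.rmap_list = &rmap_item->rmap_list;
			ksm_scan.address = walk_private.address + PAGE_SIZE;
			/* Found a page to return; keep the folio reference. */
			if (!should_skip_rmap_item(walk_private.folio,
						   rmap_item))
				break;
		}
		folio_put(walk_private.folio);
		if (!rmap_item)
			break;
	}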



