From: <xu.xin16@zte.com.cn>
To: <david@kernel.org>
Cc: <akpm@linux-foundation.org>, <chengming.zhou@linux.dev>,
<hughd@google.com>, <wang.yaxin@zte.com.cn>,
<yang.yang29@zte.com.cn>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, <riel@surriel.com>
Subject: Re: [PATCH 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range
Date: Thu, 15 Jan 2026 09:41:58 +0800 (CST)
Message-ID: <20260115094158027CMmlQ9-DLqT6FPvCswVli@zte.com.cn>
In-Reply-To: <baef0334-02ea-4732-aa0f-029098879cbd@kernel.org>
> >>> Solution
> >>> ========
> >>> In fact, we can significantly improve performance by passing a more precise
> >>> range based on the given addr. Since the original pages merged by KSM
> >>> correspond to anonymous VMAs, the page offset can be calculated as
> >>> pgoff = address >> PAGE_SHIFT. Therefore, we can optimize the call by
> >>> defining:
> >>>
> >>> pgoff_start = rmap_item->address >> PAGE_SHIFT;
> >>> pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
> >>>
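
For reference, here is a rough sketch of where the narrowed range plugs into
the anon_vma_interval_tree_foreach loop of rmap_walk_ksm(). This is a sketch
only, not the exact patch: the mm check below is simplified (the real code
also walks forked mms via search_new_forks) and the rmap_one() call is elided.

    pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
    pgoff_t pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;

    anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
                                   pgoff_start, pgoff_end) {
            struct vm_area_struct *vma = vmac->vma;
            unsigned long addr = rmap_item->address & PAGE_MASK;

            /* First check: the address must fall inside this vma */
            if (addr < vma->vm_start || addr >= vma->vm_end)
                    continue;
            /* Second check (simplified): only the rmap_item's own mm */
            if (rmap_item->mm != vma->vm_mm)
                    continue;
            /* ... hand the mapping to rwc->rmap_one() ... */
    }
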
> >>> Performance
> >>> ===========
> >>> In our real embedded Linux environment, the measured metrics were as follows:
> >>>
> >>> 1) Time_ms: Maximum time the anon_vma lock is held in a single rmap_walk_ksm() call.
> >>> 2) Nr_iteration_total: Maximum number of iterations of the anon_vma_interval_tree_foreach loop.
> >>> 3) Skip_addr_out_of_range: Maximum number of iterations skipped by the first check
> >>> (addr outside vma->vm_start..vma->vm_end) in the anon_vma_interval_tree_foreach loop.
> >>> 4) Skip_mm_mismatch: Maximum number of iterations skipped by the second check
> >>> (the rmap_item->mm == vma->vm_mm check) in the anon_vma_interval_tree_foreach loop.
> >>>
> >>> The result is as follows:
> >>>
> >>>                 Time_ms  Nr_iteration_total  Skip_addr_out_of_range  Skip_mm_mismatch
> >>> Before patch:   228.65   22169               22168                   0
> >>> After patch:    0.396    3                   0                       2
> >>
> >> Nice improvement.
> >>
> >> Can you make your reproducer available?
> >
> > I'll do my best. The original test data came from a real production scenario,
> > which is quite complex. I'll try to distill this high-latency scenario into a
> > simpler demo that can serve as a reproducer.
>
> Ah, I thought it was some benchmark run on an embedded environment.
>
> How did you end up measuring these numbers?
>
That was done by inserting a livepatch module (livepatch.ko) that replaces
rmap_walk_ksm() with an instrumented copy.
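
Roughly, the module looks like the sketch below (a minimal sketch modelled on
samples/livepatch/livepatch-sample.c; the instrumented body is elided and the
function/counter names and printed output here are illustrative, not the exact
code we used):

    #include <linux/module.h>
    #include <linux/livepatch.h>
    #include <linux/ktime.h>
    #include <linux/rmap.h>

    /* Replacement for rmap_walk_ksm(): same body, plus timing and counters */
    static void instrumented_rmap_walk_ksm(struct folio *folio,
                                           struct rmap_walk_control *rwc)
    {
            ktime_t start = ktime_get();

            /*
             * Copy of the original rmap_walk_ksm() body, with counters
             * bumped inside anon_vma_interval_tree_foreach() for total
             * iterations, addr-out-of-range skips and mm-mismatch skips.
             */

            pr_info("rmap_walk_ksm: %lld ns\n",
                    ktime_to_ns(ktime_sub(ktime_get(), start)));
    }

    static struct klp_func funcs[] = {
            {
                    .old_name = "rmap_walk_ksm",
                    .new_func = instrumented_rmap_walk_ksm,
            },
            { }
    };

    static struct klp_object objs[] = {
            {
                    /* NULL .name means the symbol lives in vmlinux */
                    .funcs = funcs,
            },
            { }
    };

    static struct klp_patch patch = {
            .mod  = THIS_MODULE,
            .objs = objs,
    };

    static int __init ksm_timing_init(void)
    {
            return klp_enable_patch(&patch);
    }

    static void __exit ksm_timing_exit(void)
    {
    }

    module_init(ksm_timing_init);
    module_exit(ksm_timing_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

A patch module like this is loaded with insmod while the workload runs, and
can be disabled again later through /sys/kernel/livepatch/.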