* [PATCH v3 1/2] ksm: Initialize the addr only once in rmap_walk_ksm
2026-02-12 11:28 [PATCH v3 0/2] KSM: Optimizations for rmap_walk_ksm xu.xin16
@ 2026-02-12 11:29 ` xu.xin16
2026-02-12 11:30 ` [PATCH v3 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range xu.xin16
1 sibling, 0 replies; 4+ messages in thread
From: xu.xin16 @ 2026-02-12 11:29 UTC (permalink / raw)
To: akpm, xu.xin16
Cc: chengming.zhou, hughd, wang.yaxin, yang.yang29, linux-mm, linux-kernel
From: xu xin <xu.xin16@zte.com.cn>
This is a minor performance optimization: addr is derived from
rmap_item->address, which does not change across iterations of the inner
anon_vma_interval_tree_foreach loop, so it only needs to be computed once
per rmap_item instead of on every iteration. The saving is most noticeable
when the loop runs many times.
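For illustration only (not part of the patch): a minimal userspace sketch of
the hoisting, assuming 4 KiB pages; EXAMPLE_PAGE_MASK and the literal address
are made up for the example, and the real change is the diff below.

/* Userspace illustration, not kernel code: hoisting a loop-invariant value. */
#include <stdio.h>

#define EXAMPLE_PAGE_MASK (~0xfffUL)	/* assumed 4 KiB pages */

int main(void)
{
	unsigned long address = 0x7f1234567abcUL;	/* stands in for rmap_item->address */
	int i;

	/* Before: the masked address is recomputed on every iteration. */
	for (i = 0; i < 3; i++) {
		unsigned long addr = address & EXAMPLE_PAGE_MASK;
		printf("before: %#lx\n", addr);
	}

	/* After: computed once, reused unchanged by every iteration. */
	const unsigned long addr = address & EXAMPLE_PAGE_MASK;
	for (i = 0; i < 3; i++)
		printf("after:  %#lx\n", addr);

	return 0;
}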
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/ksm.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 2d89a7c8b4eb..950e122bcbf4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3168,6 +3168,8 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
return;
again:
hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
+ /* Ignore the stable/unstable/sqnr flags */
+ const unsigned long addr = rmap_item->address & PAGE_MASK;
struct anon_vma *anon_vma = rmap_item->anon_vma;
struct anon_vma_chain *vmac;
struct vm_area_struct *vma;
@@ -3180,16 +3182,13 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
}
anon_vma_lock_read(anon_vma);
}
+
anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
0, ULONG_MAX) {
- unsigned long addr;
cond_resched();
vma = vmac->vma;
- /* Ignore the stable/unstable/sqnr flags */
- addr = rmap_item->address & PAGE_MASK;
-
if (addr < vma->vm_start || addr >= vma->vm_end)
continue;
/*
--
2.25.1
* [PATCH v3 2/2] ksm: Optimize rmap_walk_ksm by passing a suitable address range
2026-02-12 11:28 [PATCH v3 0/2] KSM: Optimizations for rmap_walk_ksm xu.xin16
2026-02-12 11:29 ` [PATCH v3 1/2] ksm: Initialize the addr only once in rmap_walk_ksm xu.xin16
@ 2026-02-12 11:30 ` xu.xin16
2026-02-12 12:21 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 4+ messages in thread
From: xu.xin16 @ 2026-02-12 11:30 UTC (permalink / raw)
To: akpm, xu.xin16, david
Cc: chengming.zhou, hughd, wang.yaxin, yang.yang29, linux-mm, linux-kernel
From: xu xin <xu.xin16@zte.com.cn>
Problem
=======
When available memory is extremely tight, causing KSM pages to be swapped
out, or when there is significant memory fragmentation and THP triggers
memory compaction, the system will invoke the rmap_walk_ksm function to
perform reverse mapping. However, we observed that this function becomes
particularly time-consuming when a large number of VMAs (e.g., 20,000)
share the same anon_vma. Through debug trace analysis, we found that most
of the latency occurs within anon_vma_interval_tree_foreach, leading to an
excessively long hold time on the anon_vma lock (even reaching 500ms or
more), which in turn causes upper-layer applications (waiting for the
anon_vma lock) to be blocked for extended periods.
Root Cause
==========
Further investigation revealed that 99.9% of iterations inside the
anon_vma_interval_tree_foreach loop are skipped due to the first check
"if (addr < vma->vm_start || addr >= vma->vm_end)), indicating that a large
number of loop iterations are ineffective. This inefficiency arises because
the pgoff_start and pgoff_end parameters passed to
anon_vma_interval_tree_foreach span the entire address space from 0 to
ULONG_MAX, resulting in very poor loop efficiency.
Solution
========
In fact, we can significantly improve performance by passing a more precise
range based on the given addr. Since the original pages merged by KSM
correspond to anonymous VMAs, the page offset can be calculated as
pgoff = address >> PAGE_SHIFT. Therefore, we can optimize the call by
defining:
pgoff = rmap_item->address >> PAGE_SHIFT;
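For illustration only (not part of the patch): a small userspace sketch of why
the narrowed range helps, under the assumption the commit relies on (for
anonymous VMAs, vm_pgoff equals vm_start >> PAGE_SHIFT, so a page mapped at
addr sits at page offset addr >> PAGE_SHIFT in the anon_vma interval tree).
struct fake_vma and vma_would_be_visited are invented names; the interval tree
visits a VMA whenever its page-offset range overlaps the queried range.

/* Userspace model, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages */

struct fake_vma {
	unsigned long vm_start;
	unsigned long vm_end;
	unsigned long vm_pgoff;	/* for anon VMAs: vm_start >> PAGE_SHIFT */
};

/* Last page offset covered by the VMA in the anon_vma interval tree. */
static unsigned long vma_last_pgoff(const struct fake_vma *vma)
{
	return vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) - 1;
}

/* Would an interval-tree query over [start, last] visit this VMA? */
static bool vma_would_be_visited(const struct fake_vma *vma,
				 unsigned long start, unsigned long last)
{
	return vma->vm_pgoff <= last && start <= vma_last_pgoff(vma);
}

int main(void)
{
	/* One VMA that maps the KSM page, one that does not. */
	struct fake_vma hit  = { 0x700000000000UL, 0x700000010000UL,
				 0x700000000000UL >> PAGE_SHIFT };
	struct fake_vma miss = { 0x710000000000UL, 0x710000010000UL,
				 0x710000000000UL >> PAGE_SHIFT };
	unsigned long addr  = 0x700000001000UL;	/* rmap_item->address & PAGE_MASK */
	unsigned long pgoff = addr >> PAGE_SHIFT;

	/* Old query range: both VMAs are visited; the miss is only skipped later. */
	printf("[0, ULONG_MAX]: hit=%d miss=%d\n",
	       vma_would_be_visited(&hit, 0, ~0UL),
	       vma_would_be_visited(&miss, 0, ~0UL));

	/* New query range: only the VMA that can actually contain addr is visited. */
	printf("[pgoff, pgoff]: hit=%d miss=%d\n",
	       vma_would_be_visited(&hit, pgoff, pgoff),
	       vma_would_be_visited(&miss, pgoff, pgoff));
	return 0;
}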
Performance
===========
In our real embedded Linux environment, the measured metrics were as
follows:
1) Time_ms: the maximum time the anon_vma lock is held in a single rmap_walk_ksm call.
2) Nr_iteration_total: the maximum number of iterations of a single anon_vma_interval_tree_foreach loop.
3) Skip_addr_out_of_range: the maximum number of iterations skipped by the first check (addr against
vma->vm_start and vma->vm_end) in a single anon_vma_interval_tree_foreach loop.
4) Skip_mm_mismatch: the maximum number of iterations skipped by the second check (rmap_item->mm == vma->vm_mm)
in a single anon_vma_interval_tree_foreach loop.
The result is as follows:
        Time_ms   Nr_iteration_total   Skip_addr_out_of_range   Skip_mm_mismatch
Before: 228.65    22169                22168                    0
After : 0.396     3                    0                        2
The referenced reproducer of rmap_walk_ksm can be found at:
https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/
Co-developed-by: Wang Yaxin <wang.yaxin@zte.com.cn>
Signed-off-by: Wang Yaxin <wang.yaxin@zte.com.cn>
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
---
mm/ksm.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 950e122bcbf4..7b974f333391 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3170,6 +3170,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
/* Ignore the stable/unstable/sqnr flags */
const unsigned long addr = rmap_item->address & PAGE_MASK;
+ const pgoff_t pgoff = rmap_item->address >> PAGE_SHIFT;
struct anon_vma *anon_vma = rmap_item->anon_vma;
struct anon_vma_chain *vmac;
struct vm_area_struct *vma;
@@ -3183,8 +3184,12 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
anon_vma_lock_read(anon_vma);
}
+ /*
+ * Currently KSM folios are order-0 normal pages, so pgoff_end
+ * should be the same as pgoff_start.
+ */
anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
- 0, ULONG_MAX) {
+ pgoff, pgoff) {
cond_resched();
vma = vmac->vma;
--
2.25.1