linux-mm.kvack.org archive mirror
* ksm: initialize rmap values directly and make them const
@ 2026-02-06  7:22 xu.xin16
  2026-02-06  8:18 ` David Hildenbrand (Arm)
  2026-02-06 15:29 ` Matthew Wilcox
  0 siblings, 2 replies; 10+ messages in thread
From: xu.xin16 @ 2026-02-06  7:22 UTC (permalink / raw)
  To: david, akpm
  Cc: chengming.zhou, hughd, wang.yaxin, yang.yang29, linux-mm, linux-kernel

From: xu xin <xu.xin16@zte.com.cn>

Considering that commit 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing
a suitable addressrange") seems to have already been merged, this new patch is
proposed to address the issue raised by David at:

https://lore.kernel.org/all/ba03780a-fd65-4a03-97de-bc0905106260@kernel.org/

This initializes the rmap values (addr, pgoff_start, pgoff_end) directly and
makes them const to make the code more robust. Besides, since KSM folios are
always order-0, folio_nr_pages(folio) is always 1, so the line:

	"pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;"

simplifies to:

	"pgoff_end = pgoff_start;"

The test reproducer of rmap_walk_ksm can be found at:
https://lore.kernel.org/all/20260206151424734QIyWL_pA-1QeJPbJlUxsO@zte.com.cn/

Fixes: 06fbd555dea8 ("ksm: optimize rmap_walk_ksm by passing a suitable addressrange")
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
---
 mm/ksm.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 031c17e4ada6..c7ca117024a4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3171,8 +3171,11 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 		struct anon_vma *anon_vma = rmap_item->anon_vma;
 		struct anon_vma_chain *vmac;
 		struct vm_area_struct *vma;
-		unsigned long addr;
-		pgoff_t pgoff_start, pgoff_end;
+		/* Ignore the stable/unstable/sqnr flags */
+		const unsigned long addr = rmap_item->address & PAGE_MASK;
+		const pgoff_t pgoff_start = rmap_item->address >> PAGE_SHIFT;
+		/* KSM folios are always order-0 normal pages */
+		const pgoff_t pgoff_end = pgoff_start;

 		cond_resched();
 		if (!anon_vma_trylock_read(anon_vma)) {
@@ -3183,12 +3186,6 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 			anon_vma_lock_read(anon_vma);
 		}

-		/* Ignore the stable/unstable/sqnr flags */
-		addr = rmap_item->address & PAGE_MASK;
-
-		pgoff_start = rmap_item->address >> PAGE_SHIFT;
-		pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
-
 		anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root,
 					       pgoff_start, pgoff_end) {

-- 
2.25.1




Thread overview: 10+ messages
2026-02-06  7:22 ksm: initialize rmap values directly and make them const xu.xin16
2026-02-06  8:18 ` David Hildenbrand (Arm)
2026-02-06  8:38   ` xu.xin16
2026-02-06  8:46     ` David Hildenbrand (Arm)
2026-02-06 15:39     ` Andrew Morton
2026-02-06 15:29 ` Matthew Wilcox
2026-02-06 17:34   ` David Hildenbrand (Arm)
2026-02-06 18:09     ` David Hildenbrand (Arm)
2026-02-06 18:49       ` Matthew Wilcox
2026-02-06 18:57         ` David Hildenbrand (Arm)
