linux-mm.kvack.org archive mirror
* [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
@ 2025-03-19 19:30 Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon Raghavendra K T
                   ` (15 more replies)
  0 siblings, 16 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Introduction:
=============
In the current hot page promotion, all the activities, including
process address space scanning, NUMA hint fault handling and page
migration, are performed in process context, i.e., the scanning overhead
is borne by the applications.

This is the RFC V1 patch series for (slow tier) CXL page promotion.
The approach in this patchset addresses the issue above by adding PTE
Accessed (A) bit scanning.

Scanning is done by a global kernel thread which routinely scans all
the processes' address spaces and checks for accesses by reading the
PTE A bit. 

A separate migration thread migrates/promotes the pages to the toptier
node based on a simple heuristic that uses toptier scan/access information
of the mm.
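
For reference, the handoff between the two kthreads boils down to the
scanner queuing small per-folio records that the migration thread later
drains. Below is a minimal sketch of that flow, simplified from patches
3-5; the helper names are illustrative, and locking, refcounting, folio
isolation and error handling are omitted:

/*
 * Minimal sketch of the scanner -> migrator handoff (simplified from
 * patches 3-5; illustrative only).
 */
#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/slab.h>

/* Per-folio record queued by the scanner (same fields as in patch 3) */
struct kmmscand_migrate_info {
	struct list_head migrate_node;
	struct mm_struct *mm;
	struct folio *folio;
	unsigned long address;
};

static LIST_HEAD(migrate_head);

/* Scanner side: queue a recently accessed slowtier folio for promotion */
static void queue_for_promotion(struct mm_struct *mm, struct folio *folio,
				unsigned long addr)
{
	struct kmmscand_migrate_info *info = kzalloc(sizeof(*info), GFP_NOWAIT);

	if (!info)
		return;
	info->mm = mm;
	info->folio = folio;
	info->address = addr;
	list_add_tail(&info->migrate_node, &migrate_head);
}

/*
 * Migrator side: drain the list. The real code isolates each folio and
 * calls migrate_misplaced_folio() here (patch 5); the target node comes
 * from the heuristic added in patch 9.
 */
static void promote_queued_folios(int target_nid)
{
	struct kmmscand_migrate_info *info, *tmp;

	list_for_each_entry_safe(info, tmp, &migrate_head, migrate_node) {
		list_del(&info->migrate_node);
		kfree(info);
	}
}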

Additionally, based on the feedback on RFC V0 [4], a prctl knob with
a scalar value is provided to control per-task scanning.

Initial results show promising numbers on a microbenchmark. Numbers
with real benchmarks and findings (tunings) will follow soon.

Experiment:
============
Abench microbenchmark,
- Allocates 8GB/16GB/32GB/64GB of memory on CXL node
- 64 threads created, and each thread randomly accesses pages in 4K
  granularity.
- 512 iterations with a delay of 1 us between two successive iterations.

SUT: 512 CPU, 2 node 256GB, AMD EPYC.

3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>

The benchmark reports how much time is taken to complete the task; lower is
better. The expectation is that CXL node memory is migrated (promoted) as
fast as possible.

Base case:    6.14-rc6 w/ numab mode = 2 (hot page promotion is enabled).
Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled);
we expect the daemon to do the page promotion.

Result:
========
         base NUMAB2                    patched NUMAB1
         time in sec  (%stdev)   time in sec  (%stdev)     %gain
 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
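(Here %gain = (base - patched) / base * 100; e.g., for the 8GB case,
(134.33 - 120.52) / 134.33 * 100 ≈ 10.28.)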

Base case:    6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
         base NUMAB1                    patched NUMAB1
         time in sec  (%stdev)   time in sec  (%stdev)     %gain
 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45 
16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62 
32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58 
64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45


Major Changes since V0:
======================
- A separate migration thread is used for migration, thus alleviating the need
  for multi-threaded scanning (at least as per tracing).

- A simple heuristic for target node calculation is added.

- prctl interface (David R) with a scalar value is added to control per-task scanning.

- Steve's comments on tracing incorporated.

- Fix for the bug reported by Davidlohr.

- Initial scan delay similar to NUMAB1 mode added.

- Got rid of migration lock during mm_walk.

PS: Occasionally, if scanning is too fast compared to migration, I do see
that scanning can stall waiting for the lock. This should be fixed in the
next version by using a memslot for migration.

Disclaimer, takeaways, discussion points and future TODOs
==========================================================
1) Source code and patch segregation are still to be improved; the current
patchset only provides a skeleton.

2) Unification of the sources of hotness is not easy (as mentioned, perhaps
by Jonathan), but perhaps all the consumers/producers can work cooperatively.

Scanning:
3) Major positive: the current patchset is able to cover scanning of the entire
process address space effectively, with simple algorithms to tune scan_size and
scan_period.

4) Effective tracking of folios or address spaces, using (or reusing ideas from)
DAMON, is yet to be explored fully.

5) Use timestamp-information-based migration (similar to numab mode=2)
instead of migrating immediately when the PTE A bit is set.
(Cons:
 - It will not be accurate since it is done outside of the process
   context.
 - The performance benefit may be lost.)

Migration:

6) Currently a fast scanner can bombard the migration list; the migration list
needs to be maintained in a more organized way (e.g., using a memslot), so that
it also helps in maintaining recency/frequency information (similar to kpromoted
posted by Bharata).

7) NUMAB2 throttling is very effective; we would need a common interface to
control migration and also to exploit batch migration.

Thanks to Bharata, Johannes, Gregory, SJ, Chris, David Rientjes, Jonathan,
John Hubbard, Davidlohr, Ying, Willy, Hyeonggon Yoo and many of you for your
valuable comments and support.

Links:
[1] https://lore.kernel.org/lkml/20241127082201.1276-1-gourry@gourry.net/
[2] kstaled: https://lore.kernel.org/lkml/1317170947-17074-3-git-send-email-walken@google.com/#r
[3] https://lore.kernel.org/lkml/Y+Pj+9bbBbHpf6xM@hirez.programming.kicks-ass.net/
[4] RFC V0: https://lore.kernel.org/all/20241201153818.2633616-1-raghavendra.kt@amd.com/
[5] Recap: https://lore.kernel.org/linux-mm/20241226012833.rmmbkws4wdhzdht6@ed.ac.uk/T/
[6] LSFMM: https://lore.kernel.org/linux-mm/20250123105721.424117-1-raghavendra.kt@amd.com/#r
[7] LSFMM: https://lore.kernel.org/linux-mm/20250131130901.00000dd1@huawei.com/

I might have unintentionally CCed more or fewer people than needed.

Patch organization:
patch 1-4: initial skeleton for scanning and migration
patch 5: migration
patch 6-8: scanning optimizations
patch 9: target_node heuristic
patch 10-12: sysfs, vmstat and tracing
patch 13: A basic prctl implementation.

Raghavendra K T (13):
  mm: Add kmmscand kernel daemon
  mm: Maintain mm_struct list in the system
  mm: Scan the mm and create a migration list
  mm: Create a separate kernel thread for migration
  mm/migration: Migrate accessed folios to toptier node
  mm: Add throttling of mm scanning using scan_period
  mm: Add throttling of mm scanning using scan_size
  mm: Add initial scan delay
  mm: Add heuristic to calculate target node
  sysfs: Add sysfs support to tune scanning
  vmstat: Add vmstat counters
  trace/kmmscand: Add tracing of scanning and migration
  prctl: Introduce new prctl to control scanning

 Documentation/filesystems/proc.rst |    2 +
 fs/exec.c                          |    4 +
 fs/proc/task_mmu.c                 |    4 +
 include/linux/kmmscand.h           |   31 +
 include/linux/migrate.h            |    2 +
 include/linux/mm.h                 |   11 +
 include/linux/mm_types.h           |    7 +
 include/linux/vm_event_item.h      |   10 +
 include/trace/events/kmem.h        |   90 ++
 include/uapi/linux/prctl.h         |    7 +
 kernel/fork.c                      |    8 +
 kernel/sys.c                       |   25 +
 mm/Kconfig                         |    8 +
 mm/Makefile                        |    1 +
 mm/kmmscand.c                      | 1515 ++++++++++++++++++++++++++++
 mm/migrate.c                       |    2 +-
 mm/vmstat.c                        |   10 +
 17 files changed, 1736 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/kmmscand.h
 create mode 100644 mm/kmmscand.c


base-commit: b7f94fcf55469ad3ef8a74c35b488dbfa314d1bb
-- 
2.34.1




* [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-21 16:06   ` Jonathan Cameron
  2025-03-19 19:30 ` [RFC PATCH V1 02/13] mm: Maintain mm_struct list in the system Raghavendra K T
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Add a skeleton to support scanning and migration.
Also add a config option for the same.

High level design:

While (1):
  scan the slowtier pages belonging to VMAs of a task.
  Add them to a migration list.

Separate thread:
  migrate scanned pages to a toptier node based on heuristics

The overall code is heavily influenced by the khugepaged design.

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/Kconfig    |   8 +++
 mm/Makefile   |   1 +
 mm/kmmscand.c | 176 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 185 insertions(+)
 create mode 100644 mm/kmmscand.c

diff --git a/mm/Kconfig b/mm/Kconfig
index 1b501db06417..5a4931633e15 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -783,6 +783,14 @@ config KSM
 	  until a program has madvised that an area is MADV_MERGEABLE, and
 	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
 
+config KMMSCAND
+	bool "Enable PTE A bit scanning and Migration"
+	depends on NUMA_BALANCING
+	help
+	  Enable PTE A bit scanning of page. CXL pages accessed are migrated to
+	  a regular NUMA node. The option creates a separate kthread for
+	  scanning and migration.
+
 config DEFAULT_MMAP_MIN_ADDR
 	int "Low address space to protect from user allocation"
 	depends on MMU
diff --git a/mm/Makefile b/mm/Makefile
index 850386a67b3e..45e2f8cc8fd6 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -94,6 +94,7 @@ obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
 obj-$(CONFIG_NUMA) += memory-tiers.o
+obj-$(CONFIG_KMMSCAND) += kmmscand.o
 obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
 obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
 obj-$(CONFIG_PAGE_COUNTER) += page_counter.o
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
new file mode 100644
index 000000000000..6c55250b5cfb
--- /dev/null
+++ b/mm/kmmscand.c
@@ -0,0 +1,176 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/mm.h>
+#include <linux/mm_types.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/mmu_notifier.h>
+#include <linux/swap.h>
+#include <linux/mm_inline.h>
+#include <linux/kthread.h>
+#include <linux/string.h>
+#include <linux/delay.h>
+#include <linux/cleanup.h>
+
+#include <asm/pgalloc.h>
+#include "internal.h"
+
+
+static struct task_struct *kmmscand_thread __read_mostly;
+static DEFINE_MUTEX(kmmscand_mutex);
+
+/* How long to pause between two scan and migration cycles */
+static unsigned int kmmscand_scan_sleep_ms __read_mostly = 16;
+
+/* Max number of mms to scan in one scan and migration cycle */
+#define KMMSCAND_MMS_TO_SCAN	(4 * 1024UL)
+static unsigned long kmmscand_mms_to_scan __read_mostly = KMMSCAND_MMS_TO_SCAN;
+
+bool kmmscand_scan_enabled = true;
+static bool need_wakeup;
+
+static unsigned long kmmscand_sleep_expire;
+
+static DECLARE_WAIT_QUEUE_HEAD(kmmscand_wait);
+
+struct kmmscand_scan {
+	struct list_head mm_head;
+};
+
+struct kmmscand_scan kmmscand_scan = {
+	.mm_head = LIST_HEAD_INIT(kmmscand_scan.mm_head),
+};
+
+static int kmmscand_has_work(void)
+{
+	return !list_empty(&kmmscand_scan.mm_head);
+}
+
+static bool kmmscand_should_wakeup(void)
+{
+	bool wakeup =  kthread_should_stop() || need_wakeup ||
+	       time_after_eq(jiffies, kmmscand_sleep_expire);
+	if (need_wakeup)
+		need_wakeup = false;
+
+	return wakeup;
+}
+
+static void kmmscand_wait_work(void)
+{
+	const unsigned long scan_sleep_jiffies =
+		msecs_to_jiffies(kmmscand_scan_sleep_ms);
+
+	if (!scan_sleep_jiffies)
+		return;
+
+	kmmscand_sleep_expire = jiffies + scan_sleep_jiffies;
+	wait_event_timeout(kmmscand_wait,
+			kmmscand_should_wakeup(),
+			scan_sleep_jiffies);
+	return;
+}
+
+static unsigned long kmmscand_scan_mm_slot(void)
+{
+	/* placeholder for scanning */
+	msleep(100);
+	return 0;
+}
+
+static void kmmscand_do_scan(void)
+{
+	unsigned long iter = 0, mms_to_scan;
+
+	mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);
+
+	while (true) {
+		cond_resched();
+
+		if (unlikely(kthread_should_stop()) ||
+			!READ_ONCE(kmmscand_scan_enabled))
+			break;
+
+		if (kmmscand_has_work())
+			kmmscand_scan_mm_slot();
+
+		iter++;
+		if (iter >= mms_to_scan)
+			break;
+	}
+}
+
+static int kmmscand(void *none)
+{
+	for (;;) {
+		if (unlikely(kthread_should_stop()))
+			break;
+
+		kmmscand_do_scan();
+
+		while (!READ_ONCE(kmmscand_scan_enabled)) {
+			cpu_relax();
+			kmmscand_wait_work();
+		}
+
+		kmmscand_wait_work();
+	}
+	return 0;
+}
+
+static int start_kmmscand(void)
+{
+	int err = 0;
+
+	guard(mutex)(&kmmscand_mutex);
+
+	/* Someone already succeeded in starting the daemon */
+	if (kmmscand_thread)
+		goto end;
+
+	kmmscand_thread = kthread_run(kmmscand, NULL, "kmmscand");
+	if (IS_ERR(kmmscand_thread)) {
+		pr_err("kmmscand: kthread_run(kmmscand) failed\n");
+		err = PTR_ERR(kmmscand_thread);
+		kmmscand_thread = NULL;
+		goto end;
+	} else {
+		pr_info("kmmscand: Successfully started kmmscand");
+	}
+
+	if (!list_empty(&kmmscand_scan.mm_head))
+		wake_up_interruptible(&kmmscand_wait);
+
+end:
+	return err;
+}
+
+static int stop_kmmscand(void)
+{
+	int err = 0;
+
+	guard(mutex)(&kmmscand_mutex);
+
+	if (kmmscand_thread) {
+		kthread_stop(kmmscand_thread);
+		kmmscand_thread = NULL;
+	}
+
+	return err;
+}
+
+static int __init kmmscand_init(void)
+{
+	int err;
+
+	err = start_kmmscand();
+	if (err)
+		goto err_kmmscand;
+
+	return 0;
+
+err_kmmscand:
+	stop_kmmscand();
+
+	return err;
+}
+subsys_initcall(kmmscand_init);
-- 
2.34.1




* [RFC PATCH V1 02/13] mm: Maintain mm_struct list in the system
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 03/13] mm: Scan the mm and create a migration list Raghavendra K T
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Add hooks in the fork and exec paths to link the mm_struct.
Reuse the mm_slot infrastructure to aid insertion and lookup of the mm_struct.

CC: linux-fsdevel@vger.kernel.org
Suggested-by: Bharata B Rao <bharata@amd.com>

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 fs/exec.c                |  4 ++
 include/linux/kmmscand.h | 30 ++++++++++++++
 kernel/fork.c            |  4 ++
 mm/kmmscand.c            | 86 +++++++++++++++++++++++++++++++++++++++-
 4 files changed, 123 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/kmmscand.h

diff --git a/fs/exec.c b/fs/exec.c
index 506cd411f4ac..e76285e4bc73 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -68,6 +68,7 @@
 #include <linux/user_events.h>
 #include <linux/rseq.h>
 #include <linux/ksm.h>
+#include <linux/kmmscand.h>
 
 #include <linux/uaccess.h>
 #include <asm/mmu_context.h>
@@ -266,6 +267,8 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 	if (err)
 		goto err_ksm;
 
+	kmmscand_execve(mm);
+
 	/*
 	 * Place the stack at the largest stack address the architecture
 	 * supports. Later, we'll move this to an appropriate place. We don't
@@ -288,6 +291,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 	return 0;
 err:
 	ksm_exit(mm);
+	kmmscand_exit(mm);
 err_ksm:
 	mmap_write_unlock(mm);
 err_free:
diff --git a/include/linux/kmmscand.h b/include/linux/kmmscand.h
new file mode 100644
index 000000000000..b120c65ee8c6
--- /dev/null
+++ b/include/linux/kmmscand.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_KMMSCAND_H_
+#define _LINUX_KMMSCAND_H_
+
+#ifdef CONFIG_KMMSCAND
+extern void __kmmscand_enter(struct mm_struct *mm);
+extern void __kmmscand_exit(struct mm_struct *mm);
+
+static inline void kmmscand_execve(struct mm_struct *mm)
+{
+	__kmmscand_enter(mm);
+}
+
+static inline void kmmscand_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+	__kmmscand_enter(mm);
+}
+
+static inline void kmmscand_exit(struct mm_struct *mm)
+{
+	__kmmscand_exit(mm);
+}
+#else /* !CONFIG_KMMSCAND */
+static inline void __kmmscand_enter(struct mm_struct *mm) {}
+static inline void __kmmscand_exit(struct mm_struct *mm) {}
+static inline void kmmscand_execve(struct mm_struct *mm) {}
+static inline void kmmscand_fork(struct mm_struct *mm, struct mm_struct *oldmm) {}
+static inline void kmmscand_exit(struct mm_struct *mm) {}
+#endif
+#endif /* _LINUX_KMMSCAND_H_ */
diff --git a/kernel/fork.c b/kernel/fork.c
index 735405a9c5f3..f61c55cf33c2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -85,6 +85,7 @@
 #include <linux/user-return-notifier.h>
 #include <linux/oom.h>
 #include <linux/khugepaged.h>
+#include <linux/kmmscand.h>
 #include <linux/signalfd.h>
 #include <linux/uprobes.h>
 #include <linux/aio.h>
@@ -656,6 +657,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	mm->exec_vm = oldmm->exec_vm;
 	mm->stack_vm = oldmm->stack_vm;
 
+	kmmscand_fork(mm, oldmm);
+
 	/* Use __mt_dup() to efficiently build an identical maple tree. */
 	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
 	if (unlikely(retval))
@@ -1353,6 +1356,7 @@ static inline void __mmput(struct mm_struct *mm)
 	exit_aio(mm);
 	ksm_exit(mm);
 	khugepaged_exit(mm); /* must run before exit_mmap */
+	kmmscand_exit(mm);
 	exit_mmap(mm);
 	mm_put_huge_zero_folio(mm);
 	set_mm_exe_file(mm, NULL);
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 6c55250b5cfb..36d0fea31dea 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -7,13 +7,14 @@
 #include <linux/swap.h>
 #include <linux/mm_inline.h>
 #include <linux/kthread.h>
+#include <linux/kmmscand.h>
 #include <linux/string.h>
 #include <linux/delay.h>
 #include <linux/cleanup.h>
 
 #include <asm/pgalloc.h>
 #include "internal.h"
-
+#include "mm_slot.h"
 
 static struct task_struct *kmmscand_thread __read_mostly;
 static DEFINE_MUTEX(kmmscand_mutex);
@@ -30,10 +31,21 @@ static bool need_wakeup;
 
 static unsigned long kmmscand_sleep_expire;
 
+static DEFINE_SPINLOCK(kmmscand_mm_lock);
 static DECLARE_WAIT_QUEUE_HEAD(kmmscand_wait);
 
+#define KMMSCAND_SLOT_HASH_BITS 10
+static DEFINE_READ_MOSTLY_HASHTABLE(kmmscand_slots_hash, KMMSCAND_SLOT_HASH_BITS);
+
+static struct kmem_cache *kmmscand_slot_cache __read_mostly;
+
+struct kmmscand_mm_slot {
+	struct mm_slot slot;
+};
+
 struct kmmscand_scan {
 	struct list_head mm_head;
+	struct kmmscand_mm_slot *mm_slot;
 };
 
 struct kmmscand_scan kmmscand_scan = {
@@ -70,6 +82,11 @@ static void kmmscand_wait_work(void)
 	return;
 }
 
+static inline int kmmscand_test_exit(struct mm_struct *mm)
+{
+	return atomic_read(&mm->mm_users) == 0;
+}
+
 static unsigned long kmmscand_scan_mm_slot(void)
 {
 	/* placeholder for scanning */
@@ -117,6 +134,65 @@ static int kmmscand(void *none)
 	return 0;
 }
 
+static inline void kmmscand_destroy(void)
+{
+	kmem_cache_destroy(kmmscand_slot_cache);
+}
+
+void __kmmscand_enter(struct mm_struct *mm)
+{
+	struct kmmscand_mm_slot *kmmscand_slot;
+	struct mm_slot *slot;
+	int wakeup;
+
+	/* __kmmscand_exit() must not run from under us */
+	VM_BUG_ON_MM(kmmscand_test_exit(mm), mm);
+
+	kmmscand_slot = mm_slot_alloc(kmmscand_slot_cache);
+
+	if (!kmmscand_slot)
+		return;
+
+	slot = &kmmscand_slot->slot;
+
+	spin_lock(&kmmscand_mm_lock);
+	mm_slot_insert(kmmscand_slots_hash, mm, slot);
+
+	wakeup = list_empty(&kmmscand_scan.mm_head);
+	list_add_tail(&slot->mm_node, &kmmscand_scan.mm_head);
+	spin_unlock(&kmmscand_mm_lock);
+
+	mmgrab(mm);
+	if (wakeup)
+		wake_up_interruptible(&kmmscand_wait);
+}
+
+void __kmmscand_exit(struct mm_struct *mm)
+{
+	struct kmmscand_mm_slot *mm_slot;
+	struct mm_slot *slot;
+	int free = 0;
+
+	spin_lock(&kmmscand_mm_lock);
+	slot = mm_slot_lookup(kmmscand_slots_hash, mm);
+	mm_slot = mm_slot_entry(slot, struct kmmscand_mm_slot, slot);
+	if (mm_slot && kmmscand_scan.mm_slot != mm_slot) {
+		hash_del(&slot->hash);
+		list_del(&slot->mm_node);
+		free = 1;
+	}
+
+	spin_unlock(&kmmscand_mm_lock);
+
+	if (free) {
+		mm_slot_free(kmmscand_slot_cache, mm_slot);
+		mmdrop(mm);
+	} else if (mm_slot) {
+		mmap_write_lock(mm);
+		mmap_write_unlock(mm);
+	}
+}
+
 static int start_kmmscand(void)
 {
 	int err = 0;
@@ -162,6 +238,13 @@ static int __init kmmscand_init(void)
 {
 	int err;
 
+	kmmscand_slot_cache = KMEM_CACHE(kmmscand_mm_slot, 0);
+
+	if (!kmmscand_slot_cache) {
+		pr_err("kmmscand: kmem_cache error");
+		return -ENOMEM;
+	}
+
 	err = start_kmmscand();
 	if (err)
 		goto err_kmmscand;
@@ -170,6 +253,7 @@ static int __init kmmscand_init(void)
 
 err_kmmscand:
 	stop_kmmscand();
+	kmmscand_destroy();
 
 	return err;
 }
-- 
2.34.1




* [RFC PATCH V1 03/13] mm: Scan the mm and create a migration list
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 02/13] mm: Maintain mm_struct list in the system Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration Raghavendra K T
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Since we already have the list of mm_structs in the system, add a module
that scans each mm by walking the VMAs of each mm_struct and scanning all
the pages associated with them.

In the scan path: check for recently accessed pages (folios) belonging to
slowtier nodes. Add all those folios to a migration list.

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 323 +++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 321 insertions(+), 2 deletions(-)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 36d0fea31dea..a76a58bf37b2 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -4,10 +4,18 @@
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
 #include <linux/mmu_notifier.h>
+#include <linux/rmap.h>
+#include <linux/pagewalk.h>
+#include <linux/page_ext.h>
+#include <linux/page_idle.h>
+#include <linux/page_table_check.h>
+#include <linux/pagemap.h>
 #include <linux/swap.h>
 #include <linux/mm_inline.h>
 #include <linux/kthread.h>
 #include <linux/kmmscand.h>
+#include <linux/memory-tiers.h>
+#include <linux/mempolicy.h>
 #include <linux/string.h>
 #include <linux/delay.h>
 #include <linux/cleanup.h>
@@ -18,6 +26,11 @@
 
 static struct task_struct *kmmscand_thread __read_mostly;
 static DEFINE_MUTEX(kmmscand_mutex);
+/*
+ * Total VMA size to cover during scan.
+ */
+#define KMMSCAND_SCAN_SIZE	(1 * 1024 * 1024 * 1024UL)
+static unsigned long kmmscand_scan_size __read_mostly = KMMSCAND_SCAN_SIZE;
 
 /* How long to pause between two scan and migration cycle */
 static unsigned int kmmscand_scan_sleep_ms __read_mostly = 16;
@@ -39,10 +52,14 @@ static DEFINE_READ_MOSTLY_HASHTABLE(kmmscand_slots_hash, KMMSCAND_SLOT_HASH_BITS
 
 static struct kmem_cache *kmmscand_slot_cache __read_mostly;
 
+/* Per mm information collected to control VMA scanning */
 struct kmmscand_mm_slot {
 	struct mm_slot slot;
+	long address;
+	bool is_scanned;
 };
 
+/* Data structure to keep track of current mm under scan */
 struct kmmscand_scan {
 	struct list_head mm_head;
 	struct kmmscand_mm_slot *mm_slot;
@@ -52,6 +69,33 @@ struct kmmscand_scan kmmscand_scan = {
 	.mm_head = LIST_HEAD_INIT(kmmscand_scan.mm_head),
 };
 
+/*
+ * Data structure passed to control scanning and also collect
+ * per memory node information
+ */
+struct kmmscand_scanctrl {
+	struct list_head scan_list;
+	unsigned long address;
+};
+
+struct kmmscand_scanctrl kmmscand_scanctrl;
+
+/* Per folio information used for migration */
+struct kmmscand_migrate_info {
+	struct list_head migrate_node;
+	struct mm_struct *mm;
+	struct folio *folio;
+	unsigned long address;
+};
+
+static bool kmmscand_eligible_srcnid(int nid)
+{
+	if (!node_is_toptier(nid))
+		return true;
+
+	return false;
+}
+
 static int kmmscand_has_work(void)
 {
 	return !list_empty(&kmmscand_scan.mm_head);
@@ -82,15 +126,277 @@ static void kmmscand_wait_work(void)
 	return;
 }
 
+
+static inline bool is_valid_folio(struct folio *folio)
+{
+	if (!folio || folio_test_unevictable(folio) || !folio_mapped(folio) ||
+		folio_is_zone_device(folio) || folio_likely_mapped_shared(folio))
+		return false;
+
+	return true;
+}
+
+static bool folio_idle_clear_pte_refs_one(struct folio *folio,
+					 struct vm_area_struct *vma,
+					 unsigned long addr,
+					 pte_t *ptep)
+{
+	bool referenced = false;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t *pmd = pmd_off(mm, addr);
+
+	if (ptep) {
+		if (ptep_clear_young_notify(vma, addr, ptep))
+			referenced = true;
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		if (!pmd_present(*pmd))
+			WARN_ON_ONCE(1);
+		if (pmdp_clear_young_notify(vma, addr, pmd))
+			referenced = true;
+	} else {
+		WARN_ON_ONCE(1);
+	}
+
+	if (referenced) {
+		folio_clear_idle(folio);
+		folio_set_young(folio);
+	}
+
+	return true;
+}
+
+static void page_idle_clear_pte_refs(struct page *page, pte_t *pte, struct mm_walk *walk)
+{
+	bool need_lock;
+	struct folio *folio =  page_folio(page);
+	unsigned long address;
+
+	if (!folio_mapped(folio) || !folio_raw_mapping(folio))
+		return;
+
+	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
+	if (need_lock && !folio_trylock(folio))
+		return;
+	address = vma_address(walk->vma, page_pgoff(folio, page), compound_nr(page));
+	VM_BUG_ON_VMA(address == -EFAULT, walk->vma);
+	folio_idle_clear_pte_refs_one(folio, walk->vma, address, pte);
+
+	if (need_lock)
+		folio_unlock(folio);
+}
+
+static int hot_vma_idle_pte_entry(pte_t *pte,
+				 unsigned long addr,
+				 unsigned long next,
+				 struct mm_walk *walk)
+{
+	struct page *page;
+	struct folio *folio;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	struct kmmscand_migrate_info *info;
+	struct kmmscand_scanctrl *scanctrl = walk->private;
+	int srcnid;
+
+	scanctrl->address = addr;
+	pte_t pteval = ptep_get(pte);
+
+	if (!pte_present(pteval))
+		return 0;
+
+	if (pte_none(pteval))
+		return 0;
+
+	vma = walk->vma;
+	mm = vma->vm_mm;
+
+	page = pte_page(*pte);
+
+	page_idle_clear_pte_refs(page, pte, walk);
+
+	folio = page_folio(page);
+	folio_get(folio);
+
+	if (!is_valid_folio(folio)) {
+		folio_put(folio);
+		return 0;
+	}
+	srcnid = folio_nid(folio);
+
+
+	if (!folio_test_lru(folio)) {
+		folio_put(folio);
+		return 0;
+	}
+
+	if (!folio_test_idle(folio) || folio_test_young(folio) ||
+			mmu_notifier_test_young(mm, addr) ||
+			folio_test_referenced(folio) || pte_young(pteval)) {
+
+		/* Do not try to promote pages from regular nodes */
+		if (!kmmscand_eligible_srcnid(srcnid)) {
+			folio_put(folio);
+			return 0;
+		}
+		/* XXX: Leaking memory. TBD: consume info */
+		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
+		if (info && scanctrl) {
+
+			info->mm = mm;
+			info->address = addr;
+			info->folio = folio;
+
+			/* No need of lock now */
+			list_add_tail(&info->migrate_node, &scanctrl->scan_list);
+		}
+	}
+
+	folio_set_idle(folio);
+	folio_put(folio);
+	return 0;
+}
+
+static const struct mm_walk_ops hot_vma_set_idle_ops = {
+	.pte_entry = hot_vma_idle_pte_entry,
+	.walk_lock = PGWALK_RDLOCK,
+};
+
+static void kmmscand_walk_page_vma(struct vm_area_struct *vma, struct kmmscand_scanctrl *scanctrl)
+{
+	if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+	    is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
+		return;
+	}
+	if (!vma->vm_mm ||
+	    (vma->vm_file && (vma->vm_flags & (VM_READ|VM_WRITE)) == (VM_READ)))
+		return;
+
+	if (!vma_is_accessible(vma))
+		return;
+
+	walk_page_vma(vma, &hot_vma_set_idle_ops, scanctrl);
+}
+
 static inline int kmmscand_test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+static void kmmscand_collect_mm_slot(struct kmmscand_mm_slot *mm_slot)
+{
+	struct mm_slot *slot = &mm_slot->slot;
+	struct mm_struct *mm = slot->mm;
+
+	lockdep_assert_held(&kmmscand_mm_lock);
+
+	if (kmmscand_test_exit(mm)) {
+		/* free mm_slot */
+		hash_del(&slot->hash);
+		list_del(&slot->mm_node);
+
+		mm_slot_free(kmmscand_slot_cache, mm_slot);
+		mmdrop(mm);
+	}
+}
+
 static unsigned long kmmscand_scan_mm_slot(void)
 {
-	/* placeholder for scanning */
-	msleep(100);
+	bool next_mm = false;
+	bool update_mmslot_info = false;
+
+	unsigned long vma_scanned_size = 0;
+	unsigned long address;
+
+	struct mm_slot *slot;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma = NULL;
+	struct kmmscand_mm_slot *mm_slot;
+
+	/* Retrieve mm */
+	spin_lock(&kmmscand_mm_lock);
+
+	if (kmmscand_scan.mm_slot) {
+		mm_slot = kmmscand_scan.mm_slot;
+		slot = &mm_slot->slot;
+		address = mm_slot->address;
+	} else {
+		slot = list_entry(kmmscand_scan.mm_head.next,
+				     struct mm_slot, mm_node);
+		mm_slot = mm_slot_entry(slot, struct kmmscand_mm_slot, slot);
+		address = mm_slot->address;
+		kmmscand_scan.mm_slot = mm_slot;
+	}
+
+	mm = slot->mm;
+	mm_slot->is_scanned = true;
+	spin_unlock(&kmmscand_mm_lock);
+
+	if (unlikely(!mmap_read_trylock(mm)))
+		goto outerloop_mmap_lock;
+
+	if (unlikely(kmmscand_test_exit(mm))) {
+		next_mm = true;
+		goto outerloop;
+	}
+
+	VMA_ITERATOR(vmi, mm, address);
+
+	for_each_vma(vmi, vma) {
+		kmmscand_walk_page_vma(vma, &kmmscand_scanctrl);
+		vma_scanned_size += vma->vm_end - vma->vm_start;
+
+		if (vma_scanned_size >= kmmscand_scan_size) {
+			next_mm = true;
+			/* TBD: Add scanned folios to migration list */
+			break;
+		}
+	}
+
+	if (!vma)
+		address = 0;
+	else
+		address = kmmscand_scanctrl.address + PAGE_SIZE;
+
+	update_mmslot_info = true;
+
+	if (update_mmslot_info)
+		mm_slot->address = address;
+
+outerloop:
+	/* exit_mmap will destroy ptes after this */
+	mmap_read_unlock(mm);
+
+outerloop_mmap_lock:
+	spin_lock(&kmmscand_mm_lock);
+	WARN_ON(kmmscand_scan.mm_slot != mm_slot);
+
+	/*
+	 * Release the current mm_slot if this mm is about to die, or
+	 * if we scanned all vmas of this mm.
+	 */
+	if (unlikely(kmmscand_test_exit(mm)) || !vma || next_mm) {
+		/*
+		 * Make sure that if mm_users is reaching zero while
+		 * kmmscand runs here, kmmscand_exit will find
+		 * mm_slot not pointing to the exiting mm.
+		 */
+		if (slot->mm_node.next != &kmmscand_scan.mm_head) {
+			slot = list_entry(slot->mm_node.next,
+					struct mm_slot, mm_node);
+			kmmscand_scan.mm_slot =
+				mm_slot_entry(slot, struct kmmscand_mm_slot, slot);
+
+		} else
+			kmmscand_scan.mm_slot = NULL;
+
+		if (kmmscand_test_exit(mm)) {
+			kmmscand_collect_mm_slot(mm_slot);
+			goto end;
+		}
+	}
+	mm_slot->is_scanned = false;
+end:
+	spin_unlock(&kmmscand_mm_lock);
 	return 0;
 }
 
@@ -153,6 +459,7 @@ void __kmmscand_enter(struct mm_struct *mm)
 	if (!kmmscand_slot)
 		return;
 
+	kmmscand_slot->address = 0;
 	slot = &kmmscand_slot->slot;
 
 	spin_lock(&kmmscand_mm_lock);
@@ -180,6 +487,12 @@ void __kmmscand_exit(struct mm_struct *mm)
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
 		free = 1;
+	} else if (mm_slot && kmmscand_scan.mm_slot == mm_slot && !mm_slot->is_scanned) {
+		hash_del(&slot->hash);
+		list_del(&slot->mm_node);
+		free = 1;
+		/* TBD: Set the actual next slot */
+		kmmscand_scan.mm_slot = NULL;
 	}
 
 	spin_unlock(&kmmscand_mm_lock);
@@ -233,6 +546,11 @@ static int stop_kmmscand(void)
 
 	return err;
 }
+static void init_list(void)
+{
+	INIT_LIST_HEAD(&kmmscand_scanctrl.scan_list);
+	init_waitqueue_head(&kmmscand_wait);
+}
 
 static int __init kmmscand_init(void)
 {
@@ -245,6 +563,7 @@ static int __init kmmscand_init(void)
 		return -ENOMEM;
 	}
 
+	init_list();
 	err = start_kmmscand();
 	if (err)
 		goto err_kmmscand;
-- 
2.34.1




* [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (2 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 03/13] mm: Scan the mm and create a migration list Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-21 17:29   ` Jonathan Cameron
  2025-03-19 19:30 ` [RFC PATCH V1 05/13] mm/migration: Migrate accessed folios to toptier node Raghavendra K T
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Having an independent thread helps in:
 - Alleviating the need for multiple scanning threads
 - Controlling batch migration (TBD)
 - Migration throttling (TBD)

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 154 insertions(+), 3 deletions(-)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index a76a58bf37b2..6e96cfab5b85 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -4,6 +4,7 @@
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
 #include <linux/mmu_notifier.h>
+#include <linux/migrate.h>
 #include <linux/rmap.h>
 #include <linux/pagewalk.h>
 #include <linux/page_ext.h>
@@ -41,10 +42,26 @@ static unsigned long kmmscand_mms_to_scan __read_mostly = KMMSCAND_MMS_TO_SCAN;
 
 bool kmmscand_scan_enabled = true;
 static bool need_wakeup;
+static bool migrated_need_wakeup;
+
+/* How long to pause between two migration cycles */
+static unsigned int kmmmigrate_sleep_ms __read_mostly = 20;
+
+static struct task_struct *kmmmigrated_thread __read_mostly;
+static DEFINE_MUTEX(kmmmigrated_mutex);
+static DECLARE_WAIT_QUEUE_HEAD(kmmmigrated_wait);
+static unsigned long kmmmigrated_sleep_expire;
+
+/* mm of the migrating folio entry */
+static struct mm_struct *kmmscand_cur_migrate_mm;
+
+/* Migration list is manipulated underneath because of mm_exit */
+static bool  kmmscand_migration_list_dirty;
 
 static unsigned long kmmscand_sleep_expire;
 
 static DEFINE_SPINLOCK(kmmscand_mm_lock);
+static DEFINE_SPINLOCK(kmmscand_migrate_lock);
 static DECLARE_WAIT_QUEUE_HEAD(kmmscand_wait);
 
 #define KMMSCAND_SLOT_HASH_BITS 10
@@ -80,6 +97,14 @@ struct kmmscand_scanctrl {
 
 struct kmmscand_scanctrl kmmscand_scanctrl;
 
+struct kmmscand_migrate_list {
+	struct list_head migrate_head;
+};
+
+struct kmmscand_migrate_list kmmscand_migrate_list = {
+	.migrate_head = LIST_HEAD_INIT(kmmscand_migrate_list.migrate_head),
+};
+
 /* Per folio information used for migration */
 struct kmmscand_migrate_info {
 	struct list_head migrate_node;
@@ -101,6 +126,13 @@ static int kmmscand_has_work(void)
 	return !list_empty(&kmmscand_scan.mm_head);
 }
 
+static int kmmmigrated_has_work(void)
+{
+	if (!list_empty(&kmmscand_migrate_list.migrate_head))
+		return true;
+	return false;
+}
+
 static bool kmmscand_should_wakeup(void)
 {
 	bool wakeup =  kthread_should_stop() || need_wakeup ||
@@ -111,6 +143,16 @@ static bool kmmscand_should_wakeup(void)
 	return wakeup;
 }
 
+static bool kmmmigrated_should_wakeup(void)
+{
+	bool wakeup =  kthread_should_stop() || migrated_need_wakeup ||
+	       time_after_eq(jiffies, kmmmigrated_sleep_expire);
+	if (migrated_need_wakeup)
+		migrated_need_wakeup = false;
+
+	return wakeup;
+}
+
 static void kmmscand_wait_work(void)
 {
 	const unsigned long scan_sleep_jiffies =
@@ -126,6 +168,19 @@ static void kmmscand_wait_work(void)
 	return;
 }
 
+static void kmmmigrated_wait_work(void)
+{
+	const unsigned long migrate_sleep_jiffies =
+		msecs_to_jiffies(kmmmigrate_sleep_ms);
+
+	if (!migrate_sleep_jiffies)
+		return;
+
+	kmmmigrated_sleep_expire = jiffies + migrate_sleep_jiffies;
+	wait_event_timeout(kmmmigrated_wait,
+			kmmmigrated_should_wakeup(),
+			migrate_sleep_jiffies);
+}
 
 static inline bool is_valid_folio(struct folio *folio)
 {
@@ -238,7 +293,6 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 			folio_put(folio);
 			return 0;
 		}
-		/* XXX: Leaking memory. TBD: consume info */
 		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
 		if (info && scanctrl) {
 
@@ -282,6 +336,28 @@ static inline int kmmscand_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
+{
+	struct kmmscand_migrate_info *info, *tmp;
+
+	spin_lock(&kmmscand_migrate_lock);
+	if (!list_empty(&kmmscand_migrate_list.migrate_head)) {
+		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
+			/* A folio in this mm is being migrated. wait */
+			WRITE_ONCE(kmmscand_migration_list_dirty, true);
+		}
+
+		list_for_each_entry_safe(info, tmp, &kmmscand_migrate_list.migrate_head,
+			migrate_node) {
+			if (info && (info->mm == mm)) {
+				info->mm = NULL;
+				WRITE_ONCE(kmmscand_migration_list_dirty, true);
+			}
+		}
+	}
+	spin_unlock(&kmmscand_migrate_lock);
+}
+
 static void kmmscand_collect_mm_slot(struct kmmscand_mm_slot *mm_slot)
 {
 	struct mm_slot *slot = &mm_slot->slot;
@@ -294,11 +370,17 @@ static void kmmscand_collect_mm_slot(struct kmmscand_mm_slot *mm_slot)
 		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
 
+		kmmscand_cleanup_migration_list(mm);
+
 		mm_slot_free(kmmscand_slot_cache, mm_slot);
 		mmdrop(mm);
 	}
 }
 
+static void kmmscand_migrate_folio(void)
+{
+}
+
 static unsigned long kmmscand_scan_mm_slot(void)
 {
 	bool next_mm = false;
@@ -347,9 +429,17 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 		if (vma_scanned_size >= kmmscand_scan_size) {
 			next_mm = true;
-			/* TBD: Add scanned folios to migration list */
+			/* Add scanned folios to migration list */
+			spin_lock(&kmmscand_migrate_lock);
+			list_splice_tail_init(&kmmscand_scanctrl.scan_list,
+						&kmmscand_migrate_list.migrate_head);
+			spin_unlock(&kmmscand_migrate_lock);
 			break;
 		}
+		spin_lock(&kmmscand_migrate_lock);
+		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
+					&kmmscand_migrate_list.migrate_head);
+		spin_unlock(&kmmscand_migrate_lock);
 	}
 
 	if (!vma)
@@ -478,7 +568,7 @@ void __kmmscand_exit(struct mm_struct *mm)
 {
 	struct kmmscand_mm_slot *mm_slot;
 	struct mm_slot *slot;
-	int free = 0;
+	int free = 0, serialize = 1;
 
 	spin_lock(&kmmscand_mm_lock);
 	slot = mm_slot_lookup(kmmscand_slots_hash, mm);
@@ -493,10 +583,15 @@ void __kmmscand_exit(struct mm_struct *mm)
 		free = 1;
 		/* TBD: Set the actual next slot */
 		kmmscand_scan.mm_slot = NULL;
+	} else if (mm_slot && kmmscand_scan.mm_slot == mm_slot && mm_slot->is_scanned) {
+		serialize = 0;
 	}
 
 	spin_unlock(&kmmscand_mm_lock);
 
+	if (serialize)
+		kmmscand_cleanup_migration_list(mm);
+
 	if (free) {
 		mm_slot_free(kmmscand_slot_cache, mm_slot);
 		mmdrop(mm);
@@ -546,10 +641,59 @@ static int stop_kmmscand(void)
 
 	return err;
 }
+static int kmmmigrated(void *arg)
+{
+	for (;;) {
+		WRITE_ONCE(migrated_need_wakeup, false);
+		if (unlikely(kthread_should_stop()))
+			break;
+		if (kmmmigrated_has_work())
+			kmmscand_migrate_folio();
+		msleep(20);
+		kmmmigrated_wait_work();
+	}
+	return 0;
+}
+
+static int start_kmmmigrated(void)
+{
+	int err = 0;
+
+	guard(mutex)(&kmmmigrated_mutex);
+
+	/* Someone already succeeded in starting daemon */
+	if (kmmmigrated_thread)
+		goto end;
+
+	kmmmigrated_thread = kthread_run(kmmmigrated, NULL, "kmmmigrated");
+	if (IS_ERR(kmmmigrated_thread)) {
+		pr_err("kmmmigrated: kthread_run(kmmmigrated)  failed\n");
+		err = PTR_ERR(kmmmigrated_thread);
+		kmmmigrated_thread = NULL;
+		goto end;
+	} else {
+		pr_info("kmmmigrated: Successfully started kmmmigrated");
+	}
+
+	wake_up_interruptible(&kmmmigrated_wait);
+end:
+	return err;
+}
+
+static int stop_kmmmigrated(void)
+{
+	guard(mutex)(&kmmmigrated_mutex);
+	kthread_stop(kmmmigrated_thread);
+	return 0;
+}
+
 static void init_list(void)
 {
+	INIT_LIST_HEAD(&kmmscand_migrate_list.migrate_head);
 	INIT_LIST_HEAD(&kmmscand_scanctrl.scan_list);
+	spin_lock_init(&kmmscand_migrate_lock);
 	init_waitqueue_head(&kmmscand_wait);
+	init_waitqueue_head(&kmmmigrated_wait);
 }
 
 static int __init kmmscand_init(void)
@@ -568,8 +712,15 @@ static int __init kmmscand_init(void)
 	if (err)
 		goto err_kmmscand;
 
+	err = start_kmmmigrated();
+	if (err)
+		goto err_kmmmigrated;
+
 	return 0;
 
+err_kmmmigrated:
+	stop_kmmmigrated();
+
 err_kmmscand:
 	stop_kmmscand();
 	kmmscand_destroy();
-- 
2.34.1




* [RFC PATCH V1 05/13] mm/migration: Migrate accessed folios to toptier node
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (3 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 06/13] mm: Add throttling of mm scanning using scan_period Raghavendra K T
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

For each recently accessed slowtier folio in the migration list:
 - Isolate LRU pages
 - Migrate to a regular node.

The rationale behind the whole migration is to speed up access to
recently accessed pages.

Currently, the PTE A bit scanning approach lacks information about the exact
destination node to migrate to.

Reason:
 PROT_NONE hint-fault-based scanning is done in process context. There,
when the fault occurs, the source CPU of the faulting task is known, and
the time of page access is also accurate.
Lacking the above information, migration is done to node 0 by default.

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 include/linux/migrate.h |   2 +
 mm/kmmscand.c           | 209 ++++++++++++++++++++++++++++++++++++++++
 mm/migrate.c            |   2 +-
 3 files changed, 212 insertions(+), 1 deletion(-)

PS: Later in the series, a simple heuristic is used to find the target node (patch 9).

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 29919faea2f1..22abae80cbb7 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -142,6 +142,8 @@ const struct movable_operations *page_movable_ops(struct page *page)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
+bool migrate_balanced_pgdat(struct pglist_data *pgdat,
+				   unsigned long nr_migrate_pages);
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
 int migrate_misplaced_folio(struct folio *folio, int node);
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 6e96cfab5b85..feca775d0191 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -59,6 +59,8 @@ static struct mm_struct *kmmscand_cur_migrate_mm;
 static bool  kmmscand_migration_list_dirty;
 
 static unsigned long kmmscand_sleep_expire;
+#define KMMSCAND_DEFAULT_TARGET_NODE	(0)
+static int kmmscand_target_node = KMMSCAND_DEFAULT_TARGET_NODE;
 
 static DEFINE_SPINLOCK(kmmscand_mm_lock);
 static DEFINE_SPINLOCK(kmmscand_migrate_lock);
@@ -182,6 +184,76 @@ static void kmmmigrated_wait_work(void)
 			migrate_sleep_jiffies);
 }
 
+/*
+ * We do not yet know what info to pass in the future to make the
+ * decision on the target node. Keep it void * for now.
+ */
+static int kmmscand_get_target_node(void *data)
+{
+	return kmmscand_target_node;
+}
+
+extern bool migrate_balanced_pgdat(struct pglist_data *pgdat,
+					unsigned long nr_migrate_pages);
+
+/*XXX: Taken from migrate.c to avoid NUMAB mode=2 and NULL vma checks*/
+static int kmmscand_migrate_misplaced_folio_prepare(struct folio *folio,
+		struct vm_area_struct *vma, int node)
+{
+	int nr_pages = folio_nr_pages(folio);
+	pg_data_t *pgdat = NODE_DATA(node);
+
+	if (folio_is_file_lru(folio)) {
+		/*
+		 * Do not migrate file folios that are mapped in multiple
+		 * processes with execute permissions as they are probably
+		 * shared libraries.
+		 *
+		 * See folio_likely_mapped_shared() on possible imprecision
+		 * when we cannot easily detect if a folio is shared.
+		 */
+		if (vma && (vma->vm_flags & VM_EXEC) &&
+		    folio_likely_mapped_shared(folio))
+			return -EACCES;
+		/*
+		 * Do not migrate dirty folios as not all filesystems can move
+		 * dirty folios in MIGRATE_ASYNC mode which is a waste of
+		 * cycles.
+		 */
+		if (folio_test_dirty(folio))
+			return -EAGAIN;
+	}
+
+	/* Avoid migrating to a node that is nearly full */
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (managed_zone(pgdat->node_zones + z))
+				break;
+		}
+
+		/*
+		 * If there are no managed zones, it should not proceed
+		 * further.
+		 */
+		if (z < 0)
+			return -EAGAIN;
+
+		wakeup_kswapd(pgdat->node_zones + z, 0,
+			      folio_order(folio), ZONE_MOVABLE);
+		return -EAGAIN;
+	}
+
+	if (!folio_isolate_lru(folio))
+		return -EAGAIN;
+
+	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
+			    nr_pages);
+
+	return 0;
+}
+
 static inline bool is_valid_folio(struct folio *folio)
 {
 	if (!folio || folio_test_unevictable(folio) || !folio_mapped(folio) ||
@@ -191,6 +263,101 @@ static inline bool is_valid_folio(struct folio *folio)
 	return true;
 }
 
+enum kmmscand_migration_err {
+	KMMSCAND_NULL_MM = 1,
+	KMMSCAND_EXITING_MM,
+	KMMSCAND_INVALID_FOLIO,
+	KMMSCAND_NONLRU_FOLIO,
+	KMMSCAND_INELIGIBLE_SRC_NODE,
+	KMMSCAND_SAME_SRC_DEST_NODE,
+	KMMSCAND_PTE_NOT_PRESENT,
+	KMMSCAND_PMD_NOT_PRESENT,
+	KMMSCAND_NO_PTE_OFFSET_MAP_LOCK,
+	KMMSCAND_LRU_ISOLATION_ERR,
+};
+
+static int kmmscand_promote_folio(struct kmmscand_migrate_info *info, int destnid)
+{
+	unsigned long pfn;
+	unsigned long address;
+	struct page *page;
+	struct folio *folio;
+	int ret;
+	struct mm_struct *mm;
+	pmd_t *pmd;
+	pte_t *pte;
+	spinlock_t *ptl;
+	pmd_t pmde;
+	int srcnid;
+
+	if (info->mm == NULL)
+		return KMMSCAND_NULL_MM;
+
+	if (info->mm == READ_ONCE(kmmscand_cur_migrate_mm) &&
+		READ_ONCE(kmmscand_migration_list_dirty)) {
+		WARN_ON_ONCE(mm);
+		return KMMSCAND_EXITING_MM;
+	}
+
+	mm = info->mm;
+	folio = info->folio;
+
+	/* Check again if the folio is really valid now */
+	if (folio) {
+		pfn = folio_pfn(folio);
+		page = pfn_to_online_page(pfn);
+	}
+
+	if (!page || PageTail(page) || !is_valid_folio(folio))
+		return KMMSCAND_INVALID_FOLIO;
+
+	if (!folio_test_lru(folio))
+		return KMMSCAND_NONLRU_FOLIO;
+
+	folio_get(folio);
+
+	srcnid = folio_nid(folio);
+
+	/* Do not try to promote pages from regular nodes */
+	if (!kmmscand_eligible_srcnid(srcnid)) {
+		folio_put(folio);
+		return KMMSCAND_INELIGIBLE_SRC_NODE;
+	}
+
+	/* Also happen when it is already migrated */
+	if (srcnid == destnid) {
+		folio_put(folio);
+		return KMMSCAND_SAME_SRC_DEST_NODE;
+	}
+	address = info->address;
+	pmd = pmd_off(mm, address);
+	pmde = pmdp_get(pmd);
+
+	if (!pmd_present(pmde)) {
+		folio_put(folio);
+		return KMMSCAND_PMD_NOT_PRESENT;
+	}
+
+	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (!pte) {
+		folio_put(folio);
+		WARN_ON_ONCE(!pte);
+		return KMMSCAND_NO_PTE_OFFSET_MAP_LOCK;
+	}
+
+	ret = kmmscand_migrate_misplaced_folio_prepare(folio, NULL, destnid);
+	if (ret) {
+		folio_put(folio);
+		pte_unmap_unlock(pte, ptl);
+		return KMMSCAND_LRU_ISOLATION_ERR;
+	}
+
+	folio_put(folio);
+	pte_unmap_unlock(pte, ptl);
+
+	return  migrate_misplaced_folio(folio, destnid);
+}
+
 static bool folio_idle_clear_pte_refs_one(struct folio *folio,
 					 struct vm_area_struct *vma,
 					 unsigned long addr,
@@ -379,6 +546,48 @@ static void kmmscand_collect_mm_slot(struct kmmscand_mm_slot *mm_slot)
 
 static void kmmscand_migrate_folio(void)
 {
+	int ret = 0, dest = -1;
+	struct kmmscand_migrate_info *info, *tmp;
+
+	spin_lock(&kmmscand_migrate_lock);
+
+	if (!list_empty(&kmmscand_migrate_list.migrate_head)) {
+		list_for_each_entry_safe(info, tmp, &kmmscand_migrate_list.migrate_head,
+			migrate_node) {
+			if (READ_ONCE(kmmscand_migration_list_dirty)) {
+				kmmscand_migration_list_dirty = false;
+				list_del(&info->migrate_node);
+				/*
+				 * Do not try to migrate this entry because mm might have
+				 * vanished underneath.
+				 */
+				kfree(info);
+				spin_unlock(&kmmscand_migrate_lock);
+				goto dirty_list_handled;
+			}
+
+			list_del(&info->migrate_node);
+			/* Note down the mm of folio entry we are migrating */
+			WRITE_ONCE(kmmscand_cur_migrate_mm, info->mm);
+			spin_unlock(&kmmscand_migrate_lock);
+
+			if (info->mm) {
+				dest = kmmscand_get_target_node(NULL);
+				ret = kmmscand_promote_folio(info, dest);
+			}
+
+			kfree(info);
+
+			spin_lock(&kmmscand_migrate_lock);
+			/* Reset  mm  of folio entry we are migrating */
+			WRITE_ONCE(kmmscand_cur_migrate_mm, NULL);
+			spin_unlock(&kmmscand_migrate_lock);
+dirty_list_handled:
+			cond_resched();
+			spin_lock(&kmmscand_migrate_lock);
+		}
+	}
+	spin_unlock(&kmmscand_migrate_lock);
 }
 
 static unsigned long kmmscand_scan_mm_slot(void)
diff --git a/mm/migrate.c b/mm/migrate.c
index fb19a18892c8..a073eb6c5009 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2598,7 +2598,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
  * Returns true if this is a safe migration target node for misplaced NUMA
  * pages. Currently it only checks the watermarks which is crude.
  */
-static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
+bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 				   unsigned long nr_migrate_pages)
 {
 	int z;
-- 
2.34.1




* [RFC PATCH V1 06/13] mm: Add throttling of mm scanning using scan_period
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (4 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 05/13] mm/migration: Migrate accessed folios to toptier node Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 07/13] mm: Add throttling of mm scanning using scan_size Raghavendra K T
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Before this patch, scanning of tasks' mms is done continuously and at the
same rate.

Improve that by adding throttling logic:
1) If useful pages were found during both the last scan and the current scan,
decrease the scan_period (to increase the scan rate) by TUNE_PERCENT (15%).

2) If no useful pages were found in the last scan, and there are candidate
migration pages in the current scan, decrease the scan_period aggressively
by a factor of 2^SCAN_CHANGE_SCALE (2^3 = 8 now).

Vice versa is done for the reverse cases.
The scan period is clamped between MIN (500ms) and MAX (5sec).
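
For example, starting from the default 2000ms scan_period, case 1) gives
2000 * (100 - 15) / 100 = 1700ms, while case 2) gives 2000 >> 3 = 250ms,
which is then clamped to the 500ms minimum.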

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 109 insertions(+), 1 deletion(-)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index feca775d0191..cd2215f2e00e 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -20,6 +20,7 @@
 #include <linux/string.h>
 #include <linux/delay.h>
 #include <linux/cleanup.h>
+#include <linux/minmax.h>
 
 #include <asm/pgalloc.h>
 #include "internal.h"
@@ -33,6 +34,16 @@ static DEFINE_MUTEX(kmmscand_mutex);
 #define KMMSCAND_SCAN_SIZE	(1 * 1024 * 1024 * 1024UL)
 static unsigned long kmmscand_scan_size __read_mostly = KMMSCAND_SCAN_SIZE;
 
+/*
+ * Scan period for each mm.
+ * Min: 500ms default: 2sec Max: 5sec
+ */
+#define KMMSCAND_SCAN_PERIOD_MAX	5000U
+#define KMMSCAND_SCAN_PERIOD_MIN	500U
+#define KMMSCAND_SCAN_PERIOD		2000U
+
+static unsigned int kmmscand_mm_scan_period_ms __read_mostly = KMMSCAND_SCAN_PERIOD;
+
 /* How long to pause between two scan and migration cycle */
 static unsigned int kmmscand_scan_sleep_ms __read_mostly = 16;
 
@@ -74,6 +85,11 @@ static struct kmem_cache *kmmscand_slot_cache __read_mostly;
 /* Per mm information collected to control VMA scanning */
 struct kmmscand_mm_slot {
 	struct mm_slot slot;
+	/* Unit: ms. Determines how often the mm scan should happen. */
+	unsigned int scan_period;
+	unsigned long next_scan;
+	/* Tracks how many useful pages obtained for migration in the last scan */
+	unsigned long scan_delta;
 	long address;
 	bool is_scanned;
 };
@@ -590,13 +606,92 @@ static void kmmscand_migrate_folio(void)
 	spin_unlock(&kmmscand_migrate_lock);
 }
 
+/*
+ * This is the normal change percentage when old and new delta remain same.
+ * i.e., either both positive or both zero.
+ */
+#define SCAN_PERIOD_TUNE_PERCENT	15
+
+/* This is to change the scan_period aggressively when deltas are different */
+#define SCAN_PERIOD_CHANGE_SCALE	3
+/*
+ * XXX: Hack to prevent unmigrated pages coming again and again while scanning.
+ * Actual fix needs to identify the type of unmigrated pages OR consider migration
+ * failures in next scan.
+ */
+#define KMMSCAND_IGNORE_SCAN_THR	256
+
+/* Maintains stability of scan_period by decaying last time accessed pages */
+#define SCAN_DECAY_SHIFT	4
+/*
+ * X : Number of useful pages in the last scan.
+ * Y : Number of useful pages found in current scan.
+ * Tuning scan_period:
+ *	Initial scan_period is 2s.
+ *	case 1: (X = 0, Y = 0)
+ *		Increase scan_period by SCAN_PERIOD_TUNE_PERCENT.
+ *	case 2: (X = 0, Y > 0)
+ *		Decrease scan_period by (2 << SCAN_PERIOD_CHANGE_SCALE).
+ *	case 3: (X > 0, Y = 0 )
+ *		Increase scan_period by (2 << SCAN_PERIOD_CHANGE_SCALE).
+ *	case 4: (X > 0, Y > 0)
+ *		Decrease scan_period by SCAN_PERIOD_TUNE_PERCENT.
+ */
+static inline void kmmscand_update_mmslot_info(struct kmmscand_mm_slot *mm_slot,
+				unsigned long total)
+{
+	unsigned int scan_period;
+	unsigned long now;
+	unsigned long old_scan_delta;
+
+	scan_period = mm_slot->scan_period;
+	old_scan_delta = mm_slot->scan_delta;
+
+	/* decay old value */
+	total = (old_scan_delta >> SCAN_DECAY_SHIFT) + total;
+
+	/* XXX: Hack to get rid of continuously failing/unmigrateable pages */
+	if (total < KMMSCAND_IGNORE_SCAN_THR)
+		total = 0;
+
+	/*
+	 * case 1: old_scan_delta and new delta are similar, (slow) TUNE_PERCENT used.
+	 * case 2: old_scan_delta and new delta are different. (fast) CHANGE_SCALE used.
+	 * TBD:
+	 * 1. Further tune scan_period based on delta between last and current scan delta.
+	 * 2. Optimize calculation
+	 */
+	if (!old_scan_delta && !total) {
+		scan_period = (100 + SCAN_PERIOD_TUNE_PERCENT) * scan_period;
+		scan_period /= 100;
+	} else if (old_scan_delta && total) {
+		scan_period = (100 - SCAN_PERIOD_TUNE_PERCENT) * scan_period;
+		scan_period /= 100;
+	} else if (old_scan_delta && !total) {
+		scan_period = scan_period << SCAN_PERIOD_CHANGE_SCALE;
+	} else {
+		scan_period = scan_period >> SCAN_PERIOD_CHANGE_SCALE;
+	}
+
+	scan_period = clamp(scan_period, KMMSCAND_SCAN_PERIOD_MIN, KMMSCAND_SCAN_PERIOD_MAX);
+
+	now = jiffies;
+	mm_slot->next_scan = now + msecs_to_jiffies(scan_period);
+	mm_slot->scan_period = scan_period;
+	mm_slot->scan_delta = total;
+}
+
 static unsigned long kmmscand_scan_mm_slot(void)
 {
 	bool next_mm = false;
 	bool update_mmslot_info = false;
 
+	unsigned int mm_slot_scan_period;
+	unsigned long now;
+	unsigned long mm_slot_next_scan;
 	unsigned long vma_scanned_size = 0;
 	unsigned long address;
+	unsigned long total = 0;
 
 	struct mm_slot *slot;
 	struct mm_struct *mm;
@@ -620,6 +715,8 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	mm = slot->mm;
 	mm_slot->is_scanned = true;
+	mm_slot_next_scan = mm_slot->next_scan;
+	mm_slot_scan_period = mm_slot->scan_period;
 	spin_unlock(&kmmscand_mm_lock);
 
 	if (unlikely(!mmap_read_trylock(mm)))
@@ -630,6 +727,11 @@ static unsigned long kmmscand_scan_mm_slot(void)
 		goto outerloop;
 	}
 
+	now = jiffies;
+
+	if (mm_slot_next_scan && time_before(now, mm_slot_next_scan))
+		goto outerloop;
+
 	VMA_ITERATOR(vmi, mm, address);
 
 	for_each_vma(vmi, vma) {
@@ -658,8 +760,10 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	update_mmslot_info = true;
 
-	if (update_mmslot_info)
+	if (update_mmslot_info) {
 		mm_slot->address = address;
+		kmmscand_update_mmslot_info(mm_slot, total);
+	}
 
 outerloop:
 	/* exit_mmap will destroy ptes after this */
@@ -759,6 +863,10 @@ void __kmmscand_enter(struct mm_struct *mm)
 		return;
 
 	kmmscand_slot->address = 0;
+	kmmscand_slot->scan_period = kmmscand_mm_scan_period_ms;
+	kmmscand_slot->next_scan = 0;
+	kmmscand_slot->scan_delta = 0;
+
 	slot = &kmmscand_slot->slot;
 
 	spin_lock(&kmmscand_mm_lock);
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 07/13] mm: Add throttling of mm scanning using scan_size
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (5 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 06/13] mm: Add throttling of mm scanning using scan_period Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 08/13] mm: Add initial scan delay Raghavendra K T
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Before this patch, scanning covers the entire virtual address space of
every task. Now the scan size is shrunk or expanded based on the number of
useful pages found in the last scan.

This helps to quickly get out of unnecessary scanning, thus burning less
CPU.

Drawback: If a useful chunk is at the other end of the VMA space, it
will delay scanning and migration.

Shrink/expand algorithm for scan_size:
X : Number of useful pages found in the last scan.
Y : Number of useful pages found in the current scan.
Initial scan_size is 1GB.
 case 1: (X = 0, Y = 0)
  Decrease scan_size by a factor of 2.
 case 2: (X = 0, Y > 0)
  Aggressively change to MAX (4GB).
 case 3: (X > 0, Y = 0)
  No change.
 case 4: (X > 0, Y > 0)
  Increase scan_size by a factor of 2.

Scan size is clamped between MIN (256MB) and MAX (4GB).
TBD: Tuning this based on real workloads.
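
For example, with the defaults in this patch: once scans keep finding
candidate pages, each further productive scan roughly doubles the window
(1GB -> 2GB -> 4GB cap); repeated idle scans halve it (1GB -> 512MB ->
256MB floor); and a productive scan right after an idle one jumps straight
to the 4GB maximum.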

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index cd2215f2e00e..a19b1f31271d 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -28,10 +28,15 @@
 
 static struct task_struct *kmmscand_thread __read_mostly;
 static DEFINE_MUTEX(kmmscand_mutex);
+
 /*
  * Total VMA size to cover during scan.
+ * Min: 256MB default: 1GB max: 4GB
  */
+#define KMMSCAND_SCAN_SIZE_MIN	(256 * 1024 * 1024UL)
+#define KMMSCAND_SCAN_SIZE_MAX	(4 * 1024 * 1024 * 1024UL)
 #define KMMSCAND_SCAN_SIZE	(1 * 1024 * 1024 * 1024UL)
+
 static unsigned long kmmscand_scan_size __read_mostly = KMMSCAND_SCAN_SIZE;
 
 /*
@@ -90,6 +95,8 @@ struct kmmscand_mm_slot {
 	unsigned long next_scan;
 	/* Tracks how many useful pages obtained for migration in the last scan */
 	unsigned long scan_delta;
+	/* Determines how much VMA address space to be covered in the scanning */
+	unsigned long scan_size;
 	long address;
 	bool is_scanned;
 };
@@ -621,6 +628,8 @@ static void kmmscand_migrate_folio(void)
  */
 #define KMMSCAND_IGNORE_SCAN_THR	256
 
+#define SCAN_SIZE_CHANGE_SHIFT	1
+
 /* Maintains stability of scan_period by decaying last time accessed pages */
 #define SCAN_DECAY_SHIFT	4
 /*
@@ -636,14 +645,26 @@ static void kmmscand_migrate_folio(void)
  *		Increase scan_period by (2 << SCAN_PERIOD_CHANGE_SCALE).
  *	case 4: (X > 0, Y > 0)
  *		Decrease scan_period by SCAN_PERIOD_TUNE_PERCENT.
+ * Tuning scan_size:
+ * Initial scan_size is 4GB
+ *	case 1: (X = 0, Y = 0)
+ *		Decrease scan_size by (1 << SCAN_SIZE_CHANGE_SHIFT).
+ *	case 2: (X = 0, Y > 0)
+ *		scan_size = KMMSCAND_SCAN_SIZE_MAX
+ *  case 3: (X > 0, Y = 0 )
+ *		No change
+ *  case 4: (X > 0, Y > 0)
+ *		Increase scan_size by (1 << SCAN_SIZE_CHANGE_SHIFT).
  */
 static inline void kmmscand_update_mmslot_info(struct kmmscand_mm_slot *mm_slot,
 				unsigned long total)
 {
 	unsigned int scan_period;
 	unsigned long now;
+	unsigned long scan_size;
 	unsigned long old_scan_delta;
 
+	scan_size = mm_slot->scan_size;
 	scan_period = mm_slot->scan_period;
 	old_scan_delta = mm_slot->scan_delta;
 
@@ -664,20 +685,25 @@ static inline void kmmscand_update_mmslot_info(struct kmmscand_mm_slot *mm_slot,
 	if (!old_scan_delta && !total) {
 		scan_period = (100 + SCAN_PERIOD_TUNE_PERCENT) * scan_period;
 		scan_period /= 100;
+		scan_size = scan_size >> SCAN_SIZE_CHANGE_SHIFT;
 	} else if (old_scan_delta && total) {
 		scan_period = (100 - SCAN_PERIOD_TUNE_PERCENT) * scan_period;
 		scan_period /= 100;
+		scan_size = scan_size << SCAN_SIZE_CHANGE_SHIFT;
 	} else if (old_scan_delta && !total) {
 		scan_period = scan_period << SCAN_PERIOD_CHANGE_SCALE;
 	} else {
 		scan_period = scan_period >> SCAN_PERIOD_CHANGE_SCALE;
+		scan_size = KMMSCAND_SCAN_SIZE_MAX;
 	}
 
 	scan_period = clamp(scan_period, KMMSCAND_SCAN_PERIOD_MIN, KMMSCAND_SCAN_PERIOD_MAX);
+	scan_size = clamp(scan_size, KMMSCAND_SCAN_SIZE_MIN, KMMSCAND_SCAN_SIZE_MAX);
 
 	now = jiffies;
 	mm_slot->next_scan = now + msecs_to_jiffies(scan_period);
 	mm_slot->scan_period = scan_period;
+	mm_slot->scan_size = scan_size;
 	mm_slot->scan_delta = total;
 }
 
@@ -689,6 +715,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 	unsigned int mm_slot_scan_period;
 	unsigned long now;
 	unsigned long mm_slot_next_scan;
+	unsigned long mm_slot_scan_size;
 	unsigned long vma_scanned_size = 0;
 	unsigned long address;
 	unsigned long total = 0;
@@ -717,6 +744,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 	mm_slot->is_scanned = true;
 	mm_slot_next_scan = mm_slot->next_scan;
 	mm_slot_scan_period = mm_slot->scan_period;
+	mm_slot_scan_size = mm_slot->scan_size;
 	spin_unlock(&kmmscand_mm_lock);
 
 	if (unlikely(!mmap_read_trylock(mm)))
@@ -864,6 +892,7 @@ void __kmmscand_enter(struct mm_struct *mm)
 
 	kmmscand_slot->address = 0;
 	kmmscand_slot->scan_period = kmmscand_mm_scan_period_ms;
+	kmmscand_slot->scan_size = kmmscand_scan_size;
 	kmmscand_slot->next_scan = 0;
 	kmmscand_slot->scan_delta = 0;
 
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 08/13] mm: Add initial scan delay
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (6 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 07/13] mm: Add throttling of mm scanning using scan_size Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node Raghavendra K T
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

This is to avoid spending unnecessary scanning effort on short-lived
tasks.
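
The delay reuses sysctl_numa_balancing_scan_delay (1s by default), so a
newly created mm is left alone for roughly its first second before the
daemon considers it for scanning.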

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index a19b1f31271d..84140b9e8ce2 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -28,6 +28,7 @@
 
 static struct task_struct *kmmscand_thread __read_mostly;
 static DEFINE_MUTEX(kmmscand_mutex);
+extern unsigned int sysctl_numa_balancing_scan_delay;
 
 /*
  * Total VMA size to cover during scan.
@@ -880,6 +881,7 @@ void __kmmscand_enter(struct mm_struct *mm)
 {
 	struct kmmscand_mm_slot *kmmscand_slot;
 	struct mm_slot *slot;
+	unsigned long now;
 	int wakeup;
 
 	/* __kmmscand_exit() must not run from under us */
@@ -890,10 +892,12 @@ void __kmmscand_enter(struct mm_struct *mm)
 	if (!kmmscand_slot)
 		return;
 
+	now = jiffies;
 	kmmscand_slot->address = 0;
 	kmmscand_slot->scan_period = kmmscand_mm_scan_period_ms;
 	kmmscand_slot->scan_size = kmmscand_scan_size;
-	kmmscand_slot->next_scan = 0;
+	kmmscand_slot->next_scan = now +
+			msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
 	kmmscand_slot->scan_delta = 0;
 
 	slot = &kmmscand_slot->slot;
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (7 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 08/13] mm: Add initial scan delay Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-21 17:42   ` Jonathan Cameron
  2025-03-19 19:30 ` [RFC PATCH V1 10/13] sysfs: Add sysfs support to tune scanning Raghavendra K T
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

One of the key challenges in PTE A bit based scanning is to find the right
target node to promote to.

Here is a simple heuristic-based approach:
   While scanning the pages of an mm, we also scan the toptier pages that
belong to that mm. This gives an insight into how the mm's pages are
distributed across the toptier nodes and how recently they were accessed.

The current logic walks all the toptier nodes and picks the one with the
highest accesses.
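
As an illustration (with hypothetical numbers): assume nodes 0 and 1 are
toptier and node 2 is the CXL node. If a scan pass sees most of an mm's
toptier pages on node 1, node 1 is recorded as that mm's target node, and
hot folios found on node 2 are promoted there. If no toptier pages are seen
for the mm, the existing kmmscand_target_node fallback is used instead.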

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
PS: There are many potential ideas possible here.
1. We can do a quick sort on the toptier nodes' scan and access info
   and maintain a list of preferred/fallback nodes in case the current
   target_node is getting filled up.

2. We can also keep a history of access/scan information from the last
scan and use its decayed value to get a more stable view, etc.


 include/linux/mm_types.h |   4 +
 mm/kmmscand.c            | 174 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 174 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0234f14f2aa6..eeaedc7473b1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1015,6 +1015,10 @@ struct mm_struct {
 		/* numa_scan_seq prevents two threads remapping PTEs. */
 		int numa_scan_seq;
 #endif
+#ifdef CONFIG_KMMSCAND
+		/* Tracks promotion node. XXX: use nodemask */
+		int target_node;
+ #endif
 		/*
 		 * An operation with batched TLB flushing is going on. Anything
 		 * that can move process memory needs to flush the TLB when
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 84140b9e8ce2..c2924b2e8a6d 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -88,6 +88,14 @@ static DEFINE_READ_MOSTLY_HASHTABLE(kmmscand_slots_hash, KMMSCAND_SLOT_HASH_BITS
 
 static struct kmem_cache *kmmscand_slot_cache __read_mostly;
 
+/* Per memory node information used to calculate target_node for migration */
+struct kmmscand_nodeinfo {
+	unsigned long nr_scanned;
+	unsigned long nr_accessed;
+	int node;
+	bool is_toptier;
+};
+
 /* Per mm information collected to control VMA scanning */
 struct kmmscand_mm_slot {
 	struct mm_slot slot;
@@ -100,6 +108,7 @@ struct kmmscand_mm_slot {
 	unsigned long scan_size;
 	long address;
 	bool is_scanned;
+	int target_node;
 };
 
 /* Data structure to keep track of current mm under scan */
@@ -118,7 +127,9 @@ struct kmmscand_scan kmmscand_scan = {
  */
 struct kmmscand_scanctrl {
 	struct list_head scan_list;
+	struct kmmscand_nodeinfo *nodeinfo[MAX_NUMNODES];
 	unsigned long address;
+	unsigned long nr_to_scan;
 };
 
 struct kmmscand_scanctrl kmmscand_scanctrl;
@@ -208,6 +219,98 @@ static void kmmmigrated_wait_work(void)
 			migrate_sleep_jiffies);
 }
 
+static unsigned long get_slowtier_accesed(struct kmmscand_scanctrl *scanctrl)
+{
+	int node;
+	unsigned long accessed = 0;
+
+	for_each_node_state(node, N_MEMORY) {
+		if (!node_is_toptier(node) && scanctrl->nodeinfo[node])
+			accessed += scanctrl->nodeinfo[node]->nr_accessed;
+	}
+	return accessed;
+}
+
+static inline void set_nodeinfo_nr_accessed(struct kmmscand_nodeinfo *ni, unsigned long val)
+{
+	ni->nr_accessed = val;
+}
+static inline unsigned long get_nodeinfo_nr_scanned(struct kmmscand_nodeinfo *ni)
+{
+	return ni->nr_scanned;
+}
+
+static inline void set_nodeinfo_nr_scanned(struct kmmscand_nodeinfo *ni, unsigned long val)
+{
+	ni->nr_scanned = val;
+}
+
+static inline void reset_nodeinfo_nr_scanned(struct kmmscand_nodeinfo *ni)
+{
+	set_nodeinfo_nr_scanned(ni, 0);
+}
+
+static inline void reset_nodeinfo(struct kmmscand_nodeinfo *ni)
+{
+	set_nodeinfo_nr_scanned(ni, 0);
+	set_nodeinfo_nr_accessed(ni, 0);
+}
+
+static void init_one_nodeinfo(struct kmmscand_nodeinfo *ni, int node)
+{
+	ni->nr_scanned = 0;
+	ni->nr_accessed = 0;
+	ni->node = node;
+	ni->is_toptier = node_is_toptier(node) ? true : false;
+}
+
+static struct kmmscand_nodeinfo *alloc_one_nodeinfo(int node)
+{
+	struct kmmscand_nodeinfo *ni;
+
+	ni = kzalloc(sizeof(*ni), GFP_KERNEL);
+
+	if (!ni)
+		return NULL;
+
+	init_one_nodeinfo(ni, node);
+
+	return ni;
+}
+
+/* TBD: Handle errors */
+static void init_scanctrl(struct kmmscand_scanctrl *scanctrl)
+{
+	struct kmmscand_nodeinfo *ni;
+	int node;
+
+	for_each_node(node) {
+		ni = alloc_one_nodeinfo(node);
+		if (!ni)
+			WARN_ON_ONCE(ni);
+		scanctrl->nodeinfo[node] = ni;
+	}
+}
+
+static void reset_scanctrl(struct kmmscand_scanctrl *scanctrl)
+{
+	int node;
+
+	for_each_node_state(node, N_MEMORY)
+		reset_nodeinfo(scanctrl->nodeinfo[node]);
+
+	/* XXX: Not really required? */
+	scanctrl->nr_to_scan = kmmscand_scan_size;
+}
+
+static void free_scanctrl(struct kmmscand_scanctrl *scanctrl)
+{
+	int node;
+
+	for_each_node(node)
+		kfree(scanctrl->nodeinfo[node]);
+}
+
 /*
  * Do not know what info to pass in the future to make
  * decision on taget node. Keep it void * now.
@@ -217,6 +320,24 @@ static int kmmscand_get_target_node(void *data)
 	return kmmscand_target_node;
 }
 
+static int get_target_node(struct kmmscand_scanctrl *scanctrl)
+{
+	int node, target_node = NUMA_NO_NODE;
+	unsigned long prev = 0;
+
+	for_each_node(node) {
+		if (node_is_toptier(node) && scanctrl->nodeinfo[node] &&
+				get_nodeinfo_nr_scanned(scanctrl->nodeinfo[node]) > prev) {
+			prev = get_nodeinfo_nr_scanned(scanctrl->nodeinfo[node]);
+			target_node = node;
+		}
+	}
+	if (target_node == NUMA_NO_NODE)
+		target_node = kmmscand_get_target_node(NULL);
+
+	return target_node;
+}
+
 extern bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 					unsigned long nr_migrate_pages);
 
@@ -469,6 +590,14 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 	}
 	srcnid = folio_nid(folio);
 
+	scanctrl->nodeinfo[srcnid]->nr_scanned++;
+	if (scanctrl->nr_to_scan)
+		scanctrl->nr_to_scan--;
+
+	if (!scanctrl->nr_to_scan) {
+		folio_put(folio);
+		return 1;
+	}
 
 	if (!folio_test_lru(folio)) {
 		folio_put(folio);
@@ -479,11 +608,14 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 			mmu_notifier_test_young(mm, addr) ||
 			folio_test_referenced(folio) || pte_young(pteval)) {
 
+		scanctrl->nodeinfo[srcnid]->nr_accessed++;
+
 		/* Do not try to promote pages from regular nodes */
 		if (!kmmscand_eligible_srcnid(srcnid)) {
 			folio_put(folio);
 			return 0;
 		}
+
 		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
 		if (info && scanctrl) {
 
@@ -571,6 +703,7 @@ static void kmmscand_collect_mm_slot(struct kmmscand_mm_slot *mm_slot)
 static void kmmscand_migrate_folio(void)
 {
 	int ret = 0, dest = -1;
+	struct mm_struct *oldmm = NULL;
 	struct kmmscand_migrate_info *info, *tmp;
 
 	spin_lock(&kmmscand_migrate_lock);
@@ -596,7 +729,16 @@ static void kmmscand_migrate_folio(void)
 			spin_unlock(&kmmscand_migrate_lock);
 
 			if (info->mm) {
-				dest = kmmscand_get_target_node(NULL);
+				if (oldmm != info->mm) {
+					if (!mmap_read_trylock(info->mm)) {
+						dest = kmmscand_get_target_node(NULL);
+					} else {
+						dest = READ_ONCE(info->mm->target_node);
+						mmap_read_unlock(info->mm);
+					}
+					oldmm = info->mm;
+				}
+
 				ret = kmmscand_promote_folio(info, dest);
 			}
 
@@ -658,7 +800,7 @@ static void kmmscand_migrate_folio(void)
  *		Increase scan_size by (1 << SCAN_SIZE_CHANGE_SHIFT).
  */
 static inline void kmmscand_update_mmslot_info(struct kmmscand_mm_slot *mm_slot,
-				unsigned long total)
+				unsigned long total, int target_node)
 {
 	unsigned int scan_period;
 	unsigned long now;
@@ -706,6 +848,7 @@ static inline void kmmscand_update_mmslot_info(struct kmmscand_mm_slot *mm_slot,
 	mm_slot->scan_period = scan_period;
 	mm_slot->scan_size = scan_size;
 	mm_slot->scan_delta = total;
+	mm_slot->target_node = target_node;
 }
 
 static unsigned long kmmscand_scan_mm_slot(void)
@@ -714,6 +857,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 	bool update_mmslot_info = false;
 
 	unsigned int mm_slot_scan_period;
+	int target_node, mm_slot_target_node, mm_target_node;
 	unsigned long now;
 	unsigned long mm_slot_next_scan;
 	unsigned long mm_slot_scan_size;
@@ -746,6 +890,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 	mm_slot_next_scan = mm_slot->next_scan;
 	mm_slot_scan_period = mm_slot->scan_period;
 	mm_slot_scan_size = mm_slot->scan_size;
+	mm_slot_target_node = mm_slot->target_node;
 	spin_unlock(&kmmscand_mm_lock);
 
 	if (unlikely(!mmap_read_trylock(mm)))
@@ -756,6 +901,9 @@ static unsigned long kmmscand_scan_mm_slot(void)
 		goto outerloop;
 	}
 
+	mm_target_node = READ_ONCE(mm->target_node);
+	if (mm_target_node != mm_slot_target_node)
+		WRITE_ONCE(mm->target_node, mm_slot_target_node);
 	now = jiffies;
 
 	if (mm_slot_next_scan && time_before(now, mm_slot_next_scan))
@@ -763,11 +911,17 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	VMA_ITERATOR(vmi, mm, address);
 
+	/* Either Scan 25% of scan_size or cover vma size of scan_size */
+	kmmscand_scanctrl.nr_to_scan =	mm_slot_scan_size >> PAGE_SHIFT;
+	/* Reduce actual amount of pages scanned */
+	kmmscand_scanctrl.nr_to_scan =	mm_slot_scan_size >> 1;
+
 	for_each_vma(vmi, vma) {
 		kmmscand_walk_page_vma(vma, &kmmscand_scanctrl);
 		vma_scanned_size += vma->vm_end - vma->vm_start;
 
-		if (vma_scanned_size >= kmmscand_scan_size) {
+		if (vma_scanned_size >= mm_slot_scan_size ||
+					!kmmscand_scanctrl.nr_to_scan) {
 			next_mm = true;
 			/* Add scanned folios to migration list */
 			spin_lock(&kmmscand_migrate_lock);
@@ -789,9 +943,19 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	update_mmslot_info = true;
 
+	total = get_slowtier_accesed(&kmmscand_scanctrl);
+	target_node = get_target_node(&kmmscand_scanctrl);
+
+	mm_target_node = READ_ONCE(mm->target_node);
+
+	/* XXX: Do we need write lock? */
+	if (mm_target_node != target_node)
+		WRITE_ONCE(mm->target_node, target_node);
+	reset_scanctrl(&kmmscand_scanctrl);
+
 	if (update_mmslot_info) {
 		mm_slot->address = address;
-		kmmscand_update_mmslot_info(mm_slot, total);
+		kmmscand_update_mmslot_info(mm_slot, total, target_node);
 	}
 
 outerloop:
@@ -988,6 +1152,7 @@ static int stop_kmmscand(void)
 		kthread_stop(kmmscand_thread);
 		kmmscand_thread = NULL;
 	}
+	free_scanctrl(&kmmscand_scanctrl);
 
 	return err;
 }
@@ -1044,6 +1209,7 @@ static void init_list(void)
 	spin_lock_init(&kmmscand_migrate_lock);
 	init_waitqueue_head(&kmmscand_wait);
 	init_waitqueue_head(&kmmmigrated_wait);
+	init_scanctrl(&kmmscand_scanctrl);
 }
 
 static int __init kmmscand_init(void)
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 10/13] sysfs: Add sysfs support to tune scanning
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (8 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 11/13] vmstat: Add vmstat counters Raghavendra K T
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Support the below tunables:
scan_enabled: turn mm_struct scanning on or off
mm_scan_period_ms: initial scan period (default: 2sec)
scan_sleep_ms: sleep time between two successive rounds of scanning and
migration.
mms_to_scan: total number of mm_structs to scan before taking a pause.
target_node: default regular node to which accessed pages are migrated
(this is only a fallback mechanism; otherwise the target_node heuristic is
used).
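
Assuming the attribute group is registered on mm_kobj as in this patch, the
knobs appear under /sys/kernel/mm/kmmscand/. For example, writing 1000 to
mm_scan_period_ms sets the initial per-mm scan period to 1s (values outside
the 500ms-5s range are clamped on write), and writing 0 to scan_enabled
pauses scanning.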

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/kmmscand.c | 206 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 206 insertions(+)

diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index c2924b2e8a6d..618594d7c148 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -21,6 +21,7 @@
 #include <linux/delay.h>
 #include <linux/cleanup.h>
 #include <linux/minmax.h>
+#include <trace/events/kmem.h>
 
 #include <asm/pgalloc.h>
 #include "internal.h"
@@ -158,6 +159,170 @@ static bool kmmscand_eligible_srcnid(int nid)
 	return false;
 }
 
+#ifdef CONFIG_SYSFS
+static ssize_t scan_sleep_ms_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%u\n", kmmscand_scan_sleep_ms);
+}
+
+static ssize_t scan_sleep_ms_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	unsigned int msecs;
+	int err;
+
+	err = kstrtouint(buf, 10, &msecs);
+	if (err)
+		return -EINVAL;
+
+	kmmscand_scan_sleep_ms = msecs;
+	kmmscand_sleep_expire = 0;
+	wake_up_interruptible(&kmmscand_wait);
+
+	return count;
+}
+static struct kobj_attribute scan_sleep_ms_attr =
+	__ATTR_RW(scan_sleep_ms);
+
+static ssize_t mm_scan_period_ms_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%u\n", kmmscand_mm_scan_period_ms);
+}
+
+/* If a value less than MIN or greater than MAX is requested, the stored value is clamped */
+static ssize_t mm_scan_period_ms_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	unsigned int msecs, stored_msecs;
+	int err;
+
+	err = kstrtouint(buf, 10, &msecs);
+	if (err)
+		return -EINVAL;
+
+	stored_msecs = clamp(msecs, KMMSCAND_SCAN_PERIOD_MIN, KMMSCAND_SCAN_PERIOD_MAX);
+
+	kmmscand_mm_scan_period_ms = stored_msecs;
+	kmmscand_sleep_expire = 0;
+	wake_up_interruptible(&kmmscand_wait);
+
+	return count;
+}
+
+static struct kobj_attribute mm_scan_period_ms_attr =
+	__ATTR_RW(mm_scan_period_ms);
+
+static ssize_t mms_to_scan_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%lu\n", kmmscand_mms_to_scan);
+}
+
+static ssize_t mms_to_scan_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	unsigned long val;
+	int err;
+
+	err = kstrtoul(buf, 10, &val);
+	if (err)
+		return -EINVAL;
+
+	kmmscand_mms_to_scan = val;
+	kmmscand_sleep_expire = 0;
+	wake_up_interruptible(&kmmscand_wait);
+
+	return count;
+}
+
+static struct kobj_attribute mms_to_scan_attr =
+	__ATTR_RW(mms_to_scan);
+
+static ssize_t scan_enabled_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%u\n", kmmscand_scan_enabled ? 1 : 0);
+}
+
+static ssize_t scan_enabled_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	unsigned int val;
+	int err;
+
+	err = kstrtouint(buf, 10, &val);
+	if (err || val > 1)
+		return -EINVAL;
+
+	if (val) {
+		kmmscand_scan_enabled = true;
+		need_wakeup = true;
+	} else
+		kmmscand_scan_enabled = false;
+
+	kmmscand_sleep_expire = 0;
+	wake_up_interruptible(&kmmscand_wait);
+
+	return count;
+}
+
+static struct kobj_attribute scan_enabled_attr =
+	__ATTR_RW(scan_enabled);
+
+static ssize_t target_node_show(struct kobject *kobj,
+					 struct kobj_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%u\n", kmmscand_target_node);
+}
+
+static ssize_t target_node_store(struct kobject *kobj,
+					  struct kobj_attribute *attr,
+					  const char *buf, size_t count)
+{
+	int err, node;
+
+	err = kstrtoint(buf, 10, &node);
+	if (err)
+		return -EINVAL;
+
+	kmmscand_sleep_expire = 0;
+	if (!node_is_toptier(node))
+		return -EINVAL;
+
+	kmmscand_target_node = node;
+	wake_up_interruptible(&kmmscand_wait);
+
+	return count;
+}
+static struct kobj_attribute target_node_attr =
+	__ATTR_RW(target_node);
+
+static struct attribute *kmmscand_attr[] = {
+	&scan_sleep_ms_attr.attr,
+	&mm_scan_period_ms_attr.attr,
+	&mms_to_scan_attr.attr,
+	&scan_enabled_attr.attr,
+	&target_node_attr.attr,
+	NULL,
+};
+
+struct attribute_group kmmscand_attr_group = {
+	.attrs = kmmscand_attr,
+	.name = "kmmscand",
+};
+#endif
+
 static int kmmscand_has_work(void)
 {
 	return !list_empty(&kmmscand_scan.mm_head);
@@ -1036,9 +1201,43 @@ static int kmmscand(void *none)
 	return 0;
 }
 
+#ifdef CONFIG_SYSFS
+extern struct kobject *mm_kobj;
+static int __init kmmscand_init_sysfs(struct kobject **kobj)
+{
+	int err;
+
+	err = sysfs_create_group(*kobj, &kmmscand_attr_group);
+	if (err) {
+		pr_err("failed to register kmmscand group\n");
+		goto err_kmmscand_attr;
+	}
+
+	return 0;
+
+err_kmmscand_attr:
+	sysfs_remove_group(*kobj, &kmmscand_attr_group);
+	return err;
+}
+
+static void __init kmmscand_exit_sysfs(struct kobject *kobj)
+{
+		sysfs_remove_group(kobj, &kmmscand_attr_group);
+}
+#else
+static inline int __init kmmscand_init_sysfs(struct kobject **kobj)
+{
+	return 0;
+}
+static inline void __init kmmscand_exit_sysfs(struct kobject *kobj)
+{
+}
+#endif
+
 static inline void kmmscand_destroy(void)
 {
 	kmem_cache_destroy(kmmscand_slot_cache);
+	kmmscand_exit_sysfs(mm_kobj);
 }
 
 void __kmmscand_enter(struct mm_struct *mm)
@@ -1223,7 +1422,13 @@ static int __init kmmscand_init(void)
 		return -ENOMEM;
 	}
 
+	err = kmmscand_init_sysfs(&mm_kobj);
+
+	if (err)
+		goto err_init_sysfs;
+
 	init_list();
+
 	err = start_kmmscand();
 	if (err)
 		goto err_kmmscand;
@@ -1239,6 +1444,7 @@ static int __init kmmscand_init(void)
 
 err_kmmscand:
 	stop_kmmscand();
+err_init_sysfs:
 	kmmscand_destroy();
 
 	return err;
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 11/13] vmstat: Add vmstat counters
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (9 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 10/13] sysfs: Add sysfs support to tune scanning Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 12/13] trace/kmmscand: Add tracing of scanning and migration Raghavendra K T
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Add vmstat counters to track scanning, migration and the type of pages
encountered.
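
The counters are exported in /proc/vmstat as nr_kmmscand_mm_scans,
nr_kmmscand_vma_scans, nr_kmmscand_migadded, nr_kmmscand_migrated,
nr_kmmscand_migrate_failed, nr_kmmscand_slowtier, nr_kmmscand_toptier and
nr_kmmscand_idlepage.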

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 include/linux/mm.h            | 11 ++++++++
 include/linux/vm_event_item.h | 10 +++++++
 mm/kmmscand.c                 | 52 ++++++++++++++++++++++++++++++++++-
 mm/vmstat.c                   | 10 +++++++
 4 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b1068ddcbb7..e40a38c28a63 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -682,6 +682,17 @@ struct vm_operations_struct {
 					  unsigned long addr);
 };
 
+#ifdef CONFIG_KMMSCAND
+void count_kmmscand_mm_scans(void);
+void count_kmmscand_vma_scans(void);
+void count_kmmscand_migadded(void);
+void count_kmmscand_migrated(void);
+void count_kmmscand_migrate_failed(void);
+void count_kmmscand_slowtier(void);
+void count_kmmscand_toptier(void);
+void count_kmmscand_idlepage(void);
+#endif
+
 #ifdef CONFIG_NUMA_BALANCING
 static inline void vma_numab_state_init(struct vm_area_struct *vma)
 {
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index f70d0958095c..b2ccd4f665aa 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -65,6 +65,16 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		NUMA_HINT_FAULTS_LOCAL,
 		NUMA_PAGE_MIGRATE,
 #endif
+#ifdef CONFIG_KMMSCAND
+		KMMSCAND_MM_SCANS,
+		KMMSCAND_VMA_SCANS,
+		KMMSCAND_MIGADDED,
+		KMMSCAND_MIGRATED,
+		KMMSCAND_MIGRATE_FAILED,
+		KMMSCAND_SLOWTIER,
+		KMMSCAND_TOPTIER,
+		KMMSCAND_IDLEPAGE,
+#endif
 #ifdef CONFIG_MIGRATION
 		PGMIGRATE_SUCCESS, PGMIGRATE_FAIL,
 		THP_MIGRATION_SUCCESS,
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 618594d7c148..c88b30e0fc7d 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -323,6 +323,39 @@ struct attribute_group kmmscand_attr_group = {
 };
 #endif
 
+void count_kmmscand_mm_scans(void)
+{
+	count_vm_numa_event(KMMSCAND_MM_SCANS);
+}
+void count_kmmscand_vma_scans(void)
+{
+	count_vm_numa_event(KMMSCAND_VMA_SCANS);
+}
+void count_kmmscand_migadded(void)
+{
+	count_vm_numa_event(KMMSCAND_MIGADDED);
+}
+void count_kmmscand_migrated(void)
+{
+	count_vm_numa_event(KMMSCAND_MIGRATED);
+}
+void count_kmmscand_migrate_failed(void)
+{
+	count_vm_numa_event(KMMSCAND_MIGRATE_FAILED);
+}
+void count_kmmscand_slowtier(void)
+{
+	count_vm_numa_event(KMMSCAND_SLOWTIER);
+}
+void count_kmmscand_toptier(void)
+{
+	count_vm_numa_event(KMMSCAND_TOPTIER);
+}
+void count_kmmscand_idlepage(void)
+{
+	count_vm_numa_event(KMMSCAND_IDLEPAGE);
+}
+
 static int kmmscand_has_work(void)
 {
 	return !list_empty(&kmmscand_scan.mm_head);
@@ -769,6 +802,9 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 		return 0;
 	}
 
+	if (node_is_toptier(srcnid))
+		count_kmmscand_toptier();
+
 	if (!folio_test_idle(folio) || folio_test_young(folio) ||
 			mmu_notifier_test_young(mm, addr) ||
 			folio_test_referenced(folio) || pte_young(pteval)) {
@@ -784,14 +820,18 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
 		if (info && scanctrl) {
 
+			count_kmmscand_slowtier();
 			info->mm = mm;
 			info->address = addr;
 			info->folio = folio;
 
 			/* No need of lock now */
 			list_add_tail(&info->migrate_node, &scanctrl->scan_list);
+
+			count_kmmscand_migadded();
 		}
-	}
+	} else
+		count_kmmscand_idlepage();
 
 	folio_set_idle(folio);
 	folio_put(folio);
@@ -907,6 +947,12 @@ static void kmmscand_migrate_folio(void)
 				ret = kmmscand_promote_folio(info, dest);
 			}
 
+			/* TBD: encode migrated count here, currently assume folio_nr_pages */
+			if (!ret)
+				count_kmmscand_migrated();
+			else
+				count_kmmscand_migrate_failed();
+
 			kfree(info);
 
 			spin_lock(&kmmscand_migrate_lock);
@@ -1083,6 +1129,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	for_each_vma(vmi, vma) {
 		kmmscand_walk_page_vma(vma, &kmmscand_scanctrl);
+		count_kmmscand_vma_scans();
 		vma_scanned_size += vma->vm_end - vma->vm_start;
 
 		if (vma_scanned_size >= mm_slot_scan_size ||
@@ -1108,6 +1155,8 @@ static unsigned long kmmscand_scan_mm_slot(void)
 
 	update_mmslot_info = true;
 
+	count_kmmscand_mm_scans();
+
 	total = get_slowtier_accesed(&kmmscand_scanctrl);
 	target_node = get_target_node(&kmmscand_scanctrl);
 
@@ -1123,6 +1172,7 @@ static unsigned long kmmscand_scan_mm_slot(void)
 		kmmscand_update_mmslot_info(mm_slot, total, target_node);
 	}
 
+
 outerloop:
 	/* exit_mmap will destroy ptes after this */
 	mmap_read_unlock(mm);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 16bfe1c694dd..3a6fa834ebe0 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1340,6 +1340,16 @@ const char * const vmstat_text[] = {
 	"numa_hint_faults_local",
 	"numa_pages_migrated",
 #endif
+#ifdef CONFIG_KMMSCAND
+	"nr_kmmscand_mm_scans",
+	"nr_kmmscand_vma_scans",
+	"nr_kmmscand_migadded",
+	"nr_kmmscand_migrated",
+	"nr_kmmscand_migrate_failed",
+	"nr_kmmscand_slowtier",
+	"nr_kmmscand_toptier",
+	"nr_kmmscand_idlepage",
+#endif
 #ifdef CONFIG_MIGRATION
 	"pgmigrate_success",
 	"pgmigrate_fail",
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 12/13] trace/kmmscand: Add tracing of scanning and migration
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (10 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 11/13] vmstat: Add vmstat counters Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 19:30 ` [RFC PATCH V1 13/13] prctl: Introduce new prctl to control scanning Raghavendra K T
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

Add tracing support to track
 - start and end of scanning.
 - migration.
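
The new tracepoints live in the existing kmem trace system, so (assuming
the usual tracefs layout) they can be enabled under
/sys/kernel/tracing/events/kmem/ as kmem_mm_enter, kmem_mm_exit,
kmem_scan_mm_start, kmem_scan_mm_end and kmem_scan_mm_migrate.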

CC: Steven Rostedt <rostedt@goodmis.org>
CC: Masami Hiramatsu <mhiramat@kernel.org>
CC: linux-trace-kernel@vger.kernel.org

Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 Changes done based on Steve's feedback:
 1) Use an EVENT class for similar trace events
 2) Drop task_comm
 3) Remove the unnecessary module name in the print

 include/trace/events/kmem.h | 90 +++++++++++++++++++++++++++++++++++++
 mm/kmmscand.c               |  8 ++++
 2 files changed, 98 insertions(+)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index b37eb0a7060f..cef527ef9d79 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -9,6 +9,96 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(kmem_mm_class,
+
+	TP_PROTO(struct mm_struct *mm),
+
+	TP_ARGS(mm),
+
+	TP_STRUCT__entry(
+		__field(	struct mm_struct *, mm		)
+	),
+
+	TP_fast_assign(
+		__entry->mm = mm;
+	),
+
+	TP_printk("mm = %p", __entry->mm)
+);
+
+DEFINE_EVENT(kmem_mm_class, kmem_mm_enter,
+	TP_PROTO(struct mm_struct *mm),
+	TP_ARGS(mm)
+);
+
+DEFINE_EVENT(kmem_mm_class, kmem_mm_exit,
+	TP_PROTO(struct mm_struct *mm),
+	TP_ARGS(mm)
+);
+
+DEFINE_EVENT(kmem_mm_class, kmem_scan_mm_start,
+	TP_PROTO(struct mm_struct *mm),
+	TP_ARGS(mm)
+);
+
+TRACE_EVENT(kmem_scan_mm_end,
+
+	TP_PROTO( struct mm_struct *mm,
+		 unsigned long start,
+		 unsigned long total,
+		 unsigned long scan_period,
+		 unsigned long scan_size,
+		 int target_node),
+
+	TP_ARGS(mm, start, total, scan_period, scan_size, target_node),
+
+	TP_STRUCT__entry(
+		__field(	struct mm_struct *, mm		)
+		__field(	unsigned long,   start		)
+		__field(	unsigned long,   total		)
+		__field(	unsigned long,   scan_period	)
+		__field(	unsigned long,   scan_size	)
+		__field(	int,		 target_node	)
+	),
+
+	TP_fast_assign(
+		__entry->mm = mm;
+		__entry->start = start;
+		__entry->total = total;
+		__entry->scan_period  = scan_period;
+		__entry->scan_size    = scan_size;
+		__entry->target_node  = target_node;
+	),
+
+	TP_printk("mm=%p, start = %ld, total = %ld, scan_period = %ld, scan_size = %ld node = %d",
+		__entry->mm, __entry->start, __entry->total, __entry->scan_period,
+		__entry->scan_size, __entry->target_node)
+);
+
+TRACE_EVENT(kmem_scan_mm_migrate,
+
+	TP_PROTO(struct mm_struct *mm,
+		 int rc,
+		 int target_node),
+
+	TP_ARGS(mm, rc, target_node),
+
+	TP_STRUCT__entry(
+		__field(	struct mm_struct *, mm		)
+		__field(	int,   rc			)
+		__field(	int,   target_node		)
+	),
+
+	TP_fast_assign(
+		__entry->mm = mm;
+		__entry->rc = rc;
+		__entry->target_node = target_node;
+	),
+
+	TP_printk("mm = %p rc = %d node = %d",
+		__entry->mm, __entry->rc, __entry->target_node)
+);
+
 TRACE_EVENT(kmem_cache_alloc,
 
 	TP_PROTO(unsigned long call_site,
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index c88b30e0fc7d..38d7825c0d62 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -945,6 +945,7 @@ static void kmmscand_migrate_folio(void)
 				}
 
 				ret = kmmscand_promote_folio(info, dest);
+				trace_kmem_scan_mm_migrate(info->mm, ret, dest);
 			}
 
 			/* TBD: encode migrated count here, currently assume folio_nr_pages */
@@ -1115,6 +1116,9 @@ static unsigned long kmmscand_scan_mm_slot(void)
 	mm_target_node = READ_ONCE(mm->target_node);
 	if (mm_target_node != mm_slot_target_node)
 		WRITE_ONCE(mm->target_node, mm_slot_target_node);
+
+	trace_kmem_scan_mm_start(mm);
+
 	now = jiffies;
 
 	if (mm_slot_next_scan && time_before(now, mm_slot_next_scan))
@@ -1172,6 +1176,8 @@ static unsigned long kmmscand_scan_mm_slot(void)
 		kmmscand_update_mmslot_info(mm_slot, total, target_node);
 	}
 
+	trace_kmem_scan_mm_end(mm, address, total, mm_slot_scan_period,
+			mm_slot_scan_size, target_node);
 
 outerloop:
 	/* exit_mmap will destroy ptes after this */
@@ -1323,6 +1329,7 @@ void __kmmscand_enter(struct mm_struct *mm)
 	spin_unlock(&kmmscand_mm_lock);
 
 	mmgrab(mm);
+	trace_kmem_mm_enter(mm);
 	if (wakeup)
 		wake_up_interruptible(&kmmscand_wait);
 }
@@ -1333,6 +1340,7 @@ void __kmmscand_exit(struct mm_struct *mm)
 	struct mm_slot *slot;
 	int free = 0, serialize = 1;
 
+	trace_kmem_mm_exit(mm);
 	spin_lock(&kmmscand_mm_lock);
 	slot = mm_slot_lookup(kmmscand_slots_hash, mm);
 	mm_slot = mm_slot_entry(slot, struct kmmscand_mm_slot, slot);
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* [RFC PATCH V1 13/13] prctl: Introduce new prctl to control scanning
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (11 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 12/13] trace/kmmscand: Add tracing of scanning and migration Raghavendra K T
@ 2025-03-19 19:30 ` Raghavendra K T
  2025-03-19 23:00 ` [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Davidlohr Bueso
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-19 19:30 UTC (permalink / raw)
  To: raghavendra.kt
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, dave

A new scalar value (PTEAScanScale) to control per-task PTE A bit scanning
is introduced.

0    : scanning disabled
1-10 : scanning enabled

In the future, PTEAScanScale could be used to control the aggressiveness of
scanning.
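
A minimal userspace sketch of the new interface (illustrative only; the
constants below mirror the values added to include/uapi/linux/prctl.h by
this patch, in case the installed headers do not define them yet):

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_PTE_A_SCAN_SCALE
  #define PR_SET_PTE_A_SCAN_SCALE	77
  #define PR_GET_PTE_A_SCAN_SCALE	78
  #endif

  int main(void)
  {
  	/* Disable PTE A bit scanning for this task's mm (scale = 0). */
  	if (prctl(PR_SET_PTE_A_SCAN_SCALE, 0, 0, 0, 0))
  		perror("PR_SET_PTE_A_SCAN_SCALE");

  	/* The GET variant returns the current scale as the prctl() return value. */
  	printf("PTEAScanScale = %d\n", prctl(PR_GET_PTE_A_SCAN_SCALE, 0, 0, 0, 0));
  	return 0;
  }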

CC: linux-doc@vger.kernel.org 
CC: Jonathan Corbet <corbet@lwn.net>
CC: linux-fsdevel@vger.kernel.org

Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 Documentation/filesystems/proc.rst |  2 ++
 fs/proc/task_mmu.c                 |  4 ++++
 include/linux/kmmscand.h           |  1 +
 include/linux/mm_types.h           |  3 +++
 include/uapi/linux/prctl.h         |  7 +++++++
 kernel/fork.c                      |  4 ++++
 kernel/sys.c                       | 25 +++++++++++++++++++++++++
 mm/kmmscand.c                      |  5 +++++
 8 files changed, 51 insertions(+)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 09f0aed5a08b..78633cab3f1a 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -195,6 +195,7 @@ read the file /proc/PID/status::
   VmLib:      1412 kB
   VmPTE:        20 kb
   VmSwap:        0 kB
+  PTEAScanScale: 0
   HugetlbPages:          0 kB
   CoreDumping:    0
   THP_enabled:	  1
@@ -278,6 +279,7 @@ It's slow but very precise.
  VmPTE                       size of page table entries
  VmSwap                      amount of swap used by anonymous private data
                              (shmem swap usage is not included)
+ PTEAScanScale               Integer representing async PTE A bit scan aggression
  HugetlbPages                size of hugetlb memory portions
  CoreDumping                 process's memory is currently being dumped
                              (killing the process may lead to a corrupted core)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f02cd362309a..55620a5178fb 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -79,6 +79,10 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
 		    " kB\nVmPTE:\t", mm_pgtables_bytes(mm) >> 10, 8);
 	SEQ_PUT_DEC(" kB\nVmSwap:\t", swap);
 	seq_puts(m, " kB\n");
+#ifdef CONFIG_KMMSCAND
+	seq_put_decimal_ull_width(m, "PTEAScanScale:\t", mm->pte_scan_scale, 8);
+	seq_puts(m, "\n");
+#endif
 	hugetlb_report_usage(m, mm);
 }
 #undef SEQ_PUT_DEC
diff --git a/include/linux/kmmscand.h b/include/linux/kmmscand.h
index b120c65ee8c6..7021f7d979a6 100644
--- a/include/linux/kmmscand.h
+++ b/include/linux/kmmscand.h
@@ -13,6 +13,7 @@ static inline void kmmscand_execve(struct mm_struct *mm)
 
 static inline void kmmscand_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
+	mm->pte_scan_scale = oldmm->pte_scan_scale;
 	__kmmscand_enter(mm);
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index eeaedc7473b1..12184e8ebc58 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1018,6 +1018,9 @@ struct mm_struct {
 #ifdef CONFIG_KMMSCAND
 		/* Tracks promotion node. XXX: use nodemask */
 		int target_node;
+
+		/* Integer representing PTE A bit scan aggression (0-10) */
+		unsigned int pte_scan_scale;
  #endif
 		/*
 		 * An operation with batched TLB flushing is going on. Anything
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 5c6080680cb2..18face11440a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -353,4 +353,11 @@ struct prctl_mm_map {
  */
 #define PR_LOCK_SHADOW_STACK_STATUS      76
 
+/* Set/get PTE A bit scan scale */
+#define PR_SET_PTE_A_SCAN_SCALE		77
+#define PR_GET_PTE_A_SCAN_SCALE		78
+# define PR_PTE_A_SCAN_SCALE_MIN	0
+# define PR_PTE_A_SCAN_SCALE_MAX	10
+# define PR_PTE_A_SCAN_SCALE_DEFAULT	1
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index f61c55cf33c2..bfbbacb8ec36 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -106,6 +106,7 @@
 #include <uapi/linux/pidfd.h>
 #include <linux/pidfs.h>
 #include <linux/tick.h>
+#include <linux/prctl.h>
 
 #include <asm/pgalloc.h>
 #include <linux/uaccess.h>
@@ -1292,6 +1293,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	init_tlb_flush_pending(mm);
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
 	mm->pmd_huge_pte = NULL;
+#endif
+#ifdef CONFIG_KMMSCAND
+	mm->pte_scan_scale = PR_PTE_A_SCAN_SCALE_DEFAULT;
 #endif
 	mm_init_uprobes_state(mm);
 	hugetlb_count_init(mm);
diff --git a/kernel/sys.c b/kernel/sys.c
index cb366ff8703a..0518480d8f78 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2142,6 +2142,19 @@ static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
 
 	return 0;
 }
+#ifdef CONFIG_KMMSCAND
+static int prctl_pte_scan_scale_write(unsigned int scale)
+{
+	scale = clamp(scale, PR_PTE_A_SCAN_SCALE_MIN, PR_PTE_A_SCAN_SCALE_MAX);
+	current->mm->pte_scan_scale = scale;
+	return 0;
+}
+
+static unsigned int prctl_pte_scan_scale_read(void)
+{
+	return current->mm->pte_scan_scale;
+}
+#endif
 
 static int prctl_set_mm(int opt, unsigned long addr,
 			unsigned long arg4, unsigned long arg5)
@@ -2811,6 +2824,18 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			return -EINVAL;
 		error = arch_lock_shadow_stack_status(me, arg2);
 		break;
+#ifdef CONFIG_KMMSCAND
+	case PR_SET_PTE_A_SCAN_SCALE:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = prctl_pte_scan_scale_write((unsigned int) arg2);
+		break;
+	case PR_GET_PTE_A_SCAN_SCALE:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = prctl_pte_scan_scale_read();
+		break;
+#endif
 	default:
 		trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
 		error = -EINVAL;
diff --git a/mm/kmmscand.c b/mm/kmmscand.c
index 38d7825c0d62..68ef2141c349 100644
--- a/mm/kmmscand.c
+++ b/mm/kmmscand.c
@@ -1113,6 +1113,11 @@ static unsigned long kmmscand_scan_mm_slot(void)
 		goto outerloop;
 	}
 
+	if (!mm->pte_scan_scale) {
+		next_mm = true;
+		goto outerloop;
+	}
+
 	mm_target_node = READ_ONCE(mm->target_node);
 	if (mm_target_node != mm_slot_target_node)
 		WRITE_ONCE(mm->target_node, mm_slot_target_node);
-- 
2.34.1



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (12 preceding siblings ...)
  2025-03-19 19:30 ` [RFC PATCH V1 13/13] prctl: Introduce new prctl to control scanning Raghavendra K T
@ 2025-03-19 23:00 ` Davidlohr Bueso
  2025-03-20  8:51   ` Raghavendra K T
  2025-03-21 15:52 ` Jonathan Cameron
       [not found] ` <20250321105309.3521-1-hdanton@sina.com>
  15 siblings, 1 reply; 30+ messages in thread
From: Davidlohr Bueso @ 2025-03-19 23:00 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore

On Wed, 19 Mar 2025, Raghavendra K T wrote:

>Introduction:
>=============
>In the current hot page promotion, all the activities including the
>process address space scanning, NUMA hint fault handling and page
>migration is performed in the process context. i.e., scanning overhead is
>borne by applications.
>
>This is RFC V1 patch series to do (slow tier) CXL page promotion.
>The approach in this patchset assists/addresses the issue by adding PTE
>Accessed bit scanning.
>
>Scanning is done by a global kernel thread which routinely scans all
>the processes' address spaces and checks for accesses by reading the
>PTE A bit.
>
>A separate migration thread migrates/promotes the pages to the toptier
>node based on a simple heuristic that uses toptier scan/access information
>of the mm.
>
>Additionally based on the feedback for RFC V0 [4], a prctl knob with
>a scalar value is provided to control per task scanning.
>
>Initial results show promising number on a microbenchmark. Soon
>will get numbers with real benchmarks and findings (tunings).
>
>Experiment:
>============
>Abench microbenchmark,
>- Allocates 8GB/16GB/32GB/64GB of memory on CXL node
>- 64 threads created, and each thread randomly accesses pages in 4K
>  granularity.
>- 512 iterations with a delay of 1 us between two successive iterations.
>
>SUT: 512 CPU, 2 node 256GB, AMD EPYC.
>
>3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>
>
>Calculates how much time is taken to complete the task, lower is better.
>Expectation is CXL node memory is expected to be migrated as fast as
>possible.
>
>Base case: 6.14-rc6    w/ numab mode = 2 (hot page promotion is enabled).
>patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>we expect daemon to do page promotion.
>
>Result:
>========
>         base NUMAB2                    patched NUMAB1
>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
> 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
>16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
>32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
>64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
>
>Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
>patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>         base NUMAB1                    patched NUMAB1
>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
> 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45
>16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62
>32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58
>64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45

Very promising, but a few things. A fairer comparison would be
vs kpromoted using the PROT_NONE of NUMAB2. Essentially disregarding
the asynchronous migration, and effectively measuring synchronous
vs asynchronous scanning overhead and implied semantics. Essentially
save the extra kthread and only have a per-NUMA node migrator, which
is the common denominator for all these sources of hotness.

Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
this sort of thing, it would be useful to have data on no numa balancing
at all. If nothing else, that would measure the effects of the dest
node heuristics.

Also, data/workload involving demotion would also be good to have for
a more complete picture.

>
>Major Changes since V0:
>======================
>- A separate migration thread is used for migration, thus alleviating need for
>  multi-threaded scanning (atleast as per tracing).
>
>- A simple heuristic for target node calculation is added.
>
>- prctl (David R) interface with scalar value is added to control per task scanning.
>
>- Steve's comment on tracing incorporated.
>
>- Davidlohr's reported bugfix.
>
>- Initial scan delay similar to NUMAB1 mode added.
>
>- Got rid of migration lock during mm_walk.
>
>PS: Occassionally I do see if scanning is too fast compared to migration,
>scanning can stall waiting for lock. Should be fixed in next version by
>using memslot for migration..
>
>Disclaimer, Takeaways and discussion points and future TODOs
>==============================================================
>1) Source code, patch seggregation still to be improved, current patchset only
>provides a skeleton.
>
>2) Unification of source of hotness is not easy (as mentioned perhaps by Jonathan)
>but perhaps all the consumers/producers can work coopertaively.
>
>Scanning:
>3) Major positive: Current patchset is able to cover all the process address
>space scanning effectively with simple algorithms to tune scan_size and scan_period.
>
>4) Effective tracking of folio's or address space using / or ideas used in DAMON
>is yet to be explored fully.
>
>5) Use timestamp information-based migration (Similar to numab mode=2).
>instead of migrating immediately when PTE A bit set.
>(cons:
> - It will not be accurate since it is done outside of process
>context.
> - Performance benefit may be lost.)
>
>Migration:
>
>6) Currently fast scanner can bombard migration list, need to maintain migration list in a more
>organized way (for e.g. using memslot, so that it is also helpful in maintaining recency, frequency
>information (similar to kpromoted posted by Bharata)
>
>7) NUMAB2 throttling is very effective, we would need a common interface to control migration
>and also exploit batch migration.

Does NUMAB2 continue to exist? Are there any benefits in having two sources?

Thanks,
Davidlohr

>
>Thanks to Bharata, Joannes, Gregory, SJ, Chris, David Rientjes, Jonathan, John Hubbard,
>Davidlohr, Ying, Willy, Hyeonggon Yoo and many of you for your valuable comments and support.
>
>Links:
>[1] https://lore.kernel.org/lkml/20241127082201.1276-1-gourry@gourry.net/
>[2] kstaled: https://lore.kernel.org/lkml/1317170947-17074-3-git-send-email-walken@google.com/#r
>[3] https://lore.kernel.org/lkml/Y+Pj+9bbBbHpf6xM@hirez.programming.kicks-ass.net/
>[4] RFC V0: https://lore.kernel.org/all/20241201153818.2633616-1-raghavendra.kt@amd.com/
>[5] Recap: https://lore.kernel.org/linux-mm/20241226012833.rmmbkws4wdhzdht6@ed.ac.uk/T/
>[6] LSFMM: https://lore.kernel.org/linux-mm/20250123105721.424117-1-raghavendra.kt@amd.com/#r
>[7] LSFMM: https://lore.kernel.org/linux-mm/20250131130901.00000dd1@huawei.com/
>
>I might have CCed more people or less people than needed
>unintentionally.
>
>Patch organization:
>patch 1-4 initial skeleton for scanning and migration
>patch 5: migration
>patch 6-8: scanning optimizations
>patch 9: target_node heuristic
>patch 10-12: sysfs, vmstat and tracing
>patch 13: A basic prctl implementation.
>
>Raghavendra K T (13):
>  mm: Add kmmscand kernel daemon
>  mm: Maintain mm_struct list in the system
>  mm: Scan the mm and create a migration list
>  mm: Create a separate kernel thread for migration
>  mm/migration: Migrate accessed folios to toptier node
>  mm: Add throttling of mm scanning using scan_period
>  mm: Add throttling of mm scanning using scan_size
>  mm: Add initial scan delay
>  mm: Add heuristic to calculate target node
>  sysfs: Add sysfs support to tune scanning
>  vmstat: Add vmstat counters
>  trace/kmmscand: Add tracing of scanning and migration
>  prctl: Introduce new prctl to control scanning
>
> Documentation/filesystems/proc.rst |    2 +
> fs/exec.c                          |    4 +
> fs/proc/task_mmu.c                 |    4 +
> include/linux/kmmscand.h           |   31 +
> include/linux/migrate.h            |    2 +
> include/linux/mm.h                 |   11 +
> include/linux/mm_types.h           |    7 +
> include/linux/vm_event_item.h      |   10 +
> include/trace/events/kmem.h        |   90 ++
> include/uapi/linux/prctl.h         |    7 +
> kernel/fork.c                      |    8 +
> kernel/sys.c                       |   25 +
> mm/Kconfig                         |    8 +
> mm/Makefile                        |    1 +
> mm/kmmscand.c                      | 1515 ++++++++++++++++++++++++++++
> mm/migrate.c                       |    2 +-
> mm/vmstat.c                        |   10 +
> 17 files changed, 1736 insertions(+), 1 deletion(-)
> create mode 100644 include/linux/kmmscand.h
> create mode 100644 mm/kmmscand.c
>
>
>base-commit: b7f94fcf55469ad3ef8a74c35b488dbfa314d1bb
>--
>2.34.1
>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-19 23:00 ` [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Davidlohr Bueso
@ 2025-03-20  8:51   ` Raghavendra K T
  2025-03-20 19:11     ` Raghavendra K T
  2025-03-20 21:50     ` Davidlohr Bueso
  0 siblings, 2 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-20  8:51 UTC (permalink / raw)
  To: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore



On 3/20/2025 4:30 AM, Davidlohr Bueso wrote:
> On Wed, 19 Mar 2025, Raghavendra K T wrote:
> 
>> Introduction:
>> =============
>> In the current hot page promotion, all the activities including the
>> process address space scanning, NUMA hint fault handling and page
>> migration is performed in the process context. i.e., scanning overhead is
>> borne by applications.
>>
>> This is RFC V1 patch series to do (slow tier) CXL page promotion.
>> The approach in this patchset assists/addresses the issue by adding PTE
>> Accessed bit scanning.
>>
>> Scanning is done by a global kernel thread which routinely scans all
>> the processes' address spaces and checks for accesses by reading the
>> PTE A bit.
>>
>> A separate migration thread migrates/promotes the pages to the toptier
>> node based on a simple heuristic that uses toptier scan/access 
>> information
>> of the mm.
>>
>> Additionally based on the feedback for RFC V0 [4], a prctl knob with
>> a scalar value is provided to control per task scanning.
>>
>> Initial results show promising number on a microbenchmark. Soon
>> will get numbers with real benchmarks and findings (tunings).
>>
>> Experiment:
>> ============
>> Abench microbenchmark,
>> - Allocates 8GB/16GB/32GB/64GB of memory on CXL node
>> - 64 threads created, and each thread randomly accesses pages in 4K
>>  granularity.
>> - 512 iterations with a delay of 1 us between two successive iterations.
>>
>> SUT: 512 CPU, 2 node 256GB, AMD EPYC.
>>
>> 3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>
>>
>> Calculates how much time is taken to complete the task, lower is better.
>> Expectation is CXL node memory is expected to be migrated as fast as
>> possible.
>>
>> Base case: 6.14-rc6    w/ numab mode = 2 (hot page promotion is enabled).
>> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>> we expect daemon to do page promotion.
>>
>> Result:
>> ========
>>         base NUMAB2                    patched NUMAB1
>>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
>> 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
>> 16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
>> 32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
>> 64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
>>
>> Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
>> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>>         base NUMAB1                    patched NUMAB1
>>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
>> 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45
>> 16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62
>> 32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58
>> 64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45
> 
> Very promising, but a few things. A more fair comparison would be
> vs kpromoted using the PROT_NONE of NUMAB2. Essentially disregarding
> the asynchronous migration, and effectively measuring synchronous
> vs asynchronous scanning overhead and implied semantics. Essentially
> save the extra kthread and only have a per-NUMA node migrator, which
> is the common denominator for all these sources of hotness.


Yes, I agree that a fair comparison would be
1) kmmscand generating data on pages to be promoted, working with
kpromoted migrating asynchronously
vs
2) NUMAB2 generating data on pages to be migrated, integrated with
kpromoted.

As Bharata already mentioned, we tried integrating kpromoted with the
kmmscand-generated migration list, but kmmscand generates a huge amount
of scanned page data, which needs to be organized better so that
kpromoted can handle the migration effectively.

We have not tried (2) yet; I will get back on the possibility (and also
the numbers when both are ready).

> 
> Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
> this sort of thing, it would be useful to have data on no numa balancing
> at all. If nothing else, that would measure the effects of the dest
> node heuristics.

Last time when I checked with the patch, the numbers with NUMAB=0 and
NUMAB=1 did not differ much in the 8GB case, because most of the
migration was handled by kmmscand: before NUMAB=1 learns and tries to
migrate, kmmscand would have already migrated the pages.

But a longer-running / more memory-heavy workload may show a bigger
difference. I will come back with that number.

> 
> Also, data/workload involving demotion would also be good to have for
> a more complete picture.
>

Agree.
Additionally, we need to handle various cases like:
  - Should we choose the second-best target node when the first node is full?
    >>
>> Major Changes since V0:
>> ======================
>> - A separate migration thread is used for migration, thus alleviating 
>> need for
>>  multi-threaded scanning (atleast as per tracing).
>>
>> - A simple heuristic for target node calculation is added.
>>
>> - prctl (David R) interface with scalar value is added to control per 
>> task scanning.
>>
>> - Steve's comment on tracing incorporated.
>>
>> - Davidlohr's reported bugfix.
>>
>> - Initial scan delay similar to NUMAB1 mode added.
>>
>> - Got rid of migration lock during mm_walk.
>>
>> PS: Occassionally I do see if scanning is too fast compared to migration,
>> scanning can stall waiting for lock. Should be fixed in next version by
>> using memslot for migration..
>>
>> Disclaimer, Takeaways and discussion points and future TODOs
>> ==============================================================
>> 1) Source code, patch seggregation still to be improved, current 
>> patchset only
>> provides a skeleton.
>>
>> 2) Unification of source of hotness is not easy (as mentioned perhaps 
>> by Jonathan)
>> but perhaps all the consumers/producers can work coopertaively.
>>
>> Scanning:
>> 3) Major positive: Current patchset is able to cover all the process 
>> address
>> space scanning effectively with simple algorithms to tune scan_size 
>> and scan_period.
>>
>> 4) Effective tracking of folio's or address space using / or ideas 
>> used in DAMON
>> is yet to be explored fully.
>>
>> 5) Use timestamp information-based migration (Similar to numab mode=2).
>> instead of migrating immediately when PTE A bit set.
>> (cons:
>> - It will not be accurate since it is done outside of process
>> context.
>> - Performance benefit may be lost.)
>>
>> Migration:
>>
>> 6) Currently fast scanner can bombard migration list, need to maintain 
>> migration list in a more
>> organized way (for e.g. using memslot, so that it is also helpful in 
>> maintaining recency, frequency
>> information (similar to kpromoted posted by Bharata)
>>
>> 7) NUMAB2 throttling is very effective, we would need a common 
>> interface to control migration
>> and also exploit batch migration.
> 
> Does NUMAB2 continue to exist? Are there any benefits in having two 
> sources?
> 

I think there is surely a benefit in having two sources.
NUMAB2 is more accurate but learns slowly.

IBS: no scan overhead, but we need more sample data.

PTE A bit: more scanning overhead (though it was not significant enough
to impact performance when compared with NUMAB1/NUMAB2; it actually
performed better because of the proactive migration), but less accurate
data on hotness and target_node(?).

When the system is more stable, IBS was more effective.
PTE A bit and NUMAB were effective when we needed more aggressive
migration (in that order).

- Raghu


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-20  8:51   ` Raghavendra K T
@ 2025-03-20 19:11     ` Raghavendra K T
  2025-03-21 20:35       ` Davidlohr Bueso
  2025-03-20 21:50     ` Davidlohr Bueso
  1 sibling, 1 reply; 30+ messages in thread
From: Raghavendra K T @ 2025-03-20 19:11 UTC (permalink / raw)
  To: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore

On 3/20/2025 2:21 PM, Raghavendra K T wrote:
> On 3/20/2025 4:30 AM, Davidlohr Bueso wrote:
>> On Wed, 19 Mar 2025, Raghavendra K T wrote:
>>
>>> Introduction:
>>> =============
>>> In the current hot page promotion, all the activities including the
>>> process address space scanning, NUMA hint fault handling and page
>>> migration is performed in the process context. i.e., scanning 
>>> overhead is
>>> borne by applications.
>>>
>>> This is RFC V1 patch series to do (slow tier) CXL page promotion.
>>> The approach in this patchset assists/addresses the issue by adding PTE
>>> Accessed bit scanning.
>>>
>>> Scanning is done by a global kernel thread which routinely scans all
>>> the processes' address spaces and checks for accesses by reading the
>>> PTE A bit.
>>>
>>> A separate migration thread migrates/promotes the pages to the toptier
>>> node based on a simple heuristic that uses toptier scan/access 
>>> information
>>> of the mm.
>>>
>>> Additionally based on the feedback for RFC V0 [4], a prctl knob with
>>> a scalar value is provided to control per task scanning.
>>>
>>> Initial results show promising number on a microbenchmark. Soon
>>> will get numbers with real benchmarks and findings (tunings).
>>>
>>> Experiment:
>>> ============
>>> Abench microbenchmark,
>>> - Allocates 8GB/16GB/32GB/64GB of memory on CXL node
>>> - 64 threads created, and each thread randomly accesses pages in 4K
>>>  granularity.
>>> - 512 iterations with a delay of 1 us between two successive iterations.
>>>
>>> SUT: 512 CPU, 2 node 256GB, AMD EPYC.
>>>
>>> 3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>
>>>
>>> Calculates how much time is taken to complete the task, lower is better.
>>> Expectation is CXL node memory is expected to be migrated as fast as
>>> possible.
>>>
>>> Base case: 6.14-rc6    w/ numab mode = 2 (hot page promotion is 
>>> enabled).
>>> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>>> we expect daemon to do page promotion.
>>>
>>> Result:
>>> ========
>>>         base NUMAB2                    patched NUMAB1
>>>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
>>> 8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
>>> 16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
>>> 32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
>>> 64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
>>>
>>> Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
>>> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>>>         base NUMAB1                    patched NUMAB1
>>>         time in sec  (%stdev)   time in sec  (%stdev)     %gain
>>> 8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45
>>> 16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62
>>> 32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58
>>> 64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45
>>
>> Very promising, but a few things. A more fair comparison would be
>> vs kpromoted using the PROT_NONE of NUMAB2. Essentially disregarding
>> the asynchronous migration, and effectively measuring synchronous
>> vs asynchronous scanning overhead and implied semantics. Essentially
>> save the extra kthread and only have a per-NUMA node migrator, which
>> is the common denominator for all these sources of hotness.
> 
> 
> Yes, I agree that fair comparison would be
> 1) kmmscand generating data on pages to be promoted working with
> kpromoted asynchronously migrating
> VS
> 2) NUMAB2 generating data on pages to be migrated integrated with
> kpromoted.
> 
> As Bharata already mentioned, we tried integrating kpromoted with
> kmmscand generated migration list, But kmmscand generates huge amount of
> scanned page data, and need to be organized better so that kpromted can 
> handle the migration effectively.
> 
> (2) We have not tried it yet, will get back on the possibility (and also
> numbers when both are ready).
> 
>>
>> Similarly, while I don't see any users disabling NUMAB1 _and_ enabling
>> this sort of thing, it would be useful to have data on no numa balancing
>> at all. If nothing else, that would measure the effects of the dest
>> node heuristics.
> 
> Last time when I checked, with patch, numbers with NUMAB=0 and NUMAB=1
> was not making much difference in 8GB case because most of the migration 
> was handled by kmmscand. It is because before NUMAB=1 learns and tries
> to migrate, kmmscand would have already migrated.
> 
> But a longer running/ more memory workload may make more difference.
> I will comeback with that number.

                  base NUMAB=2   Patched NUMAB=0
                  time in sec    time in sec
===================================================
8G:              134.33 (0.19)   119.88 ( 0.25)
16G:             292.24 (0.60)   325.06 (11.11)
32G:             585.06 (0.24)   546.15 ( 0.50)
64G:            1278.98 (0.27)  1221.41 ( 1.54)

We can see that the numbers have not changed much between NUMAB=1 and
NUMAB=0 in the patched case.

PS: for 16G there was a bad case where a rare contention happened on the
lock for the same mm, as we can see from the stdev; this should be taken
care of in the next version.

[...]



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-20  8:51   ` Raghavendra K T
  2025-03-20 19:11     ` Raghavendra K T
@ 2025-03-20 21:50     ` Davidlohr Bueso
  2025-03-21  6:48       ` Raghavendra K T
  1 sibling, 1 reply; 30+ messages in thread
From: Davidlohr Bueso @ 2025-03-20 21:50 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore

On Thu, 20 Mar 2025, Raghavendra K T wrote:

>>Does NUMAB2 continue to exist? Are there any benefits in having two
>>sources?
>>
>
>I think there is surely a benefit in having two sources.

I think I was a bit vague. What I'm really asking is if the scanning is
done async (kmmscand), should NUMAB2 also exist as a source and also feed
into the migrator? Looking at it differently, I guess doing so would allow
additional flexibility in choosing what to use.

>NUMAB2 is more accurate but slow learning.

Yes. Which is also why it is important to have demotion in the picture to
measure the ping pong effect. LRU based heuristics work best here.

>IBS: No scan overhead but we need more sampledata.

>PTE A bit: more scanning overhead (but was not much significant to
>impact performance when compared with NUMAB1/NUMAB2, rather it was more
>performing because of proactive migration) but has less accurate data on
>hotness, target_node(?).
>
>When system is more stable, IBS was more effective.

IBS will never be as effective as it should be simply because of the lack
of time decay/frequency (hence all that related phi hackery in the kpromoted
series). It has a global view of memory, it should beat any sw scanning
heuristics by far but the numbers have lacked.

As you know, PeterZ, Dave Hansen, Ying and I have expressed concerns about
this in the past. But that is not to say it does not serve as a source,
as you point out.

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-20 21:50     ` Davidlohr Bueso
@ 2025-03-21  6:48       ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-21  6:48 UTC (permalink / raw)
  To: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore, yuzhao

+Yu Zhao

Realized we had not CCed him earlier

On 3/21/2025 3:20 AM, Davidlohr Bueso wrote:
> On Thu, 20 Mar 2025, Raghavendra K T wrote:
> 
>>> Does NUMAB2 continue to exist? Are there any benefits in having two
>>> sources?
>>>
>>
>> I think there is surely a benefit in having two sources.
> 
> I think I was a bit vague. What I'm really asking is if the scanning is
> done async (kmmscand), should NUMAB2 also exist as a source and also feed
> into the migrator? Looking at it differently, I guess doing so would allow
> additional flexibility in choosing what to use.
> 

Not exactly. Since NUMAB2 brings accurate timestamp information and
additional migration throttling logic on top of NUMAB1,
we can just keep NUMAB1, borrow the migration throttling from NUMAB2,
and make sure that migration is asynchronous.

This is with the assumption that kmmscand will be able to detect the
exact target node in most of the cases, and that the additional
flexibility of toptier balancing comes from NUMAB1.


>> NUMAB2 is more accurate but slow learning.
> 
> Yes. Which is also why it is important to have demotion in the picture to
> measure the ping pong effect. LRU based heuristics work best here.
> 

+1

>> IBS: No scan overhead but we need more sampledata.
> 
>> PTE A bit: more scanning overhead (but was not much significant to
>> impact performance when compared with NUMAB1/NUMAB2, rather it was more
>> performing because of proactive migration) but has less accurate data on
>> hotness, target_node(?).
>>
>> When system is more stable, IBS was more effective.
> 
> IBS will never be as effective as it should be simply because of the lack
> of time decay/frequency (hence all that related phi hackery in the 
> kpromoted
> series). It has a global view of memory, it should beat any sw scanning
> heuristics by far but the numbers have lacked.
> 
> As you know, PeterZ, Dave Hansen, Ying and I have expressed concerns about
> this in the past. But that is not to say it does not serve as a source,
> as you point out.
> 
> Thanks,
> Davidlohr



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
                   ` (13 preceding siblings ...)
  2025-03-19 23:00 ` [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Davidlohr Bueso
@ 2025-03-21 15:52 ` Jonathan Cameron
       [not found] ` <20250321105309.3521-1-hdanton@sina.com>
  15 siblings, 0 replies; 30+ messages in thread
From: Jonathan Cameron @ 2025-03-21 15:52 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave

On Wed, 19 Mar 2025 19:30:15 +0000
Raghavendra K T <raghavendra.kt@amd.com> wrote:

> Introduction:
> =============
> In the current hot page promotion, all the activities including the
> process address space scanning, NUMA hint fault handling and page
> migration is performed in the process context. i.e., scanning overhead is
> borne by applications.
> 
> This is RFC V1 patch series to do (slow tier) CXL page promotion.
> The approach in this patchset assists/addresses the issue by adding PTE
> Accessed bit scanning.
> 
> Scanning is done by a global kernel thread which routinely scans all
> the processes' address spaces and checks for accesses by reading the
> PTE A bit. 
> 
> A separate migration thread migrates/promotes the pages to the toptier
> node based on a simple heuristic that uses toptier scan/access information
> of the mm.
> 
> Additionally based on the feedback for RFC V0 [4], a prctl knob with
> a scalar value is provided to control per task scanning.
> 
> Initial results show promising number on a microbenchmark. Soon
> will get numbers with real benchmarks and findings (tunings). 
> 
> Experiment:
> ============
> Abench microbenchmark,
> - Allocates 8GB/16GB/32GB/64GB of memory on CXL node
> - 64 threads created, and each thread randomly accesses pages in 4K
>   granularity.

So if I'm reading this right, this is a flat distribution and any
estimate of what is hot is noise?

That will put a positive spin on costs of migration as we will
be moving something that isn't really all that hot and so is moderately
unlikely to be accessed whilst migration is going on.  Or is the point that
the rest of the memory is also mapped but not being accessed?

I'm not entirely sure I follow what this is bound by. Is it bandwidth
bound?


> - 512 iterations with a delay of 1 us between two successive iterations.
> 
> SUT: 512 CPU, 2 node 256GB, AMD EPYC.
> 
> 3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>
> 
> Calculates how much time is taken to complete the task, lower is better.
> Expectation is CXL node memory is expected to be migrated as fast as
> possible.

> 
> Base case: 6.14-rc6    w/ numab mode = 2 (hot page promotion is enabled).
> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
> we expect daemon to do page promotion.
> 
> Result:
> ========
>          base NUMAB2                    patched NUMAB1
>          time in sec  (%stdev)   time in sec  (%stdev)     %gain
>  8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
> 16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
> 32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
> 64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
> 
> Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>          base NUMAB1                    patched NUMAB1
>          time in sec  (%stdev)   time in sec  (%stdev)     %gain
>  8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45 
> 16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62 
> 32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58 
> 64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45

Nice numbers, but maybe some more details on what they are showing?
At what point in the workload has all the memory migrated to the
fast node or does that never happen?

I'm confused :(

Jonathan




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon
  2025-03-19 19:30 ` [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon Raghavendra K T
@ 2025-03-21 16:06   ` Jonathan Cameron
  2025-03-24 15:09     ` Raghavendra K T
  0 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2025-03-21 16:06 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave

On Wed, 19 Mar 2025 19:30:16 +0000
Raghavendra K T <raghavendra.kt@amd.com> wrote:

> Add a skeleton to support scanning and migration.
> Also add a config option for the same.
> 
> High level design:
> 
> While (1):
>   scan the slowtier pages belonging to VMAs of a task.
>   Add to migation list
> 
> Separate thread:
>   migrate scanned pages to a toptier node based on heuristics
> 
> The overall code is heavily influenced by khugepaged design.
> 
> Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>


I'm really bad at reading code and not commenting on the 'small'
stuff.  So feel free to ignore this given the RFC status!
This sort of read through helps me get my head around a series.

> ---
>  mm/Kconfig    |   8 +++
>  mm/Makefile   |   1 +
>  mm/kmmscand.c | 176 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 185 insertions(+)
>  create mode 100644 mm/kmmscand.c
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 1b501db06417..5a4931633e15 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -783,6 +783,14 @@ config KSM
>  	  until a program has madvised that an area is MADV_MERGEABLE, and
>  	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
>  
> +config KMMSCAND
> +	bool "Enable PTE A bit scanning and Migration"
> +	depends on NUMA_BALANCING
> +	help
> +	  Enable PTE A bit scanning of page. CXL pages accessed are migrated to

Trivial but don't mention CXL.  "Other memory tier solutions are available"

> +	  a regular NUMA node. The option creates a separate kthread for
> +	  scanning and migration.
> +

> diff --git a/mm/kmmscand.c b/mm/kmmscand.c
> new file mode 100644
> index 000000000000..6c55250b5cfb
> --- /dev/null
> +++ b/mm/kmmscand.c

> +
> +struct kmmscand_scan kmmscand_scan = {
> +	.mm_head = LIST_HEAD_INIT(kmmscand_scan.mm_head),
> +};
> +
> +static int kmmscand_has_work(void)
> +{

Unless this is going to get more complex, I'd just put
the implementation inline.  Kind of obvious what is doing
so the wrapper doesn't add much.

> +	return !list_empty(&kmmscand_scan.mm_head);
> +}
> +
> +static bool kmmscand_should_wakeup(void)
> +{
> +	bool wakeup =  kthread_should_stop() || need_wakeup ||

bonus space after =

> +	       time_after_eq(jiffies, kmmscand_sleep_expire);
> +	if (need_wakeup)
> +		need_wakeup = false;

Why not set it unconditionally?  If it is false already, no
harm done and removes need to check.

> +
> +	return wakeup;
> +}
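
E.g. an untested sketch of that simplification:

	static bool kmmscand_should_wakeup(void)
	{
		bool wakeup = kthread_should_stop() || need_wakeup ||
			      time_after_eq(jiffies, kmmscand_sleep_expire);

		/* Clearing unconditionally avoids the extra branch. */
		need_wakeup = false;

		return wakeup;
	}
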
> +
> +static void kmmscand_wait_work(void)
> +{
> +	const unsigned long scan_sleep_jiffies =
> +		msecs_to_jiffies(kmmscand_scan_sleep_ms);
> +
> +	if (!scan_sleep_jiffies)
> +		return;
> +
> +	kmmscand_sleep_expire = jiffies + scan_sleep_jiffies;
> +	wait_event_timeout(kmmscand_wait,
> +			kmmscand_should_wakeup(),
> +			scan_sleep_jiffies);

strange wrap.  Maybe add a comment on why we don't care if
this timed out or not.

> +	return;
> +}
> +
> +static unsigned long kmmscand_scan_mm_slot(void)
> +{
> +	/* placeholder for scanning */

I guess this will make sense later in series!

> +	msleep(100);
> +	return 0;
> +}
> +
> +static void kmmscand_do_scan(void)
> +{
> +	unsigned long iter = 0, mms_to_scan;
> +

	unsigned long mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);

> +	mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);
> +
> +	while (true) {
> +		cond_resched();

Odd to do this at start. Maybe at end of loop?

> +
> +		if (unlikely(kthread_should_stop()) ||
> +			!READ_ONCE(kmmscand_scan_enabled))
> +			break;
return;  Then we don't need to read on to see if anything else happens.
> +
> +		if (kmmscand_has_work())
> +			kmmscand_scan_mm_slot();
> +
> +		iter++;
> +		if (iter >= mms_to_scan)
> +			break;
			return;
Same argument as above.

> +	}
> +}
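
Putting the comments on this function together, a rough, untested
sketch of how the loop might look:

	static void kmmscand_do_scan(void)
	{
		unsigned long iter = 0;
		unsigned long mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);

		while (true) {
			if (unlikely(kthread_should_stop()) ||
			    !READ_ONCE(kmmscand_scan_enabled))
				return;

			if (kmmscand_has_work())
				kmmscand_scan_mm_slot();

			if (++iter >= mms_to_scan)
				return;

			/* Yield between iterations rather than up front. */
			cond_resched();
		}
	}
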
> +
> +static int kmmscand(void *none)
> +{
> +	for (;;) {

while (true) maybe.  Feels more natural to me for a loop
with no terminating condition.   Obviously same thing in practice.

> +		if (unlikely(kthread_should_stop()))
			return;
> +			break;
> +
> +		kmmscand_do_scan();
> +
> +		while (!READ_ONCE(kmmscand_scan_enabled)) {
> +			cpu_relax();
> +			kmmscand_wait_work();
> +		}
> +
> +		kmmscand_wait_work();
> +	}
> +	return 0;
> +}
> +
> +static int start_kmmscand(void)
> +{
> +	int err = 0;
> +
> +	guard(mutex)(&kmmscand_mutex);
> +
> +	/* Some one already succeeded in starting daemon */
> +	if (kmmscand_thread)
return 0;
> +		goto end;
> +
> +	kmmscand_thread = kthread_run(kmmscand, NULL, "kmmscand");
> +	if (IS_ERR(kmmscand_thread)) {
> +		pr_err("kmmscand: kthread_run(kmmscand) failed\n");
> +		err = PTR_ERR(kmmscand_thread);
> +		kmmscand_thread = NULL;

Use a local variable instead and only assign on success. That
way you don't need to null it out in this path.

> +		goto end;

return PTR_ERR(kmmscand_thread_local);

> +	} else {
> +		pr_info("kmmscand: Successfully started kmmscand");
No need for else given the other path exits.

> +	}
> +
> +	if (!list_empty(&kmmscand_scan.mm_head))
> +		wake_up_interruptible(&kmmscand_wait);
> +
> +end:
> +	return err;
> +}
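
Rough, untested sketch combining the above (local variable, early
returns):

	static int start_kmmscand(void)
	{
		struct task_struct *kmmscand_thread_local;

		guard(mutex)(&kmmscand_mutex);

		/* Someone already succeeded in starting the daemon */
		if (kmmscand_thread)
			return 0;

		kmmscand_thread_local = kthread_run(kmmscand, NULL, "kmmscand");
		if (IS_ERR(kmmscand_thread_local)) {
			pr_err("kmmscand: kthread_run(kmmscand) failed\n");
			return PTR_ERR(kmmscand_thread_local);
		}

		/* Only publish the thread pointer on success. */
		kmmscand_thread = kmmscand_thread_local;
		pr_info("kmmscand: Successfully started kmmscand");

		if (!list_empty(&kmmscand_scan.mm_head))
			wake_up_interruptible(&kmmscand_wait);

		return 0;
	}
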
> +
> +static int stop_kmmscand(void)
> +{
> +	int err = 0;

No point in err if always 0.

> +
> +	guard(mutex)(&kmmscand_mutex);
> +
> +	if (kmmscand_thread) {
> +		kthread_stop(kmmscand_thread);
> +		kmmscand_thread = NULL;
> +	}
> +
> +	return err;
> +}
> +
> +static int __init kmmscand_init(void)
> +{
> +	int err;
> +
> +	err = start_kmmscand();
> +	if (err)
> +		goto err_kmmscand;

start_kmmscand() should be side effect free if it is returning an
error.  Not doing that makes for hard to read code.

Superficially looks like it is already side effect free so you
can probably just return here.


> +
> +	return 0;
> +
> +err_kmmscand:
> +	stop_kmmscand();
> +
> +	return err;
> +}
> +subsys_initcall(kmmscand_init);
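
i.e. at this point in the series the init could probably just be
(untested):

	static int __init kmmscand_init(void)
	{
		/*
		 * start_kmmscand() is side effect free on failure, so no
		 * unwind is needed here (yet).
		 */
		return start_kmmscand();
	}
	subsys_initcall(kmmscand_init);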



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration
  2025-03-19 19:30 ` [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration Raghavendra K T
@ 2025-03-21 17:29   ` Jonathan Cameron
  2025-03-24 15:17     ` Raghavendra K T
  0 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2025-03-21 17:29 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave

On Wed, 19 Mar 2025 19:30:19 +0000
Raghavendra K T <raghavendra.kt@amd.com> wrote:

> Having independent thread helps in:
>  - Alleviating the need for multiple scanning threads
>  - Aids to control batch migration (TBD)
>  - Migration throttling (TBD)
> 
A few comments on things noticed whilst reading through.

Jonathan

> Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
> ---
>  mm/kmmscand.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 154 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/kmmscand.c b/mm/kmmscand.c
> index a76a58bf37b2..6e96cfab5b85 100644
> --- a/mm/kmmscand.c
> +++ b/mm/kmmscand.c

>  /* Per folio information used for migration */
>  struct kmmscand_migrate_info {
>  	struct list_head migrate_node;
> @@ -101,6 +126,13 @@ static int kmmscand_has_work(void)
>  	return !list_empty(&kmmscand_scan.mm_head);
>  }
>  
> +static int kmmmigrated_has_work(void)
> +{
> +	if (!list_empty(&kmmscand_migrate_list.migrate_head))
> +		return true;
> +	return false;
If it isn't getting more complex later, can just
	return !list_empty().
or indeed, just put that condition directly at caller.

> +}
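
i.e. something like (untested; switching the return type to bool while
at it):

	static bool kmmmigrated_has_work(void)
	{
		return !list_empty(&kmmscand_migrate_list.migrate_head);
	}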


>  static inline bool is_valid_folio(struct folio *folio)
>  {
> @@ -238,7 +293,6 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
>  			folio_put(folio);
>  			return 0;
>  		}
> -		/* XXX: Leaking memory. TBD: consume info */
>  		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
>  		if (info && scanctrl) {
>  
> @@ -282,6 +336,28 @@ static inline int kmmscand_test_exit(struct mm_struct *mm)
>  	return atomic_read(&mm->mm_users) == 0;
>  }
>  
> +static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
> +{
> +	struct kmmscand_migrate_info *info, *tmp;
> +
> +	spin_lock(&kmmscand_migrate_lock);

Could scatter some guard() magic in here.

> +	if (!list_empty(&kmmscand_migrate_list.migrate_head)) {

Maybe flip logic of this unless it is going to get more complex in future
patches.  That way, with guard() handling the spin lock, you can just
return when nothing to do.

> +		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
> +			/* A folio in this mm is being migrated. wait */
> +			WRITE_ONCE(kmmscand_migration_list_dirty, true);
> +		}
> +
> +		list_for_each_entry_safe(info, tmp, &kmmscand_migrate_list.migrate_head,
> +			migrate_node) {
> +			if (info && (info->mm == mm)) {
> +				info->mm = NULL;
> +				WRITE_ONCE(kmmscand_migration_list_dirty, true);
> +			}
> +		}
> +	}
> +	spin_unlock(&kmmscand_migrate_lock);
> +}
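
Untested sketch of the guard() + flipped-check version (the NULL check
on info is not needed, since list_for_each_entry_safe() never yields a
NULL entry):

	static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
	{
		struct kmmscand_migrate_info *info, *tmp;

		guard(spinlock)(&kmmscand_migrate_lock);

		if (list_empty(&kmmscand_migrate_list.migrate_head))
			return;

		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
			/* A folio in this mm is being migrated. wait */
			WRITE_ONCE(kmmscand_migration_list_dirty, true);
		}

		list_for_each_entry_safe(info, tmp,
					 &kmmscand_migrate_list.migrate_head,
					 migrate_node) {
			if (info->mm == mm) {
				info->mm = NULL;
				WRITE_ONCE(kmmscand_migration_list_dirty, true);
			}
		}
	}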

>  static unsigned long kmmscand_scan_mm_slot(void)
>  {
>  	bool next_mm = false;
> @@ -347,9 +429,17 @@ static unsigned long kmmscand_scan_mm_slot(void)
>  
>  		if (vma_scanned_size >= kmmscand_scan_size) {
>  			next_mm = true;
> -			/* TBD: Add scanned folios to migration list */
> +			/* Add scanned folios to migration list */
> +			spin_lock(&kmmscand_migrate_lock);
> +			list_splice_tail_init(&kmmscand_scanctrl.scan_list,
> +						&kmmscand_migrate_list.migrate_head);
> +			spin_unlock(&kmmscand_migrate_lock);
>  			break;
>  		}
> +		spin_lock(&kmmscand_migrate_lock);
> +		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
> +					&kmmscand_migrate_list.migrate_head);
> +		spin_unlock(&kmmscand_migrate_lock);

I've stared at this a while, but if we have entered the conditional block
above, do we splice the now empty list? 
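
If both copies are meant to do the same thing, perhaps (untested) the
loop body could do the splice once before the check, something like:

		/* Splice once per iteration, then decide whether to stop. */
		spin_lock(&kmmscand_migrate_lock);
		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
				      &kmmscand_migrate_list.migrate_head);
		spin_unlock(&kmmscand_migrate_lock);

		if (vma_scanned_size >= kmmscand_scan_size) {
			next_mm = true;
			break;
		}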

>  	}
>  
>  	if (!vma)


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node
  2025-03-19 19:30 ` [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node Raghavendra K T
@ 2025-03-21 17:42   ` Jonathan Cameron
  2025-03-24 16:17     ` Raghavendra K T
  0 siblings, 1 reply; 30+ messages in thread
From: Jonathan Cameron @ 2025-03-21 17:42 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave

On Wed, 19 Mar 2025 19:30:24 +0000
Raghavendra K T <raghavendra.kt@amd.com> wrote:

> One of the key challenges in PTE A bit based scanning is to find right
> target node to promote to.

I have the same problem with the CXL hotpage monitor so very keen to
see solutions to this (though this particular one doesn't work for
me unless A bit scanning is happening as well).

> 
> Here is a simple heuristic based approach:
>    While scanning pages of any mm we also scan toptier pages that belong
> to that mm. We get an insight on the distribution of pages that potentially
> belonging to particular toptier node and also its recent access.
> 
> Current logic walks all the toptier node, and picks the one with highest
> accesses.

Maybe talk through why this heuristic works?  What is the intuition behind it?

I can see that, on the basis of first-touch allocation, we should get a
reasonable number of pages on the node where the CPU doing the
initialization is.

Is this relying on some other mechanism to ensure that the pages being touched
are local to the CPUs touching them?

Thanks,

Jonathan


> 
> Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
> ---
> PS: There are many potential idea possible here.
> 1. we can do a quick sort on toptier nodes scan and access info
>   and maintain the list of preferred nodes/fallback nodes
>  in case of current target_node is getting filled up
> 
> 2. We can also keep the history of access/scan information from last
> scan used its decayed value to get a stable view etc etc.
> 



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-20 19:11     ` Raghavendra K T
@ 2025-03-21 20:35       ` Davidlohr Bueso
  2025-03-25  6:36         ` Raghavendra K T
  0 siblings, 1 reply; 30+ messages in thread
From: Davidlohr Bueso @ 2025-03-21 20:35 UTC (permalink / raw)
  To: Raghavendra K T
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore

On Fri, 21 Mar 2025, Raghavendra K T wrote:

>>But a longer running/ more memory workload may make more difference.
>>I will comeback with that number.
>
>                 base NUMAB=2   Patched NUMAB=0
>                 time in sec    time in sec
>===================================================
>8G:              134.33 (0.19)   119.88 ( 0.25)
>16G:             292.24 (0.60)   325.06 (11.11)
>32G:             585.06 (0.24)   546.15 ( 0.50)
>64G:            1278.98 (0.27)  1221.41 ( 1.54)
>
>We can see that numbers have not changed much between NUMAB=1 NUMAB=0 in
>patched case.

Thanks. Since this might vary across workloads, another important metric
here is the numa hit/miss statistics.

fyi I have also been trying this series to get some numbers as well, but
noticed overnight things went south (so no chance before LSFMM):

[  464.026917] watchdog: BUG: soft lockup - CPU#108 stuck for 52s! [kmmscand:934]
[  464.026924] Modules linked in: ...
[  464.027098] CPU: 108 UID: 0 PID: 934 Comm: kmmscand Tainted: G             L     6.14.0-rc6-kmmscand+ #4
[  464.027105] Tainted: [L]=SOFTLOCKUP
[  464.027107] Hardware name: Supermicro SSG-121E-NE3X12R/X13DSF-A, BIOS 2.1 01/29/2024
[  464.027109] RIP: 0010:pmd_off+0x58/0xd0
[  464.027124] Code: 83 e9 01 48 21 f1 48 c1 e1 03 48 89 f8 0f 1f 00 48 23 05 fb c7 fd 00 48 03 0d 0c b9 fb 00 48 25 00 f0 ff ff 48 01 c8 48 8b 38 <48> 89 f8 0f 1f 00 48 8b 0d db c7 fd 00 48 21 c1 48 89 d0 48 c1 e8
[  464.027128] RSP: 0018:ff71a0dc1b05bbc8 EFLAGS: 00000286
[  464.027133] RAX: ff3b028e421c17f0 RBX: ffc90cb8322e5e00 RCX: ff3b020d400007f0
[  464.027136] RDX: 00007f1393978000 RSI: 00000000000000fe RDI: 000000b9726b0067
[  464.027139] RBP: ff3b02f5d05babc0 R08: 00007f9c5653f000 R09: ffc90cb8322e0001
[  464.027141] R10: 0000000000000000 R11: ff3b028dd339420c R12: 00007f1393978000
[  464.027144] R13: ff3b028dded9cbb0 R14: ffc90cb8322e0000 R15: ffffffffb9a0a4c0
[  464.027146] FS:  0000000000000000(0000) GS:ff3b030bbf400000(0000) knlGS:0000000000000000
[  464.027150] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  464.027153] CR2: 0000564713088f19 CR3: 000000fb40822006 CR4: 0000000000773ef0
[  464.027157] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  464.027159] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[  464.027162] PKRU: 55555554
[  464.027163] Call Trace:
[  464.027166]  <IRQ>
[  464.027170]  ? watchdog_timer_fn+0x21b/0x2a0
[  464.027180]  ? __pfx_watchdog_timer_fn+0x10/0x10
[  464.027186]  ? __hrtimer_run_queues+0x10f/0x2a0
[  464.027193]  ? hrtimer_interrupt+0xfb/0x240
[  464.027199]  ? __sysvec_apic_timer_interrupt+0x4e/0x110
[  464.027208]  ? sysvec_apic_timer_interrupt+0x68/0x90
[  464.027219]  </IRQ>
[  464.027221]  <TASK>
[  464.027222]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  464.027236]  ? pmd_off+0x58/0xd0
[  464.027243]  hot_vma_idle_pte_entry+0x151/0x500
[  464.027253]  walk_pte_range_inner+0xbe/0x100
[  464.027260]  ? __pte_offset_map_lock+0x9a/0x110
[  464.027267]  walk_pgd_range+0x8f0/0xbb0
[  464.027271]  ? __pfx_hot_vma_idle_pte_entry+0x10/0x10
[  464.027282]  __walk_page_range+0x71/0x1d0
[  464.027287]  ? prepare_to_wait_event+0x53/0x180
[  464.027294]  walk_page_vma+0x98/0xf0
[  464.027300]  kmmscand+0x2aa/0x8d0
[  464.027310]  ? __pfx_kmmscand+0x10/0x10
[  464.027318]  kthread+0xea/0x230
[  464.027326]  ? finish_task_switch.isra.0+0x88/0x2d0
[  464.027335]  ? __pfx_kthread+0x10/0x10
[  464.027341]  ret_from_fork+0x2d/0x50
[  464.027350]  ? __pfx_kthread+0x10/0x10
[  464.027355]  ret_from_fork_asm+0x1a/0x30
[  464.027365]  </TASK>


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node
       [not found] ` <20250321105309.3521-1-hdanton@sina.com>
@ 2025-03-23 18:14   ` Raghavendra K T
       [not found]   ` <20250324110543.3599-1-hdanton@sina.com>
  1 sibling, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-23 18:14 UTC (permalink / raw)
  To: Hillf Danton; +Cc: dave.hansen, david, hannes, linux-kernel, linux-mm, ziy



On 3/21/2025 4:23 PM, Hillf Danton wrote:
> On Wed, 19 Mar 2025 19:30:24 +0000 Raghavendra K T wrote
>> One of the key challenges in PTE A bit based scanning is to find right
>> target node to promote to.
>>
>> Here is a simple heuristic based approach:
>>     While scanning pages of any mm we also scan toptier pages that belong
>> to that mm. We get an insight on the distribution of pages that potentially
>> belonging to particular toptier node and also its recent access.
>>
>> Current logic walks all the toptier node, and picks the one with highest
>> accesses.
>>
> My $.02 for selecting promotion target node given a simple multi tier system.
> 
> 	Tk /* top Tierk (k > 0) has K (K > 0) nodes */
> 	...
> 	Tj /* Tierj (j > 0) has J (J > 0) nodes */
> 	...
> 	T0 /* bottom Tier0 has O (O > 0) nodes */
> 
> Unless config comes from user space (sysfs window for example should be opened),
> 
> 1, adopt the data flow pattern of L3 cache <--> DRAM <--> SSD, to only
> select Tj+1 when promoting pages in Tj.
> 

Hello Hillf,
Thanks for giving this a thought. This looks to be a good idea in
general. It could mostly be implemented as the reverse of the preferred
demotion target?

Thinking aloud: can there be exception cases, similar to non-temporal
copy operations where we don't want to pollute the cache?
I mean, cases where we don't want to hop via a middle-tier node..?

> 2, select the node in Tj+1 that has the most free pages for promotion
> by default.

Not sure this is always productive.

For example:
node 0-1: toptier (100GB)
node 2:   slowtier

Suppose a workload (that occupies 80GB in total) runs on the CPUs of
node 1, where 40GB is already on node 1 and the remaining 40GB is on
node 2.

Now it is preferable to consolidate the workload on node 1 when the
slowtier data becomes hot?
(This assumes that the node 1 channel has enough bandwidth to cater to
the requirements of the workload.)
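
For reference, point (2) above on this simple two-tier layout would be
roughly the untested sketch below (pick_most_free_toptier_node() is
just an illustrative name); the example above is exactly the case where
that choice can be counterproductive.

	/* Pick the top-tier node with the most free pages. */
	static int pick_most_free_toptier_node(void)
	{
		int nid, best_nid = NUMA_NO_NODE;
		unsigned long best_free = 0;

		for_each_node_state(nid, N_MEMORY) {
			unsigned long free;

			if (!node_is_toptier(nid))
				continue;

			free = sum_zone_node_page_state(nid, NR_FREE_PAGES);
			if (free > best_free) {
				best_free = free;
				best_nid = nid;
			}
		}
		return best_nid;
	}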

> 3, nothing more.



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node
       [not found]   ` <20250324110543.3599-1-hdanton@sina.com>
@ 2025-03-24 14:54     ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-24 14:54 UTC (permalink / raw)
  To: Hillf Danton; +Cc: dave.hansen, david, hannes, linux-kernel, linux-mm, ziy



On 3/24/2025 4:35 PM, Hillf Danton wrote:
> On Sun, 23 Mar 2025 23:44:02 +0530 Raghavendra K T wrote
>> On 3/21/2025 4:23 PM, Hillf Danton wrote:
>>> On Wed, 19 Mar 2025 19:30:24 +0000 Raghavendra K T wrote
>>>> One of the key challenges in PTE A bit based scanning is to find right
>>>> target node to promote to.
>>>>
>>>> Here is a simple heuristic based approach:
>>>>      While scanning pages of any mm we also scan toptier pages that belong
>>>> to that mm. We get an insight on the distribution of pages that potentially
>>>> belonging to particular toptier node and also its recent access.
>>>>
>>>> Current logic walks all the toptier node, and picks the one with highest
>>>> accesses.
>>>>
>>> My $.02 for selecting promotion target node given a simple multi tier system.
>>>
>>> 	Tk /* top Tierk (k > 0) has K (K > 0) nodes */
>>> 	...
>>> 	Tj /* Tierj (j > 0) has J (J > 0) nodes */
>>> 	...
>>> 	T0 /* bottom Tier0 has O (O > 0) nodes */
>>>
>>> Unless config comes from user space (sysfs window for example should be opened),
>>>
>>> 1, adopt the data flow pattern of L3 cache <--> DRAM <--> SSD, to only
>>> select Tj+1 when promoting pages in Tj.
>>>
>>
>> Hello Hillf ,
>> Thanks for giving a thought on this. This looks to be good idea in
>> general. Mostly be able to implement with reverse of preferred demotion
>> target?
>>
>> Thinking loud, Can there be exception cases similar to non-temporal copy
>> operations, where we don't want to pollute cache?
>> I mean cases we don't want to hop via middle tier node..?
>>
> Given page cache, direct IO and coherent DMA have their roles to play.
>

Agree.

>>> 2, select the node in Tj+1 that has the most free pages for promotion
>>> by default.
>>
>> Not sure if this is productive always.
>>
> Trying to cure all pains with ONE pill wastes minutes I think.
> 

Very much true.

> To achive reliable high order pages, page allocator can not work well in
> combination with kswapd and kcompactd without clear boundaries drawn in
> between the tree parties for example.
> 
>> for e.g.
>> node 0-1 toptier (100GB)
>> node2 slowtier
>>
>> suppose a workload (that occupies 80GB in total) running on CPU of node1
>> where 40GB is already in node1 rest of 40GB is in node2.
>>
>> Now it is preferred to consolidate workload on node1 when slowtier
>> data becomes hot?
>>
> Yes and no (say, a couple seconds later mm pressure rises in node0).
> 
> In case of yes, I would like to turn on autonuma in the toptier instead
> without bothering to select the target node. You see a line is drawn
> between autonma and slowtier promotion now.

Yes, the goal has been slow tier promotion without much overhead to the
system, plus working cooperatively with NUMAB1 for top-tier balancing
(e.g., providing hints about hot VMAs).





^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon
  2025-03-21 16:06   ` Jonathan Cameron
@ 2025-03-24 15:09     ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-24 15:09 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave



On 3/21/2025 9:36 PM, Jonathan Cameron wrote:
> On Wed, 19 Mar 2025 19:30:16 +0000
> Raghavendra K T <raghavendra.kt@amd.com> wrote:
> 
>> Add a skeleton to support scanning and migration.
>> Also add a config option for the same.
>>
>> High level design:
>>
>> While (1):
>>    scan the slowtier pages belonging to VMAs of a task.
>>    Add to migation list
>>
>> Separate thread:
>>    migrate scanned pages to a toptier node based on heuristics
>>
>> The overall code is heavily influenced by khugepaged design.
>>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
> 
> 
> I'm really bad and reading code and not commenting on the 'small'
> stuff.  So feel free to ignore this given the RFC status!
> This sort of read through helps me get my head around a series.
> 

Hello Jonathan,
I do agree that my goal till now was mostly a POC, and a lot of the code
is yet to be hardened. But your effort in reviewing this code will go
miles in helping it converge to good code faster.

Thank you a lot, much appreciated.

>> ---
>>   mm/Kconfig    |   8 +++
>>   mm/Makefile   |   1 +
>>   mm/kmmscand.c | 176 ++++++++++++++++++++++++++++++++++++++++++++++++++
>>   3 files changed, 185 insertions(+)
>>   create mode 100644 mm/kmmscand.c
>>
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index 1b501db06417..5a4931633e15 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -783,6 +783,14 @@ config KSM
>>   	  until a program has madvised that an area is MADV_MERGEABLE, and
>>   	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
>>   
>> +config KMMSCAND
>> +	bool "Enable PTE A bit scanning and Migration"
>> +	depends on NUMA_BALANCING
>> +	help
>> +	  Enable PTE A bit scanning of page. CXL pages accessed are migrated to
> 
> Trivial but don't mention CXL.  "Other memory tier solutions are available"

Sure.

> 
>> +	  a regular NUMA node. The option creates a separate kthread for
>> +	  scanning and migration.
>> +
> 
>> diff --git a/mm/kmmscand.c b/mm/kmmscand.c
>> new file mode 100644
>> index 000000000000..6c55250b5cfb
>> --- /dev/null
>> +++ b/mm/kmmscand.c
> 
>> +
>> +struct kmmscand_scan kmmscand_scan = {
>> +	.mm_head = LIST_HEAD_INIT(kmmscand_scan.mm_head),
>> +};
>> +
>> +static int kmmscand_has_work(void)
>> +{
> 
> Unless this is going to get more complex, I'd just put
> the implementation inline.  Kind of obvious what is doing
> so the wrapper doesn't add much.
> 

Sure.

>> +	return !list_empty(&kmmscand_scan.mm_head);
>> +}
>> +
>> +static bool kmmscand_should_wakeup(void)
>> +{
>> +	bool wakeup =  kthread_should_stop() || need_wakeup ||
> 
> bonus space after =
> 

+1

>> +	       time_after_eq(jiffies, kmmscand_sleep_expire);
>> +	if (need_wakeup)
>> +		need_wakeup = false;
> 
> Why not set it unconditionally?  If it is false already, no
> harm done and removes need to check.
>

Agree, will change. This code had wakeup from a sysfs variable setting
in mind :).

>> +
>> +	return wakeup;
>> +}
>> +
>> +static void kmmscand_wait_work(void)
>> +{
>> +	const unsigned long scan_sleep_jiffies =
>> +		msecs_to_jiffies(kmmscand_scan_sleep_ms);
>> +
>> +	if (!scan_sleep_jiffies)
>> +		return;
>> +
>> +	kmmscand_sleep_expire = jiffies + scan_sleep_jiffies;
>> +	wait_event_timeout(kmmscand_wait,
>> +			kmmscand_should_wakeup(),
>> +			scan_sleep_jiffies);
> 
> strange wrap.  Maybe add a comment on why we don't care if
> this timed out or not.
> 

You mean why the timeout is not harmful? Sure, will do.

>> +	return;
>> +}
>> +
>> +static unsigned long kmmscand_scan_mm_slot(void)
>> +{
>> +	/* placeholder for scanning */
> 
> I guess this will make sense later in series!
> 

Agree.
I will surely have to think about the right patch split so that it
does not hog when bisected separately.

>> +	msleep(100);
>> +	return 0;
>> +}
>> +
>> +static void kmmscand_do_scan(void)
>> +{
>> +	unsigned long iter = 0, mms_to_scan;
>> +
> 
> 	unsigned long mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);
> 
>> +	mms_to_scan = READ_ONCE(kmmscand_mms_to_scan);
>> +
>> +	while (true) {
>> +		cond_resched();
> 
> Odd to do this at start. Maybe at end of loop?
> 

+1

>> +
>> +		if (unlikely(kthread_should_stop()) ||
>> +			!READ_ONCE(kmmscand_scan_enabled))
>> +			break;
> return;  Then we don't need to read on to see if anything else happens.
>> +
>> +		if (kmmscand_has_work())
>> +			kmmscand_scan_mm_slot();
>> +
>> +		iter++;
>> +		if (iter >= mms_to_scan)
>> +			break;
> 			return;
> Same argument as above.
> 

Thanks. Will think about the above.

>> +	}
>> +}
>> +
>> +static int kmmscand(void *none)
>> +{
>> +	for (;;) {
> 
> while (true) maybe.  Feels more natural to me for a loop
> with no terminating condition.   Obviously same thing in practice.
> 

+1

>> +		if (unlikely(kthread_should_stop()))
> 			return;
>> +			break;
>> +
>> +		kmmscand_do_scan();
>> +
>> +		while (!READ_ONCE(kmmscand_scan_enabled)) {
>> +			cpu_relax();
>> +			kmmscand_wait_work();
>> +		}
>> +
>> +		kmmscand_wait_work();
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int start_kmmscand(void)
>> +{
>> +	int err = 0;
>> +
>> +	guard(mutex)(&kmmscand_mutex);
>> +
>> +	/* Some one already succeeded in starting daemon */
>> +	if (kmmscand_thread)
> return 0;
+1

>> +		goto end;
>> +
>> +	kmmscand_thread = kthread_run(kmmscand, NULL, "kmmscand");
>> +	if (IS_ERR(kmmscand_thread)) {
>> +		pr_err("kmmscand: kthread_run(kmmscand) failed\n");
>> +		err = PTR_ERR(kmmscand_thread);
>> +		kmmscand_thread = NULL;
> 
> Use a local variable instead and only assign on success. That
> way you don't need to null it out in this path.
> 

Agree

>> +		goto end;
> 
> return PTR_ERR(kmmscand_thread_local);
> 
>> +	} else {
>> +		pr_info("kmmscand: Successfully started kmmscand");
> No need for else give the other path exits.
> 

Agree.

>> +	}
>> +
>> +	if (!list_empty(&kmmscand_scan.mm_head))
>> +		wake_up_interruptible(&kmmscand_wait);
>> +
>> +end:
>> +	return err;
>> +}
>> +
>> +static int stop_kmmscand(void)
>> +{
>> +	int err = 0;
> 
> No point in err if always 0.
> 

Yes.

>> +
>> +	guard(mutex)(&kmmscand_mutex);
>> +
>> +	if (kmmscand_thread) {
>> +		kthread_stop(kmmscand_thread);
>> +		kmmscand_thread = NULL;
>> +	}
>> +
>> +	return err;
>> +}
>> +
>> +static int __init kmmscand_init(void)
>> +{
>> +	int err;
>> +
>> +	err = start_kmmscand();
>> +	if (err)
>> +		goto err_kmmscand;
> 
> start_kmmscand() should be side effect free if it is returning an
> error.  Not doing that makes for hard to read code.
> 
> Superficially looks like it is already side effect free so you
> can probably just return here.
> 

There is one scanctrl free added later in the stop_kmmscand part.

> 
>> +
>> +	return 0;
>> +
>> +err_kmmscand:
>> +	stop_kmmscand();
>> +
>> +	return err;
>> +}
>> +subsys_initcall(kmmscand_init);
> 




* Re: [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration
  2025-03-21 17:29   ` Jonathan Cameron
@ 2025-03-24 15:17     ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-24 15:17 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave



On 3/21/2025 10:59 PM, Jonathan Cameron wrote:
> On Wed, 19 Mar 2025 19:30:19 +0000
> Raghavendra K T <raghavendra.kt@amd.com> wrote:
> 
>> Having independent thread helps in:
>>   - Alleviating the need for multiple scanning threads
>>   - Aids to control batch migration (TBD)
>>   - Migration throttling (TBD)
>>
> A few comments on things noticed whilst reading through.
> 
> Jonathan
> 
>> Signed-off-by: Raghavendra K T <raghavendra.kt@amd.com>
>> ---
>>   mm/kmmscand.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 154 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/kmmscand.c b/mm/kmmscand.c
>> index a76a58bf37b2..6e96cfab5b85 100644
>> --- a/mm/kmmscand.c
>> +++ b/mm/kmmscand.c
> 
>>   /* Per folio information used for migration */
>>   struct kmmscand_migrate_info {
>>   	struct list_head migrate_node;
>> @@ -101,6 +126,13 @@ static int kmmscand_has_work(void)
>>   	return !list_empty(&kmmscand_scan.mm_head);
>>   }
>>   
>> +static int kmmmigrated_has_work(void)
>> +{
>> +	if (!list_empty(&kmmscand_migrate_list.migrate_head))
>> +		return true;
>> +	return false;
> If it isn't getting more complex later, can just
> 	return !list_empty().
> or indeed, just put that condition directly at caller.
> 

Sure.

>> +}
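i.e., simply (keeping the int return type from the patch):

static int kmmmigrated_has_work(void)
{
	return !list_empty(&kmmscand_migrate_list.migrate_head);
}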
> 
> 
>>   static inline bool is_valid_folio(struct folio *folio)
>>   {
>> @@ -238,7 +293,6 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
>>   			folio_put(folio);
>>   			return 0;
>>   		}
>> -		/* XXX: Leaking memory. TBD: consume info */
>>   		info = kzalloc(sizeof(struct kmmscand_migrate_info), GFP_NOWAIT);
>>   		if (info && scanctrl) {
>>   
>> @@ -282,6 +336,28 @@ static inline int kmmscand_test_exit(struct mm_struct *mm)
>>   	return atomic_read(&mm->mm_users) == 0;
>>   }
>>   
>> +static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
>> +{
>> +	struct kmmscand_migrate_info *info, *tmp;
>> +
>> +	spin_lock(&kmmscand_migrate_lock);
> 
> Could scatter some guard() magic in here.
> 

Agree.

>> +	if (!list_empty(&kmmscand_migrate_list.migrate_head)) {
> 
> Maybe flip logic of this unless it is going to get more complex in future
> patches.  That way, with guard() handling the spin lock, you can just
> return when nothing to do.
> 

Agree. This section of code will need a rewrite anyway once the migration
side is also implemented with mm_slot; will keep this in mind. A rough
guard()-based sketch follows the quoted hunk below.


>> +		if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
>> +			/* A folio in this mm is being migrated. wait */
>> +			WRITE_ONCE(kmmscand_migration_list_dirty, true);
>> +		}
>> +
>> +		list_for_each_entry_safe(info, tmp, &kmmscand_migrate_list.migrate_head,
>> +			migrate_node) {
>> +			if (info && (info->mm == mm)) {
>> +				info->mm = NULL;
>> +				WRITE_ONCE(kmmscand_migration_list_dirty, true);
>> +			}
>> +		}
>> +	}
>> +	spin_unlock(&kmmscand_migrate_lock);
>> +}
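Something like this rough sketch, using the names from the quoted patch,
with guard() and the flipped early return:

static void kmmscand_cleanup_migration_list(struct mm_struct *mm)
{
	struct kmmscand_migrate_info *info, *tmp;

	guard(spinlock)(&kmmscand_migrate_lock);

	if (list_empty(&kmmscand_migrate_list.migrate_head))
		return;

	if (mm == READ_ONCE(kmmscand_cur_migrate_mm)) {
		/* A folio in this mm is being migrated. wait */
		WRITE_ONCE(kmmscand_migration_list_dirty, true);
	}

	list_for_each_entry_safe(info, tmp,
				 &kmmscand_migrate_list.migrate_head,
				 migrate_node) {
		if (info->mm == mm) {
			info->mm = NULL;
			WRITE_ONCE(kmmscand_migration_list_dirty, true);
		}
	}
}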
> 
>>   static unsigned long kmmscand_scan_mm_slot(void)
>>   {
>>   	bool next_mm = false;
>> @@ -347,9 +429,17 @@ static unsigned long kmmscand_scan_mm_slot(void)
>>   
>>   		if (vma_scanned_size >= kmmscand_scan_size) {
>>   			next_mm = true;
>> -			/* TBD: Add scanned folios to migration list */
>> +			/* Add scanned folios to migration list */
>> +			spin_lock(&kmmscand_migrate_lock);
>> +			list_splice_tail_init(&kmmscand_scanctrl.scan_list,
>> +						&kmmscand_migrate_list.migrate_head);
>> +			spin_unlock(&kmmscand_migrate_lock);
>>   			break;
>>   		}
>> +		spin_lock(&kmmscand_migrate_lock);
>> +		list_splice_tail_init(&kmmscand_scanctrl.scan_list,
>> +					&kmmscand_migrate_list.migrate_head);
>> +		spin_unlock(&kmmscand_migrate_lock);
> 
> I've stared at this a while, but if we have entered the conditional block
> above, do we splice the now empty list?

We break out if we hit the conditional block, so we never reach the second
splice. Also, list_splice_tail_init() itself checks for an empty list,
IIRC.

But there is surely an opportunity to check whether the list is empty
without taking the lock (using the slowtier accessed count),
so thanks for bringing this up :)

> 
>>   	}
>>   
>>   	if (!vma)




* Re: [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node
  2025-03-21 17:42   ` Jonathan Cameron
@ 2025-03-24 16:17     ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-24 16:17 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy, dave,
	Hillf Danton

+Hillf

On 3/21/2025 11:12 PM, Jonathan Cameron wrote:
> On Wed, 19 Mar 2025 19:30:24 +0000
> Raghavendra K T <raghavendra.kt@amd.com> wrote:
> 
>> One of the key challenges in PTE A bit based scanning is to find right
>> target node to promote to.
> 
> I have the same problem with the CXL hotpage monitor so very keen to
> see solutions to this (though this particular one doesn't work for
> me unless A bit scanning is happening as well).
>

This is the thought I have for how the final solution looks:

A migrate list, along with an mm or target node(s), is passed from various
sources to a common migration thread for async migration.

Sources:

case 1)
  kmmscand --> (migrate_list (type: folio/PFN), mminfo/migrate node)
           --> kmmmigrated/kpromoted (unified migration thread)

case 2)
  IBS/CHMU --> (migrate_list (type: PFN), NULL)
           --> kmmmigrated/kpromoted (unified migration thread)

The issue I see for case 2 is that we are not able to associate any task
or mm with the PFN. But if we can get that, we should be able to use the
same heuristic.

For case 2, applying Hillf's suggestion of the reverse demotion target +
the next faster tier with the highest free page availability should help,
IMHO.

>>
>> Here is a simple heuristic based approach:
>>     While scanning pages of any mm we also scan toptier pages that belong
>> to that mm. We get an insight on the distribution of pages that potentially
>> belonging to particular toptier node and also its recent access.
>>
>> Current logic walks all the toptier node, and picks the one with highest
>> accesses.
> 
> Maybe talk through why this heuristic works?  What is the intuition behind it?
> 
> I can see that on basis of first touch allocation, we should get a reasonable
> number of pages in the node where that CPU doing initialization is.
> 

The rationale is: if a workload is already running and has some part of
its working set on a toptier node, consolidate it on that toptier node.

For example, Bharata has a benchmark, cbench-split (I will share the
abench and cbench-split sources), where I can run 25:75, 50:50, etc.
allocation splits across CXL and toptier.
After that, the workload touches all the pages to make them hot.

node0 (128GB) toptier
node1 (128GB) toptier
node2 (128GB) slowtier

I ran the workload with memory footprints of 8GB, 32GB and 128GB, with a
50:50 split across one toptier and one slowtier node.

Observation:

Memory   Base time (s)   Patched time (s)   %improvement
  8GB         53.29             46.47           12.79
 32GB        213.86            184.22           13.85
128GB        862.66            703.26           18.47

I could see that the workload consolidates on one node, with a decent
(more than 10%) gain. Importantly, when the workload has its working set
on node1, the target_node chosen for the CXL pages is always node1.

(The same thing happens when the workload is spread between node0:node2;
target_node = 0.)

However, going forward we need to devise a more sophisticated mechanism
that proactively takes available free pages etc. into account.
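
For reference, the current per-mm pick is conceptually something like the
sketch below (illustration only; the per-mm access counters are assumed
names, not the exact fields in the posted patch):

struct kmmscand_mm_stats {
	unsigned long accessed[MAX_NUMNODES];	/* per-node A-bit hits for this mm */
};

static int kmmscand_pick_target_node(struct kmmscand_mm_stats *stats)
{
	unsigned long best = 0;
	int target = NUMA_NO_NODE;
	int nid;

	/* Walk the top-tier nodes and pick the one this mm accessed most. */
	for_each_node_state(nid, N_MEMORY) {
		if (!node_is_toptier(nid))
			continue;
		if (stats->accessed[nid] > best) {
			best = stats->accessed[nid];
			target = nid;
		}
	}
	return target;
}

The refinement mentioned above would then mostly be about weighting this
pick by the free pages available on the candidate node.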

> Is this relying on some other mechanism to ensure that the pages being
> touched are local to the CPUs touching them?

Unfortunately, this is where we have no control/visibility; the access
could be either local or remote. This is where we will have to rely on
NUMAB=1 to take care of the last-mile toptier balancing (both CPU and
memory).

- Raghu
[...]



* Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit
  2025-03-21 20:35       ` Davidlohr Bueso
@ 2025-03-25  6:36         ` Raghavendra K T
  0 siblings, 0 replies; 30+ messages in thread
From: Raghavendra K T @ 2025-03-25  6:36 UTC (permalink / raw)
  To: AneeshKumar.KizhakeVeetil, Hasan.Maruf, Michael.Day, akpm,
	bharata, dave.hansen, david, dongjoo.linux.dev, feng.tang,
	gourry, hannes, honggyu.kim, hughd, jhubbard, jon.grimm,
	k.shutemov, kbusch, kmanaouil.dev, leesuyeon0506, leillc,
	liam.howlett, linux-kernel, linux-mm, mgorman, mingo, nadav.amit,
	nphamcs, peterz, riel, rientjes, rppt, santosh.shukla, shivankg,
	shy828301, sj, vbabka, weixugc, willy, ying.huang, ziy,
	Jonathan.Cameron, alok.rathore, kinseyho, yuanchu

+kinseyho and yuanchu

On 3/22/2025 2:05 AM, Davidlohr Bueso wrote:
> On Fri, 21 Mar 2025, Raghavendra K T wrote:
> 
>>> But a longer running/ more memory workload may make more difference.
>>> I will comeback with that number.
>>
>>                 base NUMAB=2   Patched NUMAB=0
>>                 time in sec    time in sec
>> ===================================================
>> 8G:              134.33 (0.19)   119.88 ( 0.25)
>> 16G:             292.24 (0.60)   325.06 (11.11)
>> 32G:             585.06 (0.24)   546.15 ( 0.50)
>> 64G:            1278.98 (0.27)  1221.41 ( 1.54)
>>
>> We can see that numbers have not changed much between NUMAB=1 NUMAB=0 in
>> patched case.
> 
> Thanks. Since this might vary across workloads, another important metric
> here is numa hit/misses statistics.

Hello David, sorry for coming back late.

Yes, I did collect some of the other stats along with this (posting the
8GB case only). I did not see much difference in total numa_hit, but there
are differences in numa_local etc. (not pasted here).

#grep -A2 completed abench_cxl_6.14.0-rc6-kmmscand+_8G.log abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log
abench_cxl_6.14.0-rc6-kmmscand+_8G.log:Benchmark completed in 120292376.0 us, Total thread execution time 7490922681.0 us
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_hit 6376927
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_miss 0
--
abench_cxl_6.14.0-rc6-kmmscand+_8G.log:Benchmark completed in 119583939.0 us, Total thread execution time 7461705291.0 us
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_hit 6373409
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_miss 0
--
abench_cxl_6.14.0-rc6-kmmscand+_8G.log:Benchmark completed in 119784117.0 us, Total thread execution time 7482710944.0 us
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_hit 6378384
abench_cxl_6.14.0-rc6-kmmscand+_8G.log-numa_miss 0
--
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log:Benchmark completed in 134481344.0 us, Total thread execution time 8409840511.0 us
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_hit 6303300
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_miss 0
--
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log:Benchmark completed in 133967260.0 us, Total thread execution time 8352886349.0 us
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_hit 6304063
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_miss 0
--
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log:Benchmark completed in 134554911.0 us, Total thread execution time 8444951713.0 us
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_hit 6302506
abench_cxl_6.14.0-rc6-cxlfix+_numab2_8G.log-numa_miss 0

> 
> fyi I have also been trying this series to get some numbers as well, but
> noticed overnight things went south (so no chance before LSFMM):
>

This issue looks to be a different one. Could you please let me know how
to reproduce it?

I had tested with perf bench numa mem and did not find anything.

The issue I know of currently is:

kmmscand:
  for_each_mm
    for_each_vma
      scan_vma and build accessed_folio_list
      add to migration_list   // does not check for duplicates

kmmmigrated:
  for_each_folio in migration_list
    migrate_misplaced_folio()

There is also cleanup_migration_list() in the mm teardown path.

The migration_list is protected by a single lock, and kmmscand is too
aggressive: it can potentially bombard the migration_list (a practical
workload may generate fewer pages, though). That results in a non-fatal
soft lockup, which will be fixed with mm_slot as I noted elsewhere.

But the main challenge to solve in kmmscand now is that it generates:

t1 -> migration_list1 (of recently accessed folios)
t2 -> migration_list2

How do I combine migration_list1 and migration_list2 so that, instead of
migrating on the first observed access, we pick a hotter page to promote?

I had a few solutions in mind (that I wanted to get opinions/suggestions
on from experts during LSFMM); a rough illustration of (4) follows the
list:

1. Reuse DAMON VA scanning, with the scanning params controlled by
   kmmscand (current heuristics).

2. Can we use LRU information to filter the access list (LRU active /
   folio is in the (n-1) generation)?
   (I do see Kinsey Ho just posted an LRU-based approach.)

3. Can we split the address range into 2MB chunks to monitor, i.e.
   PMD-level access monitoring?

4. Any possible way of using bloom filters for list1, list2?
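
To make (4) a bit more concrete, here is a purely illustrative sketch
(really a single-hash bitmap rather than a full bloom filter; the size,
hash and names are placeholders, not a proposal):

#include <linux/bitmap.h>
#include <linux/hash.h>

#define KMMSCAND_SEEN_BITS	(1 << 16)

static DECLARE_BITMAP(prev_pass_seen, KMMSCAND_SEEN_BITS);
static DECLARE_BITMAP(cur_pass_seen, KMMSCAND_SEEN_BITS);

/* Record the folio for the current pass and check the previous pass. */
static bool kmmscand_seen_last_pass(unsigned long pfn)
{
	unsigned int bit = hash_long(pfn, 16);

	__set_bit(bit, cur_pass_seen);
	return test_bit(bit, prev_pass_seen);
}

/* Call once at the end of every full scan pass. */
static void kmmscand_new_pass(void)
{
	bitmap_copy(prev_pass_seen, cur_pass_seen, KMMSCAND_SEEN_BITS);
	bitmap_zero(cur_pass_seen, KMMSCAND_SEEN_BITS);
}

The scanner would then add a folio to the migration list only when
kmmscand_seen_last_pass() returns true, i.e. when it was found accessed in
two consecutive passes, accepting the occasional false positive from hash
collisions.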

- Raghu

[snip...]




Thread overview: 30+ messages
2025-03-19 19:30 [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 01/13] mm: Add kmmscand kernel daemon Raghavendra K T
2025-03-21 16:06   ` Jonathan Cameron
2025-03-24 15:09     ` Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 02/13] mm: Maintain mm_struct list in the system Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 03/13] mm: Scan the mm and create a migration list Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 04/13] mm: Create a separate kernel thread for migration Raghavendra K T
2025-03-21 17:29   ` Jonathan Cameron
2025-03-24 15:17     ` Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 05/13] mm/migration: Migrate accessed folios to toptier node Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 06/13] mm: Add throttling of mm scanning using scan_period Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 07/13] mm: Add throttling of mm scanning using scan_size Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 08/13] mm: Add initial scan delay Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node Raghavendra K T
2025-03-21 17:42   ` Jonathan Cameron
2025-03-24 16:17     ` Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 10/13] sysfs: Add sysfs support to tune scanning Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 11/13] vmstat: Add vmstat counters Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 12/13] trace/kmmscand: Add tracing of scanning and migration Raghavendra K T
2025-03-19 19:30 ` [RFC PATCH V1 13/13] prctl: Introduce new prctl to control scanning Raghavendra K T
2025-03-19 23:00 ` [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A bit Davidlohr Bueso
2025-03-20  8:51   ` Raghavendra K T
2025-03-20 19:11     ` Raghavendra K T
2025-03-21 20:35       ` Davidlohr Bueso
2025-03-25  6:36         ` Raghavendra K T
2025-03-20 21:50     ` Davidlohr Bueso
2025-03-21  6:48       ` Raghavendra K T
2025-03-21 15:52 ` Jonathan Cameron
     [not found] ` <20250321105309.3521-1-hdanton@sina.com>
2025-03-23 18:14   ` [RFC PATCH V1 09/13] mm: Add heuristic to calculate target node Raghavendra K T
     [not found]   ` <20250324110543.3599-1-hdanton@sina.com>
2025-03-24 14:54     ` Raghavendra K T
