* [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work
@ 2024-12-13  9:54 Gabriele Monaco
  2024-12-13  9:54 ` [PATCH v2 1/4] " Gabriele Monaco
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13  9:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Gabriele Monaco

This patchset moves the task_mm_cid_work to a preemptible and migratable
context. This reduces the impact of this work on the scheduling latency
of real-time tasks.
The change also makes the recurrence of the task a bit more predictable.
We further add optimisations and fixes to make sure the task_mm_cid_work
works as intended.

Patch 1 contains the main changes, removing the task_work on the
scheduler tick and using a delayed_work instead.
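
For reference, a condensed sketch of the resulting flow, abridged from
the hunks in patch 1 (unrelated context omitted):

	/* mm_alloc_cid(): arm the periodic scan for this mm */
	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
	schedule_delayed_work(&mm->mm_cid_work,
			      msecs_to_jiffies(MM_CID_SCAN_DELAY));

	/* The work compacts the mm_cids, then re-arms itself. */
	void task_mm_cid_work(struct work_struct *work)
	{
		struct delayed_work *delayed_work =
			container_of(work, struct delayed_work, work);
		struct mm_struct *mm =
			container_of(delayed_work, struct mm_struct,
				     mm_cid_work);

		/* ... scan and compact the mm_cids ... */
		schedule_delayed_work(delayed_work,
				      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	}

	/* mm_destroy_cid(): cancel synchronously before freeing the mm */
	disable_delayed_work_sync(&mm->mm_cid_work);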

Patch 2 adds some optimisations to the approach: since we rely on a
delayed_work, it is no longer required to check that the minimum
interval has passed since the last execution. We do, however, terminate
the call immediately if we see that no mm_cid is actually active, which
can happen for processes sleeping for a long time or for processes which
exited but whose mm has not been freed yet.

Patch 3 allows the mm_cids to actually be compacted when a process
reduces its number of threads, which was previously not the case since
the same mm_cids were reused to improve cache locality; more details
in [1].

Patch 4 adds a selftest to validate the functionality of the
task_mm_cid_work (i.e. that it compacts the mm_cids); this test requires
patch 3 to be applied.

Changes since V1 [1]:
* Re-arm the delayed_work at each invocation
* Cancel the work synchronously at mmdrop
* Remove next scan fields and completely rely on the delayed_work
* Shrink mm_cid allocation with nr thread/affinity (Mathieu Desnoyers)
* Add self test

OVERHEAD COMPARISON

In this section, I'm going to refer to head as the current state
upstream without my patches applied, and to patch as the same head with
these patches applied. Likewise, I'm going to refer to task_mm_cid_work
as either the task or the function. The experiments are run on an
aarch64 machine with 128 cores. The kernel has a bare configuration with
PREEMPT_RT enabled.

- Memory

The patch introduces some memory overhead:
* head uses a callback_head per thread (16 bytes)
* patch relies on a delayed work per mm but drops a long (80 bytes net)

Tasks with 5 threads or fewer have a lower memory footprint with the
current approach (the break-even point is at 80/16 = 5 threads).
Considering a task_struct can be 7-13 kB and an mm_struct is about 1.4
kB, the overhead should be acceptable.

- Boot time

I tested the patch by booting a virtual machine with vng [2]; both head
and patch get similar boot times (around 8s).

- Runtime

I ran some rather demanding tests to show what could possibly be a worst
case for the approach introduced by this patch. The following tests
again run in vng to have a plain system, running mostly the stressors
(if any). Unless otherwise specified, times are in us. All tests run
for 30s.
The stress-ng tests were run with 128 stressors, which I omit from the
table for clarity.

No load                       head           patch
running processes(threads):   12(12)         12(12)
duration(avg,max,sum):        75,426,987     2,42,45ms
invocations:                  13             20k

stress-ng --cpu-load 80       head           patch
running processes(threads):   129(129)       129(129)
duration(avg,max,sum):        20,2ms,740ms   7,774,280ms
invocations:                  36k            39k

stress-ng --fork              head           patch
running processes(threads):   3.6k(3.6k)     4k(4k)
duration(avg,max,sum):        34,41,720      19,457,880ms
invocations:                  21             46k

stress-ng --pthread-max 4     head           patch
running processes(threads):   129(4k)        129(4k)
duration(avg,max,sum):        31,195,41ms    21,1ms,830ms
invocations:                  1290           38k

It is important to note that some of those stressors run for a very
short period of time just to fork/create a thread; this heavily favours
head, since the task simply won't run as often.
Moreover, the duration needs to be read carefully: since the task can
now be preempted by threads, I tried to exclude that from the
computation, but to keep the probes simple, I didn't exclude
interference caused by interrupts.
On the same system, while isolated, the task runs in about 30-35ms; it
is hence highly likely that much larger values are only due to
interruptions, rather than the function actually running that long.

I will post another email with the scripts used to retrieve the data and
more details about the runtime distribution.

[1] - https://lore.kernel.org/linux-kernel/20241205083110.180134-2-gmonaco@redhat.com/
[2] - https://github.com/arighi/virtme-ng

Gabriele Monaco (3):
  sched: Move task_mm_cid_work to mm delayed work
  sched: Remove mm_cid_next_scan as obsolete
  rseq/selftests: Add test for mm_cid compaction

Mathieu Desnoyers (1):
  sched: Compact RSEQ concurrency IDs with reduced threads and affinity

 include/linux/mm_types.h                      |  23 ++-
 include/linux/sched.h                         |   1 -
 kernel/sched/core.c                           |  66 +-------
 kernel/sched/sched.h                          |  32 ++--
 tools/testing/selftests/rseq/.gitignore       |   1 +
 tools/testing/selftests/rseq/Makefile         |   2 +-
 .../selftests/rseq/mm_cid_compaction_test.c   | 157 ++++++++++++++++++
 7 files changed, 203 insertions(+), 79 deletions(-)
 create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c


base-commit: 231825b2e1ff6ba799c5eaf396d3ab2354e37c6b
-- 
2.47.1




* [PATCH v2 1/4] sched: Move task_mm_cid_work to mm delayed work
  2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
@ 2024-12-13  9:54 ` Gabriele Monaco
  2024-12-13 14:14   ` Mathieu Desnoyers
  2024-12-13  9:54 ` [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete Gabriele Monaco
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13  9:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Gabriele Monaco

Currently, the task_mm_cid_work function is called in a task work
triggered by a scheduler tick. This can delay the execution of the task
for the entire duration of the function, negatively affecting the
response time of real-time tasks.

This patch runs the task_mm_cid_work in a new delayed work connected to
the mm_struct rather than in the task context before returning to
userspace.

This delayed work is initialised while allocating the mm and disabled
before freeing it; its execution is no longer triggered by scheduler
ticks but runs periodically based on the defined MM_CID_SCAN_DELAY.

The main advantage of this change is that the function can be offloaded
to a different CPU and even preempted by RT tasks.

Moreover, this new behaviour could be more predictable in some
situations since the delayed work is always scheduled with the same
periodicity for each mm.

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h | 11 +++++++++
 include/linux/sched.h    |  1 -
 kernel/sched/core.c      | 51 ++++++----------------------------------
 kernel/sched/sched.h     |  7 ------
 4 files changed, 18 insertions(+), 52 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7361a8f3ab68..92acb827fee4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -856,6 +856,7 @@ struct mm_struct {
 		 * mm nr_cpus_allowed updates.
 		 */
 		raw_spinlock_t cpus_allowed_lock;
+		struct delayed_work mm_cid_work;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1144,11 +1145,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
 
 #ifdef CONFIG_SCHED_MM_CID
 
+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
+#define MM_CID_SCAN_DELAY	100			/* 100ms */
+
 enum mm_cid_state {
 	MM_CID_UNSET = -1U,		/* Unset state has lazy_put flag set. */
 	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
 	return cid == MM_CID_UNSET;
@@ -1221,12 +1227,17 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
 	mm_init_cid(mm, p);
+	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
+	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
+	schedule_delayed_work(&mm->mm_cid_work,
+			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_delayed_work_sync(&mm->mm_cid_work);
 	free_percpu(mm->pcpu_cid);
 	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d380bffee2ef..5d141c310917 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1374,7 +1374,6 @@ struct task_struct {
 	int				last_mm_cid;	/* Most recent cid in mm */
 	int				migrate_from_cpu;
 	int				mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head		cid_work;
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c6d8232ad9ee..e3b27b73301c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4516,7 +4516,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->wake_entry.u_flags = CSD_TYPE_TTWU;
 	p->migration_pending = NULL;
 #endif
-	init_sched_mm_cid(p);
 }
 
 DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
@@ -5654,7 +5653,6 @@ void sched_tick(void)
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
 	scx_tick(rq);
 
 	rq_unlock(rq, &rf);
@@ -10520,22 +10518,14 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
 	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
 	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
+	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
 	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
 	old_scan = READ_ONCE(mm->mm_cid_next_scan);
 	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	if (!old_scan) {
@@ -10548,9 +10538,9 @@ static void task_mm_cid_work(struct callback_head *work)
 			old_scan = next_scan;
 	}
 	if (time_before(now, old_scan))
-		return;
+		goto out;
 	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
+		goto out;
 	cidmask = mm_cidmask(mm);
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
@@ -10562,35 +10552,8 @@ static void task_mm_cid_work(struct callback_head *work)
 	 */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-}
-
-void init_sched_mm_cid(struct task_struct *t)
-{
-	struct mm_struct *mm = t->mm;
-	int mm_users = 0;
-
-	if (mm) {
-		mm_users = atomic_read(&mm->mm_users);
-		if (mm_users == 1)
-			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
-}
-
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-{
-	struct callback_head *work = &curr->cid_work;
-	unsigned long now = jiffies;
-
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
-		return;
-	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-		return;
-
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
+out:
+	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 76f5f53a645f..21be461ff913 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3581,16 +3581,11 @@ extern void sched_dynamic_update(int mode);
 
 #ifdef CONFIG_SCHED_MM_CID
 
-#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-#define MM_CID_SCAN_DELAY	100			/* 100ms */
-
 extern raw_spinlock_t cid_lock;
 extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
 {
@@ -3839,8 +3834,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */
 
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
-- 
2.47.1




* [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete
  2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
  2024-12-13  9:54 ` [PATCH v2 1/4] " Gabriele Monaco
@ 2024-12-13  9:54 ` Gabriele Monaco
  2024-12-13 14:01   ` Mathieu Desnoyers
  2024-12-13  9:54 ` [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13  9:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Gabriele Monaco

The checks for the scan time in task_mm_cid_work are now superfluous
since the task runs in a delayed_work and the minimum periodicity is
already implied.

This patch removes those checks and the field from the mm_struct.

Additionally, we include a simple check to quickly terminate the
function if we have no work to do (i.e. no mm_cid is allocated). This is
helpful for tasks that sleep for a long time, but also for terminated
tasks: we no longer follow the process's state, hence the function
continues to run after a process terminates but before its mm is freed.
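
For clarity, the function after this patch reduces to roughly the
following (a sketch reconstructed from patches 1 and 2, not a verbatim
copy; the weight computation comes from the pre-existing code):

	void task_mm_cid_work(struct work_struct *work)
	{
		struct delayed_work *delayed_work =
			container_of(work, struct delayed_work, work);
		struct mm_struct *mm =
			container_of(delayed_work, struct mm_struct,
				     mm_cid_work);
		struct cpumask *cidmask = mm_cidmask(mm);
		int weight, cpu;

		/* Nothing to clear for now */
		if (cpumask_empty(cidmask))
			goto out;
		/* Clear cids that were not recently used. */
		for_each_possible_cpu(cpu)
			sched_mm_cid_remote_clear_old(mm, cpu);
		weight = cpumask_weight(cidmask);
		/* Clear cids above the weight of the allowed cpumask. */
		for_each_possible_cpu(cpu)
			sched_mm_cid_remote_clear_weight(mm, cpu, weight);
	out:
		/* Re-arm: periodicity now comes from the work itself. */
		schedule_delayed_work(delayed_work,
				      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	}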

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h |  7 -------
 kernel/sched/core.c      | 19 +++----------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 92acb827fee4..8a76a1c09234 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -829,12 +829,6 @@ struct mm_struct {
 		 * runqueue locks.
 		 */
 		struct mm_cid __percpu *pcpu_cid;
-		/*
-		 * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
-		 *
-		 * When the next mm_cid scan is due (in jiffies).
-		 */
-		unsigned long mm_cid_next_scan;
 		/**
 		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
 		 *
@@ -1228,7 +1222,6 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
 		return -ENOMEM;
 	mm_init_cid(mm, p);
 	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
-	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
 	schedule_delayed_work(&mm->mm_cid_work,
 			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
 	return 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e3b27b73301c..30d78fe14eff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10520,28 +10520,15 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 
 void task_mm_cid_work(struct work_struct *work)
 {
-	unsigned long now = jiffies, old_scan, next_scan;
 	struct cpumask *cidmask;
 	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
 	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
 	int weight, cpu;
 
-	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	if (!old_scan) {
-		unsigned long res;
-
-		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-		if (res != old_scan)
-			old_scan = res;
-		else
-			old_scan = next_scan;
-	}
-	if (time_before(now, old_scan))
-		goto out;
-	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		goto out;
 	cidmask = mm_cidmask(mm);
+	/* Nothing to clear for now */
+	if (cpumask_empty(cidmask))
+		goto out;
 	/* Clear cids that were not recently used. */
 	for_each_possible_cpu(cpu)
 		sched_mm_cid_remote_clear_old(mm, cpu);
-- 
2.47.1




* [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
  2024-12-13  9:54 ` [PATCH v2 1/4] " Gabriele Monaco
  2024-12-13  9:54 ` [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete Gabriele Monaco
@ 2024-12-13  9:54 ` Gabriele Monaco
  2024-12-13 14:05   ` Mathieu Desnoyers
  2024-12-13  9:54 ` [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
  2024-12-13 11:31 ` [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
  4 siblings, 1 reply; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13  9:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Marco Elver, Ingo Molnar, Gabriele Monaco

From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid allocation should eventually converge towards
smaller values.

However, the change introduced by:

commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
IDs for intermittent workloads")

adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.

This is a tradeoff between:

A) Preserving cache locality after a transition from many threads to few
   threads, or after reducing the hamming weight of the allowed CPU mask.

B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
   easy to document and understand.

C) Allowing applications to eventually react to mm_cid compaction after
   reduction of the nr threads or allowed CPU mask, making the tracking
   of mm_cid compaction easier by shrinking it back towards 0 or not.

D) Making sure applications that periodically reduce and then increase
   again the nr threads or allowed CPU mask still benefit from good
   cache locality with mm_cid.

Introduce the following changes (a short illustration of the cmpxchg
idiom they rely on follows the list):

* After shrinking the number of threads or reducing the number of
  allowed CPUs, reduce the value of max_nr_cid so expansion of CID
  allocation will preserve cache locality if the number of threads or
  allowed CPUs increase again.

* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
  else find the first available CID.
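
As an aside for reviewers: atomic_try_cmpxchg() writes the current value
back into its expected-value argument on failure, so the shrink loop
below needs no explicit re-read. A minimal standalone illustration (the
clamp helper is hypothetical, not part of this patch):

	/* Hypothetical helper: clamp an atomic counter down to @limit. */
	static void clamp_atomic(atomic_t *v, int limit)
	{
		int old = atomic_read(v);

		while (old > limit) {
			/* On failure, @old is reloaded from *v. */
			if (atomic_try_cmpxchg(v, &old, limit))
				break;
		}
	}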

Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Marco Elver <elver@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Tested-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 include/linux/mm_types.h |  7 ++++---
 kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8a76a1c09234..16076e70a6b9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -837,10 +837,11 @@ struct mm_struct {
 		 */
 		unsigned int nr_cpus_allowed;
 		/**
-		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
+		 * @max_nr_cid: Maximum number of allowed concurrency
+		 *              IDs allocated.
 		 *
-		 * Track the highest number of concurrency IDs allocated for the
-		 * mm.
+		 * Track the highest number of allowed concurrency IDs
+		 * allocated for the mm.
 		 */
 		atomic_t max_nr_cid;
 		/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 21be461ff913..f3b0d1d86622 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3652,10 +3652,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
 	struct cpumask *cidmask = mm_cidmask(mm);
 	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-	int cid = __this_cpu_read(pcpu_cid->recent_cid);
+	int cid, max_nr_cid, allowed_max_nr_cid;
 
+	/*
+	 * After shrinking the number of threads or reducing the number
+	 * of allowed cpus, reduce the value of max_nr_cid so expansion
+	 * of cid allocation will preserve cache locality if the number
+	 * of threads or allowed cpus increase again.
+	 */
+	max_nr_cid = atomic_read(&mm->max_nr_cid);
+	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+					   atomic_read(&mm->mm_users))),
+	       max_nr_cid > allowed_max_nr_cid) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+			max_nr_cid = allowed_max_nr_cid;
+			break;
+		}
+	}
 	/* Try to re-use recent cid. This improves cache locality. */
-	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+	cid = __this_cpu_read(pcpu_cid->recent_cid);
+	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+	    !cpumask_test_and_set_cpu(cid, cidmask))
 		return cid;
 	/*
 	 * Expand cid allocation if the maximum number of concurrency
@@ -3663,8 +3681,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 	 * and number of threads. Expanding cid allocation as much as
 	 * possible improves cache locality.
 	 */
-	cid = atomic_read(&mm->max_nr_cid);
+	cid = max_nr_cid;
 	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
 		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
 			continue;
 		if (!cpumask_test_and_set_cpu(cid, cidmask))
-- 
2.47.1




* [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction
  2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
                   ` (2 preceding siblings ...)
  2024-12-13  9:54 ` [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
@ 2024-12-13  9:54 ` Gabriele Monaco
  2024-12-13 14:29   ` Mathieu Desnoyers
  2024-12-13 11:31 ` [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
  4 siblings, 1 reply; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13  9:54 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Gabriele Monaco

A task in the kernel (task_mm_cid_work) runs somewhat periodically to
compact the mm_cid for each process; this test tries to validate that
it runs correctly and in a timely manner.

The test spawns 1 thread pinned to each CPU, then each thread, including
the main one, runs in short bursts for some time. During this period,
the mm_cids should span all numbers between 0 and nproc.

At the end of this phase, a thread with a high enough mm_cid (> nproc/2)
is selected to be the new leader; all other threads terminate.

After some time, the only running thread should see 0 as its mm_cid; if
that doesn't happen, the compaction mechanism didn't work and the test
fails.

The test never fails if only 1 core is available, in which case we
cannot test anything, as the only available mm_cid is 0.

Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
 tools/testing/selftests/rseq/.gitignore       |   1 +
 tools/testing/selftests/rseq/Makefile         |   2 +-
 .../selftests/rseq/mm_cid_compaction_test.c   | 157 ++++++++++++++++++
 3 files changed, 159 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c

diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 16496de5f6ce..2c89f97e4f73 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -3,6 +3,7 @@ basic_percpu_ops_test
 basic_percpu_ops_mm_cid_test
 basic_test
 basic_rseq_op_test
+mm_cid_compaction_test
 param_test
 param_test_benchmark
 param_test_compare_twice
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 5a3432fceb58..ce1b38f46a35 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
 
 TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
 		param_test_benchmark param_test_compare_twice param_test_mm_cid \
-		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
+		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
 
 TEST_GEN_PROGS_EXTENDED = librseq.so
 
diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
new file mode 100644
index 000000000000..9bc7310c3cb5
--- /dev/null
+++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: LGPL-2.1
+#define _GNU_SOURCE
+#include <assert.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include "../kselftest.h"
+#include "rseq.h"
+
+#define VERBOSE 0
+#define printf_verbose(fmt, ...)                    \
+	do {                                        \
+		if (VERBOSE)                        \
+			printf(fmt, ##__VA_ARGS__); \
+	} while (0)
+
+/* 0.5 s */
+#define RUNNER_PERIOD 500000
+/* Number of runs before we terminate or get the token */
+#define THREAD_RUNS 5
+
+/*
+ * Number of times we check that the mm_cid were compacted.
+ * Checks are repeated every RUNNER_PERIOD
+ */
+#define MM_CID_CLEANUP_TIMEOUT 10
+
+struct thread_args {
+	int num_cpus;
+	pthread_mutex_t token;
+	pthread_t *tinfo;
+};
+
+static void *thread_runner(void *arg)
+{
+	struct thread_args *args = arg;
+	int i, ret, curr_mm_cid;
+
+	for (i = 0; i < THREAD_RUNS; i++)
+		usleep(RUNNER_PERIOD);
+	curr_mm_cid = rseq_current_mm_cid();
+	/*
+	 * We select one thread with high enough mm_cid to be the new leader
+	 * all other threads (including the main thread) will terminate
+	 * After some time, the mm_cid of the only remaining thread should
+	 * converge to 0, if not, the test fails
+	 */
+	if (curr_mm_cid > args->num_cpus / 2 &&
+	    !pthread_mutex_trylock(&args->token)) {
+		printf_verbose("cpu%d has %d and will be the new leader\n",
+			       sched_getcpu(), curr_mm_cid);
+		for (i = 0; i < args->num_cpus; i++) {
+			if (args->tinfo[i] == pthread_self())
+				continue;
+			ret = pthread_join(args->tinfo[i], NULL);
+			if (ret) {
+				fprintf(stderr,
+					"Error: failed to join thread %d (%d): %s\n",
+					i, ret, strerror(ret));
+				assert(ret == 0);
+			}
+		}
+		free(args->tinfo);
+
+		for (i = 0; i < MM_CID_CLEANUP_TIMEOUT; i++) {
+			curr_mm_cid = rseq_current_mm_cid();
+			printf_verbose("run %d: mm_cid %d on cpu%d\n", i,
+				       curr_mm_cid, sched_getcpu());
+			if (curr_mm_cid == 0) {
+				printf_verbose(
+					"mm_cids successfully compacted, exiting\n");
+				pthread_exit(NULL);
+			}
+			usleep(RUNNER_PERIOD);
+		}
+		assert(false);
+	}
+	printf_verbose("cpu%d has %d and is going to terminate\n",
+		       sched_getcpu(), curr_mm_cid);
+	pthread_exit(NULL);
+}
+
+void test_mm_cid_compaction(void)
+{
+	cpu_set_t affinity, test_affinity;
+	int i, j, ret, num_threads;
+	pthread_t *tinfo;
+	struct thread_args args = { .token = PTHREAD_MUTEX_INITIALIZER };
+
+	sched_getaffinity(0, sizeof(affinity), &affinity);
+	CPU_ZERO(&test_affinity);
+	num_threads = CPU_COUNT(&affinity);
+	tinfo = calloc(num_threads, sizeof(*tinfo));
+	if (!tinfo) {
+		fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
+			errno, strerror(errno));
+			assert(tinfo);
+	}
+	args.num_cpus = num_threads;
+	args.tinfo = tinfo;
+	if (num_threads == 1) {
+		printf_verbose(
+			"Running on a single cpu, cannot test anything\n");
+		return;
+	}
+	for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
+		if (CPU_ISSET(i, &affinity)) {
+			ret = pthread_create(&tinfo[j], NULL, thread_runner,
+					     &args);
+			if (ret) {
+				fprintf(stderr,
+					"Error: failed to create thread(%d): %s\n",
+					ret, strerror(ret));
+				assert(ret == 0);
+			}
+			CPU_SET(i, &test_affinity);
+			pthread_setaffinity_np(tinfo[j], sizeof(test_affinity),
+					       &test_affinity);
+			CPU_CLR(i, &test_affinity);
+			++j;
+		}
+	}
+	printf_verbose("Started %d threads\n", num_threads);
+
+	/* Also main thread will terminate if it is not selected as leader */
+	thread_runner(&args);
+}
+
+int main(int argc, char **argv)
+{
+	if (rseq_register_current_thread()) {
+		fprintf(stderr,
+			"Error: rseq_register_current_thread(...) failed(%d): %s\n",
+			errno, strerror(errno));
+		goto error;
+	}
+	if (!rseq_mm_cid_available()) {
+		fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
+		goto error;
+	}
+	test_mm_cid_compaction();
+	if (rseq_unregister_current_thread()) {
+		fprintf(stderr,
+			"Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+			errno, strerror(errno));
+		goto error;
+	}
+	return 0;
+
+error:
+	return -1;
+}
-- 
2.47.1




* Re: [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work
  2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
                   ` (3 preceding siblings ...)
  2024-12-13  9:54 ` [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
@ 2024-12-13 11:31 ` Gabriele Monaco
  4 siblings, 0 replies; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13 11:31 UTC (permalink / raw)
  To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan, linux-kselftest

[-- Attachment #1: Type: text/plain, Size: 12214 bytes --]

On Fri, 2024-12-13 at 10:54 +0100, Gabriele Monaco wrote:
> OVERHEAD COMPARISON
>
> [..]
>
> I will post another email with the scripts used to retrieve the data
> and
> more details about the runtime distribution.

This message contains the performance results produced by my scripts, which are attached.
The tracing is done via bpftrace, while a simple bash script spawns and kills the loads.

From the histograms it's easier to see the distribution of the durations and whether there are clear outliers.

TEST RESULTS ON HEAD

Running without loads on virtme-ng

@duration_max: 426
@duration_total: count 13, average 75, total 987

@durations:
[25, 30)               1 |@@@@@@@@@@@@@@@@@                                   |
[30, 35)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[35, 40)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[40, 45)               0 |                                                    |
[45, 50)               3 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[50, 55)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[55, 60)               0 |                                                    |
[60, 65)               1 |@@@@@@@@@@@@@@@@@                                   |
[65, 70)               0 |                                                    |
[70, 75)               0 |                                                    |
[75, 80)               0 |                                                    |
[80, 85)               0 |                                                    |
[85, 90)               0 |                                                    |
[90, 95)               1 |@@@@@@@@@@@@@@@@@                                   |
[95, 100)              0 |                                                    |
[100, ...)             1 |@@@@@@@@@@@@@@@@@                                   |

@processes: 12
@threads: 12

Running with cpu loads on virtme-ng

@duration_max: 2508
@duration_total: count 35948, average 20, total 742603

@durations:
[10, 15)            1889 |@@@@@                                               |
[15, 20)           17278 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[20, 25)           10742 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                    |
[25, 30)            3327 |@@@@@@@@@@                                          |
[30, 35)            2350 |@@@@@@@                                             |
[35, 40)             326 |                                                    |
[40, 45)               5 |                                                    |
[45, 50)               1 |                                                    |
[50, 55)               2 |                                                    |
[55, 60)               1 |                                                    |
[60, 65)               2 |                                                    |
[65, 70)               2 |                                                    |
[70, 75)               0 |                                                    |
[75, 80)               0 |                                                    |
[80, 85)               1 |                                                    |
[85, 90)               0 |                                                    |
[90, 95)               1 |                                                    |
[95, 100)              1 |                                                    |
[100, ...)            20 |                                                    |

@processes: 129
@threads: 129

Running with fork loads on virtme-ng

@duration_max: 41
@duration_total: count 21, average 34, total 720

@durations:
[30, 35)              12 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[35, 40)               8 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[40, 45)               1 |@@@@                                                |

@processes: 3592
@threads: 3592

Running with thread loads on virtme-ng

@duration_max: 195
@duration_total: count 1286, average 31, total 41082

@durations:
(..., 10)            326 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        |
[10, 15)              10 |@                                                   |
[15, 20)               0 |                                                    |
[20, 25)               1 |                                                    |
[25, 30)              61 |@@@@@@@@                                            |
[30, 35)             377 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[35, 40)             264 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                |
[40, 45)              65 |@@@@@@@@                                            |
[45, 50)              32 |@@@@                                                |
[50, 55)              12 |@                                                   |
[55, 60)              13 |@                                                   |
[60, 65)               7 |                                                    |
[65, 70)              10 |@                                                   |
[70, 75)              10 |@                                                   |
[75, 80)              33 |@@@@                                                |
[80, 85)              26 |@@@                                                 |
[85, 90)              13 |@                                                   |
[90, 95)               6 |                                                    |
[95, 100)              2 |                                                    |
[100, ...)            18 |@@                                                  |

@processes: 129
@threads: 4096

TEST RESULTS ON PATCH

Running without loads on virtme-ng

@duration_max: 42
@duration_total: count 20601, average 2, total 45496

@durations:
(..., 10)          20304 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[10, 15)               1 |                                                    |
[15, 20)               4 |                                                    |
[20, 25)              29 |                                                    |
[25, 30)              33 |                                                    |
[30, 35)              11 |                                                    |
[35, 40)             156 |                                                    |
[40, 45)              63 |                                                    |

@processes: 12
@threads: 12

Running with cpu loads on virtme-ng

@duration_max: 774
@duration_total: count 38612, average 7, total 281558

@durations:
(..., 10)          34607 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[10, 15)            2558 |@@@                                                 |
[15, 20)             735 |@                                                   |
[20, 25)             454 |                                                    |
[25, 30)             225 |                                                    |
[30, 35)              17 |                                                    |
[35, 40)               8 |                                                    |
[40, 45)               2 |                                                    |
[45, 50)               4 |                                                    |
[50, 55)               0 |                                                    |
[55, 60)               0 |                                                    |
[60, 65)               0 |                                                    |
[65, 70)               0 |                                                    |
[70, 75)               0 |                                                    |
[75, 80)               0 |                                                    |
[80, 85)               0 |                                                    |
[85, 90)               0 |                                                    |
[90, 95)               0 |                                                    |
[95, 100)              0 |                                                    |
[100, ...)             2 |                                                    |

@processes: 129
@threads: 129

Running with fork loads on virtme-ng

@duration_max: 457
@duration_total: count 45683, average 19, total 878511

@durations:
(..., 10)           8452 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[10, 15)            7287 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                       |
[15, 20)           12727 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[20, 25)            2942 |@@@@@@@@@@@@                                        |
[25, 30)            2975 |@@@@@@@@@@@@                                        |
[30, 35)            7305 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                       |
[35, 40)            2994 |@@@@@@@@@@@@                                        |
[40, 45)             676 |@@                                                  |
[45, 50)             180 |                                                    |
[50, 55)              57 |                                                    |
[55, 60)              19 |                                                    |
[60, 65)               6 |                                                    |
[65, 70)               4 |                                                    |
[70, 75)               2 |                                                    |
[75, 80)               5 |                                                    |
[80, 85)               6 |                                                    |
[85, 90)               4 |                                                    |
[90, 95)               5 |                                                    |
[95, 100)              2 |                                                    |
[100, ...)            34 |                                                    |

@processes: 3982
@threads: 3982

Running with thread loads on virtme-ng

@duration_max: 1046
@duration_total: count 38643, average 21, total 833034

@durations:
(..., 10)           1631 |@@@@@                                               |
[10, 15)           11027 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@              |
[15, 20)           14832 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[20, 25)            1338 |@@@@                                                |
[25, 30)            1112 |@@@                                                 |
[30, 35)            3781 |@@@@@@@@@@@@@                                       |
[35, 40)            1994 |@@@@@@                                              |
[40, 45)             464 |@                                                   |
[45, 50)             262 |                                                    |
[50, 55)             200 |                                                    |
[55, 60)             294 |@                                                   |
[60, 65)             620 |@@                                                  |
[65, 70)             256 |                                                    |
[70, 75)             119 |                                                    |
[75, 80)             232 |                                                    |
[80, 85)             220 |                                                    |
[85, 90)              55 |                                                    |
[90, 95)              30 |                                                    |
[95, 100)             19 |                                                    |
[100, ...)           157 |                                                    |

@processes: 129
@threads: 4096


-- 
  Gabriele Monaco 
 Senior Software Engineer - Kernel Real Time 
 
Red Hat 
  gmonaco@redhat.com    


[-- Attachment #2: func_benchmark.bt --]
[-- Type: text/plain, Size: 1475 bytes --]

#!/usr/bin/env bpftrace
/**
 * Print durations and invocations
 * Call this script with the duration in seconds as argument
 * e.g. bpftrace func_benchmark.bt 30
 */

//tracepoint:sched:sched_wakeup
fentry:try_to_wake_up
{
  if(args->p->mm != 0) {
    @_mms[args->p->mm] = true;
    @_processes[args->p->tgid] = true;
    @_threads[args->p->pid] = true;
  }
}

fentry:task_mm_cid_work
{
  @start[tid] = nsecs;
  @preemptions[tid] = (uint64)0;
}

fexit:task_mm_cid_work
/@start[tid]/
{
  $curr_preemption = @preempted[tid] ? @preemptions[tid] : 0;
  $duration = (nsecs - @start[tid] - $curr_preemption)/1000;
  @durations = lhist($duration, 10, 100, 5);
  @duration_total = stats($duration);
  @duration_max = max($duration);
  delete(@start[tid]);
  delete(@preemptions[tid]);
  delete(@preempted[tid]);
}

/* Support only one preemption, should be fine for non-sleeping functions */
tracepoint:sched:sched_switch
// /@start[args.prev_pid] || @start[args.next_pid]/
{
  if (@start[args.prev_pid]) {
    @preempted[args.prev_pid] = true;
    @preemptions[args.prev_pid] = nsecs;
  }
  if (@start[args.next_pid] && @preempted[args.next_pid]) {
    @preemptions[args.next_pid] = nsecs - @preemptions[args.next_pid];
  }
}

//interval:s:30
interval:s:$1
{
  exit();
}

END
{
  @mms = len(@_mms);
  @processes = len(@_processes);
  @threads = len(@_threads);
  clear(@_mms);
  clear(@_processes);
  clear(@_threads);
  clear(@start);
  clear(@preemptions);
  clear(@preempted);
}

[-- Attachment #3: runtest_mm_cid.sh --]
[-- Type: application/x-shellscript, Size: 611 bytes --]


* Re: [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete
  2024-12-13  9:54 ` [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete Gabriele Monaco
@ 2024-12-13 14:01   ` Mathieu Desnoyers
  0 siblings, 0 replies; 12+ messages in thread
From: Mathieu Desnoyers @ 2024-12-13 14:01 UTC (permalink / raw)
  To: Gabriele Monaco, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan, linux-kselftest

On 2024-12-13 04:54, Gabriele Monaco wrote:
> The checks for the scan time in task_mm_cid_work are now superfluous
> since the task runs in a delayed_work and the minimum periodicity is
> already implied.
> 
> This patch removes those checks and the field from the mm_struct.
> 
> Additionally, we include a simple check to quickly terminate the
> function if we have no work to be done (i.e. no mm_cid is allocated).
> This is helpful for tasks that sleep for a long time, but also for
> terminated task. We are no longer following the process' state, hence
> the function continues to run after a process terminates but before its
> mm is freed.

Can you fold it into patch 1/4?

Thanks,

Mathieu

> 
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
>   include/linux/mm_types.h |  7 -------
>   kernel/sched/core.c      | 19 +++----------------
>   2 files changed, 3 insertions(+), 23 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 92acb827fee4..8a76a1c09234 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -829,12 +829,6 @@ struct mm_struct {
>   		 * runqueue locks.
>   		 */
>   		struct mm_cid __percpu *pcpu_cid;
> -		/*
> -		 * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
> -		 *
> -		 * When the next mm_cid scan is due (in jiffies).
> -		 */
> -		unsigned long mm_cid_next_scan;
>   		/**
>   		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
>   		 *
> @@ -1228,7 +1222,6 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
>   		return -ENOMEM;
>   	mm_init_cid(mm, p);
>   	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
> -	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
>   	schedule_delayed_work(&mm->mm_cid_work,
>   			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
>   	return 0;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e3b27b73301c..30d78fe14eff 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -10520,28 +10520,15 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
>   
>   void task_mm_cid_work(struct work_struct *work)
>   {
> -	unsigned long now = jiffies, old_scan, next_scan;
>   	struct cpumask *cidmask;
>   	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
>   	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
>   	int weight, cpu;
>   
> -	old_scan = READ_ONCE(mm->mm_cid_next_scan);
> -	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
> -	if (!old_scan) {
> -		unsigned long res;
> -
> -		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
> -		if (res != old_scan)
> -			old_scan = res;
> -		else
> -			old_scan = next_scan;
> -	}
> -	if (time_before(now, old_scan))
> -		goto out;
> -	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
> -		goto out;
>   	cidmask = mm_cidmask(mm);
> +	/* Nothing to clear for now */
> +	if (cpumask_empty(cidmask))
> +		goto out;
>   	/* Clear cids that were not recently used. */
>   	for_each_possible_cpu(cpu)
>   		sched_mm_cid_remote_clear_old(mm, cpu);

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com




* Re: [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
  2024-12-13  9:54 ` [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
@ 2024-12-13 14:05   ` Mathieu Desnoyers
  0 siblings, 0 replies; 12+ messages in thread
From: Mathieu Desnoyers @ 2024-12-13 14:05 UTC (permalink / raw)
  To: Gabriele Monaco, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Marco Elver, Ingo Molnar

On 2024-12-13 04:54, Gabriele Monaco wrote:
> From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> 
> When a process reduces its number of threads or clears bits in its CPU
> affinity mask, the mm_cid allocation should eventually converge towards
> smaller values.

I target v6.13 for this patch. As it fixes a commit which was
recently introduced in v6.13-rc1, I would be tempted to place
this patch early in your series (first patch).

Then the more elaborate change from task work to mm delayed work
can follow, and then the added selftest.

The reason for placing your change second is that I am not sure we need
to backport it as a fix.

Thanks,

Mathieu


> 
> However, the change introduced by:
> 
> commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
> IDs for intermittent workloads")
> 
> adds a per-mm/CPU recent_cid which is never unset unless a thread
> migrates.
> 
> This is a tradeoff between:
> 
> A) Preserving cache locality after a transition from many threads to few
>     threads, or after reducing the hamming weight of the allowed CPU mask.
> 
> B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
>     easy to document and understand.
> 
> C) Allowing applications to eventually react to mm_cid compaction after
>     reduction of the nr threads or allowed CPU mask, making the tracking
>     of mm_cid compaction easier by shrinking it back towards 0 or not.
> 
> D) Making sure applications that periodically reduce and then increase
>     again the nr threads or allowed CPU mask still benefit from good
>     cache locality with mm_cid.
> 
> Introduce the following changes:
> 
> * After shrinking the number of threads or reducing the number of
>    allowed CPUs, reduce the value of max_nr_cid so expansion of CID
>    allocation will preserve cache locality if the number of threads or
>    allowed CPUs increase again.
> 
> * Only re-use a recent_cid if it is within the max_nr_cid upper bound,
>    else find the first available CID.
> 
> Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Marco Elver <elver@google.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Tested-by: Gabriele Monaco <gmonaco@redhat.com>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
>   include/linux/mm_types.h |  7 ++++---
>   kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
>   2 files changed, 26 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 8a76a1c09234..16076e70a6b9 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -837,10 +837,11 @@ struct mm_struct {
>   		 */
>   		unsigned int nr_cpus_allowed;
>   		/**
> -		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
> +		 * @max_nr_cid: Maximum number of allowed concurrency
> +		 *              IDs allocated.
>   		 *
> -		 * Track the highest number of concurrency IDs allocated for the
> -		 * mm.
> +		 * Track the highest number of allowed concurrency IDs
> +		 * allocated for the mm.
>   		 */
>   		atomic_t max_nr_cid;
>   		/**
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 21be461ff913..f3b0d1d86622 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3652,10 +3652,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
>   {
>   	struct cpumask *cidmask = mm_cidmask(mm);
>   	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
> -	int cid = __this_cpu_read(pcpu_cid->recent_cid);
> +	int cid, max_nr_cid, allowed_max_nr_cid;
>   
> +	/*
> +	 * After shrinking the number of threads or reducing the number
> +	 * of allowed cpus, reduce the value of max_nr_cid so expansion
> +	 * of cid allocation will preserve cache locality if the number
> +	 * of threads or allowed cpus increase again.
> +	 */
> +	max_nr_cid = atomic_read(&mm->max_nr_cid);
> +	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
> +					   atomic_read(&mm->mm_users))),
> +	       max_nr_cid > allowed_max_nr_cid) {
> +		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
> +		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
> +			max_nr_cid = allowed_max_nr_cid;
> +			break;
> +		}
> +	}
>   	/* Try to re-use recent cid. This improves cache locality. */
> -	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
> +	cid = __this_cpu_read(pcpu_cid->recent_cid);
> +	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
> +	    !cpumask_test_and_set_cpu(cid, cidmask))
>   		return cid;
>   	/*
>   	 * Expand cid allocation if the maximum number of concurrency
> @@ -3663,8 +3681,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
>   	 * and number of threads. Expanding cid allocation as much as
>   	 * possible improves cache locality.
>   	 */
> -	cid = atomic_read(&mm->max_nr_cid);
> +	cid = max_nr_cid;
>   	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
> +		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
>   		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
>   			continue;
>   		if (!cpumask_test_and_set_cpu(cid, cidmask))

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com




* Re: [PATCH v2 1/4] sched: Move task_mm_cid_work to mm delayed work
  2024-12-13  9:54 ` [PATCH v2 1/4] " Gabriele Monaco
@ 2024-12-13 14:14   ` Mathieu Desnoyers
  2024-12-13 15:15     ` Gabriele Monaco
  0 siblings, 1 reply; 12+ messages in thread
From: Mathieu Desnoyers @ 2024-12-13 14:14 UTC (permalink / raw)
  To: Gabriele Monaco, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan, linux-kselftest

On 2024-12-13 04:54, Gabriele Monaco wrote:
> Currently, the task_mm_cid_work function is called in a task work
> triggered by a scheduler tick. This can delay the execution of the task
> for the entire duration of the function, negatively affecting the
> response of real time tasks.
> 
> This patch runs the task_mm_cid_work in a new delayed work connected to
> the mm_struct rather than in the task context before returning to
> userspace.
> 
> This delayed work is initialised while allocating the mm and disabled
> before freeing it, its execution is no longer triggered by scheduler
> ticks but run periodically based on the defined MM_CID_SCAN_DELAY.
> 
> The main advantage of this change is that the function can be offloaded
> to a different CPU and even preempted by RT tasks.
> 
> Moreover, this new behaviour could be more predictable in some
> situations since the delayed work is always scheduled with the same
> periodicity for each mm.

This last paragraph could be clarified. AFAIR, the problem with
the preexisting approach based on the scheduler tick is with a mm
consisting of a set of periodic threads, where none happen to run
while the scheduler tick is running.

This would skip mm_cid compaction. So it's not a bug per se, because
the mm_cid allocation will just be slightly less compact than it should
be in that case.

The underlying question here is whether eventual convergence of mm_cid
towards 0 when the number of threads or the allowed CPU mask are reduced
in a mm should be guaranteed or only best effort.

If best effort, then this corner-case is not worthy of a "Fix" tag.
Otherwise, we should identify which commit it fixes and introduce a
"Fix" tag.

Thanks,

Mathieu


> 
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
>   include/linux/mm_types.h | 11 +++++++++
>   include/linux/sched.h    |  1 -
>   kernel/sched/core.c      | 51 ++++++----------------------------------
>   kernel/sched/sched.h     |  7 ------
>   4 files changed, 18 insertions(+), 52 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 7361a8f3ab68..92acb827fee4 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -856,6 +856,7 @@ struct mm_struct {
>   		 * mm nr_cpus_allowed updates.
>   		 */
>   		raw_spinlock_t cpus_allowed_lock;
> +		struct delayed_work mm_cid_work;
>   #endif
>   #ifdef CONFIG_MMU
>   		atomic_long_t pgtables_bytes;	/* size of all page tables */
> @@ -1144,11 +1145,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
>   
>   #ifdef CONFIG_SCHED_MM_CID
>   
> +#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
> +#define MM_CID_SCAN_DELAY	100			/* 100ms */
> +
>   enum mm_cid_state {
>   	MM_CID_UNSET = -1U,		/* Unset state has lazy_put flag set. */
>   	MM_CID_LAZY_PUT = (1U << 31),
>   };
>   
> +extern void task_mm_cid_work(struct work_struct *work);
> +
>   static inline bool mm_cid_is_unset(int cid)
>   {
>   	return cid == MM_CID_UNSET;
> @@ -1221,12 +1227,17 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
>   	if (!mm->pcpu_cid)
>   		return -ENOMEM;
>   	mm_init_cid(mm, p);
> +	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
> +	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
> +	schedule_delayed_work(&mm->mm_cid_work,
> +			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
>   	return 0;
>   }
>   #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
>   
>   static inline void mm_destroy_cid(struct mm_struct *mm)
>   {
> +	disable_delayed_work_sync(&mm->mm_cid_work);
>   	free_percpu(mm->pcpu_cid);
>   	mm->pcpu_cid = NULL;
>   }
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d380bffee2ef..5d141c310917 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1374,7 +1374,6 @@ struct task_struct {
>   	int				last_mm_cid;	/* Most recent cid in mm */
>   	int				migrate_from_cpu;
>   	int				mm_cid_active;	/* Whether cid bitmap is active */
> -	struct callback_head		cid_work;
>   #endif
>   
>   	struct tlbflush_unmap_batch	tlb_ubc;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c6d8232ad9ee..e3b27b73301c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4516,7 +4516,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
>   	p->wake_entry.u_flags = CSD_TYPE_TTWU;
>   	p->migration_pending = NULL;
>   #endif
> -	init_sched_mm_cid(p);
>   }
>   
>   DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
> @@ -5654,7 +5653,6 @@ void sched_tick(void)
>   		resched_latency = cpu_resched_latency(rq);
>   	calc_global_load_tick(rq);
>   	sched_core_tick(rq);
> -	task_tick_mm_cid(rq, donor);
>   	scx_tick(rq);
>   
>   	rq_unlock(rq, &rf);
> @@ -10520,22 +10518,14 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
>   	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
>   }
>   
> -static void task_mm_cid_work(struct callback_head *work)
> +void task_mm_cid_work(struct work_struct *work)
>   {
>   	unsigned long now = jiffies, old_scan, next_scan;
> -	struct task_struct *t = current;
>   	struct cpumask *cidmask;
> -	struct mm_struct *mm;
> +	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
> +	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
>   	int weight, cpu;
>   
> -	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
> -
> -	work->next = work;	/* Prevent double-add */
> -	if (t->flags & PF_EXITING)
> -		return;
> -	mm = t->mm;
> -	if (!mm)
> -		return;
>   	old_scan = READ_ONCE(mm->mm_cid_next_scan);
>   	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
>   	if (!old_scan) {
> @@ -10548,9 +10538,9 @@ static void task_mm_cid_work(struct callback_head *work)
>   			old_scan = next_scan;
>   	}
>   	if (time_before(now, old_scan))
> -		return;
> +		goto out;
>   	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
> -		return;
> +		goto out;
>   	cidmask = mm_cidmask(mm);
>   	/* Clear cids that were not recently used. */
>   	for_each_possible_cpu(cpu)
> @@ -10562,35 +10552,8 @@ static void task_mm_cid_work(struct callback_head *work)
>   	 */
>   	for_each_possible_cpu(cpu)
>   		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
> -}
> -
> -void init_sched_mm_cid(struct task_struct *t)
> -{
> -	struct mm_struct *mm = t->mm;
> -	int mm_users = 0;
> -
> -	if (mm) {
> -		mm_users = atomic_read(&mm->mm_users);
> -		if (mm_users == 1)
> -			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
> -	}
> -	t->cid_work.next = &t->cid_work;	/* Protect against double add */
> -	init_task_work(&t->cid_work, task_mm_cid_work);
> -}
> -
> -void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
> -{
> -	struct callback_head *work = &curr->cid_work;
> -	unsigned long now = jiffies;
> -
> -	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
> -	    work->next != work)
> -		return;
> -	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
> -		return;
> -
> -	/* No page allocation under rq lock */
> -	task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
> +out:
> +	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
>   }
>   
>   void sched_mm_cid_exit_signals(struct task_struct *t)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 76f5f53a645f..21be461ff913 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3581,16 +3581,11 @@ extern void sched_dynamic_update(int mode);
>   
>   #ifdef CONFIG_SCHED_MM_CID
>   
> -#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
> -#define MM_CID_SCAN_DELAY	100			/* 100ms */
> -
>   extern raw_spinlock_t cid_lock;
>   extern int use_cid_lock;
>   
>   extern void sched_mm_cid_migrate_from(struct task_struct *t);
>   extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
> -extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
> -extern void init_sched_mm_cid(struct task_struct *t);
>   
>   static inline void __mm_cid_put(struct mm_struct *mm, int cid)
>   {
> @@ -3839,8 +3834,6 @@ static inline void switch_mm_cid(struct rq *rq,
>   static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
>   static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
>   static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
> -static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
> -static inline void init_sched_mm_cid(struct task_struct *t) { }
>   #endif /* !CONFIG_SCHED_MM_CID */
>   
>   extern u64 avg_vruntime(struct cfs_rq *cfs_rq);

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com




* Re: [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction
  2024-12-13  9:54 ` [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
@ 2024-12-13 14:29   ` Mathieu Desnoyers
  2024-12-13 15:03     ` Gabriele Monaco
  0 siblings, 1 reply; 12+ messages in thread
From: Mathieu Desnoyers @ 2024-12-13 14:29 UTC (permalink / raw)
  To: Gabriele Monaco, Ingo Molnar, Peter Zijlstra, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan, linux-kselftest

On 2024-12-13 04:54, Gabriele Monaco wrote:
> A task in the kernel (task_mm_cid_work) runs somewhat periodically to
> compact the mm_cid for each process; this test tries to validate that
> it runs correctly and in a timely manner.
> 
> The test spawns 1 thread pinned to each CPU, then each thread,
> including the main one, runs in short bursts for some time. During
> this period, the mm_cids should span all numbers between 0 and nproc.
> 
> At the end of this phase, a thread with a high enough mm_cid
> (> nproc/2) is selected to be the new leader; all other threads
> terminate.
> 
> After some time, the only running thread should see 0 as its mm_cid;
> if that doesn't happen, the compaction mechanism didn't work and the
> test fails.
> 
> The test never fails if only 1 core is available, in which case we
> cannot test anything, as the only available mm_cid is 0.
> 
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
>   tools/testing/selftests/rseq/.gitignore       |   1 +
>   tools/testing/selftests/rseq/Makefile         |   2 +-
>   .../selftests/rseq/mm_cid_compaction_test.c   | 159 ++++++++++++++++++
>   3 files changed, 161 insertions(+), 1 deletion(-)
>   create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c
> 
> diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
> index 16496de5f6ce..2c89f97e4f73 100644
> --- a/tools/testing/selftests/rseq/.gitignore
> +++ b/tools/testing/selftests/rseq/.gitignore
> @@ -3,6 +3,7 @@ basic_percpu_ops_test
>   basic_percpu_ops_mm_cid_test
>   basic_test
>   basic_rseq_op_test
> +mm_cid_compaction_test
>   param_test
>   param_test_benchmark
>   param_test_compare_twice
> diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
> index 5a3432fceb58..ce1b38f46a35 100644
> --- a/tools/testing/selftests/rseq/Makefile
> +++ b/tools/testing/selftests/rseq/Makefile
> @@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
>   
>   TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
>   		param_test_benchmark param_test_compare_twice param_test_mm_cid \
> -		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
> +		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
>   
>   TEST_GEN_PROGS_EXTENDED = librseq.so
>   
> diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
> new file mode 100644
> index 000000000000..9bc7310c3cb5
> --- /dev/null
> +++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
> @@ -0,0 +1,159 @@
> +// SPDX-License-Identifier: LGPL-2.1
> +#define _GNU_SOURCE
> +#include <assert.h>
> +#include <errno.h>
> +#include <pthread.h>
> +#include <sched.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <stddef.h>
> +#include <unistd.h>
> +
> +#include "../kselftest.h"
> +#include "rseq.h"
> +
> +#define VERBOSE 0
> +#define printf_verbose(fmt, ...)                    \
> +	do {                                        \
> +		if (VERBOSE)                        \
> +			printf(fmt, ##__VA_ARGS__); \
> +	} while (0)
> +
> +/* 0.5 s */
> +#define RUNNER_PERIOD 500000
> +/* Number of runs before we terminate or get the token */
> +#define THREAD_RUNS 5
> +
> +/*
> + * Number of times we check that the mm_cid were compacted.
> + * Checks are repeated every RUNNER_PERIOD
> + */
> +#define MM_CID_CLEANUP_TIMEOUT 10
> +
> +struct thread_args {
> +	int num_cpus;
> +	pthread_mutex_t token;
> +	pthread_t *tinfo;
> +};
> +
> +static void *thread_runner(void *arg)
> +{
> +	struct thread_args *args = arg;
> +	int i, ret, curr_mm_cid;
> +
> +	for (i = 0; i < THREAD_RUNS; i++)
> +		usleep(RUNNER_PERIOD);
> +	curr_mm_cid = rseq_current_mm_cid();
> +	/*
> +	 * We select one thread with high enough mm_cid to be the new leader;
> +	 * all other threads (including the main thread) will terminate.
> +	 * After some time, the mm_cid of the only remaining thread should
> +	 * converge to 0; if not, the test fails.
> +	 */
> +	if (curr_mm_cid > args->num_cpus / 2 &&

I think we want  curr_mm_cid >= args->num_cpus / 2   here,
otherwise the case with 2 cpus would not match.

> +	    !pthread_mutex_trylock(&args->token)) {
> +		printf_verbose("cpu%d has %d and will be the new leader\n",
> +			       sched_getcpu(), curr_mm_cid);
> +		for (i = 0; i < args->num_cpus; i++) {
> +			if (args->tinfo[i] == pthread_self())
> +				continue;
> +			ret = pthread_join(args->tinfo[i], NULL);

We'd want a synchronization point to join the main thread. I'm not sure
if the main thread is joinable.

Perhaps we could try calling pthread_self() from the main thread, and
store that in the main thread struct thread_args, and use it to join
the main thread afterwards ?
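
Something along these lines, as an untested sketch (main_thread is an
assumed new field in struct thread_args):

	/* In test_mm_cid_compaction(), before spawning the workers: */
	args.main_thread = pthread_self();

	/* In the leader's join loop, compare handles with pthread_equal()
	 * rather than ==, and join the stored main thread as well: */
	if (!pthread_equal(args->main_thread, pthread_self()))
		ret = pthread_join(args->main_thread, NULL);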

> +			if (ret) {
> +				fprintf(stderr,
> +					"Error: failed to join thread %d (%d): %s\n",
> +					i, ret, strerror(ret));
> +				assert(ret == 0);
> +			}
> +		}
> +		free(args->tinfo);
> +
> +		for (i = 0; i < MM_CID_CLEANUP_TIMEOUT; i++) {
> +			curr_mm_cid = rseq_current_mm_cid();
> +			printf_verbose("run %d: mm_cid %d on cpu%d\n", i,
> +				       curr_mm_cid, sched_getcpu());
> +			if (curr_mm_cid == 0) {
> +				printf_verbose(
> +					"mm_cids successfully compacted, exiting\n");
> +				pthread_exit(NULL);
> +			}
> +			usleep(RUNNER_PERIOD);
> +		}
> +		assert(false);
> +	}
> +	printf_verbose("cpu%d has %d and is going to terminate\n",
> +		       sched_getcpu(), curr_mm_cid);
> +	pthread_exit(NULL);
> +}
> +
> +void test_mm_cid_compaction(void)
> +{
> +	cpu_set_t affinity, test_affinity;
> +	int i, j, ret, num_threads;
> +	pthread_t *tinfo;
> +	struct thread_args args = { .token = PTHREAD_MUTEX_INITIALIZER };
> +
> +	sched_getaffinity(0, sizeof(affinity), &affinity);
> +	CPU_ZERO(&test_affinity);
> +	num_threads = CPU_COUNT(&affinity);
> +	tinfo = calloc(num_threads, sizeof(*tinfo));
> +	if (!tinfo) {
> +		fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
> +			errno, strerror(errno));
> +		assert(tinfo);
> +	}
> +	args.num_cpus = num_threads;
> +	args.tinfo = tinfo;
> +	if (num_threads == 1) {
> +		printf_verbose(
> +			"Running on a single cpu, cannot test anything\n");
> +		return;
> +	}
> +	for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
> +		if (CPU_ISSET(i, &affinity)) {

Including the main thread, we end up creating nr_cpus + 1 threads.
I suspect we want to take the main thread into account here, and create
one less thread.

We could use tinfo[0] to store the main thread info.
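
Roughly, as an untested sketch reusing the existing variables:

	/* Slot 0 holds the main thread; spawn num_threads - 1 workers. */
	tinfo[0] = pthread_self();
	for (i = 0, j = 1; i < CPU_SETSIZE && j < num_threads; i++) {
		if (!CPU_ISSET(i, &affinity))
			continue;
		ret = pthread_create(&tinfo[j], NULL, thread_runner, &args);
		assert(ret == 0);	/* full error handling as before */
		j++;
	}

The leader's join loop over num_cpus entries would then cover the main
thread too, provided it skips itself with pthread_equal().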

> +			ret = pthread_create(&tinfo[j], NULL, thread_runner,
> +					     &args);
> +			if (ret) {
> +				fprintf(stderr,
> +					"Error: failed to create thread(%d): %s\n",
> +					ret, strerror(ret));
> +				assert(ret == 0);
> +			}
> +			CPU_SET(i, &test_affinity);
> +			pthread_setaffinity_np(tinfo[j], sizeof(test_affinity),
> +					       &test_affinity);

It would be better for each thread to set its own affinity when it
starts, rather than having the main thread set each created thread's
affinity while it is already running. Otherwise it's racy and
timing-dependent.

And don't forget to set the main thread's affinity.
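
As a sketch of what that could look like (untested; the per-thread CPU
number has to reach the runner somehow, here via a hypothetical wrapper
struct):

struct runner_args {
	struct thread_args *shared;
	int cpu;
};

static void *thread_runner(void *arg)
{
	struct runner_args *rargs = arg;
	cpu_set_t set;

	/* Pin ourselves first, before the timed phase starts. */
	CPU_ZERO(&set);
	CPU_SET(rargs->cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	/* ... rest of the runner unchanged, using rargs->shared ... */
	return NULL;
}

The main thread would pin itself the same way before calling
thread_runner() directly.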

Thanks,

Mathieu

> +			CPU_CLR(i, &test_affinity);
> +			++j;
> +		}
> +	}
> +	printf_verbose("Started %d threads\n", num_threads);
> +
> +	/* Also main thread will terminate if it is not selected as leader */
> +	thread_runner(&args);
> +}
> +
> +int main(int argc, char **argv)
> +{
> +	if (rseq_register_current_thread()) {
> +		fprintf(stderr,
> +			"Error: rseq_register_current_thread(...) failed(%d): %s\n",
> +			errno, strerror(errno));
> +		goto error;
> +	}
> +	if (!rseq_mm_cid_available()) {
> +		fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
> +		goto error;
> +	}
> +	test_mm_cid_compaction();
> +	if (rseq_unregister_current_thread()) {
> +		fprintf(stderr,
> +			"Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
> +			errno, strerror(errno));
> +		goto error;
> +	}
> +	return 0;
> +
> +error:
> +	return -1;
> +}

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com




* Re: [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction
  2024-12-13 14:29   ` Mathieu Desnoyers
@ 2024-12-13 15:03     ` Gabriele Monaco
  0 siblings, 0 replies; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13 15:03 UTC (permalink / raw)
  To: Mathieu Desnoyers, linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Ingo Molnar, Peter Zijlstra, Andrew Morton


On Fri, 2024-12-13 at 09:29 -0500, Mathieu Desnoyers wrote:
> On 2024-12-13 04:54, Gabriele Monaco wrote:
> > A task in the kernel (task_mm_cid_work) runs somewhat periodically
> > to compact the mm_cid for each process; this test tries to validate
> > that it runs correctly and in a timely manner.
> > 
> > +	/*
> > +	 * We select one thread with high enough mm_cid to be the new
> > +	 * leader; all other threads (including the main thread) will
> > +	 * terminate. After some time, the mm_cid of the only remaining
> > +	 * thread should converge to 0; if not, the test fails.
> > +	 */
> > +	if (curr_mm_cid > args->num_cpus / 2 &&
> 
> I think we want  curr_mm_cid >= args->num_cpus / 2   here,
> otherwise the case with 2 cpus would not match.

Right, good point.

> > +	    !pthread_mutex_trylock(&args->token)) {
> > +		printf_verbose("cpu%d has %d and will be the new leader\n",
> > +			       sched_getcpu(), curr_mm_cid);
> > +		for (i = 0; i < args->num_cpus; i++) {
> > +			if (args->tinfo[i] == pthread_self())
> > +				continue;
> > +			ret = pthread_join(args->tinfo[i], NULL);
> 
> We'd want a synchronization point to join the main thread. I'm not
> sure if the main thread is joinable.
> 
> Perhaps we could try calling pthread_self() from the main thread, and
> store that in the main thread struct thread_args, and use it to join
> the main thread afterwards ?
> > 
> > +void test_mm_cid_compaction(void)
> > +{
> > +	for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
> > +		if (CPU_ISSET(i, &affinity)) {
> 
> Including the main thread, we end up creating nr_cpus + 1 threads.
> I suspect we want to take the main thread into account here, and
> create one less thread.
> 
> We could use tinfo[0] to store the main thread info.

Good idea; that would kill two birds with one stone.
I just forgot to pass it, but it seems the main thread is perfectly
joinable (just checked), so that should work fairly easily.

> 
> > +			ret = pthread_create(&tinfo[j], NULL, thread_runner,
> > +					     &args);
> > +			if (ret) {
> > +				fprintf(stderr,
> > +					"Error: failed to create thread(%d): %s\n",
> > +					ret, strerror(ret));
> > +				assert(ret == 0);
> > +			}
> > +			CPU_SET(i, &test_affinity);
> > +			pthread_setaffinity_np(tinfo[j], sizeof(test_affinity),
> > +					       &test_affinity);
> 
> It would be better for each thread to set its own affinity when it
> starts, rather than having the main thread set each created thread's
> affinity while it is already running. Otherwise it's racy and
> timing-dependent.
> 
> And don't forget to set the main thread's affinity.

Sure, will do!

Thanks for the comments, working on V3.
Gabriele




* Re: [PATCH v2 1/4] sched: Move task_mm_cid_work to mm delayed work
  2024-12-13 14:14   ` Mathieu Desnoyers
@ 2024-12-13 15:15     ` Gabriele Monaco
  0 siblings, 0 replies; 12+ messages in thread
From: Gabriele Monaco @ 2024-12-13 15:15 UTC (permalink / raw)
  To: Mathieu Desnoyers, linux-mm, linux-kernel
  Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
	linux-kselftest, Ingo Molnar, Peter Zijlstra, Andrew Morton


On Fri, 2024-12-13 at 09:14 -0500, Mathieu Desnoyers wrote:
> On 2024-12-13 04:54, Gabriele Monaco wrote:
> > Currently, the task_mm_cid_work function is called in a task work
> > triggered by a scheduler tick. This can delay the execution of the
> > task for the entire duration of the function, negatively affecting
> > the response of real-time tasks.
> >
> > This patch runs the task_mm_cid_work in a new delayed work connected
> > to the mm_struct rather than in the task context before returning to
> > userspace.
> >
> > This delayed work is initialised while allocating the mm and disabled
> > before freeing it; its execution is no longer triggered by scheduler
> > ticks but runs periodically based on the defined MM_CID_SCAN_DELAY.
> >
> > The main advantage of this change is that the function can be
> > offloaded to a different CPU and even preempted by RT tasks.
> >
> > Moreover, this new behaviour could be more predictable in some
> > situations since the delayed work is always scheduled with the same
> > periodicity for each mm.
> 
> This last paragraph could be clarified. AFAIR, the problem with
> the preexisting approach based on the scheduler tick is with a mm
> consisting of a set of periodic threads, where none happen to run
> while the scheduler tick is running.
> 
> This would skip mm_cid compaction. So it's not a bug per se, because
> the mm_cid allocation will just be slightly less compact than it
> should be in that case.
> 
> The underlying question here is whether eventual convergence of mm_cid
> towards 0 when the number of threads or the allowed CPU mask are
> reduced in a mm should be guaranteed or only best effort.
> 
> If best effort, then this corner-case is not worthy of a "Fix" tag.
> Otherwise, we should identify which commit it fixes and introduce a
> "Fix" tag.
> 

I will definitely make it clearer, but I'm also not sure if the patch
is actually a fix for that.
I wanted to mention it rather as a nice consequence of the change. The
main purpose for us is that it solves latency issues in isolated
environments.

From that point of view, it's still /fixing/ the latency spikes
introduced by that commit, so perhaps it deserves the Fix tag anyway.

Let me know what you think about that.

I'm going to merge this patch with 2/4 and pull yours first in V3.

Thanks for the review
Gabriele




Thread overview: 12+ messages
2024-12-13  9:54 [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
2024-12-13  9:54 ` [PATCH v2 1/4] " Gabriele Monaco
2024-12-13 14:14   ` Mathieu Desnoyers
2024-12-13 15:15     ` Gabriele Monaco
2024-12-13  9:54 ` [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete Gabriele Monaco
2024-12-13 14:01   ` Mathieu Desnoyers
2024-12-13  9:54 ` [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
2024-12-13 14:05   ` Mathieu Desnoyers
2024-12-13  9:54 ` [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
2024-12-13 14:29   ` Mathieu Desnoyers
2024-12-13 15:03     ` Gabriele Monaco
2024-12-13 11:31 ` [PATCH v2 0/4] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
