* [PATCH v3 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
2024-12-16 13:09 [PATCH v3 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
@ 2024-12-16 13:09 ` Gabriele Monaco
2024-12-16 13:09 ` [PATCH v3 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
2024-12-16 13:09 ` [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
2 siblings, 0 replies; 9+ messages in thread
From: Gabriele Monaco @ 2024-12-16 13:09 UTC (permalink / raw)
To: Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Marco Elver, Gabriele Monaco
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid allocation should eventually converge towards
smaller values.
However, the change introduced by:
commit 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency
IDs for intermittent workloads")
adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.
This is a tradeoff between:
A) Preserving cache locality after a transition from many threads to few
threads, or after reducing the hamming weight of the allowed CPU mask.
B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
easy to document and understand.
C) Allowing applications to eventually react to mm_cid compaction after
reduction of the nr threads or allowed CPU mask, making the tracking
of mm_cid compaction easier by shrinking it back towards 0 or not.
D) Making sure applications that periodically reduce and then increase
again the nr threads or allowed CPU mask still benefit from good
cache locality with mm_cid.
Introduce the following changes:
* After shrinking the number of threads or reducing the number of
allowed CPUs, reduce the value of max_nr_cid so expansion of CID
allocation will preserve cache locality if the number of threads or
allowed CPUs increases again.
* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
else find the first available CID.
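As an illustration (not part of the patch), the clamping step can be
modelled in userspace with C11 atomics; clamp_max_nr_cid() is a made-up
name and atomic_compare_exchange_weak() stands in for the kernel's
atomic_try_cmpxchg():
    #include <stdatomic.h>
    /* Clamp *max_nr_cid down to min(nr_cpus_allowed, mm_users). */
    static int clamp_max_nr_cid(atomic_int *max_nr_cid,
                                int nr_cpus_allowed, int mm_users)
    {
            int max = atomic_load(max_nr_cid);
            int allowed = nr_cpus_allowed < mm_users ?
                          nr_cpus_allowed : mm_users;
            while (max > allowed) {
                    /* On failure, the current value is reloaded into max. */
                    if (atomic_compare_exchange_weak(max_nr_cid, &max, allowed))
                            return allowed;
            }
            return max;
    }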
Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Marco Elver <elver@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Tested-by: Gabriele Monaco <gmonaco@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/mm_types.h | 7 ++++---
kernel/sched/sched.h | 25 ++++++++++++++++++++++---
2 files changed, 26 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7361a8f3ab68..d56948a74254 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -843,10 +843,11 @@ struct mm_struct {
*/
unsigned int nr_cpus_allowed;
/**
- * @max_nr_cid: Maximum number of concurrency IDs allocated.
+ * @max_nr_cid: Maximum number of allowed concurrency
+ * IDs allocated.
*
- * Track the highest number of concurrency IDs allocated for the
- * mm.
+ * Track the highest number of allowed concurrency IDs
+ * allocated for the mm.
*/
atomic_t max_nr_cid;
/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 76f5f53a645f..b50dcd908702 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3657,10 +3657,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
{
struct cpumask *cidmask = mm_cidmask(mm);
struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
- int cid = __this_cpu_read(pcpu_cid->recent_cid);
+ int cid, max_nr_cid, allowed_max_nr_cid;
+ /*
+ * After shrinking the number of threads or reducing the number
+ * of allowed cpus, reduce the value of max_nr_cid so expansion
+ * of cid allocation will preserve cache locality if the number
+ * of threads or allowed cpus increase again.
+ */
+ max_nr_cid = atomic_read(&mm->max_nr_cid);
+ while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+ atomic_read(&mm->mm_users))),
+ max_nr_cid > allowed_max_nr_cid) {
+ /* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+ if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+ max_nr_cid = allowed_max_nr_cid;
+ break;
+ }
+ }
/* Try to re-use recent cid. This improves cache locality. */
- if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+ cid = __this_cpu_read(pcpu_cid->recent_cid);
+ if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+ !cpumask_test_and_set_cpu(cid, cidmask))
return cid;
/*
* Expand cid allocation if the maximum number of concurrency
@@ -3668,8 +3686,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
* and number of threads. Expanding cid allocation as much as
* possible improves cache locality.
*/
- cid = atomic_read(&mm->max_nr_cid);
+ cid = max_nr_cid;
while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+ /* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
continue;
if (!cpumask_test_and_set_cpu(cid, cidmask))
--
2.47.1
* [PATCH v3 2/3] sched: Move task_mm_cid_work to mm delayed work
2024-12-16 13:09 [PATCH v3 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
2024-12-16 13:09 ` [PATCH v3 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
@ 2024-12-16 13:09 ` Gabriele Monaco
2024-12-24 16:03 ` Mathieu Desnoyers
2024-12-16 13:09 ` [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
2 siblings, 1 reply; 9+ messages in thread
From: Gabriele Monaco @ 2024-12-16 13:09 UTC (permalink / raw)
To: Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Gabriele Monaco, Andrew Morton
Currently, the task_mm_cid_work function is called in a task work
triggered by a scheduler tick to frequently compact the mm_cids of each
process. This can delay the execution of the corresponding thread for
the entire duration of the function, negatively affecting response
times for real-time tasks. In practice, we observe task_mm_cid_work
increasing latency by 30-35us on a 128-core system; this order of
magnitude is meaningful under PREEMPT_RT.
This patch runs task_mm_cid_work in a new delayed work connected to
the mm_struct rather than in the task context before returning to
userspace.
This delayed work is initialised while allocating the mm and disabled
before freeing it; its execution is no longer triggered by scheduler
ticks but runs periodically based on the defined MM_CID_SCAN_DELAY.
The main advantage of this change is that the function can be offloaded
to a different CPU and even preempted by RT tasks.
Moreover, this new behaviour is more predictable for periodic tasks
with short runtimes, which may rarely run during a scheduler tick.
Now, the work is always scheduled with the same period for each mm
(the period is not strictly guaranteed due to interference from other
tasks, but mm_cid compaction is mostly best effort anyway).
To avoid excessively increasing the runtime, we return quickly from
the function if there is no work to be done (i.e. no mm_cid is
allocated). This is helpful for tasks that sleep for a long time, but
also for terminated tasks: we are no longer following the process'
state, hence the function continues to run after a process terminates
but before its mm is freed.
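For reference, the resulting lifecycle is sketched below. This is
illustrative only; it uses to_delayed_work(), the standard workqueue
helper equivalent to the open-coded container_of() in the patch:
    static void task_mm_cid_work(struct work_struct *work)
    {
            struct delayed_work *dw = to_delayed_work(work);
            struct mm_struct *mm = container_of(dw, struct mm_struct,
                                                mm_cid_work);
            /* ... compact the mm_cids of this mm ... */
            /* Re-arm: the scan repeats every MM_CID_SCAN_DELAY ms. */
            schedule_delayed_work(dw, msecs_to_jiffies(MM_CID_SCAN_DELAY));
    }
    /* On mm allocation: */
    INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
    schedule_delayed_work(&mm->mm_cid_work,
                          msecs_to_jiffies(MM_CID_SCAN_DELAY));
    /* On mm teardown: */
    disable_delayed_work_sync(&mm->mm_cid_work);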
Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/mm_types.h | 16 ++++++----
include/linux/sched.h | 1 -
kernel/sched/core.c | 66 +++++-----------------------------------
kernel/sched/sched.h | 7 -----
4 files changed, 18 insertions(+), 72 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d56948a74254..16076e70a6b9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -829,12 +829,6 @@ struct mm_struct {
* runqueue locks.
*/
struct mm_cid __percpu *pcpu_cid;
- /*
- * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
- *
- * When the next mm_cid scan is due (in jiffies).
- */
- unsigned long mm_cid_next_scan;
/**
* @nr_cpus_allowed: Number of CPUs allowed for mm.
*
@@ -857,6 +851,7 @@ struct mm_struct {
* mm nr_cpus_allowed updates.
*/
raw_spinlock_t cpus_allowed_lock;
+ struct delayed_work mm_cid_work;
#endif
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* size of all page tables */
@@ -1145,11 +1140,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
#ifdef CONFIG_SCHED_MM_CID
+#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
+#define MM_CID_SCAN_DELAY 100 /* 100ms */
+
enum mm_cid_state {
MM_CID_UNSET = -1U, /* Unset state has lazy_put flag set. */
MM_CID_LAZY_PUT = (1U << 31),
};
+extern void task_mm_cid_work(struct work_struct *work);
+
static inline bool mm_cid_is_unset(int cid)
{
return cid == MM_CID_UNSET;
@@ -1222,12 +1222,16 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
if (!mm->pcpu_cid)
return -ENOMEM;
mm_init_cid(mm, p);
+ INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
+ schedule_delayed_work(&mm->mm_cid_work,
+ msecs_to_jiffies(MM_CID_SCAN_DELAY));
return 0;
}
#define mm_alloc_cid(...) alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
static inline void mm_destroy_cid(struct mm_struct *mm)
{
+ disable_delayed_work_sync(&mm->mm_cid_work);
free_percpu(mm->pcpu_cid);
mm->pcpu_cid = NULL;
}
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d380bffee2ef..5d141c310917 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1374,7 +1374,6 @@ struct task_struct {
int last_mm_cid; /* Most recent cid in mm */
int migrate_from_cpu;
int mm_cid_active; /* Whether cid bitmap is active */
- struct callback_head cid_work;
#endif
struct tlbflush_unmap_batch tlb_ubc;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c6d8232ad9ee..30d78fe14eff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4516,7 +4516,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
p->wake_entry.u_flags = CSD_TYPE_TTWU;
p->migration_pending = NULL;
#endif
- init_sched_mm_cid(p);
}
DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
@@ -5654,7 +5653,6 @@ void sched_tick(void)
resched_latency = cpu_resched_latency(rq);
calc_global_load_tick(rq);
sched_core_tick(rq);
- task_tick_mm_cid(rq, donor);
scx_tick(rq);
rq_unlock(rq, &rf);
@@ -10520,38 +10518,17 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
}
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
{
- unsigned long now = jiffies, old_scan, next_scan;
- struct task_struct *t = current;
struct cpumask *cidmask;
- struct mm_struct *mm;
+ struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
+ struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
int weight, cpu;
- SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
- work->next = work; /* Prevent double-add */
- if (t->flags & PF_EXITING)
- return;
- mm = t->mm;
- if (!mm)
- return;
- old_scan = READ_ONCE(mm->mm_cid_next_scan);
- next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
- if (!old_scan) {
- unsigned long res;
-
- res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
- if (res != old_scan)
- old_scan = res;
- else
- old_scan = next_scan;
- }
- if (time_before(now, old_scan))
- return;
- if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
- return;
cidmask = mm_cidmask(mm);
+ /* Nothing to clear for now */
+ if (cpumask_empty(cidmask))
+ goto out;
/* Clear cids that were not recently used. */
for_each_possible_cpu(cpu)
sched_mm_cid_remote_clear_old(mm, cpu);
@@ -10562,35 +10539,8 @@ static void task_mm_cid_work(struct callback_head *work)
*/
for_each_possible_cpu(cpu)
sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-}
-
-void init_sched_mm_cid(struct task_struct *t)
-{
- struct mm_struct *mm = t->mm;
- int mm_users = 0;
-
- if (mm) {
- mm_users = atomic_read(&mm->mm_users);
- if (mm_users == 1)
- mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
- }
- t->cid_work.next = &t->cid_work; /* Protect against double add */
- init_task_work(&t->cid_work, task_mm_cid_work);
-}
-
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-{
- struct callback_head *work = &curr->cid_work;
- unsigned long now = jiffies;
-
- if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
- work->next != work)
- return;
- if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
- return;
-
- /* No page allocation under rq lock */
- task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
+out:
+ schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
}
void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b50dcd908702..f3b0d1d86622 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3581,16 +3581,11 @@ extern void sched_dynamic_update(int mode);
#ifdef CONFIG_SCHED_MM_CID
-#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
-#define MM_CID_SCAN_DELAY 100 /* 100ms */
-
extern raw_spinlock_t cid_lock;
extern int use_cid_lock;
extern void sched_mm_cid_migrate_from(struct task_struct *t);
extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-extern void init_sched_mm_cid(struct task_struct *t);
static inline void __mm_cid_put(struct mm_struct *mm, int cid)
{
@@ -3858,8 +3853,6 @@ static inline void switch_mm_cid(struct rq *rq,
static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-static inline void init_sched_mm_cid(struct task_struct *t) { }
#endif /* !CONFIG_SCHED_MM_CID */
extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
--
2.47.1
* Re: [PATCH v3 2/3] sched: Move task_mm_cid_work to mm delayed work
2024-12-16 13:09 ` [PATCH v3 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
@ 2024-12-24 16:03 ` Mathieu Desnoyers
0 siblings, 0 replies; 9+ messages in thread
From: Mathieu Desnoyers @ 2024-12-24 16:03 UTC (permalink / raw)
To: Gabriele Monaco, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Andrew Morton
On 2024-12-16 08:09, Gabriele Monaco wrote:
> Currently, the task_mm_cid_work function is called in a task work
> triggered by a scheduler tick to frequently compact the mm_cids of each
> process. This can delay the execution of the corresponding thread for
> the entire duration of the function, negatively affecting response
> times for real-time tasks. In practice, we observe task_mm_cid_work
> increasing latency by 30-35us on a 128-core system; this order of
> magnitude is meaningful under PREEMPT_RT.
>
> This patch runs task_mm_cid_work in a new delayed work connected to
> the mm_struct rather than in the task context before returning to
> userspace.
>
> This delayed work is initialised while allocating the mm and disabled
> before freeing it; its execution is no longer triggered by scheduler
> ticks but runs periodically based on the defined MM_CID_SCAN_DELAY.
>
> The main advantage of this change is that the function can be offloaded
> to a different CPU and even preempted by RT tasks.
>
> Moreover, this new behaviour is more predictable for periodic tasks
> with short runtimes, which may rarely run during a scheduler tick.
> Now, the work is always scheduled with the same period for each mm
> (the period is not strictly guaranteed due to interference from other
> tasks, but mm_cid compaction is mostly best effort anyway).
>
> To avoid excessively increasing the runtime, we return quickly from
> the function if there is no work to be done (i.e. no mm_cid is
> allocated). This is helpful for tasks that sleep for a long time, but
> also for terminated tasks: we are no longer following the process'
> state, hence the function continues to run after a process terminates
> but before its mm is freed.
>
> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
> To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/mm_types.h | 16 ++++++----
> include/linux/sched.h | 1 -
> kernel/sched/core.c | 66 +++++-----------------------------------
> kernel/sched/sched.h | 7 -----
> 4 files changed, 18 insertions(+), 72 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index d56948a74254..16076e70a6b9 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -829,12 +829,6 @@ struct mm_struct {
> * runqueue locks.
> */
> struct mm_cid __percpu *pcpu_cid;
> - /*
> - * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
> - *
> - * When the next mm_cid scan is due (in jiffies).
> - */
> - unsigned long mm_cid_next_scan;
> /**
> * @nr_cpus_allowed: Number of CPUs allowed for mm.
> *
> @@ -857,6 +851,7 @@ struct mm_struct {
> * mm nr_cpus_allowed updates.
> */
> raw_spinlock_t cpus_allowed_lock;
> + struct delayed_work mm_cid_work;
> #endif
> #ifdef CONFIG_MMU
> atomic_long_t pgtables_bytes; /* size of all page tables */
> @@ -1145,11 +1140,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
>
> #ifdef CONFIG_SCHED_MM_CID
>
> +#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
> +#define MM_CID_SCAN_DELAY 100 /* 100ms */
> +
> enum mm_cid_state {
> MM_CID_UNSET = -1U, /* Unset state has lazy_put flag set. */
> MM_CID_LAZY_PUT = (1U << 31),
> };
>
> +extern void task_mm_cid_work(struct work_struct *work);
> +
> static inline bool mm_cid_is_unset(int cid)
> {
> return cid == MM_CID_UNSET;
> @@ -1222,12 +1222,16 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
> if (!mm->pcpu_cid)
> return -ENOMEM;
> mm_init_cid(mm, p);
> + INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
> + schedule_delayed_work(&mm->mm_cid_work,
> + msecs_to_jiffies(MM_CID_SCAN_DELAY));
> return 0;
> }
> #define mm_alloc_cid(...) alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
>
> static inline void mm_destroy_cid(struct mm_struct *mm)
> {
> + disable_delayed_work_sync(&mm->mm_cid_work);
> free_percpu(mm->pcpu_cid);
> mm->pcpu_cid = NULL;
> }
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d380bffee2ef..5d141c310917 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1374,7 +1374,6 @@ struct task_struct {
> int last_mm_cid; /* Most recent cid in mm */
> int migrate_from_cpu;
> int mm_cid_active; /* Whether cid bitmap is active */
> - struct callback_head cid_work;
> #endif
>
> struct tlbflush_unmap_batch tlb_ubc;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c6d8232ad9ee..30d78fe14eff 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4516,7 +4516,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> p->wake_entry.u_flags = CSD_TYPE_TTWU;
> p->migration_pending = NULL;
> #endif
> - init_sched_mm_cid(p);
> }
>
> DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
> @@ -5654,7 +5653,6 @@ void sched_tick(void)
> resched_latency = cpu_resched_latency(rq);
> calc_global_load_tick(rq);
> sched_core_tick(rq);
> - task_tick_mm_cid(rq, donor);
> scx_tick(rq);
>
> rq_unlock(rq, &rf);
> @@ -10520,38 +10518,17 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
> sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
> }
>
> -static void task_mm_cid_work(struct callback_head *work)
> +void task_mm_cid_work(struct work_struct *work)
> {
> - unsigned long now = jiffies, old_scan, next_scan;
> - struct task_struct *t = current;
> struct cpumask *cidmask;
> - struct mm_struct *mm;
> + struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
> + struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
> int weight, cpu;
>
> - SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
> -
> - work->next = work; /* Prevent double-add */
> - if (t->flags & PF_EXITING)
> - return;
> - mm = t->mm;
> - if (!mm)
> - return;
> - old_scan = READ_ONCE(mm->mm_cid_next_scan);
> - next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
> - if (!old_scan) {
> - unsigned long res;
> -
> - res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
> - if (res != old_scan)
> - old_scan = res;
> - else
> - old_scan = next_scan;
> - }
> - if (time_before(now, old_scan))
> - return;
> - if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
> - return;
> cidmask = mm_cidmask(mm);
> + /* Nothing to clear for now */
> + if (cpumask_empty(cidmask))
> + goto out;
> /* Clear cids that were not recently used. */
> for_each_possible_cpu(cpu)
> sched_mm_cid_remote_clear_old(mm, cpu);
> @@ -10562,35 +10539,8 @@ static void task_mm_cid_work(struct callback_head *work)
> */
> for_each_possible_cpu(cpu)
> sched_mm_cid_remote_clear_weight(mm, cpu, weight);
> -}
> -
> -void init_sched_mm_cid(struct task_struct *t)
> -{
> - struct mm_struct *mm = t->mm;
> - int mm_users = 0;
> -
> - if (mm) {
> - mm_users = atomic_read(&mm->mm_users);
> - if (mm_users == 1)
> - mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
> - }
> - t->cid_work.next = &t->cid_work; /* Protect against double add */
> - init_task_work(&t->cid_work, task_mm_cid_work);
> -}
> -
> -void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
> -{
> - struct callback_head *work = &curr->cid_work;
> - unsigned long now = jiffies;
> -
> - if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
> - work->next != work)
> - return;
> - if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
> - return;
> -
> - /* No page allocation under rq lock */
> - task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
> +out:
> + schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
> }
>
> void sched_mm_cid_exit_signals(struct task_struct *t)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index b50dcd908702..f3b0d1d86622 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3581,16 +3581,11 @@ extern void sched_dynamic_update(int mode);
>
> #ifdef CONFIG_SCHED_MM_CID
>
> -#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
> -#define MM_CID_SCAN_DELAY 100 /* 100ms */
> -
> extern raw_spinlock_t cid_lock;
> extern int use_cid_lock;
>
> extern void sched_mm_cid_migrate_from(struct task_struct *t);
> extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
> -extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
> -extern void init_sched_mm_cid(struct task_struct *t);
>
> static inline void __mm_cid_put(struct mm_struct *mm, int cid)
> {
> @@ -3858,8 +3853,6 @@ static inline void switch_mm_cid(struct rq *rq,
> static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
> static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
> static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
> -static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
> -static inline void init_sched_mm_cid(struct task_struct *t) { }
> #endif /* !CONFIG_SCHED_MM_CID */
>
> extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction
2024-12-16 13:09 [PATCH v3 0/3] sched: Restructure task_mm_cid_work for predictability Gabriele Monaco
2024-12-16 13:09 ` [PATCH v3 1/3] sched: Compact RSEQ concurrency IDs with reduced threads and affinity Gabriele Monaco
2024-12-16 13:09 ` [PATCH v3 2/3] sched: Move task_mm_cid_work to mm delayed work Gabriele Monaco
@ 2024-12-16 13:09 ` Gabriele Monaco
2024-12-24 16:20 ` Mathieu Desnoyers
2 siblings, 1 reply; 9+ messages in thread
From: Gabriele Monaco @ 2024-12-16 13:09 UTC (permalink / raw)
To: Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Gabriele Monaco, Shuah Khan
A task in the kernel (task_mm_cid_work) runs somewhat periodically to
compact the mm_cid for each process; this test tries to validate that
it runs correctly and in a timely manner.
The test spawns 1 thread pinned to each CPU, then each thread,
including the main one, runs in short bursts for some time. During
this period, the mm_cids should span all numbers between 0 and
nproc-1.
At the end of this phase, a thread with a high enough mm_cid
(>= nproc/2) is selected to be the new leader; all other threads
terminate.
After some time, the only running thread should see 0 as its mm_cid;
if that doesn't happen, the compaction mechanism didn't work and the
test fails.
The test never fails if only 1 core is available, in which case we
cannot test anything since the only available mm_cid is 0.
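Each thread observes its concurrency ID through the selftests' rseq
helper, roughly as in this sketch (rseq_current_mm_cid() reads the
mm_cid field of the registered struct rseq):
    int cid = rseq_current_mm_cid();
    printf("thread on cpu%d sees mm_cid=%d\n", sched_getcpu(), cid);
Compaction is considered successful when the last remaining thread
reads 0 here. The test builds together with the other rseq selftests,
e.g. via make -C tools/testing/selftests/rseq.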
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
tools/testing/selftests/rseq/.gitignore | 1 +
tools/testing/selftests/rseq/Makefile | 2 +-
.../selftests/rseq/mm_cid_compaction_test.c | 190 ++++++++++++++++++
3 files changed, 192 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c
diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 16496de5f6ce..2c89f97e4f73 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -3,6 +3,7 @@ basic_percpu_ops_test
basic_percpu_ops_mm_cid_test
basic_test
basic_rseq_op_test
+mm_cid_compaction_test
param_test
param_test_benchmark
param_test_compare_twice
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 5a3432fceb58..ce1b38f46a35 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
param_test_benchmark param_test_compare_twice param_test_mm_cid \
- param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
+ param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
TEST_GEN_PROGS_EXTENDED = librseq.so
diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
new file mode 100644
index 000000000000..e5557b38a4e9
--- /dev/null
+++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: LGPL-2.1
+#define _GNU_SOURCE
+#include <assert.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stddef.h>
+
+#include "../kselftest.h"
+#include "rseq.h"
+
+#define VERBOSE 0
+#define printf_verbose(fmt, ...) \
+ do { \
+ if (VERBOSE) \
+ printf(fmt, ##__VA_ARGS__); \
+ } while (0)
+
+/* 0.5 s */
+#define RUNNER_PERIOD 500000
+/* Number of runs before we terminate or get the token */
+#define THREAD_RUNS 5
+
+/*
+ * Number of times we check that the mm_cid were compacted.
+ * Checks are repeated every RUNNER_PERIOD
+ */
+#define MM_CID_COMPACT_TIMEOUT 10
+
+struct thread_args {
+ int cpu;
+ int num_cpus;
+ pthread_mutex_t *token;
+ pthread_t *tinfo;
+ struct thread_args *args_head;
+};
+
+static void *thread_runner(void *arg)
+{
+ struct thread_args *args = arg;
+ int i, ret, curr_mm_cid;
+ cpu_set_t affinity;
+
+ CPU_ZERO(&affinity);
+ CPU_SET(args->cpu, &affinity);
+ ret = pthread_setaffinity_np(pthread_self(), sizeof(affinity), &affinity);
+ if (ret) {
+ fprintf(stderr,
+ "Error: failed to set affinity to thread %d (%d): %s\n",
+ args->cpu, ret, strerror(ret));
+ assert(ret == 0);
+ }
+ for (i = 0; i < THREAD_RUNS; i++)
+ usleep(RUNNER_PERIOD);
+ curr_mm_cid = rseq_current_mm_cid();
+ /*
+ * We select one thread with high enough mm_cid to be the new leader
+ * all other threads (including the main thread) will terminate
+ * After some time, the mm_cid of the only remaining thread should
+ * converge to 0, if not, the test fails
+ */
+ if (curr_mm_cid >= args->num_cpus / 2 &&
+ !pthread_mutex_trylock(args->token)) {
+ printf_verbose("cpu%d has %d and will be the new leader\n",
+ sched_getcpu(), curr_mm_cid);
+ for (i = 0; i < args->num_cpus; i++) {
+ if (args->tinfo[i] == pthread_self())
+ continue;
+ ret = pthread_join(args->tinfo[i], NULL);
+ if (ret) {
+ fprintf(stderr,
+ "Error: failed to join thread %d (%d): %s\n",
+ i, ret, strerror(ret));
+ assert(ret == 0);
+ }
+ }
+ free(args->tinfo);
+ free(args->token);
+ free(args->args_head);
+
+ for (i = 0; i < MM_CID_COMPACT_TIMEOUT; i++) {
+ curr_mm_cid = rseq_current_mm_cid();
+ printf_verbose("run %d: mm_cid %d on cpu%d\n", i,
+ curr_mm_cid, sched_getcpu());
+ if (curr_mm_cid == 0) {
+ printf_verbose(
+ "mm_cids successfully compacted, exiting\n");
+ pthread_exit(NULL);
+ }
+ usleep(RUNNER_PERIOD);
+ }
+ assert(false);
+ }
+ printf_verbose("cpu%d has %d and is going to terminate\n",
+ sched_getcpu(), curr_mm_cid);
+ pthread_exit(NULL);
+}
+
+void test_mm_cid_compaction(void)
+{
+ cpu_set_t affinity;
+ int i, j, ret, num_threads;
+ pthread_t *tinfo;
+ pthread_mutex_t *token;
+ struct thread_args *args;
+
+ sched_getaffinity(0, sizeof(affinity), &affinity);
+ num_threads = CPU_COUNT(&affinity);
+ tinfo = calloc(num_threads, sizeof(*tinfo));
+ if (!tinfo) {
+ fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
+ errno, strerror(errno));
+ assert(tinfo);
+ }
+ args = calloc(num_threads, sizeof(*args));
+ if (!args) {
+ fprintf(stderr, "Error: failed to allocate args(%d): %s\n",
+ errno, strerror(errno));
+ assert(args);
+ }
+ token = calloc(num_threads, sizeof(*token));
+ if (!token) {
+ fprintf(stderr, "Error: failed to allocate token(%d): %s\n",
+ errno, strerror(errno));
+ assert(token);
+ }
+ if (num_threads == 1) {
+ printf_verbose(
+ "Running on a single cpu, cannot test anything\n");
+ return;
+ }
+ pthread_mutex_init(token, NULL);
+ /* The main thread runs on CPU0 */
+ for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
+ if (CPU_ISSET(i, &affinity)) {
+ args[j].num_cpus = num_threads;
+ args[j].tinfo = tinfo;
+ args[j].token = token;
+ args[j].cpu = i;
+ args[j].args_head = args;
+ if (!j) {
+ /* The first thread is the main one */
+ tinfo[0] = pthread_self();
+ ++j;
+ continue;
+ }
+ ret = pthread_create(&tinfo[j], NULL, thread_runner,
+ &args[j]);
+ if (ret) {
+ fprintf(stderr,
+ "Error: failed to create thread(%d): %s\n",
+ ret, strerror(ret));
+ assert(ret == 0);
+ }
+ ++j;
+ }
+ }
+ printf_verbose("Started %d threads\n", num_threads);
+
+ /* Also main thread will terminate if it is not selected as leader */
+ thread_runner(&args[0]);
+}
+
+int main(int argc, char **argv)
+{
+ if (rseq_register_current_thread()) {
+ fprintf(stderr,
+ "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ goto error;
+ }
+ if (!rseq_mm_cid_available()) {
+ fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
+ goto error;
+ }
+ test_mm_cid_compaction();
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr,
+ "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ goto error;
+ }
+ return 0;
+
+error:
+ return -1;
+}
--
2.47.1
* Re: [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction
2024-12-16 13:09 ` [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction Gabriele Monaco
@ 2024-12-24 16:20 ` Mathieu Desnoyers
2024-12-26 9:04 ` Gabriele Monaco
0 siblings, 1 reply; 9+ messages in thread
From: Mathieu Desnoyers @ 2024-12-24 16:20 UTC (permalink / raw)
To: Gabriele Monaco, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Shuah Khan
On 2024-12-16 08:09, Gabriele Monaco wrote:
> A task in the kernel (task_mm_cid_work) runs somewhat periodically to
> compact the mm_cid for each process; this test tries to validate that
> it runs correctly and in a timely manner.
>
> The test spawns 1 thread pinned to each CPU, then each thread,
> including the main one, runs in short bursts for some time. During
> this period, the mm_cids should span all numbers between 0 and
> nproc-1.
>
> At the end of this phase, a thread with a high enough mm_cid
> (>= nproc/2) is selected to be the new leader; all other threads
> terminate.
>
> After some time, the only running thread should see 0 as its mm_cid;
> if that doesn't happen, the compaction mechanism didn't work and the
> test fails.
>
> The test never fails if only 1 core is available, in which case we
> cannot test anything since the only available mm_cid is 0.
>
> To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: Shuah Khan <shuah@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> tools/testing/selftests/rseq/.gitignore | 1 +
> tools/testing/selftests/rseq/Makefile | 2 +-
> .../selftests/rseq/mm_cid_compaction_test.c | 190 ++++++++++++++++++
> 3 files changed, 192 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c
>
> diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
> index 16496de5f6ce..2c89f97e4f73 100644
> --- a/tools/testing/selftests/rseq/.gitignore
> +++ b/tools/testing/selftests/rseq/.gitignore
> @@ -3,6 +3,7 @@ basic_percpu_ops_test
> basic_percpu_ops_mm_cid_test
> basic_test
> basic_rseq_op_test
> +mm_cid_compaction_test
> param_test
> param_test_benchmark
> param_test_compare_twice
> diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
> index 5a3432fceb58..ce1b38f46a35 100644
> --- a/tools/testing/selftests/rseq/Makefile
> +++ b/tools/testing/selftests/rseq/Makefile
> @@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
>
> TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
> param_test_benchmark param_test_compare_twice param_test_mm_cid \
> - param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
> + param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
>
> TEST_GEN_PROGS_EXTENDED = librseq.so
>
> diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
> new file mode 100644
> index 000000000000..e5557b38a4e9
> --- /dev/null
> +++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: LGPL-2.1
> +#define _GNU_SOURCE
> +#include <assert.h>
> +#include <pthread.h>
> +#include <sched.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <stddef.h>
> +
> +#include "../kselftest.h"
> +#include "rseq.h"
> +
> +#define VERBOSE 0
> +#define printf_verbose(fmt, ...) \
> + do { \
> + if (VERBOSE) \
> + printf(fmt, ##__VA_ARGS__); \
> + } while (0)
> +
> +/* 0.5 s */
> +#define RUNNER_PERIOD 500000
> +/* Number of runs before we terminate or get the token */
> +#define THREAD_RUNS 5
> +
> +/*
> + * Number of times we check that the mm_cid were compacted.
> + * Checks are repeated every RUNNER_PERIOD
Minor style issue: missing period.
> + */
> +#define MM_CID_COMPACT_TIMEOUT 10
> +
> +struct thread_args {
> + int cpu;
> + int num_cpus;
> + pthread_mutex_t *token;
> + pthread_t *tinfo;
> + struct thread_args *args_head;
> +};
> +
> +static void *thread_runner(void *arg)
> +{
> + struct thread_args *args = arg;
> + int i, ret, curr_mm_cid;
> + cpu_set_t affinity;
> +
> + CPU_ZERO(&affinity);
> + CPU_SET(args->cpu, &affinity);
> + ret = pthread_setaffinity_np(pthread_self(), sizeof(affinity), &affinity);
> + if (ret) {
> + fprintf(stderr,
> + "Error: failed to set affinity to thread %d (%d): %s\n",
> + args->cpu, ret, strerror(ret));
> + assert(ret == 0);
> + }
> + for (i = 0; i < THREAD_RUNS; i++)
> + usleep(RUNNER_PERIOD);
> + curr_mm_cid = rseq_current_mm_cid();
> + /*
> + * We select one thread with high enough mm_cid to be the new leader
> + * all other threads (including the main thread) will terminate
^ missing period.
> + * After some time, the mm_cid of the only remaining thread should
> + * converge to 0, if not, the test fails
^ missing period.
> + */
> + if (curr_mm_cid >= args->num_cpus / 2 &&
> + !pthread_mutex_trylock(args->token)) {
> + printf_verbose("cpu%d has %d and will be the new leader\n",
has mm_cid=%d ?
> + sched_getcpu(), curr_mm_cid);
> + for (i = 0; i < args->num_cpus; i++) {
> + if (args->tinfo[i] == pthread_self())
> + continue;
> + ret = pthread_join(args->tinfo[i], NULL);
> + if (ret) {
> + fprintf(stderr,
> + "Error: failed to join thread %d (%d): %s\n",
> + i, ret, strerror(ret));
> + assert(ret == 0);
> + }
> + }
> + free(args->tinfo);
> + free(args->token);
> + free(args->args_head);
> +
> + for (i = 0; i < MM_CID_COMPACT_TIMEOUT; i++) {
> + curr_mm_cid = rseq_current_mm_cid();
> + printf_verbose("run %d: mm_cid %d on cpu%d\n", i,
mm_cid=%d (if we want consistent output)
> + curr_mm_cid, sched_getcpu());
> + if (curr_mm_cid == 0) {
> + printf_verbose(
> + "mm_cids successfully compacted, exiting\n");
> + pthread_exit(NULL);
> + }
> + usleep(RUNNER_PERIOD);
> + }
> + assert(false);
I suspect we'd want an explicit error message here
with an abort() rather than an assertion which can be
compiled-out with -DNDEBUG.
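Something along these lines (untested sketch):
    fprintf(stderr, "Error: mm_cid not compacted to 0 after %d checks\n",
            MM_CID_COMPACT_TIMEOUT);
    abort();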
> + }
> + printf_verbose("cpu%d has %d and is going to terminate\n",
> + sched_getcpu(), curr_mm_cid);
> + pthread_exit(NULL);
> +}
> +
> +void test_mm_cid_compaction(void)
This function should return its error to the caller
rather than assert.
> +{
> + cpu_set_t affinity;
> + int i, j, ret, num_threads;
> + pthread_t *tinfo;
> + pthread_mutex_t *token;
> + struct thread_args *args;
> +
> + sched_getaffinity(0, sizeof(affinity), &affinity);
> + num_threads = CPU_COUNT(&affinity);
> + tinfo = calloc(num_threads, sizeof(*tinfo));
> + if (!tinfo) {
> + fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
> + errno, strerror(errno));
> + assert(tinfo);
> + }
> + args = calloc(num_threads, sizeof(*args));
> + if (!args) {
> + fprintf(stderr, "Error: failed to allocate args(%d): %s\n",
> + errno, strerror(errno));
> + assert(args);
> + }
> + token = calloc(num_threads, sizeof(*token));
> + if (!token) {
> + fprintf(stderr, "Error: failed to allocate token(%d): %s\n",
> + errno, strerror(errno));
> + assert(token);
> + }
> + if (num_threads == 1) {
> + printf_verbose(
> + "Running on a single cpu, cannot test anything\n");
> + return;
This should return a value telling the caller that
the test is skipped (not an error per se).
> + }
> + pthread_mutex_init(token, NULL);
> + /* The main thread runs on CPU0 */
> + for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
> + if (CPU_ISSET(i, &affinity)) {
We can save an indent level here by moving this
in the for () condition:
for (i = 0, j = 0; i < CPU_SETSIZE &&
CPU_ISSET(i, &affinity) && j < num_threads; i++) {
> + args[j].num_cpus = num_threads;
> + args[j].tinfo = tinfo;
> + args[j].token = token;
> + args[j].cpu = i;
> + args[j].args_head = args;
> + if (!j) {
> + /* The first thread is the main one */
> + tinfo[0] = pthread_self();
> + ++j;
> + continue;
> + }
> + ret = pthread_create(&tinfo[j], NULL, thread_runner,
> + &args[j]);
> + if (ret) {
> + fprintf(stderr,
> + "Error: failed to create thread(%d): %s\n",
> + ret, strerror(ret));
> + assert(ret == 0);
> + }
> + ++j;
> + }
> + }
> + printf_verbose("Started %d threads\n", num_threads);
I think there is a missing rendez-vous point here. Assuming a
sufficiently long unexpected delay (think of a guest VM VCPU
preempted for a long time), the new leader can start poking
into args and other thread's info while we are still creating
threads here.
Thanks,
Mathieu
> +
> + /* Also main thread will terminate if it is not selected as leader */
> + thread_runner(&args[0]);
> +}
> +
> +int main(int argc, char **argv)
> +{
> + if (rseq_register_current_thread()) {
> + fprintf(stderr,
> + "Error: rseq_register_current_thread(...) failed(%d): %s\n",
> + errno, strerror(errno));
> + goto error;
> + }
> + if (!rseq_mm_cid_available()) {
> + fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
> + goto error;
> + }
> + test_mm_cid_compaction();
> + if (rseq_unregister_current_thread()) {
> + fprintf(stderr,
> + "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
> + errno, strerror(errno));
> + goto error;
> + }
> + return 0;
> +
> +error:
> + return -1;
> +}
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* Re: [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction
2024-12-24 16:20 ` Mathieu Desnoyers
@ 2024-12-26 9:04 ` Gabriele Monaco
2024-12-26 14:17 ` Mathieu Desnoyers
0 siblings, 1 reply; 9+ messages in thread
From: Gabriele Monaco @ 2024-12-26 9:04 UTC (permalink / raw)
To: Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Shuah Khan
On Tue, 2024-12-24 at 11:20 -0500, Mathieu Desnoyers wrote:
> On 2024-12-16 08:09, Gabriele Monaco wrote:
> > A task in the kernel (task_mm_cid_work) runs somewhat periodically to
> > compact the mm_cid for each process; this test tries to validate that
> > it runs correctly and in a timely manner.
> >
> > + if (curr_mm_cid == 0) {
> > + printf_verbose(
> > + "mm_cids successfully compacted, exiting\n");
> > + pthread_exit(NULL);
> > + }
> > + usleep(RUNNER_PERIOD);
> > + }
> > + assert(false);
>
> I suspect we'd want an explicit error message here
> with an abort() rather than an assertion which can be
> compiled-out with -DNDEBUG.
>
> > + }
> > + printf_verbose("cpu%d has %d and is going to terminate\n",
> > + sched_getcpu(), curr_mm_cid);
> > + pthread_exit(NULL);
> > +}
> > +
> > +void test_mm_cid_compaction(void)
>
> This function should return its error to the caller
> rather than assert.
>
> > +{
> > + cpu_set_t affinity;
> > + int i, j, ret, num_threads;
> > + pthread_t *tinfo;
> > + pthread_mutex_t *token;
> > + struct thread_args *args;
> > +
> > + sched_getaffinity(0, sizeof(affinity), &affinity);
> > + num_threads = CPU_COUNT(&affinity);
> > + tinfo = calloc(num_threads, sizeof(*tinfo));
> > + if (!tinfo) {
> > + fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
> > + errno, strerror(errno));
> > + assert(tinfo);
> > + }
> > + args = calloc(num_threads, sizeof(*args));
> > + if (!args) {
> > + fprintf(stderr, "Error: failed to allocate args(%d): %s\n",
> > + errno, strerror(errno));
> > + assert(args);
> > + }
> > + token = calloc(num_threads, sizeof(*token));
> > + if (!token) {
> > + fprintf(stderr, "Error: failed to allocate token(%d): %s\n",
> > + errno, strerror(errno));
> > + assert(token);
> > + }
> > + if (num_threads == 1) {
> > + printf_verbose(
> > + "Running on a single cpu, cannot test anything\n");
> > + return;
>
> This should return a value telling the caller that
> the test is skipped (not an error per se).
>
Thanks for the review!
I'm not sure how to properly handle these, but it seems to me the
cleanest way is to use ksft_* functions to report failures and skipped
tests. Other tests in rseq don't use the library but it doesn't seem a
big deal if just one test is using it, for now.
It gets a bit complicated to return values since we are exiting from
the main thread (sure we could join the remaining /winning/ thread but
we would end up with 2 threads running). The ksft_* functions solve
this quite nicely using exit codes, though.
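For instance, something along these lines (rough sketch using the
kselftest.h helpers; the exit codes work from whichever thread calls
them):
    if (num_threads == 1)
            ksft_exit_skip("Running on a single cpu, cannot test anything\n");
    /* ... */
    if (curr_mm_cid == 0)
            ksft_exit_pass();
    ksft_exit_fail_msg("mm_cids were not compacted\n");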
> > + }
> > + pthread_mutex_init(token, NULL);
> > + /* The main thread runs on CPU0 */
> > + for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
> > + if (CPU_ISSET(i, &affinity)) {
>
> We can save an indent level here by moving this
> in the for () condition:
>
> for (i = 0, j = 0; i < CPU_SETSIZE &&
> CPU_ISSET(i, &affinity) && j < num_threads; i++) {
>
Well, if we assume the affinity mask is contiguous, which is likely but
not always true. A typical setup with isolated CPUs has one
housekeeping core per NUMA node, let's say 0,32,64,96 out of 128 cpus;
the test would run only on cpu 0 in that case.
> > + args[j].num_cpus = num_threads;
> > + args[j].tinfo = tinfo;
> > + args[j].token = token;
> > + args[j].cpu = i;
> > + args[j].args_head = args;
> > + if (!j) {
> > + /* The first thread is the main one */
> > + tinfo[0] = pthread_self();
> > + ++j;
> > + continue;
> > + }
> > + ret = pthread_create(&tinfo[j], NULL, thread_runner,
> > + &args[j]);
> > + if (ret) {
> > + fprintf(stderr,
> > + "Error: failed to create thread(%d): %s\n",
> > + ret, strerror(ret));
> > + assert(ret == 0);
> > + }
> > + ++j;
> > + }
> > + }
> > + printf_verbose("Started %d threads\n", num_threads);
>
> I think there is a missing rendez-vous point here. Assuming a
> sufficiently long unexpected delay (think of a guest VM VCPU
> preempted for a long time), the new leader can start poking
> into args and other thread's info while we are still creating
> threads here.
>
Yeah, good point, I'm assuming all threads are ready by the time we are
done waiting but that's not bulletproof. I'll add a barrier.
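Roughly like this (sketch):
    static pthread_barrier_t barrier;
    /* in test_mm_cid_compaction(), before creating the threads: */
    pthread_barrier_init(&barrier, NULL, num_threads);
    /* in thread_runner(), after setting the affinity: */
    pthread_barrier_wait(&barrier);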
Thanks again for the comments, I'll prepare a V4.
Gabriele
* Re: [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction
2024-12-26 9:04 ` Gabriele Monaco
@ 2024-12-26 14:17 ` Mathieu Desnoyers
2024-12-27 9:12 ` Gabriele Monaco
0 siblings, 1 reply; 9+ messages in thread
From: Mathieu Desnoyers @ 2024-12-26 14:17 UTC (permalink / raw)
To: Gabriele Monaco, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Shuah Khan
On 2024-12-26 04:04, Gabriele Monaco wrote:
>
> On Tue, 2024-12-24 at 11:20 -0500, Mathieu Desnoyers wrote:
>> On 2024-12-16 08:09, Gabriele Monaco wrote:
>>> A task in the kernel (task_mm_cid_work) runs somewhat periodically to
>>> compact the mm_cid for each process; this test tries to validate that
>>> it runs correctly and in a timely manner.
>>>
>>> + if (curr_mm_cid == 0) {
>>> + printf_verbose(
>>> + "mm_cids successfully compacted, exiting\n");
>>> + pthread_exit(NULL);
>>> + }
>>> + usleep(RUNNER_PERIOD);
>>> + }
>>> + assert(false);
>>
>> I suspect we'd want an explicit error message here
>> with an abort() rather than an assertion which can be
>> compiled-out with -DNDEBUG.
>>
>>> + }
>>> + printf_verbose("cpu%d has %d and is going to terminate\n",
>>> + sched_getcpu(), curr_mm_cid);
>>> + pthread_exit(NULL);
>>> +}
>>> +
>>> +void test_mm_cid_compaction(void)
>>
>> This function should return its error to the caller
>> rather than assert.
>>
>>> +{
>>> + cpu_set_t affinity;
>>> + int i, j, ret, num_threads;
>>> + pthread_t *tinfo;
>>> + pthread_mutex_t *token;
>>> + struct thread_args *args;
>>> +
>>> + sched_getaffinity(0, sizeof(affinity), &affinity);
>>> + num_threads = CPU_COUNT(&affinity);
>>> + tinfo = calloc(num_threads, sizeof(*tinfo));
>>> + if (!tinfo) {
>>> + fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
>>> + errno, strerror(errno));
>>> + assert(tinfo);
>>> + }
>>> + args = calloc(num_threads, sizeof(*args));
>>> + if (!args) {
>>> + fprintf(stderr, "Error: failed to allocate args(%d): %s\n",
>>> + errno, strerror(errno));
>>> + assert(args);
>>> + }
>>> + token = calloc(num_threads, sizeof(*token));
>>> + if (!token) {
>>> + fprintf(stderr, "Error: failed to allocate token(%d): %s\n",
>>> + errno, strerror(errno));
>>> + assert(token);
>>> + }
>>> + if (num_threads == 1) {
>>> + printf_verbose(
>>> + "Running on a single cpu, cannot test anything\n");
>>> + return;
>>
>> This should return a value telling the caller that
>> the test is skipped (not an error per se).
>>
>
> Thanks for the review!
> I'm not sure how to properly handle these, but it seems to me the
> cleanest way is to use ksft_* functions to report failures and skipped
> tests. Other tests in rseq don't use the library but it doesn't seem a
> big deal if just one test is using it, for now.
For the moment, we could do it like the following test, which
does a skip:
void test_membarrier(void)
{
fprintf(stderr, "rseq_offset_deref_addv is not implemented on this architecture. "
"Skipping membarrier test.\n");
}
We can revamp the rest of the tests to use ksft in the future.
Currently everything is driven from run_param_test.sh, and it would
require significant rework to move to ksft.
>
> It gets a bit complicated to return values since we are exiting from
> the main thread (sure we could join the remaining /winning/ thread but
> we would end up with 2 threads running). The ksft_* functions solve
> this quite nicely using exit codes, though.
Then thread_runner should be marked with the noreturn attribute.
test_mm_cid_compaction can indeed return if it fails in the preparation
stages, just not when calling thread_runner.
So we want test_mm_cid_compaction to return errors so main can handle
them, and we may want to move the call to thread_runner directly
into main after the test_mm_cid_compaction preparation step succeeds.
It's not like we can append any further test after this noreturn call.
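I.e. something like (sketch):
    static void *thread_runner(void *arg) __attribute__((noreturn));
so the compiler knows the call from main() never returns.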
>
>>> + }
>>> + pthread_mutex_init(token, NULL);
>>> + /* The main thread runs on CPU0 */
>>> + for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
>>> + if (CPU_ISSET(i, &affinity)) {
>>
>> We can save an indent level here by moving this
>> in the for () condition:
>>
>> for (i = 0, j = 0; i < CPU_SETSIZE &&
>> CPU_ISSET(i, &affinity) && j < num_threads; i++) {
>>
>
> Well, if we assume the affinity mask is contiguous, which is likely but
> not always true. A typical setup with isolated CPUs has one
> housekeeping core per NUMA node, let's say 0,32,64,96 out of 128 cpus;
> the test would run only on cpu 0 in that case.
Whooops, yes, you are right. Scratch that. It would become
a loop exit condition, which is not what we want here.
Then, to remove an indent level, we'd want:
if (!CPU_ISSET(i, &affinity))
continue;
in the for loop.
>
>>> + args[j].num_cpus = num_threads;
>>> + args[j].tinfo = tinfo;
>>> + args[j].token = token;
>>> + args[j].cpu = i;
>>> + args[j].args_head = args;
>>> + if (!j) {
>>> + /* The first thread is the main one */
>>> + tinfo[0] = pthread_self();
>>> + ++j;
>>> + continue;
>>> + }
>>> + ret = pthread_create(&tinfo[j], NULL, thread_runner,
>>> + &args[j]);
>>> + if (ret) {
>>> + fprintf(stderr,
>>> + "Error: failed to create thread(%d): %s\n",
>>> + ret, strerror(ret));
>>> + assert(ret == 0);
>>> + }
>>> + ++j;
>>> + }
>>> + }
>>> + printf_verbose("Started %d threads\n", num_threads);
>>
>> I think there is a missing rendez-vous point here. Assuming a
>> sufficiently long unexpected delay (think of a guest VM VCPU
>> preempted for a long time), the new leader can start poking
>> into args and other thread's info while we are still creating
>> threads here.
>>
>
> Yeah, good point, I'm assuming all threads are ready by the time we are
> done waiting but that's not bulletproof. I'll add a barrier.
>
> Thanks again for the comments, I'll prepare a V4.
Thanks!
Mathieu
> Gabriele
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* Re: [PATCH v3 3/3] rseq/selftests: Add test for mm_cid compaction
2024-12-26 14:17 ` Mathieu Desnoyers
@ 2024-12-27 9:12 ` Gabriele Monaco
0 siblings, 0 replies; 9+ messages in thread
From: Gabriele Monaco @ 2024-12-27 9:12 UTC (permalink / raw)
To: Mathieu Desnoyers, Peter Zijlstra, Ingo Molnar, linux-mm, linux-kernel
Cc: Juri Lelli, Shuah Khan
On Thu, 2024-12-26 at 09:17 -0500, Mathieu Desnoyers wrote:
> On 2024-12-26 04:04, Gabriele Monaco wrote:
> >
> > On Tue, 2024-12-24 at 11:20 -0500, Mathieu Desnoyers wrote:
> > > On 2024-12-16 08:09, Gabriele Monaco wrote:
> > > > + if (curr_mm_cid == 0) {
> > > > + printf_verbose(
> > > > + "mm_cids successfully compacted, exiting\n");
> > > > + pthread_exit(NULL);
> > > > + }
> > > > + usleep(RUNNER_PERIOD);
> > > > + }
> > > > + assert(false);
> > >
> > > I suspect we'd want an explicit error message here
> > > with an abort() rather than an assertion which can be
> > > compiled-out with -DNDEBUG.
> > >
> > > > + }
> > > > + printf_verbose("cpu%d has %d and is going to terminate\n",
> > > > + sched_getcpu(), curr_mm_cid);
> > > > + pthread_exit(NULL);
> > > > +}
> > > > +
> > > > +void test_mm_cid_compaction(void)
> > >
> > > This function should return its error to the caller
> > > rather than assert.
> > >
> > > > + if (num_threads == 1) {
> > > > + printf_verbose(
> > > > + "Running on a single cpu, cannot test anything\n");
> > > > + return;
> > >
> > > This should return a value telling the caller that
> > > the test is skipped (not an error per se).
> > >
> >
> > Thanks for the review!
> > I'm not sure how to properly handle these, but it seems to me the
> > cleanest way is to use ksft_* functions to report failures and
> > skipped
> > tests. Other tests in rseq don't use the library but it doesn't
> > seem a
> > big deal if just one test is using it, for now.
>
> For the moment, we could do it like the following test, which
> does a skip:
>
> void test_membarrier(void)
> {
> fprintf(stderr, "rseq_offset_deref_addv is not implemented
> on this architecture. "
> "Skipping membarrier test.\n");
> }
>
> We can revamp the rest of the tests to use ksft in the future.
>
> Currently everything is driven from run_param_test.sh, and it would
> require significant rework to move to ksft.
>
> >
> > It gets a bit complicated to return values since we are exiting
> > from
> > the main thread (sure we could join the remaining /winning/ thread
> > but
> > we would end up with 2 threads running). The ksft_* functions solve
> > this quite nicely using exit codes, though.
>
> Then thread_runner should be marked with the noreturn attribute.
>
> test_mm_cid_compaction can indeed return if it fails in the
> preparation stages, just not when calling thread_runner.
>
> So we want test_mm_cid_compaction to return errors so main can handle
> them, and we may want to move the call to thread_runner directly
> into main after the test_mm_cid_compaction preparation step succeeds.
>
> It's not like we can append any further test after this noreturn
> call.
>
Alright, I'm a bit confused now. I see how in the rseq folder there are
tests in param_test (run by a shell script) and tests in their own C
files that are run directly as binaries.
For simplicity I added this new test in a separate file and tried to
mirror what the other tests are doing: all of them call one or more
void functions (test_*) from main, plus some minimal initialisation
(registering rseq, which I believe I may not even need, since I'm
already not doing it for all threads).
Now, I can have my test_* function return a value and handle it from
main, e.g. aborting if the function returns some value, but that would
require me to define some return values (e.g. abort, fail, perhaps
skip) used only by this test.
It felt more consistent to just stick to the void function and
abort/exit directly from there (or return in case of a skip).
All other tests do use abort for errors and assert for the pass/fail
condition, but since in my case nothing else can execute afterwards,
I'd say I can simply use exit(0)/exit(1) from the winning thread.
What do you think?
Thanks,
Gabriele