* [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context.
@ 2024-11-07 11:13 Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 1/3] scftorture: Avoid additional div operation Sebastian Andrzej Siewior
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-07 11:13 UTC (permalink / raw)
To: kasan-dev, linux-kernel, linux-mm
Cc: Paul E. McKenney, Boqun Feng, Marco Elver, Peter Zijlstra,
Thomas Gleixner, Vlastimil Babka, akpm, cl, iamjoonsoo.kim,
longman, penberg, rientjes, sfr
Hi,
Paul reported a kfree() from IRQ context in scftorture, which lockdep
has noticed since the recent PROVE_RAW_LOCK_NESTING switch.
The last patch in this series addresses the issue; the other patches
cover things noticed along the way.
v1…v2:
- Remove kfree_bulk(). I get more invocations per report without it.
- Pass `cpu' to scf_cleanup_free_list() in scftorture_invoker() instead
of scfp->cpu. The latter is the thread number, which can be larger
than the number of CPUs, leading to a crash in that case. Reported by
Boqun Feng.
- Clean up the per-CPU lists on module exit. Reported by Boqun Feng.
Sebastian
* [PATCH v2 1/3] scftorture: Avoid additional div operation.
2024-11-07 11:13 [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Sebastian Andrzej Siewior
@ 2024-11-07 11:13 ` Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 2/3] scftorture: Move memory allocation outside of preempt_disable region Sebastian Andrzej Siewior
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-07 11:13 UTC (permalink / raw)
To: kasan-dev, linux-kernel, linux-mm
Cc: Paul E. McKenney, Boqun Feng, Marco Elver, Peter Zijlstra,
Thomas Gleixner, Vlastimil Babka, akpm, cl, iamjoonsoo.kim,
longman, penberg, rientjes, sfr, Sebastian Andrzej Siewior
Replace "scfp->cpu % nr_cpu_ids" with "cpu". This has been computed
earlier.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/scftorture.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 44e83a6462647..455cbff35a1a2 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -463,7 +463,7 @@ static int scftorture_invoker(void *arg)
// Make sure that the CPU is affinitized appropriately during testing.
curcpu = raw_smp_processor_id();
- WARN_ONCE(curcpu != scfp->cpu % nr_cpu_ids,
+ WARN_ONCE(curcpu != cpu,
"%s: Wanted CPU %d, running on %d, nr_cpu_ids = %d\n",
__func__, scfp->cpu, curcpu, nr_cpu_ids);
--
2.45.2
* [PATCH v2 2/3] scftorture: Move memory allocation outside of preempt_disable region.
2024-11-07 11:13 [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 1/3] scftorture: Avoid additional div operation Sebastian Andrzej Siewior
@ 2024-11-07 11:13 ` Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 3/3] scftorture: Use a lock-less list to free memory Sebastian Andrzej Siewior
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-07 11:13 UTC (permalink / raw)
To: kasan-dev, linux-kernel, linux-mm
Cc: Paul E. McKenney, Boqun Feng, Marco Elver, Peter Zijlstra,
Thomas Gleixner, Vlastimil Babka, akpm, cl, iamjoonsoo.kim,
longman, penberg, rientjes, sfr, Sebastian Andrzej Siewior
Memory allocations can not happen within regions with explicitly
disabled preemption on PREEMPT_RT. The problem is that the locking
structures underneath are sleeping locks.
Move the memory allocation outside of the preempt-disabled section. Keep
GFP_ATOMIC for the allocation so that it behaves like an "emergency
allocation".
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/scftorture.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 455cbff35a1a2..555b3b10621fe 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -320,10 +320,6 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
struct scf_check *scfcp = NULL;
struct scf_selector *scfsp = scf_sel_rand(trsp);
- if (use_cpus_read_lock)
- cpus_read_lock();
- else
- preempt_disable();
if (scfsp->scfs_prim == SCF_PRIM_SINGLE || scfsp->scfs_wait) {
scfcp = kmalloc(sizeof(*scfcp), GFP_ATOMIC);
if (!scfcp) {
@@ -337,6 +333,10 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
scfcp->scfc_rpc = false;
}
}
+ if (use_cpus_read_lock)
+ cpus_read_lock();
+ else
+ preempt_disable();
switch (scfsp->scfs_prim) {
case SCF_PRIM_RESCHED:
if (IS_BUILTIN(CONFIG_SCF_TORTURE_TEST)) {
--
2.45.2
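As a minimal illustration of the PREEMPT_RT constraint (a hypothetical
snippet, not taken from scftorture): even a GFP_ATOMIC allocation may end
up taking a sleeping lock inside the allocator on RT, so the allocation
has to move ahead of the preempt-disabled region:

	/* Broken on PREEMPT_RT: the allocator may take a sleeping lock. */
	preempt_disable();
	scfcp = kmalloc(sizeof(*scfcp), GFP_ATOMIC);
	preempt_enable();

	/* Fixed: allocate first, then disable preemption. */
	scfcp = kmalloc(sizeof(*scfcp), GFP_ATOMIC);
	preempt_disable();
	/* ... issue the SMP function call ... */
	preempt_enable();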
* [PATCH v2 3/3] scftorture: Use a lock-less list to free memory.
2024-11-07 11:13 [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 1/3] scftorture: Avoid additional div operation Sebastian Andrzej Siewior
2024-11-07 11:13 ` [PATCH v2 2/3] scftorture: Move memory allocation outside of preempt_disable region Sebastian Andrzej Siewior
@ 2024-11-07 11:13 ` Sebastian Andrzej Siewior
2024-11-07 18:31 ` Paul E. McKenney
2024-11-07 20:45 ` Boqun Feng
2024-11-07 15:27 ` [PATCH v2 4/3] scftorture: Wait until scf_cleanup_handler() completes Sebastian Andrzej Siewior
2024-11-07 19:05 ` [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Paul E. McKenney
4 siblings, 2 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-07 11:13 UTC (permalink / raw)
To: kasan-dev, linux-kernel, linux-mm
Cc: Paul E. McKenney, Boqun Feng, Marco Elver, Peter Zijlstra,
Thomas Gleixner, Vlastimil Babka, akpm, cl, iamjoonsoo.kim,
longman, penberg, rientjes, sfr, Sebastian Andrzej Siewior
scf_handler() is used as an SMP function call and is always invoked in
IRQ context, even with forced threading enabled. It frees memory, which
is not allowed on PREEMPT_RT because the locking underneath uses
sleeping locks.
Add a per-CPU scf_free_pool to which each SMP function call adds the
memory to be freed. This memory is then freed by scftorture_invoker() on
each iteration. On the majority of invocations the number of items is
less than five. If the thread sleeps or gets delayed, the number
exceeded 350 but did not reach 400 in testing; these were the spikes.
The bulk free of 64 pointers at once should improve the give-back if the
list grows. The average list size is ~1.3 items per invocation.
Having one global scf_free_pool with one cleaning thread let the list
grow to over 10,000 items with 32 CPUs (again, spikes, not the average),
especially if the CPU went to sleep. The per-CPU approach looks like a
good compromise.
Reported-by: "Paul E. McKenney" <paulmck@kernel.org>
Closes: https://lore.kernel.org/lkml/41619255-cdc2-4573-a360-7794fc3614f7@paulmck-laptop/
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/scftorture.c | 39 +++++++++++++++++++++++++++++++++++----
1 file changed, 35 insertions(+), 4 deletions(-)
diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 555b3b10621fe..1268a91af5d88 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -97,6 +97,7 @@ struct scf_statistics {
static struct scf_statistics *scf_stats_p;
static struct task_struct *scf_torture_stats_task;
static DEFINE_PER_CPU(long long, scf_invoked_count);
+static DEFINE_PER_CPU(struct llist_head, scf_free_pool);
// Data for random primitive selection
#define SCF_PRIM_RESCHED 0
@@ -133,6 +134,7 @@ struct scf_check {
bool scfc_wait;
bool scfc_rpc;
struct completion scfc_completion;
+ struct llist_node scf_node;
};
// Use to wait for all threads to start.
@@ -148,6 +150,31 @@ static DEFINE_TORTURE_RANDOM_PERCPU(scf_torture_rand);
extern void resched_cpu(int cpu); // An alternative IPI vector.
+static void scf_add_to_free_list(struct scf_check *scfcp)
+{
+ struct llist_head *pool;
+ unsigned int cpu;
+
+ cpu = raw_smp_processor_id() % nthreads;
+ pool = &per_cpu(scf_free_pool, cpu);
+ llist_add(&scfcp->scf_node, pool);
+}
+
+static void scf_cleanup_free_list(unsigned int cpu)
+{
+ struct llist_head *pool;
+ struct llist_node *node;
+ struct scf_check *scfcp;
+
+ pool = &per_cpu(scf_free_pool, cpu);
+ node = llist_del_all(pool);
+ while (node) {
+ scfcp = llist_entry(node, struct scf_check, scf_node);
+ node = node->next;
+ kfree(scfcp);
+ }
+}
+
// Print torture statistics. Caller must ensure serialization.
static void scf_torture_stats_print(void)
{
@@ -296,7 +323,7 @@ static void scf_handler(void *scfc_in)
if (scfcp->scfc_rpc)
complete(&scfcp->scfc_completion);
} else {
- kfree(scfcp);
+ scf_add_to_free_list(scfcp);
}
}
@@ -363,7 +390,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
scfp->n_single_wait_ofl++;
else
scfp->n_single_ofl++;
- kfree(scfcp);
+ scf_add_to_free_list(scfcp);
scfcp = NULL;
}
break;
@@ -391,7 +418,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
preempt_disable();
} else {
scfp->n_single_rpc_ofl++;
- kfree(scfcp);
+ scf_add_to_free_list(scfcp);
scfcp = NULL;
}
break;
@@ -428,7 +455,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
pr_warn("%s: Memory-ordering failure, scfs_prim: %d.\n", __func__, scfsp->scfs_prim);
atomic_inc(&n_mb_out_errs); // Leak rather than trash!
} else {
- kfree(scfcp);
+ scf_add_to_free_list(scfcp);
}
barrier(); // Prevent race-reduction compiler optimizations.
}
@@ -479,6 +506,8 @@ static int scftorture_invoker(void *arg)
VERBOSE_SCFTORTOUT("scftorture_invoker %d started", scfp->cpu);
do {
+ scf_cleanup_free_list(cpu);
+
scftorture_invoke_one(scfp, &rand);
while (cpu_is_offline(cpu) && !torture_must_stop()) {
schedule_timeout_interruptible(HZ / 5);
@@ -538,6 +567,8 @@ static void scf_torture_cleanup(void)
end:
torture_cleanup_end();
+ for (i = 0; i < nthreads; i++)
+ scf_cleanup_free_list(i);
}
static int __init scf_torture_init(void)
--
2.45.2
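For reference, a rough sketch of the batched kfree_bulk() variant
mentioned in the commit message, i.e. the approach dropped in v2
(untested; the function name is hypothetical):

	static void scf_cleanup_free_list_bulk(unsigned int cpu)
	{
		void *ptrs[64];
		struct llist_node *node;
		size_t n = 0;

		node = llist_del_all(&per_cpu(scf_free_pool, cpu));
		while (node) {
			ptrs[n++] = llist_entry(node, struct scf_check, scf_node);
			node = node->next;
			if (n == ARRAY_SIZE(ptrs)) {
				kfree_bulk(n, ptrs);	/* give back 64 pointers at once */
				n = 0;
			}
		}
		if (n)
			kfree_bulk(n, ptrs);
	}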
* [PATCH v2 4/3] scftorture: Wait until scf_cleanup_handler() completes.
2024-11-07 11:13 [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Sebastian Andrzej Siewior
` (2 preceding siblings ...)
2024-11-07 11:13 ` [PATCH v2 3/3] scftorture: Use a lock-less list to free memory Sebastian Andrzej Siewior
@ 2024-11-07 15:27 ` Sebastian Andrzej Siewior
2024-11-07 19:05 ` [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Paul E. McKenney
4 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-07 15:27 UTC (permalink / raw)
To: kasan-dev, linux-kernel, linux-mm
Cc: Paul E. McKenney, Boqun Feng, Marco Elver, Peter Zijlstra,
Thomas Gleixner, Vlastimil Babka, akpm, cl, iamjoonsoo.kim,
longman, penberg, rientjes, sfr
smp_call_function() needs to be invoked with the wait flag set so that
it waits until scf_cleanup_handler() is done. This ensures that all SMP
function calls queued earlier have completed at this point.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/scftorture.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/scftorture.c b/kernel/scftorture.c
index 1268a91af5d88..ee8831096c091 100644
--- a/kernel/scftorture.c
+++ b/kernel/scftorture.c
@@ -552,7 +552,7 @@ static void scf_torture_cleanup(void)
torture_stop_kthread("scftorture_invoker", scf_stats_p[i].task);
else
goto end;
- smp_call_function(scf_cleanup_handler, NULL, 0);
+ smp_call_function(scf_cleanup_handler, NULL, 1);
torture_stop_kthread(scf_torture_stats, scf_torture_stats_task);
scf_torture_stats_print(); // -After- the stats thread is stopped!
kfree(scf_stats_p); // -After- the last stats print has completed!
--
2.45.2
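For context, the third argument of smp_call_function() is the wait flag;
a sketch of the semantics this one-liner relies on:

	/*
	 * With wait set, the call returns only after scf_cleanup_handler()
	 * has finished on all other CPUs, so SMP function calls queued
	 * earlier are known to be complete afterwards.
	 */
	smp_call_function(scf_cleanup_handler, NULL, 1);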
* Re: [PATCH v2 3/3] scftorture: Use a lock-less list to free memory.
2024-11-07 11:13 ` [PATCH v2 3/3] scftorture: Use a lock-less list to free memory Sebastian Andrzej Siewior
@ 2024-11-07 18:31 ` Paul E. McKenney
2024-11-07 20:45 ` Boqun Feng
1 sibling, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2024-11-07 18:31 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: kasan-dev, linux-kernel, linux-mm, Boqun Feng, Marco Elver,
Peter Zijlstra, Thomas Gleixner, Vlastimil Babka, akpm, cl,
iamjoonsoo.kim, longman, penberg, rientjes, sfr
On Thu, Nov 07, 2024 at 12:13:08PM +0100, Sebastian Andrzej Siewior wrote:
> scf_handler() is used as an SMP function call and is always invoked in
> IRQ context, even with forced threading enabled. It frees memory, which
> is not allowed on PREEMPT_RT because the locking underneath uses
> sleeping locks.
>
> Add a per-CPU scf_free_pool to which each SMP function call adds the
> memory to be freed. This memory is then freed by scftorture_invoker() on
> each iteration. On the majority of invocations the number of items is
> less than five. If the thread sleeps or gets delayed, the number
> exceeded 350 but did not reach 400 in testing; these were the spikes.
> The bulk free of 64 pointers at once should improve the give-back if the
> list grows. The average list size is ~1.3 items per invocation.
>
> Having one global scf_free_pool with one cleaning thread let the list
> grow to over 10,000 items with 32 CPUs (again, spikes, not the average),
> especially if the CPU went to sleep. The per-CPU approach looks like a
> good compromise.
>
> Reported-by: "Paul E. McKenney" <paulmck@kernel.org>
> Closes: https://lore.kernel.org/lkml/41619255-cdc2-4573-a360-7794fc3614f7@paulmck-laptop/
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Nice!!!
One nit at the end below.
> [ ... ]
> @@ -538,6 +567,8 @@ static void scf_torture_cleanup(void)
>
> end:
> torture_cleanup_end();
> + for (i = 0; i < nthreads; i++)
> + scf_cleanup_free_list(i);
It would be better for this to precede the call to torture_cleanup_end().
As soon as torture_cleanup_end() is invoked, in theory, another torture
test might start. Yes, in practice, this would only matter if the next
module was again scftorture and you aren't supposed to modprobe a given
module until after the prior rmmod has completed, which would prevent
this scf_cleanup_free_list() from interacting with the incoming instance
of scftorture.
But why even allow the possibility?
Thanx, Paul
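In code, the ordering Paul suggests would look roughly like this (a
sketch against the v2 patch, untested; the loop bound is corrected to
nr_cpu_ids in the review further down the thread):

	end:
		for (i = 0; i < nthreads; i++)	/* bound later corrected to nr_cpu_ids */
			scf_cleanup_free_list(i);
		torture_cleanup_end();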
* Re: [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context.
2024-11-07 11:13 [PATCH v2 0/3] scftorture: Avoid kfree from IRQ context Sebastian Andrzej Siewior
` (3 preceding siblings ...)
2024-11-07 15:27 ` [PATCH v2 4/3] scftorture: Wait until scf_cleanup_handler() completes Sebastian Andrzej Siewior
@ 2024-11-07 19:05 ` Paul E. McKenney
4 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2024-11-07 19:05 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: kasan-dev, linux-kernel, linux-mm, Boqun Feng, Marco Elver,
Peter Zijlstra, Thomas Gleixner, Vlastimil Babka, akpm, cl,
iamjoonsoo.kim, longman, penberg, rientjes, sfr
On Thu, Nov 07, 2024 at 12:13:05PM +0100, Sebastian Andrzej Siewior wrote:
> Hi,
>
> Paul reported a kfree() from IRQ context in scftorture, which lockdep
> has noticed since the recent PROVE_RAW_LOCK_NESTING switch.
>
> The last patch in this series addresses the issue; the other patches
> cover things noticed along the way.
For the series:
Tested-by: Paul E. McKenney <paulmck@kernel.org>
> v1…v2:
> - Remove kfree_bulk(). I get more invocations per report without it.
> - Pass `cpu' to scf_cleanup_free_list() in scftorture_invoker() instead
> of scfp->cpu. The latter is the thread number, which can be larger
> than the number of CPUs, leading to a crash in that case. Reported by
> Boqun Feng.
> - Clean up the per-CPU lists on module exit. Reported by Boqun Feng.
>
> Sebastian
>
* Re: [PATCH v2 3/3] scftorture: Use a lock-less list to free memory.
2024-11-07 11:13 ` [PATCH v2 3/3] scftorture: Use a lock-less list to free memory Sebastian Andrzej Siewior
2024-11-07 18:31 ` Paul E. McKenney
@ 2024-11-07 20:45 ` Boqun Feng
2024-11-07 21:53 ` Paul E. McKenney
2024-11-08 10:32 ` Sebastian Andrzej Siewior
1 sibling, 2 replies; 10+ messages in thread
From: Boqun Feng @ 2024-11-07 20:45 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: kasan-dev, linux-kernel, linux-mm, Paul E. McKenney, Marco Elver,
Peter Zijlstra, Thomas Gleixner, Vlastimil Babka, akpm, cl,
iamjoonsoo.kim, longman, penberg, rientjes, sfr
On Thu, Nov 07, 2024 at 12:13:08PM +0100, Sebastian Andrzej Siewior wrote:
> scf_handler() is used as an SMP function call and is always invoked in
> IRQ context, even with forced threading enabled. It frees memory, which
> is not allowed on PREEMPT_RT because the locking underneath uses
> sleeping locks.
>
> Add a per-CPU scf_free_pool to which each SMP function call adds the
> memory to be freed. This memory is then freed by scftorture_invoker() on
> each iteration. On the majority of invocations the number of items is
> less than five. If the thread sleeps or gets delayed, the number
> exceeded 350 but did not reach 400 in testing; these were the spikes.
> The bulk free of 64 pointers at once should improve the give-back if the
> list grows. The average list size is ~1.3 items per invocation.
>
> Having one global scf_free_pool with one cleaning thread let the list
> grow to over 10,000 items with 32 CPUs (again, spikes, not the average),
> especially if the CPU went to sleep. The per-CPU approach looks like a
> good compromise.
>
> Reported-by: "Paul E. McKenney" <paulmck@kernel.org>
> Closes: https://lore.kernel.org/lkml/41619255-cdc2-4573-a360-7794fc3614f7@paulmck-laptop/
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> [ ... ]
> @@ -538,6 +567,8 @@ static void scf_torture_cleanup(void)
>
> end:
> torture_cleanup_end();
> + for (i = 0; i < nthreads; i++)
This needs to be:
for (i = 0; i < nr_cpu_ids; i++)
because nthreads can be larger than nr_cpu_ids, and the loop would then
access an out-of-bounds per-CPU section.
Regards,
Boqun
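To make the failure mode concrete (hypothetical numbers): with
nr_cpu_ids == 4 and scftorture loaded with nthreads=8, the cleanup loop
would touch per-CPU areas that do not exist:

	/* Hypothetical: nr_cpu_ids == 4, module loaded with nthreads=8. */
	for (i = 0; i < nthreads; i++)		/* i = 4..7 are not valid CPU ids */
		scf_cleanup_free_list(i);	/* per_cpu() lookup goes out of bounds */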
* Re: [PATCH v2 3/3] scftorture: Use a lock-less list to free memory.
2024-11-07 20:45 ` Boqun Feng
@ 2024-11-07 21:53 ` Paul E. McKenney
2024-11-08 10:32 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2024-11-07 21:53 UTC (permalink / raw)
To: Boqun Feng
Cc: Sebastian Andrzej Siewior, kasan-dev, linux-kernel, linux-mm,
Marco Elver, Peter Zijlstra, Thomas Gleixner, Vlastimil Babka,
akpm, cl, iamjoonsoo.kim, longman, penberg, rientjes, sfr
On Thu, Nov 07, 2024 at 12:45:25PM -0800, Boqun Feng wrote:
> On Thu, Nov 07, 2024 at 12:13:08PM +0100, Sebastian Andrzej Siewior wrote:
> [ ... ]
> > @@ -538,6 +567,8 @@ static void scf_torture_cleanup(void)
> >
> > end:
> > torture_cleanup_end();
> > + for (i = 0; i < nthreads; i++)
>
> This needs to be:
>
> for (i = 0; i < nr_cpu_ids; i++)
>
> because nthreads can be larger than nr_cpu_ids, and the loop would then
> access an out-of-bounds per-CPU section.
I clearly did not test thoroughly enough. Good catch!!!
Thanx, Paul
> Regards,
> Boqun
* Re: [PATCH v2 3/3] scftorture: Use a lock-less list to free memory.
2024-11-07 20:45 ` Boqun Feng
2024-11-07 21:53 ` Paul E. McKenney
@ 2024-11-08 10:32 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2024-11-08 10:32 UTC (permalink / raw)
To: Boqun Feng
Cc: kasan-dev, linux-kernel, linux-mm, Paul E. McKenney, Marco Elver,
Peter Zijlstra, Thomas Gleixner, Vlastimil Babka, akpm, cl,
iamjoonsoo.kim, longman, penberg, rientjes, sfr
On 2024-11-07 12:45:25 [-0800], Boqun Feng wrote:
> > @@ -538,6 +567,8 @@ static void scf_torture_cleanup(void)
> >
> > end:
> > torture_cleanup_end();
> > + for (i = 0; i < nthreads; i++)
>
> This needs to be:
>
> for (i = 0; i < nr_cpu_ids; i++)
>
> because nthreads can be larger than nr_cpu_ids, and the loop would then
> access an out-of-bounds per-CPU section.
And I thought I had learned my lesson last time.
Thank you.
> Regards,
> Boqun
Sebastian