From: Hillf Danton <hdanton@sina.com>
To: Valentin Schneider <vschneid@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH v4 4/4] workqueue: Unbind workers before sending them to exit()
Date: Wed,  5 Oct 2022 22:50:22 +0800
Message-ID: <20221005145022.1695-1-hdanton@sina.com>
In-Reply-To: <xhsmhczb634mq.mognet@vschneid.remote.csb>

On 05 Oct 2022 12:13:17 +0100 Valentin Schneider <vschneid@redhat.com>
>On 05/10/22 09:08, Hillf Danton wrote:
>> On 4 Oct 2022 16:05:21 +0100 Valentin Schneider <vschneid@redhat.com>
>>> It has been reported that isolated CPUs can suffer from interference due to
>>> per-CPU kworkers waking up just to die.
>>>
>>> A surge of workqueue activity during initial setup of a latency-sensitive
>>> application (refresh_vm_stats() being one of the culprits) can cause extra
>>> per-CPU kworkers to be spawned. Then, said latency-sensitive task can be
>>> running merrily on an isolated CPU only to be interrupted sometime later by
>>> a kworker marked for death (cf. IDLE_WORKER_TIMEOUT, 5 minutes after last
>>> kworker activity).
>>>
>> Is the tick stopped on the isolated CPU? If the tick can still hit it,
>> then it can tolerate more than just an exiting kworker.
>
>From what I've seen in the scenarios where that happens, yes. The
>pool->idle_timer gets queued from an isolated CPU and ends up on a
>housekeeping CPU (cf. get_target_base()).

Yes, you are right.
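
If I read the timer code correctly, that is how the idle timer escapes the
isolated CPU: a non-pinned timer can be enqueued on the base picked by
get_nohz_timer_target(), which prefers a housekeeping CPU. Simplified
sketch, paraphrased from kernel/time/timer.c (details vary by kernel
version):

static inline struct timer_base *
get_target_base(struct timer_base *base, unsigned tflags)
{
#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
	/*
	 * Non-pinned timers may be queued on the CPU chosen by
	 * get_nohz_timer_target() (a housekeeping CPU) rather than
	 * on the CPU that armed them.
	 */
	if (static_branch_likely(&timers_migration_enabled) &&
	    !(tflags & TIMER_PINNED))
		return get_timer_cpu_base(tflags, get_nohz_timer_target());
#endif
	return get_timer_this_cpu_base(tflags);
}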

>With nohz_full on the cmdline, wq_unbound_cpumask already excludes isolated
>CPU, but that doesn't apply to per-CPU kworkers. Or did you mean some other
>mechanism?

Bound kworkers can thus be destroyed by the idle timer callback running on a
housekeeping CPU.
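
For context on the diff below: destroy_worker() is reached from the pool's
idle timer callback with pool->lock held, on whichever CPU ends up servicing
that timer. Roughly (a paraphrase of idle_worker_timeout(), not the exact
source):

static void idle_worker_timeout(struct timer_list *t)
{
	struct worker_pool *pool = from_timer(pool, t, idle_timer);

	raw_spin_lock_irq(&pool->lock);

	while (too_many_workers(pool)) {
		struct worker *worker;
		unsigned long expires;

		/* the least recently active worker sits at the tail of idle_list */
		worker = list_entry(pool->idle_list.prev, struct worker, entry);
		expires = worker->last_active + IDLE_WORKER_TIMEOUT;

		if (time_before(jiffies, expires)) {
			/* not idle long enough yet, re-arm and stop */
			mod_timer(&pool->idle_timer, expires);
			break;
		}

		/* runs on the CPU servicing the timer, not necessarily pool->cpu */
		destroy_worker(worker);
	}

	raw_spin_unlock_irq(&pool->lock);
}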

The diff below is only for thoughts.

+++ b/kernel/workqueue.c
@@ -1985,6 +1985,7 @@ fail:
 static void destroy_worker(struct worker *worker)
 {
 	struct worker_pool *pool = worker->pool;
+	int cpu = smp_processor_id();
 
 	lockdep_assert_held(&pool->lock);
 
@@ -1999,6 +2000,12 @@ static void destroy_worker(struct worker
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
+
+	if (!(pool->flags & POOL_DISASSOCIATED) && pool->cpu != cpu) {
+		/* send worker to die on a housekeeping cpu */
+		cpumask_clear(&worker->task->cpus_mask);
+		cpumask_set_cpu(cpu, &worker->task->cpus_mask);
+	}
 	wake_up_process(worker->task);
 }
 

