From: Peter Oskolkov <posk@google.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Peter Oskolkov <posk@posk.io>, Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	 juri.lelli@redhat.com,
	Vincent Guittot <vincent.guittot@linaro.org>,
	 dietmar.eggemann@arm.com, Steven Rostedt <rostedt@goodmis.org>,
	 Ben Segall <bsegall@google.com>,
	mgorman@suse.de, bristot@redhat.com,
	 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	 Linux Memory Management List <linux-mm@kvack.org>,
	linux-api@vger.kernel.org, x86@kernel.org,
	 Paul Turner <pjt@google.com>, Andrei Vagin <avagin@google.com>,
	Jann Horn <jannh@google.com>,
	 Thierry Delisle <tdelisle@uwaterloo.ca>
Subject: Re: [RFC][PATCH 0/3] sched: User Managed Concurrency Groups
Date: Wed, 15 Dec 2021 11:49:51 -0800
Message-ID: <CAPNVh5cJy2y+sTx0cPA1BPSAg=GjXC8XGT7fLzHwzvXH2=xjmw@mail.gmail.com>
In-Reply-To: <YboxjUM+D9Kg52mO@hirez.programming.kicks-ass.net>

On Wed, Dec 15, 2021 at 10:19 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Dec 15, 2021 at 09:56:06AM -0800, Peter Oskolkov wrote:
>
> > > Right, so the problem I'm having is that a single idle server ptr like
> > > before can trivially miss waking another idle server.
> >
> > I believe the approach I used in my patchset, suggested by Thierry
> > Delisle, works.
> >
> > In short, there is a single idle server ptr for the kernel to work
> > with. The userspace maintains a list of idle servers. If the ptr is
> > NULL, the list is empty. When the kernel wakes the idle server it
> > sees, the server reaps the runnable worker list and wakes another idle
> > server from the userspace list, if available. This newly woken idle
> > server repoints the ptr to itself, checks the runnable worker list, to
> > avoid missing a woken worker, then goes to sleep.
> >
> > Why do you think this approach is not OK?
>
> Suppose at least 4 servers, 2 idle, 2 working.
>
> Now, the first of the working servers (let's call it S0) gets a wakeup
> (say Ta), it finds the idle server (say S3) and consumes it, sticking Ta
> on S3 and kicking it alive.

TL;DR: our models are different here. In your model a single server
can have a bunch of workers interacting with it; in my model a server
has at most a single RUNNING worker assigned to it, and that worker
wakes the server when the worker blocks.

More details:

"Working servers" cannot get wakeups, because a "working server" has a
single RUNNING worker attached to it. When a worker blocks, it wakes
its attached server and becomes a detached blocked worker (same is
true if the worker is "preempted": it blocks and wakes its assigned
server).
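
Roughly, in C (a hypothetical sketch; worker_blocks() and
wake_server() are illustrative stand-ins, not the actual UMCG API):

struct server;
struct worker {
        struct server *server;          /* at most one attached server */
};

extern void wake_server(struct server *s);      /* futex-style wake */

/* Runs when a RUNNING worker blocks (or is "preempted"). */
void worker_blocks(struct worker *w)
{
        struct server *s = w->server;

        w->server = NULL;       /* detach: w is now a blocked worker */
        wake_server(s);         /* the server goes on to run another worker */
        /* ... w now sleeps until its wakeup arrives ... */
}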

Blocked workers upon wakeup do this, in order (a C sketch follows the
list):

- always add themselves to the runnable worker list (the list is
shared among ALL servers; it is NOT per server);
- wake the server pointed to by idle_server_ptr, if not NULL;
- sleep, waiting for a wakeup from a server.
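
In a hypothetical C sketch (illustrative names again; the xchg on
idle_server_ptr matches the xchg described for the server side below):

#include <stdatomic.h>
#include <stddef.h>

struct server;
struct worker;

extern _Atomic(struct server *) idle_server_ptr;        /* single, shared */

extern void runnable_list_add(struct worker *w);        /* shared among ALL servers */
extern void wake_server(struct server *s);
extern void worker_sleep(struct worker *w);

/* Runs when a blocked worker's wakeup arrives. */
void worker_wakes(struct worker *w)
{
        /* Always publish on the shared runnable worker list first. */
        runnable_list_add(w);

        /* Claim and wake the advertised idle server, if any. */
        struct server *s = atomic_exchange(&idle_server_ptr, NULL);
        if (s != NULL)
                wake_server(s);

        /* Wait for a server to pick this worker up to RUN. */
        worker_sleep(w);
}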

Server S, upon becoming IDLE (no worker to run, or woken while on the
idle server list), does this, in order, in userspace (simplified; see
umcg_get_idle_worker() in
https://lore.kernel.org/lkml/20211122211327.5931-5-posk@google.com/;
a rough C sketch follows the list):
- take a userspace (spin) lock, so that the steps below all happen
within a single critical section;
- compare_xchg(idle_server_ptr, NULL, S);
  - if it failed, another server is already in idle_server_ptr, so S
adds itself to the userspace idle server list, releases the lock, and
goes to sleep;
  - if it succeeded:
    - check the runnable worker list;
      - if empty, release the lock and sleep;
      - if not empty:
        - get the list;
        - xchg(idle_server_ptr, NULL) (either S removes itself, or a
worker in the kernel does it first; it does not matter which);
        - release the lock;
        - wake server S1 on the idle server list; S1 goes through all
of these steps.
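
Here is the rough C sketch of that path (hypothetical helpers; the
lock, list, and sleep primitives are stand-ins for the real ones):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct server;

extern _Atomic(struct server *) idle_server_ptr;

extern void lock(void);                         /* the userspace (spin) lock */
extern void unlock(void);
extern void idle_list_add(struct server *s);    /* userspace idle server list */
extern struct server *idle_list_pop(void);
extern bool runnable_list_empty(void);
extern void reap_runnable_list(struct server *s);
extern void wake_server(struct server *s);
extern void server_sleep(struct server *s);

/* Runs when server S has no worker to run, or was woken while on the
 * idle server list. */
void server_idle(struct server *S)
{
        lock();         /* everything below is one critical section */

        struct server *expected = NULL;
        if (!atomic_compare_exchange_strong(&idle_server_ptr, &expected, S)) {
                /* Another server already advertises itself. */
                idle_list_add(S);
                unlock();
                server_sleep(S);
                return;
        }

        if (runnable_list_empty()) {
                unlock();
                server_sleep(S);        /* a waking worker finds S via the ptr */
                return;
        }

        /* Workers are runnable: take the list, clear the ptr (S or a
         * worker in the kernel does it first; it does not matter
         * which), and hand off to the next idle server, if any. */
        reap_runnable_list(S);
        atomic_exchange(&idle_server_ptr, NULL);
        unlock();

        struct server *S1 = idle_list_pop();
        if (S1 != NULL)
                wake_server(S1);        /* S1 repeats all of these steps */
}

The re-check of the runnable worker list after a successful
compare_xchg is what prevents missing a worker that woke up before S
advertised itself in idle_server_ptr.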

The protocol above serializes the userspace's handling of the idle
server ptr/list. Wakeups in the kernel will be caught whenever there
are idle servers. Yes, the userspace protocol is complicated (more
complicated than outlined above, as the idle/runnable worker list
reaped from the kernel is added to the userspace idle/runnable worker
list), but the kernel side is very simple. I've tested this
interaction extensively, and I'm reasonably sure that no worker
wakeups are lost.

>
> Concurrently, and losing the race, the other working server (S1) gets
> a wake-up from Tb; as said, it lost: no idle server, so Tb goes on
> the queue of S1.
>
> So then S3 wakes, finds Ta and they live happily ever after.
>
> While S2 and Tb fail to meet one another and both linger in sadness.
>

