From: Peter Oskolkov
Date: Wed, 15 Dec 2021 15:31:05 -0800
Subject: Re: [RFC][PATCH 0/3] sched: User Managed Concurrency Groups
In-Reply-To: <20211215231610.GI16608@worktop.programming.kicks-ass.net>
References: <20211214204445.665580974@infradead.org> <20211215231610.GI16608@worktop.programming.kicks-ass.net>
To: Peter Zijlstra
Cc: Peter Oskolkov, Ingo Molnar, Thomas Gleixner, juri.lelli@redhat.com, Vincent Guittot, dietmar.eggemann@arm.com, Steven Rostedt, Ben Segall, mgorman@suse.de, bristot@redhat.com, Linux Kernel Mailing List, Linux Memory Management List, linux-api@vger.kernel.org,
  x86@kernel.org, Paul Turner, Andrei Vagin, Jann Horn, Thierry Delisle

On Wed, Dec 15, 2021 at 3:16 PM Peter Zijlstra wrote:
>
> On Wed, Dec 15, 2021 at 01:04:33PM -0800, Peter Oskolkov wrote:
> > On Wed, Dec 15, 2021 at 10:25 AM Peter Zijlstra wrote:
> > >
> > > On Wed, Dec 15, 2021 at 09:56:06AM -0800, Peter Oskolkov wrote:
> > > > On Wed, Dec 15, 2021 at 2:06 AM Peter Zijlstra wrote:
> > > > > /*
> > > > > + * Enqueue tsk to its server's runnable list and wake the server for pickup if
> > > > > + * so desired. Notably, LAZY workers will not wake the server and rely on the
> > > > > + * server to do pickup whenever it naturally runs next.
> > > >
> > > > No, I never suggested we needed per-server runnable queues: in all my
> > > > patchsets I had a single list of idle (runnable) workers.
> > >
> > > This is not about the idle servers..
> > >
> > > So without the LAZY thing on, a previously blocked task hitting sys_exit
> > > will enqueue itself on the runnable list and wake the server for pickup.
> >
> > How can a blocked task hit sys_exit()? Shouldn't it be RUNNING?
>
> The task was RUNNING, hits schedule() after passing through sys_enter();
> this marks it BLOCKED. The task wakes again and proceeds to sys_exit(),
> at which point it is marked RUNNABLE and put on the runnable list, after
> which it'll kick the server to process said list.

Ah, you are talking about the sys_exit hook; sorry, I thought you meant
the exit() syscall.

[...]

> Well, that's *your* use-case. I'm fairly sure there are more people who
> want to use this thing.
>
> > multiple
> > priorities and work isolation: these are easy to address directly with
> > a scheduler that has a global view rather than multiple
> > per-cpu/per-server schedulers/queues that try to coordinate.
>
> You can trivially create this, even if the underlying thing is
> per-server. Simply have a lock and shared data structure between the
> servers.
>
> Even in the kernel, it should be mostly trivial to create a global
> policy. The only tricky bit (in the kernel) is the whole affinity muck,
> but userspace doesn't *need* to do even that.
>
> > > LAZY enables that.. *however* it does need to wake the server when it is
> > > idle, otherwise they'll all sit there waiting for one another.
> >
> > If all servers are busy running workers, then it is not up to the
> > kernel to "preempt" them in my model: userspace can set up another
> > thread/task to preempt a misbehaving worker, which will wake the
> > server attached to it.
>
> So the way I'm seeing things is that the server *is* the 'CPU'. A UP
> machine cannot rely on another CPU to make preemption happen.
>
> Also, preemption is very much not about misbehaviour. A wakeup can cause
> a preemption event if the woken task is deemed higher priority than the
> currently running one, for example.
>
> And time-based preemption is definitely also a thing wrt resource
> distribution.
>
> > But in practice there are always workers
> > blocking in the kernel, which wakes their servers, which then reap the
> > woken/runnable workers list, so well-behaving code does not need this.
>
> This seems to discount pure computational workloads.
>
> > And so we need to figure out this high-level thing first: do we go
> > with the per-server worker queues/lists, or do we go with the approach
> > I use in my patchset? It seems to me that the kernel-side code in my
> > patchset is not more complicated than your patchset is shaping up to
> > be, and some things are actually easier to accomplish, like having a
> > single idle_server_ptr vs this LAZY and/or server "preemption"
> > behavior that you have.
> >
> > Again, I'm OK with having it your way if all needed features are
> > covered, but I think we should be explicit about why the
> > per-server/per-cpu model is chosen vs the one I proposed, especially
> > as it seems the kernel-side code is not really simpler in the end.
>
> So I went with a UP-first approach. I made single-server,
> preemption-driven scheduling work first (both tick and wakeup
> preemption are supported).

I agree that the UP approach is better than the LAZY one if we have
per-server/per-cpu worker queues.

> The whole LAZY thing is only meant to suppress some of that (notably
> wakeup preemption), but you're right in that it's not really nice. I got
> it working, but I'm not particularly happy with it either.
>
> Having the sys_enter/sys_exit hooks also made the page pins short-lived,
> and signals much simpler to handle.

You're destroying signals IIUC.

> So I see no fundamental reason why userspace cannot do something like:
>
>     struct umcg_task *current = NULL;
>
>     for (;;) {
>             self->state = UMCG_TASK_RUNNABLE | UMCG_TF_COND_WAIT;
>
>             runnable_ptr = (void *)__atomic_exchange_n(&self->runnable_workers_ptr,
>                                                        NULL, __ATOMIC_SEQ_CST);
>
>             pthread_mutex_lock(&global_queue.lock);
>             while (runnable_ptr) {
>                     next = (void *)runnable_ptr->runnable_workers_ptr;
>                     enqueue_task(&global_queue, runnable_ptr);
>                     runnable_ptr = next;
>             }
>
>             /* complicated bit about current already running goes here */
>
>             current = pick_task(&global_queue);
>             self->next_tid = current ? current->tid : 0;
>     unlock:
>             pthread_mutex_unlock(&global_queue.lock);
>
>             ret = sys_umcg_wait(0, 0);
>
>             pthread_mutex_lock(&global_queue.lock);
>             /* umcg_wait() didn't switch, make sure to return the task */
>             if (self->next_tid) {
>                     enqueue_task(&global_queue, current);
>                     current = NULL;
>             }
>             pthread_mutex_unlock(&global_queue.lock);
>
>             // do something with @ret
>     }
>
> to get global scheduling and all the contention^Wgoodness related to it.
> Except, of course, it's more complicated, but I think the idea's clear
> enough.

Let me spend some time and see if I can make all of this work together
beyond simple tests. With the upcoming holidays and some other things I
am busy with, this may take more than a week, I'm afraid...
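
P.S. To make sure I'm reading your sketch the way you intend it, here is
a rough, userspace-only fleshing-out of the pieces it leaves undefined:
the global_queue itself and the enqueue_task()/pick_task() helpers.
Everything below is a placeholder of my own (the struct worker wrapper,
the plain FIFO policy, the field names), not something from either
patchset:

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

struct umcg_task;                       /* the shared kernel/user struct */

/* Userspace-only bookkeeping; layout is a placeholder. */
struct worker {
        struct umcg_task *umcg;         /* the worker's shared struct */
        uint32_t tid;                   /* cached tid, for self->next_tid */
        struct worker *next;            /* FIFO link, under global_queue.lock */
};

struct global_queue {
        pthread_mutex_t lock;
        struct worker *head, *tail;
};

static struct global_queue global_queue = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
};

/* Append a runnable worker; caller must hold global_queue.lock. */
static void enqueue_task(struct global_queue *q, struct worker *w)
{
        w->next = NULL;
        if (q->tail)
                q->tail->next = w;
        else
                q->head = w;
        q->tail = w;
}

/* Pop the oldest runnable worker, or NULL; caller must hold global_queue.lock. */
static struct worker *pick_task(struct global_queue *q)
{
        struct worker *w = q->head;

        if (w) {
                q->head = w->next;
                if (!q->head)
                        q->tail = NULL;
                w->next = NULL;
        }
        return w;
}

A real scheduler would replace the FIFO with priorities and work
isolation, and the loop would have to map the pointers coming back via
runnable_workers_ptr onto these wrappers (e.g. by embedding struct
umcg_task in struct worker and using container_of()), but it should be
enough to check that the per-server wait/wake primitives compose into a
global policy.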