Date: Wed, 18 Sep 2024 11:37:42 +0200
From: Frederic Weisbecker <frederic@kernel.org>
To: Michal Hocko
Cc: LKML, Andrew Morton, Kees Cook, Peter Zijlstra, Thomas Gleixner, Vlastimil Babka, linux-mm@kvack.org, "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes, Boqun Feng, Zqiang, rcu@vger.kernel.org
Subject: Re: [PATCH 12/19] kthread: Default affine kthread to its preferred NUMA node
References: <20240916224925.20540-1-frederic@kernel.org> <20240916224925.20540-13-frederic@kernel.org>
On Tue, Sep 17, 2024 at 01:07:25PM +0200, Michal Hocko wrote:
> On Tue 17-09-24 12:34:51, Frederic Weisbecker wrote:
> > On Tue, Sep 17, 2024 at 08:26:49AM +0200, Michal Hocko wrote:
> > > On Tue 17-09-24 00:49:16, Frederic Weisbecker wrote:
> > > > Kthreads attached to a preferred NUMA node for their task structure
> > > > allocation can also be assumed to run preferably within that same node.
> > > >
> > > > A more precise affinity is usually notified by calling
> > > > kthread_create_on_cpu() or kthread_bind[_mask]() before the first wakeup.
> > > >
> > > > For the others, a default affinity to the node is desired and sometimes
> > > > implemented with more or less success when it comes to dealing with
> > > > hotplug events and nohz_full / CPU isolation interactions:
> > > >
> > > > - kcompactd is affine to its node and handles hotplug but not CPU isolation
> > > > - kswapd is affine to its node and ignores hotplug and CPU isolation
> > > > - A bunch of drivers create their kthreads on a specific node and
> > > >   don't take care of affining further.
> > > >
> > > > Handle that default node affinity preference at the generic level
> > > > instead, provided a kthread is created on an actual node and doesn't
> > > > apply any specific affinity such as a given CPU or a custom cpumask to
> > > > bind to before its first wake-up.
> > >
> > > Makes sense.
> > >
> > > > This generic handling is aware of CPU hotplug events and CPU isolation
> > > > such that:
> > > >
> > > > * When a housekeeping CPU goes up and is part of the node of a given
> > > >   kthread, it is added to the kthread's applied affinity set (and
> > > >   the default last-resort online housekeeping set is possibly removed
> > > >   from that set).
> > > >
> > > > * When a housekeeping CPU goes down while it was part of the node of a
> > > >   kthread, it is removed from the kthread's applied affinity. The last
> > > >   resort is to affine the kthread to all online housekeeping CPUs.
> > >
> > > But I am not really sure about this part. Sure, it makes sense to set the
> > > affinity to exclude isolated CPUs, but why do we care about hotplug
> > > events at all? Let's say we offline all cpus from a given node (or
> > > all but isolated cpus are offline - is this even a realistic/reasonable
> > > usecase?). Wouldn't the scheduler ignore the kthread's affinity in such
> > > a case? In other words, how is that different from tasksetting a
> > > userspace task to a cpu that goes offline? We still do allow such a
> > > task to run, right? We just do not care about affinity anymore.
> >
> > Suppose we have this artificial online set:
> >
> > NODE 0 -> CPU 0
> > NODE 1 -> CPU 1
> > NODE 2 -> CPU 2
> >
> > And we have nohz_full=1,2
> >
> > So there is kswapd/2 that is affine to NODE 2 and thus CPU 2 for now.
> >
> > Now CPU 2 goes offline. The scheduler migrates off all tasks.
> > select_fallback_rq() for kswapd/2 doesn't find a suitable CPU to run
> > on, so it affines kswapd/2 to all remaining online CPUs (CPU 0, CPU 1)
> > (see the "No more Mr. Nice Guy" comment).
> >
> > But CPU 1 is nohz_full, so kswapd/2 could run on that isolated CPU.
> > Unless we handle things before, like this patchset does.
>
> But that is equally broken as before, no? CPU2 is isolated as well, so it
> doesn't really make much of a difference.

Right. I should correct my example with nohz_full=1 only.

> > And note that adding isolcpus=domain,1,2 or setting 1,2 as an isolated
> > cpuset partition (like most isolated workloads should do) is not helping
> > here. And I'm not sure this last-resort scheduler code is the right place
> > to handle isolated cpumasks.
>
> Well, we would have the same situation with userspace tasks, no? Say I
> have taskset -p 2 (because I want binding to node2) and that CPU2 goes
> offline. The task needs to be moved somewhere.
> And it would be last-resort logic to do that, unless I am missing
> anything. Why should kernel threads be any different?

Good point.

> > So it looks necessary, unless I am missing something else?
>
> I am not objecting to the patch per se. I am just not sure this is really
> needed. It is great to have kernel threads bound to non-isolated cpus by
> default if they have node preferences. But as soon as somebody starts
> offlining cpus excessively and makes the initial cpumask empty, then
> select_fallback_rq sounds like the right thing to do.
>
> Not my call though. I was just curious why this is needed, and it seems
> to me you are looking for some sort of correctness for broken setups.

It looks like it makes sense to explore that path. We still need the
cpu-up probe to reaffine when a suitable target comes up. But it seems
the CPU-down part can be handled by select_fallback_rq.

I'll try that. Thanks.

> --
> Michal Hocko
> SUSE Labs