Date: Wed, 30 Nov 2022 13:44:02 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Tejun Heo
Cc: rcu@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@meta.com,
	rostedt@goodmis.org, "Joel Fernandes (Google)", Dennis Zhou,
	Christoph Lameter, linux-mm@kvack.org
Subject: Re: [PATCH rcu 12/16] percpu-refcount: Use call_rcu_hurry() for atomic switch
Message-ID: <20221130214402.GV4001@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20221130181316.GA1012431@paulmck-ThinkPad-P17-Gen-1>
 <20221130181325.1012760-12-paulmck@kernel.org>

On Wed, Nov 30, 2022 at 09:43:44AM -1000, Tejun Heo wrote:
> On Wed, Nov 30, 2022 at 10:13:21AM -0800, Paul E. McKenney wrote:
> > From: "Joel Fernandes (Google)"
> >
> > Earlier commits in this series allow battery-powered systems to build
> > their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig
> > option.  This option causes call_rcu() to delay its callbacks in
> > order to batch them.  A given RCU grace period then covers more
> > callbacks, reducing the number of grace periods and in turn the
> > energy consumed, which increases battery lifetime, which can be a
> > very good thing.  This is not a subtle effect: in some important
> > use cases, battery lifetime is increased by more than 10%.
> >
> > This CONFIG_RCU_LAZY=y option is available only for CPUs that offload
> > callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot
> > parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y.
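
[ For readers new to the lazy-RCU series: a minimal sketch of the API
  distinction described above.  The struct foo, foo_free_rcu(),
  foo_release(), and foo_release_and_notify() names are hypothetical,
  made up for illustration; only call_rcu(), call_rcu_hurry(), and the
  usual rcu_head plumbing are actual kernel API. ]

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	/* Hypothetical structure, freed after an RCU grace period. */
	struct foo {
		struct rcu_head rcu;
		/* ... payload ... */
	};

	static void foo_free_rcu(struct rcu_head *head)
	{
		kfree(container_of(head, struct foo, rcu));
	}

	static void foo_release(struct foo *fp)
	{
		/*
		 * Memory-only callback: with CONFIG_RCU_LAZY=y this may
		 * be batched and deferred for seconds to save power.
		 */
		call_rcu(&fp->rcu, foo_free_rcu);
	}

	static void foo_release_and_notify(struct foo *fp)
	{
		/*
		 * Someone may be waiting on this callback, so bypass
		 * laziness.  Its arrival also kicks any lazy callbacks
		 * already queued on this CPU into the same grace period.
		 */
		call_rcu_hurry(&fp->rcu, foo_free_rcu);
	}

[ Note that the lazy-versus-hurry choice is made per call site, not
  per callback function. ]
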
> >
> > Delaying callbacks is normally not a problem because most callbacks
> > do nothing but free memory.  If the system is short on memory, a
> > shrinker will kick all currently queued lazy callbacks out of their
> > laziness, thus freeing their memory in short order.  Similarly, the
> > rcu_barrier() function, which blocks until all currently queued
> > callbacks are invoked, will also kick lazy callbacks, thus enabling
> > rcu_barrier() to complete in a timely manner.
> >
> > However, there are some cases where laziness is not a good option.
> > For example, synchronize_rcu() invokes call_rcu(), and blocks until
> > the newly queued callback is invoked.  It would not be good for
> > synchronize_rcu() to block for ten seconds, even on an idle system.
> > Therefore, synchronize_rcu() invokes call_rcu_hurry() instead of
> > call_rcu().  The arrival of a non-lazy call_rcu_hurry() callback
> > on a given CPU kicks any lazy callbacks already queued on that CPU.
> > After all, if there is going to be a grace period, all callbacks
> > might as well get the full benefit of it.
> >
> > Yes, this could be done the other way around by creating a
> > call_rcu_lazy(), but earlier experience with that approach and
> > feedback at the 2022 Linux Plumbers Conference shifted the design
> > to call_rcu() being lazy, with call_rcu_hurry() for the few places
> > where laziness is inappropriate.
> >
> > Another call_rcu() instance that cannot be lazy is the one on the
> > percpu refcounter's "per-CPU to atomic switch" code path, which
> > uses RCU when switching to atomic mode.  The enqueued callback
> > wakes up waiters on the percpu_ref_switch_waitq.  Allowing this
> > callback to be lazy would result in unacceptable slowdowns for
> > users of per-CPU refcounts, such as blk_pre_runtime_suspend().
> >
> > Therefore, make __percpu_ref_switch_to_atomic() use call_rcu_hurry()
> > in order to revert to the old behavior.
> >
> > [ paulmck: Apply s/call_rcu_flush/call_rcu_hurry/ feedback from Tejun Heo. ]
> >
> > Signed-off-by: Joel Fernandes (Google)
> > Signed-off-by: Paul E. McKenney
> > Cc: Dennis Zhou
> > Cc: Tejun Heo
> > Cc: Christoph Lameter
> > Cc:
> 
> Acked-by: Tejun Heo

I applied both, thank you very much!

							Thanx, Paul
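
P.S.  The change itself is a one-liner in __percpu_ref_switch_to_atomic().
The hunk below is reconstructed from the commit message rather than
copied from the applied patch, so take it as a sketch; the surrounding
line and the callback name are as found in lib/percpu-refcount.c:

 	percpu_ref_get(ref);	/* put after confirmation */
-	call_rcu(&ref->data->rcu, percpu_ref_switch_to_atomic_rcu);
+	call_rcu_hurry(&ref->data->rcu, percpu_ref_switch_to_atomic_rcu);

Once the grace period ends, percpu_ref_switch_to_atomic_rcu() folds the
per-CPU counts into the atomic count and wakes anyone sleeping on
percpu_ref_switch_waitq, which is exactly the wakeup that must not be
delayed by laziness.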