Date: Tue, 21 Nov 2023 11:25:18 -0800
From: "Paul E. McKenney"
To: Ankur Arora
Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de, peterz@infradead.org,
        torvalds@linux-foundation.org, linux-mm@kvack.org, x86@kernel.org,
        akpm@linux-foundation.org, luto@kernel.org, bp@alien8.de,
        dave.hansen@linux.intel.com, hpa@zytor.com, mingo@redhat.com,
        juri.lelli@redhat.com, vincent.guittot@linaro.org, willy@infradead.org,
        mgorman@suse.de, jon.grimm@amd.com, bharata@amd.com,
        raghavendra.kt@amd.com, boris.ostrovsky@oracle.com,
        konrad.wilk@oracle.com, jgross@suse.com, andrew.cooper3@citrix.com,
        mingo@kernel.org, bristot@kernel.org, mathieu.desnoyers@efficios.com,
        geert@linux-m68k.org, glaubitz@physik.fu-berlin.de,
        anton.ivanov@cambridgegreys.com, mattst88@gmail.com,
        krypton@ulrich-teichert.org, rostedt@goodmis.org,
        David.Laight@aculab.com, richard@nod.at, mjguzik@gmail.com
Subject: Re: [RFC PATCH 48/86] rcu: handle quiescent states for PREEMPT_RCU=n
Reply-To: paulmck@kernel.org
References: <20231107215742.363031-1-ankur.a.arora@oracle.com>
 <20231107215742.363031-49-ankur.a.arora@oracle.com>
 <2027da00-273d-41cf-b9e7-460776181083@paulmck-laptop>
 <87lear4wj6.fsf@oracle.com>
 <46a4c47a-ba1c-4776-a6f8-6c2146cbdd0d@paulmck-laptop>
 <31d50051-e42c-4ef2-a1ac-e45370c3752e@paulmck-laptop>
In-Reply-To: <31d50051-e42c-4ef2-a1ac-e45370c3752e@paulmck-laptop>
On Mon, Nov 20, 2023 at 09:34:05PM -0800, Paul E. McKenney wrote:
> On Mon, Nov 20, 2023 at 09:17:57PM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 20, 2023 at 07:26:05PM -0800, Ankur Arora wrote:
> > > 
> > > Paul E. McKenney writes:
> > > > On Tue, Nov 07, 2023 at 01:57:34PM -0800, Ankur Arora wrote:
> > > >> cond_resched() is used to provide urgent quiescent states for
> > > >> read-side critical sections on PREEMPT_RCU=n configurations.
> > > >> This was necessary because, lacking a preempt_count, there was
> > > >> no way for the tick handler to know whether we were executing
> > > >> in an RCU read-side critical section.
> > > >>
> > > >> An always-on CONFIG_PREEMPT_COUNT, however, allows the tick to
> > > >> reliably report quiescent states.
> > > >>
> > > >> Accordingly, evaluate preempt_count()-based quiescence in
> > > >> rcu_flavor_sched_clock_irq().
> > > >>
> > > >> Suggested-by: Paul E. McKenney
> > > >> Signed-off-by: Ankur Arora
> > > >> ---
> > > >>  kernel/rcu/tree_plugin.h |  3 ++-
> > > >>  kernel/sched/core.c      | 15 +--------------
> > > >>  2 files changed, 3 insertions(+), 15 deletions(-)
> > > >>
> > > >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > > >> index f87191e008ff..618f055f8028 100644
> > > >> --- a/kernel/rcu/tree_plugin.h
> > > >> +++ b/kernel/rcu/tree_plugin.h
> > > >> @@ -963,7 +963,8 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
> > > >>   */
> > > >>  static void rcu_flavor_sched_clock_irq(int user)
> > > >>  {
> > > >> -	if (user || rcu_is_cpu_rrupt_from_idle()) {
> > > >> +	if (user || rcu_is_cpu_rrupt_from_idle() ||
> > > >> +	    !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) {
> > > >
> > > > This looks good.
> > > >
> > > >>  		/*
> > > >>  		 * Get here if this CPU took its interrupt from user
> > > >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > >> index bf5df2b866df..15db5fb7acc7 100644
> > > >> --- a/kernel/sched/core.c
> > > >> +++ b/kernel/sched/core.c
> > > >> @@ -8588,20 +8588,7 @@ int __sched _cond_resched(void)
> > > >>  		preempt_schedule_common();
> > > >>  		return 1;
> > > >>  	}
> > > >> -	/*
> > > >> -	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
> > > >> -	 * whether the current CPU is in an RCU read-side critical section,
> > > >> -	 * so the tick can report quiescent states even for CPUs looping
> > > >> -	 * in kernel context.  In contrast, in non-preemptible kernels,
> > > >> -	 * RCU readers leave no in-memory hints, which means that CPU-bound
> > > >> -	 * processes executing in kernel context might never report an
> > > >> -	 * RCU quiescent state.  Therefore, the following code causes
> > > >> -	 * cond_resched() to report a quiescent state, but only when RCU
> > > >> -	 * is in urgent need of one.
> > > >> -	 */
> > > >> -#ifndef CONFIG_PREEMPT_RCU
> > > >> -	rcu_all_qs();
> > > >> -#endif
> > > >
> > > > But...
> > > >
> > > > Suppose we have a long-running loop in the kernel that regularly
> > > > enables preemption, but only momentarily.  Then the added
> > > > rcu_flavor_sched_clock_irq() check would almost always fail, making
> > > > for extremely long grace periods.
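[ Editorial sketch: the stand-alone user-space program below mimics the
  scenario just described -- it is NOT kernel code, and all names in it
  are invented for the demo.  A worker holds a simulated preempt_count
  nonzero except for one brief window per iteration, while a 1 kHz
  "tick" thread samples it the way rcu_flavor_sched_clock_irq() would;
  on a typical run the tick almost never catches the window, which is
  why grace periods could stretch out. ]

	/*
	 * Build with: cc -O2 -pthread demo.c
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <unistd.h>

	static atomic_int preempt_count = 1;	/* loop runs "preemption off" */
	static atomic_int done;

	static void *tick_thread(void *unused)
	{
		long quiescent = 0, ticks = 0;

		(void)unused;
		while (!atomic_load(&done)) {
			usleep(1000);		/* one "scheduling-clock tick" */
			ticks++;
			if (atomic_load(&preempt_count) == 0)
				quiescent++;	/* tick caught the brief window */
		}
		/* Expect quiescent to be zero or nearly so. */
		printf("%ld of %ld ticks observed preempt_count == 0\n",
		       quiescent, ticks);
		return NULL;
	}

	int main(void)
	{
		pthread_t tick;

		pthread_create(&tick, NULL, tick_thread, NULL);
		for (long i = 0; i < 400000000L; i++) {
			/* momentary preempt_enable()/preempt_disable() pair */
			atomic_store(&preempt_count, 0);
			atomic_store(&preempt_count, 1);
		}
		atomic_store(&done, 1);
		pthread_join(tick, NULL);
		return 0;
	}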
> > > So, my thinking was that if RCU wants to end a grace period, it would
> > > force a context switch by setting TIF_NEED_RESCHED (and, as patch 38
> > > mentions, RCU always uses the eager version), causing __schedule() to
> > > call rcu_note_context_switch().
> > > That's similar to the preempt_schedule_common() case in the
> > > _cond_resched() above.
> > 
> > But that requires IPIing that CPU, correct?
> > 
> > > But I do see your point: RCU might just want to register a quiescent
> > > state, and for this long-running loop rcu_flavor_sched_clock_irq()
> > > does seem to fall down.
> > >
> > > > Or did I miss a change that causes preempt_enable() to help RCU out?
> > >
> > > Something like this?
> > >
> > > diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> > > index dc5125b9c36b..e50f358f1548 100644
> > > --- a/include/linux/preempt.h
> > > +++ b/include/linux/preempt.h
> > > @@ -222,6 +222,8 @@ do { \
> > >  	barrier(); \
> > >  	if (unlikely(preempt_count_dec_and_test())) \
> > >  		__preempt_schedule(); \
> > > +	if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) \
> > > +		rcu_all_qs(); \
> > >  } while (0)
> > 
> > Or maybe something like this to lighten the load a bit:
> > 
> > #define preempt_enable() \
> > do { \
> > 	barrier(); \
> > 	if (unlikely(preempt_count_dec_and_test())) { \
> > 		__preempt_schedule(); \
> > 		if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
> > 		    !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) \
> > 			rcu_all_qs(); \
> > 	} \
> > } while (0)
> > 
> > And at that point, we should be able to drop the PREEMPT_MASK, not
> > that it makes any difference that I am aware of:
> > 
> > #define preempt_enable() \
> > do { \
> > 	barrier(); \
> > 	if (unlikely(preempt_count_dec_and_test())) { \
> > 		__preempt_schedule(); \
> > 		if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
> > 		    !(preempt_count() & SOFTIRQ_MASK)) \
> > 			rcu_all_qs(); \
> > 	} \
> > } while (0)
> > 
> > Except that we can migrate as soon as that preempt_count_dec_and_test()
> > returns.  And that rcu_all_qs() disables and re-enables preemption,
> > which will result in undesired recursion.  Sigh.
> > 
> > So maybe something like this:
> > 
> > #define preempt_enable() \
> > do { \
> > 	if (raw_cpu_read(rcu_data.rcu_urgent_qs) && \
> > 	    !(preempt_count() & SOFTIRQ_MASK)) \
> 
> Sigh.  This needs to include (PREEMPT_MASK | SOFTIRQ_MASK),
> but check for equality to something like (1UL << PREEMPT_SHIFT).
> 
> Clearly time to sleep.  :-/

Maybe this might actually work:

#define preempt_enable() \
do { \
	barrier(); \
	if (!IS_ENABLED(CONFIG_PREEMPT_RCU) && \
	    raw_cpu_read(rcu_data.rcu_urgent_qs) && \
	    ((preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK | \
				 HARDIRQ_MASK | NMI_MASK)) == PREEMPT_OFFSET) && \
	    !irqs_disabled()) \
		rcu_all_qs(); \
	if (unlikely(preempt_count_dec_and_test())) { \
		__preempt_schedule(); \
	} \
} while (0)

And the rcu_all_qs() below might also work.
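[ Editorial sketch: to make the mask arithmetic above concrete, here is
  a stand-alone user-space rendering of the equality test.  The bit
  layout mirrors include/linux/preempt.h, but the program itself is an
  illustration, not kernel code, and may_report_qs() is a name invented
  for the demo. ]

	#include <stdio.h>

	#define PREEMPT_SHIFT	0
	#define PREEMPT_BITS	8
	#define SOFTIRQ_SHIFT	(PREEMPT_SHIFT + PREEMPT_BITS)	/*  8 */
	#define SOFTIRQ_BITS	8
	#define HARDIRQ_SHIFT	(SOFTIRQ_SHIFT + SOFTIRQ_BITS)	/* 16 */
	#define HARDIRQ_BITS	4
	#define NMI_SHIFT	(HARDIRQ_SHIFT + HARDIRQ_BITS)	/* 20 */
	#define NMI_BITS	4

	#define __IRQ_MASK(x)	((1UL << (x)) - 1)
	#define PREEMPT_MASK	(__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT)
	#define SOFTIRQ_MASK	(__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
	#define HARDIRQ_MASK	(__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
	#define NMI_MASK	(__IRQ_MASK(NMI_BITS) << NMI_SHIFT)
	#define PREEMPT_OFFSET	(1UL << PREEMPT_SHIFT)

	/*
	 * True only in the outermost preempt_disable() section with no
	 * softirq/hardirq/NMI nesting: the one point at which the
	 * proposed preempt_enable() may safely report a quiescent
	 * state.  Note the outer parentheses around the & expression:
	 * == binds more tightly than &.
	 */
	static int may_report_qs(unsigned long pc)
	{
		return (pc & (PREEMPT_MASK | SOFTIRQ_MASK |
			      HARDIRQ_MASK | NMI_MASK)) == PREEMPT_OFFSET;
	}

	int main(void)
	{
		/* 1: outermost preempt_disable() section */
		printf("%d\n", may_report_qs(PREEMPT_OFFSET));
		/* 0: nested preempt_disable() */
		printf("%d\n", may_report_qs(2 * PREEMPT_OFFSET));
		/* 0: running in softirq context */
		printf("%d\n", may_report_qs(PREEMPT_OFFSET |
					     (1UL << SOFTIRQ_SHIFT)));
		return 0;
	}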
							Thanx, Paul

> > 			rcu_all_qs(); \
> > 	barrier(); \
> > 	if (unlikely(preempt_count_dec_and_test())) { \
> > 		__preempt_schedule(); \
> > 	} \
> > } while (0)
> > 
> > Then rcu_all_qs() becomes something like this:
> > 
> > 	void rcu_all_qs(void)
> > 	{
> > 		unsigned long flags;
> > 
> > 		/* Load rcu_urgent_qs before other flags. */
> > 		if (!smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs)))
> > 			return;
> > 		this_cpu_write(rcu_data.rcu_urgent_qs, false);
> > 		if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs))) {
> > 			local_irq_save(flags);
> > 			rcu_momentary_dyntick_idle();
> > 			local_irq_restore(flags);
> > 		}
> > 		rcu_qs();
> > 	}
> > 	EXPORT_SYMBOL_GPL(rcu_all_qs);
> > 
> > > Though I do wonder about the likelihood of hitting the case you
> > > describe, and maybe instead of adding the check on every
> > > preempt_enable() it might be better to instead force a context
> > > switch in rcu_flavor_sched_clock_irq() (as we do in the
> > > PREEMPT_RCU=y case).
> > 
> > Maybe.  But rcu_all_qs() is way lighter weight than a context switch.
> > 
> > 							Thanx, Paul