From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 21 Jan 2025 12:32:34 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Uladzislau Rezki
Cc: Vlastimil Babka, linux-mm@kvack.org, Andrew Morton, RCU, LKML,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Oleksiy Avramchenko
Subject: Re: [PATCH v2 0/5] Move kvfree_rcu() into SLAB (v2)
Message-ID: <7b7b92a3-780e-4db0-a5cf-3e78d79587a2@paulmck-laptop>
Reply-To: paulmck@kernel.org

On Tue, Jan 21, 2025 at 03:14:16PM +0100, Uladzislau Rezki wrote:
> On Tue, Jan 21, 2025 at 02:49:13PM +0100, Vlastimil Babka wrote:
> > On 1/21/25 2:33 PM, Uladzislau Rezki wrote:
> > > On Mon, Jan 20, 2025 at 11:06:13PM +0100, Vlastimil Babka wrote:
> > >> On 12/16/24 17:46, Paul E. McKenney wrote:
> > >>> On Mon, Dec 16, 2024 at 04:55:06PM +0100, Uladzislau Rezki wrote:
> > >>>> On Mon, Dec 16, 2024 at 04:44:41PM +0100, Vlastimil Babka wrote:
> > >>>>> On 12/16/24 16:41, Uladzislau Rezki wrote:
> > >>>>>> On Mon, Dec 16, 2024 at 03:20:44PM +0100, Vlastimil Babka wrote:
> > >>>>>>> On 12/16/24 12:03, Uladzislau Rezki wrote:
> > >>>>>>>> On Sun, Dec 15, 2024 at 06:30:02PM +0100, Vlastimil Babka wrote:
> > >>>>>>>>
> > >>>>>>>>> Also how about a followup patch moving the rcu-tiny
> > >>>>>>>>> implementation of kvfree_call_rcu()?
> > >>>>>>>>>
> > >>>>>>>> As Paul already noted, it would make sense. Or just remove the
> > >>>>>>>> tiny implementation.
> > >>>>>>>
> > >>>>>>> AFAICS tiny rcu is for !SMP systems. Do they benefit from the "full"
> > >>>>>>> implementation with all the batching etc or would that be
> > >>>>>>> unnecessary overhead?
> > >>>>>>>
> > >>>>>> Yes, it is for really small systems with a low amount of memory. I
> > >>>>>> see only one overhead: it is about driving objects in pages. For a
> > >>>>>> small system it can be critical because we allocate.
> > >>>>>>
> > >>>>>> On the other hand, for a tiny variant we can modify the normal
> > >>>>>> variant by bypassing the batching logic, thus not consuming memory
> > >>>>>> (for the Tiny case), i.e. merge it into the normal kvfree_rcu() path.
> > >>>>>
> > >>>>> Maybe we could change it to use CONFIG_SLUB_TINY as that has a
> > >>>>> similar use case (less memory usage on a low-memory system, as a
> > >>>>> tradeoff for worse performance).
> > >>>>>
> > >>>> Yep, I also was thinking about that without saying it :)
> > >>>
> > >>> Works for me as well!
> > >>
> > >> Hi, so I tried looking at this. First I just moved the code to slab as
> > >> seen in the top-most commit here [1]. Hope the non-inlined
> > >> __kvfree_call_rcu() is not a show-stopper here.
> > >>
> > >> Then I wanted to switch the #ifdefs from CONFIG_TINY_RCU to
> > >> CONFIG_SLUB_TINY to control whether we use the full-blown batching
> > >> implementation or the simple call_rcu() implementation, and realized
> > >> it's not straightforward and reveals there are still some subtle
> > >> dependencies of kvfree_rcu() on RCU internals :)
> > >>
> > >> Problem 1: !CONFIG_SLUB_TINY with CONFIG_TINY_RCU
> > >>
> > >> AFAICS the batching implementation includes
> > >> kfree_rcu_scheduler_running() which is called from
> > >> rcu_set_runtime_mode() but only on TREE_RCU. Perhaps there are other
> > >> facilities the batching implementation needs that only exist in the
> > >> TREE_RCU implementation.
> > >>
> > >> Possible solution: the batching implementation depends on both
> > >> !CONFIG_SLUB_TINY and !CONFIG_TINY_RCU. I think it makes sense as both
> > >> !SMP systems and small-memory systems are fine with the simple
> > >> implementation.
> > >>
> > >> Problem 2: CONFIG_TREE_RCU with !CONFIG_SLUB_TINY
> > >>
> > >> AFAICS I can't just make the simple implementation do call_rcu() on
> > >> CONFIG_TREE_RCU, because call_rcu() no longer knows how to handle the
> > >> fake callback (__is_kvfree_rcu_offset()) - I see how rcu_reclaim_tiny()
> > >> does that but no such equivalent exists in TREE_RCU. Am I right?
> > >>
> > >> Possible solution: teach TREE_RCU callback invocation to handle
> > >> __is_kvfree_rcu_offset() again, perhaps hide that branch behind #ifndef
> > >> CONFIG_SLUB_TINY to avoid overhead if the batching implementation is
> > >> used. Downside: we visibly demonstrate how kvfree_rcu() is not purely a
> > >> slab thing but RCU still has to special-case it.
> > >>
> > >> Possible solution 2: instead of the special offset handling, SLUB
> > >> provides a callback function, which will determine the pointer to the
> > >> object from a pointer to the middle of it without knowing the rcu_head
> > >> offset.
> > >> Downside: this will have some overhead, but SLUB_TINY is not meant to
> > >> be performant anyway so we might not care.
> > >> Upside: we can remove __is_kvfree_rcu_offset() from TINY_RCU as well.
> > >>
> > >> Thoughts?
> > >>
> > > For the call_rcu() and to be able to reclaim over it, we need to patch
> > > tree.c (please note TINY already works):
> > >
> > >
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index b1f883fcd918..ab24229dfa73 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -2559,13 +2559,19 @@ static void rcu_do_batch(struct rcu_data *rdp)
> > >  		debug_rcu_head_unqueue(rhp);
> > >
> > >  		rcu_lock_acquire(&rcu_callback_map);
> > > -		trace_rcu_invoke_callback(rcu_state.name, rhp);
> > >
> > >  		f = rhp->func;
> > > -		debug_rcu_head_callback(rhp);
> > > -		WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
> > > -		f(rhp);
> > >
> > > +		if (__is_kvfree_rcu_offset((unsigned long) f)) {
> > > +			trace_rcu_invoke_kvfree_callback("", rhp, (unsigned long) f);
> > > +			kvfree((void *) rhp - (unsigned long) f);
> > > +		} else {
> > > +			trace_rcu_invoke_callback(rcu_state.name, rhp);
> > > +			debug_rcu_head_callback(rhp);
> > > +			WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
> > > +			f(rhp);
> > > +		}
> > >  		rcu_lock_release(&rcu_callback_map);
> >
> > Right, so that's the first Possible solution, but without the #ifdef. So
> > there's an overhead of checking __is_kvfree_rcu_offset() even if the
> > batching is done in slab and this function is never called with an
> > offset.
> >
> Or fulfilling a missing functionality? TREE is broken in that sense,
> whereas TINY handles it without any issues.
>
> It can be called for the SLUB_TINY option, just call_rcu() instead of the
> batching layer. And yes, kvfree_rcu_barrier() switches to rcu_barrier().

Would this make sense?

	if (IS_ENABLED(CONFIG_TINY_RCU) &&
	    __is_kvfree_rcu_offset((unsigned long) f)) {

Just to be repetitive, other alternatives include:

1.	Take advantage of SLOB being no longer with us.

2.	Get rid of Tiny RCU's special casing of kfree_rcu(), and then
	eliminate the above "if" statement in favor of its "else" clause.

3.	Make Tiny RCU implement a trivial version of kfree_rcu() that
	passes a list through RCU.
I don't have strong feelings, and am happy to defer to your guys' decision.

> > After coming up with Possible solution 2, I've started liking the idea
> > more, as RCU could then forget about the __is_kvfree_rcu_offset()
> > "callbacks" completely, and the performant case of TREE_RCU + batching
> > would be unaffected.
> >
> I doubt it is a performance issue :)

Me neither, especially with IS_ENABLED().

> > I'm speculating perhaps if there had been no CONFIG_SLOB in the past,
> > __is_kvfree_rcu_offset() would never have existed in the first place?
> > SLAB and SLUB both can determine the start of the object from a pointer
> > to the middle of it, while SLOB couldn't.
> >
> We needed just to reclaim over RCU. So, I do not know. Paul probably
> knows more than me :)

In the absence of SLOB, yes, I would hope that I would have thought of
determining the start of the object from a pointer to the middle of it.
Or that someone would have pointed that out during review. But I
honestly do not remember. ;-)

> > > /*
> > >
> > > Mixing up CONFIG_SLUB_TINY with CONFIG_TINY_RCU in slab_common.c
> > > should be avoided, i.e. if we can, we should eliminate a dependency on
> > > TREE_RCU or TINY_RCU in slab. As much as possible.
> > >
> > > So, it requires a closer look for sure :)
> >
> > That requires solving Problem 1 above, but the question is if it's
> > worth the trouble. Systems running TINY_RCU are unlikely to benefit
> > from the batching?
> >
> > But sure, there's also the possibility to hide these dependencies in
> > Kconfig, so the slab code would only consider a single (for example)
> > #ifdef CONFIG_KVFREE_RCU_BATCHING that would be set automatically
> > depending on TREE_RCU and !SLUB_TINY.
> >
> It is for small systems. We can use TINY or !SMP.
> We covered this AFAIR, that a single-CPU system should not go with
> batching:
>
> #ifdef SLUB_TINY || !SMP || TINY_RCU
>
> or:
>
> config TINY_RCU
>        bool
>        default y if !PREEMPT_RCU && !SMP
> +      select SLUB_TINY
>
> Paul, more input?

I will say that Tiny RCU used to get much more focus from its users
10-15 years ago than it does now. So one approach is to implement the
simplest option, and add any needed complexity back in when and if
people complain. ;-)

							Thanx, Paul
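[For reference, a hypothetical sketch of the Kconfig wiring floated in
this thread; CONFIG_KVFREE_RCU_BATCHING is only a name proposed above, not
an existing option, and the exact dependencies remain under discussion.]

```kconfig
# Batching kvfree_rcu() implementation: only buildable with Tree RCU
# (kfree_rcu_scheduler_running() etc.), and skipped on memory-constrained
# builds where the simple call_rcu() path is preferred.
config KVFREE_RCU_BATCHING
	def_bool y
	depends on TREE_RCU && !SLUB_TINY
```

The slab code would then test a single #ifdef CONFIG_KVFREE_RCU_BATCHING
instead of mixing TREE_RCU/TINY_RCU and SLUB_TINY checks.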