From: Uladzislau Rezki
Date: Tue, 21 Jan 2025 15:14:16 +0100
To: Vlastimil Babka, paulmck@kernel.org
Cc: Uladzislau Rezki, linux-mm@kvack.org, Andrew Morton, RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Oleksiy Avramchenko
Subject: Re: [PATCH v2 0/5] Move kvfree_rcu() into SLAB (v2)
In-Reply-To: <970317a9-0283-4eec-94ae-63056659d7de@suse.cz>
On Tue, Jan 21, 2025 at 02:49:13PM +0100, Vlastimil Babka wrote:
> On 1/21/25 2:33 PM, Uladzislau Rezki wrote:
> > On Mon, Jan 20, 2025 at 11:06:13PM +0100, Vlastimil Babka wrote:
> >> On 12/16/24 17:46, Paul E. McKenney wrote:
> >>> On Mon, Dec 16, 2024 at 04:55:06PM +0100, Uladzislau Rezki wrote:
> >>>> On Mon, Dec 16, 2024 at 04:44:41PM +0100, Vlastimil Babka wrote:
> >>>>> On 12/16/24 16:41, Uladzislau Rezki wrote:
> >>>>>> On Mon, Dec 16, 2024 at 03:20:44PM +0100, Vlastimil Babka wrote:
> >>>>>>> On 12/16/24 12:03, Uladzislau Rezki wrote:
> >>>>>>>> On Sun, Dec 15, 2024 at 06:30:02PM +0100, Vlastimil Babka wrote:
> >>>>>>>>
> >>>>>>>>> Also, how about a followup patch moving the rcu-tiny implementation
> >>>>>>>>> of kvfree_call_rcu()?
> >>>>>>>>>
> >>>>>>>> As Paul already noted, it would make sense. Or just remove the tiny
> >>>>>>>> implementation.
> >>>>>>>
> >>>>>>> AFAICS tiny RCU is for !SMP systems. Do they benefit from the "full"
> >>>>>>> implementation with all the batching etc., or would that be
> >>>>>>> unnecessary overhead?
> >>>>>>>
> >>>>>> Yes, it is for really small systems with a low amount of memory. I see
> >>>>>> only one overhead: driving objects in pages. For a small system that
> >>>>>> can be critical because we allocate.
> >>>>>>
> >>>>>> On the other hand, for a tiny variant we can modify the normal variant
> >>>>>> by bypassing the batching logic, thus not consuming memory (for the
> >>>>>> Tiny case), i.e. merge it into the normal kvfree_rcu() path.
> >>>>>
> >>>>> Maybe we could change it to use CONFIG_SLUB_TINY, as that has a similar
> >>>>> use case (less memory usage on a low-memory system, traded off for
> >>>>> worse performance).
> >>>>>
> >>>> Yep, I was also thinking about that without saying it :)
> >>>
> >>> Works for me as well!
> >>
> >> Hi, so I tried looking at this. First I just moved the code to slab, as
> >> seen in the top-most commit here [1]. Hope the non-inlined
> >> __kvfree_call_rcu() is not a show-stopper here.
> >>
> >> Then I wanted to switch the #ifdefs from CONFIG_TINY_RCU to
> >> CONFIG_SLUB_TINY to control whether we use the full-blown batching
> >> implementation or the simple call_rcu() implementation, and realized
> >> it's not straightforward and reveals there are still some subtle
> >> dependencies of kvfree_rcu() on RCU internals :)
> >>
> >> Problem 1: !CONFIG_SLUB_TINY with CONFIG_TINY_RCU
> >>
> >> AFAICS the batching implementation includes
> >> kfree_rcu_scheduler_running(), which is called from
> >> rcu_set_runtime_mode() but only on TREE_RCU. Perhaps there are other
> >> facilities the batching implementation needs that only exist in the
> >> TREE_RCU implementation.
> >>
> >> Possible solution: the batching implementation depends on both
> >> !CONFIG_SLUB_TINY and !CONFIG_TINY_RCU. I think it makes sense, as both
> >> !SMP systems and small-memory systems are fine with the simple
> >> implementation.
> >>
> >> Problem 2: CONFIG_TREE_RCU with CONFIG_SLUB_TINY
> >>
> >> AFAICS I can't just make the simple implementation do call_rcu() on
> >> CONFIG_TREE_RCU, because call_rcu() no longer knows how to handle the
> >> fake callback (__is_kvfree_rcu_offset()) - I see how rcu_reclaim_tiny()
> >> does that, but no such equivalent exists in TREE_RCU. Am I right?
> >>
> >> Possible solution: teach TREE_RCU callback invocation to handle
> >> __is_kvfree_rcu_offset() again, perhaps hiding that branch behind
> >> #ifndef CONFIG_SLUB_TINY to avoid overhead when the batching
> >> implementation is used.
> >> Downside: we visibly demonstrate how kvfree_rcu() is not purely a slab
> >> thing, but something RCU still has to special-case.
> >>
> >> Possible solution 2: instead of the special offset handling, SLUB
> >> provides a callback function which determines the pointer to the object
> >> from a pointer to the middle of it, without knowing the rcu_head offset.
> >> Downside: this will have some overhead, but SLUB_TINY is not meant to be
> >> performant anyway, so we might not care.
> >> Upside: we can remove __is_kvfree_rcu_offset() from TINY_RCU as well
> >>
> >> Thoughts?
> >>
> > For call_rcu(), and to be able to reclaim over it, we need to patch
> > tree.c (please note TINY already works):
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index b1f883fcd918..ab24229dfa73 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2559,13 +2559,19 @@ static void rcu_do_batch(struct rcu_data *rdp)
> >  		debug_rcu_head_unqueue(rhp);
> >
> >  		rcu_lock_acquire(&rcu_callback_map);
> > -		trace_rcu_invoke_callback(rcu_state.name, rhp);
> >
> >  		f = rhp->func;
> > -		debug_rcu_head_callback(rhp);
> > -		WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
> > -		f(rhp);
> >
> > +		if (__is_kvfree_rcu_offset((unsigned long) f)) {
> > +			trace_rcu_invoke_kvfree_callback("", rhp, (unsigned long) f);
> > +			kvfree((void *) rhp - (unsigned long) f);
> > +		} else {
> > +			trace_rcu_invoke_callback(rcu_state.name, rhp);
> > +			debug_rcu_head_callback(rhp);
> > +			WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
> > +			f(rhp);
> > +		}
> >  		rcu_lock_release(&rcu_callback_map);
>
> Right, so that's the first Possible solution, but without the #ifdef. So
> there's an overhead of checking __is_kvfree_rcu_offset() even if the
> batching is done in slab and this function is never called with an offset.
>
Or fulfilling missing functionality? TREE is broken in that sense, whereas
TINY handles it without any issues. It can be used for the SLUB_TINY option:
just call_rcu() instead of the batching layer. And yes,
kvfree_rcu_barrier() switches to rcu_barrier().

> After coming up with Possible solution 2, I've started liking the idea
> more, as RCU could then forget about the __is_kvfree_rcu_offset()
> "callbacks" completely, and the performant case of TREE_RCU + batching
> would be unaffected.
>
I doubt it is a performance issue :)

> I'm speculating that perhaps, if CONFIG_SLOB had not existed in the past,
> __is_kvfree_rcu_offset() would never have existed in the first place? SLAB
> and SLUB can both determine the start of an object from a pointer to the
> middle of it, while SLOB couldn't.
>
We just needed to reclaim over RCU. So, I do not know. Paul probably knows
more than me :)

> > /*
> >
> > Mixing up CONFIG_SLUB_TINY with CONFIG_TINY_RCU in slab_common.c
> > should be avoided, i.e. if we can, we should eliminate the dependency on
> > TREE_RCU or TINY_RCU in slab. As much as possible.
> >
> > So, it requires a closer look for sure :)
>
> That requires solving Problem 1 above, but the question is whether it's
> worth the trouble. Systems running TINY_RCU are unlikely to benefit from
> the batching?
>
> But sure, there's also the possibility to hide these dependencies in
> Kconfig, so the slab code would only consider a single (for example)
> #ifdef CONFIG_KVFREE_RCU_BATCHING that would be set automatically
> depending on TREE_RCU and !SLUB_TINY.
>
It is for small systems. We can use TINY or !SMP. We covered this, AFAIR:
a single-CPU system should not go with batching:

#ifdef SLUB_TINY || !SMP || TINY_RCU

or:

config TINY_RCU
	bool
	default y if !PREEMPT_RCU && !SMP
+	select SLUB_TINY

Paul, more input?

--
Uladzislau Rezki