From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 5 Feb 2025 19:03:38 -0800
Subject: Re: [PATCH] refcount: Strengthen inc_not_zero()
To: Will Deacon
Cc: Peter Zijlstra, boqun.feng@gmail.com, mark.rutland@arm.com,
 Mateusz Guzik, akpm@linux-foundation.org, willy@infradead.org,
 liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
 david.laight.linux@gmail.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, oliver.sang@intel.com, mgorman@techsingularity.net,
 david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net,
 paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com,
 hdanton@sina.com, hughd@google.com, lokeshgidra@google.com,
 minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
 souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com,
 richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20250111042604.3230628-1-surenb@google.com>
 <20250111042604.3230628-12-surenb@google.com>
 <20250115104841.GX5388@noisy.programming.kicks-ass.net>
 <20250115111334.GE8385@noisy.programming.kicks-ass.net>
 <20250115160011.GG8385@noisy.programming.kicks-ass.net>
 <20250117154135.GA17569@willie-the-truck>
 <20250127140915.GA25672@willie-the-truck>
On Tue, Jan 28, 2025 at 3:51 PM Suren Baghdasaryan wrote:
>
> On Mon, Jan 27, 2025 at 11:21 AM Suren Baghdasaryan wrote:
> >
> > On Mon, Jan 27, 2025 at 6:09 AM Will Deacon wrote:
> > >
> > > On Fri, Jan 17, 2025 at 03:41:36PM +0000, Will Deacon wrote:
> > > > On Wed, Jan 15, 2025 at 05:00:11PM +0100, Peter Zijlstra wrote:
> > > > > On Wed, Jan 15, 2025 at 12:13:34PM +0100, Peter Zijlstra wrote:
> > > > >
> > > > > > Notably, it means refcount_t is entirely unsuitable for anything
> > > > > > SLAB_TYPESAFE_BY_RCU, since they all will need secondary
> > > > > > validation conditions after the refcount succeeds.
> > > > > >
> > > > > > And this is probably fine, but let me ponder this all a little
> > > > > > more.
> > > > >
> > > > > Even though SLAB_TYPESAFE_BY_RCU is relatively rare, I think we'd
> > > > > better fix this, these things are hard enough as they are.
> > > > >
> > > > > Will, others, wdyt?
> > > >
> > > > We should also update the Documentation (atomic_t.txt and
> > > > refcount-vs-atomic.rst) if we strengthen this.
> > > >
> > > > > ---
> > > > > Subject: refcount: Strengthen inc_not_zero()
> > > > >
> > > > > For speculative lookups where a successful inc_not_zero() pins the
> > > > > object, but where we still need to double check if the object
> > > > > acquired is indeed the one we set out to acquire, this validation
> > > > > needs to happen *after* the increment.
> > > > >
> > > > > Notably SLAB_TYPESAFE_BY_RCU is one such example.
> > > > >
> > > > > Signed-off-by: Peter Zijlstra (Intel)
> > > > > ---
> > > > >  include/linux/refcount.h | 15 ++++++++-------
> > > > >  1 file changed, 8 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/include/linux/refcount.h b/include/linux/refcount.h
> > > > > index 35f039ecb272..340e7ffa445e 100644
> > > > > --- a/include/linux/refcount.h
> > > > > +++ b/include/linux/refcount.h
> > > > > @@ -69,9 +69,10 @@
> > > > >   * its the lock acquire, for RCU/lockless data structures its the dependent
> > > > >   * load.
> > > > >   *
> > > > > - * Do note that inc_not_zero() provides a control dependency which will order
> > > > > - * future stores against the inc, this ensures we'll never modify the object
> > > > > - * if we did not in fact acquire a reference.
> > > > > + * Do note that inc_not_zero() does provide acquire order, which will order
> > > > > + * future loads and stores against the inc, this ensures all subsequent accesses
> > > > > + * are from this object and not anything previously occupying this memory --
> > > > > + * consider SLAB_TYPESAFE_BY_RCU.
> > > > >   *
> > > > >   * The decrements will provide release order, such that all the prior loads and
> > > > >   * stores will be issued before, it also provides a control dependency, which
> > > > > @@ -144,7 +145,7 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
> > > > >  	do {
> > > > >  		if (!old)
> > > > >  			break;
> > > > > -	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
> > > > > +	} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + i));
> > > >
> > > > Hmm, do the later memory accesses need to be ordered against the
> > > > store part of the increment or just the read? If it's the former,
> > > > then I don't think that _acquire is sufficient -- accesses can still
> > > > get in between the read and write parts of the RmW.
> > >
> > > I dug some more into this at the end of last week. For the
> > > SLAB_TYPESAFE_BY_RCU case, where we're racing inc_not_zero() with
> > > dec_and_test(), I think using _acquire() above is correct, as the
> > > later references can only move up into the critical section in the
> > > case that we successfully obtained a reference.
> > >
> > > However, if we're going to make the barriers implicit in the refcount
> > > operations here, then I think we also need to do something on the
> > > producer side for when the object is re-initialised after being
> > > destroyed and allocated again. I think that would necessitate release
> > > ordering for refcount_set() so that whatever allows the consumer to
> > > validate the object (e.g. a sequence number) is published *before*
> > > the refcount.
> >
> > Thanks Will!
> > I would like to expand on your answer and provide an example of the
> > race that would happen without release ordering in the producer. To
> > save the reader's time, I provide a simplified flow and the reasoning
> > first. More detailed code for what I consider a typical
> > SLAB_TYPESAFE_BY_RCU refcounting example is added at the end of my
> > reply (see the ADDENDUM).
> > The simplified flow looks like this:
> >
> > consumer:
> > 	obj = lookup(collection, key);
> > 	if (!refcount_inc_not_zero(&obj->ref))
> > 		return;
> > 	smp_rmb(); /* Peter's new acquire fence */
> > 	if (READ_ONCE(obj->key) != key) {
> > 		put_ref(obj);
> > 		return;
> > 	}
> > 	use(obj->value);
> >
> > producer:
> > 	old_key = obj->key;
> > 	remove(collection, old_key);
> > 	if (!refcount_dec_and_test(&obj->ref))
> > 		return;
> > 	obj->key = KEY_INVALID;
> > 	free(obj);
> > 	...
> > 	obj = malloc(); /* obj is reused */
> > 	obj->key = new_key;
> > 	obj->value = new_value;
> > 	smp_wmb(); /* Will's proposed release fence */
> > 	refcount_set(&obj->ref, 1);
> > 	insert(collection, key, obj);
> >
> > Let's consider the case when new_key == old_key; we'll call both of
> > them "key". Without Will's proposed fence, the following reordering is
> > possible:
> >
> > consumer:
> > 	obj = lookup(collection, key);
> >
> > producer:
> > 	key = obj->key;
> > 	remove(collection, key);
> > 	if (!refcount_dec_and_test(&obj->ref))
> > 		return;
> > 	obj->key = KEY_INVALID;
> > 	free(obj);
> > 	obj = malloc(); /* obj is reused */
> > 	refcount_set(&obj->ref, 1);
> > 	obj->key = key; /* same key */
> >
> > consumer (continues):
> > 	if (!refcount_inc_not_zero(&obj->ref))
> > 		return;
> > 	smp_rmb();
> > 	if (READ_ONCE(obj->key) != key) {
> > 		put_ref(obj);
> > 		return;
> > 	}
> > 	use(obj->value);
> >
> > producer (continues):
> > 	obj->value = new_value; /* reordered store */
> > 	insert(collection, key, obj);
> >
> > So, the consumer finds the old object, successfully takes a refcount
> > and validates the key. The validation succeeds because the object has
> > been reallocated with the same key, which is fine. However, the
> > consumer then proceeds to use a stale obj->value. Will's proposed
> > release ordering would prevent that.
> >
> > The example at
> > https://elixir.bootlin.com/linux/v6.12.6/source/include/linux/slab.h#L102
> > omits all of these ordering issues for SLAB_TYPESAFE_BY_RCU.
> > I think it would be better to introduce two new functions,
> > refcount_add_not_zero_acquire() and refcount_set_release(), and clearly
> > document that they should be used when a freed object can be recycled
> > and reused, as in the SLAB_TYPESAFE_BY_RCU case. The documentation for
> > refcount_set_release() should also clarify that once it is called, the
> > object can be accessed by consumers even if it has not yet been added
> > to the collection used for object lookup (as in the example above). The
> > SLAB_TYPESAFE_BY_RCU comment at
> > https://elixir.bootlin.com/linux/v6.12.6/source/include/linux/slab.h#L102
> > can then use these new functions explicitly in its example code,
> > further clarifying their purpose and proper use.
> > WDYT?
>
> Hi Peter,
> Should I take a stab at preparing a patch to add the two new
> refcounting functions suggested above, with updates to the
> documentation and comments?
> If you disagree with that or need more time to decide, then I'll wait.
> Please let me know.

Not sure if "--in-reply-to" worked, but I just posted a patch adding new
refcounting APIs for SLAB_TYPESAFE_BY_RCU here:
https://lore.kernel.org/all/20250206025201.979573-1-surenb@google.com/

Since Peter seems to be busy, I discussed these ordering requirements
for SLAB_TYPESAFE_BY_RCU with Paul McKenney and he was leaning towards
having separate functions with the additional fences for this case.
That's what I provided in my patch. Another possible option would be to
add acquire ordering in __refcount_add_not_zero(), as Peter suggested,
and add a refcount_set_release() function.

Thanks,
Suren.

> Thanks,
> Suren.
>
> >
> > ADDENDUM.
> > Detailed code for a typical use of refcounting with SLAB_TYPESAFE_BY_RCU:
> >
> > struct object {
> > 	refcount_t ref;
> > 	u64 key;
> > 	u64 value;
> > };
> >
> > void init(struct object *obj, u64 key, u64 value)
> > {
> > 	obj->key = key;
> > 	obj->value = value;
> > 	smp_wmb(); /* Will's proposed release fence */
> > 	refcount_set(&obj->ref, 1);
> > }
> >
> > bool get_ref(struct object *obj, u64 key)
> > {
> > 	if (!refcount_inc_not_zero(&obj->ref))
> > 		return false;
> > 	smp_rmb(); /* Peter's new acquire fence */
> > 	if (READ_ONCE(obj->key) != key) {
> > 		put_ref(obj);
> > 		return false;
> > 	}
> > 	return true;
> > }
> >
> > void put_ref(struct object *obj)
> > {
> > 	if (!refcount_dec_and_test(&obj->ref))
> > 		return;
> > 	obj->key = KEY_INVALID;
> > 	free(obj);
> > }
> >
> > consumer:
> > 	obj = lookup(collection, key);
> > 	if (!get_ref(obj, key))
> > 		return;
> > 	use(obj->value);
> >
> > producer:
> > 	remove(collection, old_obj->key);
> > 	put_ref(old_obj);
> > 	new_obj = malloc();
> > 	init(new_obj, new_key, new_value);
> > 	insert(collection, new_key, new_obj);
> >
> > With SLAB_TYPESAFE_BY_RCU, old_obj in the producer can be reused and
> > end up equal to new_obj.
> >
> > >
> > > Will
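
For concreteness, here is a minimal sketch of what the two proposed
helpers could look like, modeled on the existing __refcount_add_not_zero()
loop quoted in the patch above. This is illustrative only and
hand-simplified (no saturation/warning handling); the real version is in
the patch posted at the lore link above and may differ:

	/* Sketch only -- see the posted patch for the actual code. */
	static inline void refcount_set_release(refcount_t *r, int n)
	{
		/*
		 * Release ordering: publishes all prior initialization of
		 * the object (key, value, ...) before the refcount becomes
		 * visible to a racing acquire-ordered reader, making the
		 * open-coded smp_wmb() in init() above unnecessary.
		 */
		atomic_set_release(&r->refs, n);
	}

	static inline __must_check
	bool refcount_add_not_zero_acquire(int i, refcount_t *r)
	{
		int old = refcount_read(r);

		do {
			if (!old)
				break;
		} while (!atomic_try_cmpxchg_acquire(&r->refs, &old, old + i));

		/*
		 * On success, acquire ordering guarantees that the caller's
		 * validation load (e.g. obj->key) is not satisfied by a
		 * value read before the refcount was observed non-zero,
		 * replacing the open-coded smp_rmb() in get_ref() above.
		 */
		return old;
	}

With helpers along these lines, init() and get_ref() in the addendum
reduce to refcount_set_release(&obj->ref, 1) and
refcount_add_not_zero_acquire(1, &obj->ref) with no explicit fences (an
inc-by-one wrapper mirroring refcount_inc_not_zero() could be layered on
top).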