References: <20250317-slub-percpu-caches-v3-0-9d9884d8b643@suse.cz>
 <20250317-slub-percpu-caches-v3-3-9d9884d8b643@suse.cz>
In-Reply-To: <20250317-slub-percpu-caches-v3-3-9d9884d8b643@suse.cz>
From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 10 Apr 2025 13:24:30 -0700
Subject: Re: [PATCH RFC v3 3/8] slab: add sheaf support for batching kfree_rcu() operations
To: Vlastimil Babka
Cc: "Liam R. Howlett", Christoph Lameter, David Rientjes, Roman Gushchin,
 Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 rcu@vger.kernel.org, maple-tree@lists.infradead.org
Howlett" , Christoph Lameter , David Rientjes , Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Queue-Id: 5EFED1C0006 X-Stat-Signature: 9nzmro36w6neajk1rtywcfbif1tphmso X-Rspam-User: X-Rspamd-Server: rspam06 X-HE-Tag: 1744316682-671870 X-HE-Meta: U2FsdGVkX18b7xG5vko+Og/P9jIRDuO2SF3FbdbI346WWcLfQYrwHqY/aCTM66IrJnMc0YbRR2XyzZfGdkV7QkRiYJDaylh+bVacL/T0HaqvcQGV621MCH6Zz0qZX9MvnFuB1UcoCFhVX+VRDf/tOR1AB9BOsFMpFlHqp4JCnpOPAKkmZ9OnnB2lBqqxNvsuEW5exGnQ/SqoVLVKAb2t1BNr5hK1nnP24iSPWhFwCpd/9EzcrqFp5+1ZSxNddP7Us64kFdDLeitbbOqxA60FksZJfTRJBn1s3fw7knhY1jfZvjo3hU2dYrV+KzIjGqjMK4IkPmzxLFOXZhNQo//6d7zoAgZsFZFmqmL47A3N7SlxjfPj/hJWOFkWxvv22eCWluTmI7ydBcRIPzGrqdesKplulsx5kxQ9B6N4oBpp4jV2gzS+N0/dy/ADm5deaoUsExsHcR9RHE/NiuEpi71OG2RUTWe7uhood1VNHqqvt3IW1pXTnGxkWXswkJhLK0gpMrv65VCkCANcZctDn8s2F3NH3EGQkRnzIawNSXZW+fHZR46T9Le35iFbV28K1xvcctX65LbU6BIdvNHVT1YCNZU1c5friBgYKOSc9GBlKpD93/Dzk4Y/ZVFSdAVvi0fq2vFNG0+CEkT3wYmFgkE8bdN8BGJPQI/fKHmNr5rqSJQL3PNUdoepNSJtDf6mFtt5IZOGYjO/VLDdEtjE4CoVkPLJBRjDuwqDD9BEopX7ynwS9ujT4sQSmXCC+SELCK3LNBYDJeiKL3BSaQMD4V6QQK9uqpG+DCqzNX9QPf+t040GXo2gvhXptydYceHDXDR15Z+VyRzeHzUAMGhpdimbLAPCpbyoOSmBv4Br5BdSt9U2yT0contn/3S1O5bXd2tR9yi6yDMGIeP5c1nFWnhP00lhlf8QY/kzwDA0ERlVbALYrlkQzl5sP/ch/bpIuQyHoX2hWsL3rCrQpocqW8k ZsZp30QC yjJFI7avwh1etYcI0jdjrnF/AIxp9O4kpH9QQ8+9YSmSFFriEAh2eQZEtvVwSBR2gxK3J9mD2Kxm8JwSQPYgpjvgtiql5f1iQZiTmbqQMiBOPHs6gmjTVYjqzNjEolJhTHEwS6fanuP4HrwKbqFhqV7UeMlin6Bb4JxA3/RpY0Bbbh+TpOVbwYRwDWe676ZE0BxH39cfXoM2oxO0WR7rVKvv0TrJJyo/jBvUS5sWyWbArzGQfD1k3UjLpbX5bv2JugG1WYwzYdLALRB4= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Mon, Mar 17, 2025 at 7:33=E2=80=AFAM Vlastimil Babka wr= ote: > > Extend the sheaf infrastructure for more efficient kfree_rcu() handling. > For caches with sheaves, on each cpu maintain a rcu_free sheaf in > addition to main and spare sheaves. > > kfree_rcu() operations will try to put objects on this sheaf. Once full, > the sheaf is detached and submitted to call_rcu() with a handler that > will try to put it in the barn, or flush to slab pages using bulk free, > when the barn is full. Then a new empty sheaf must be obtained to put > more objects there. > > It's possible that no free sheaves are available to use for a new > rcu_free sheaf, and the allocation in kfree_rcu() context can only use > GFP_NOWAIT and thus may fail. In that case, fall back to the existing > kfree_rcu() machinery. > > Expected advantages: > - batching the kfree_rcu() operations, that could eventually replace the > existing batching > - sheaves can be reused for allocations via barn instead of being > flushed to slabs, which is more efficient > - this includes cases where only some cpus are allowed to process rcu > callbacks (Android) > > Possible disadvantage: > - objects might be waiting for more than their grace period (it is > determined by the last object freed into the sheaf), increasing memory > usage - but the existing batching does that too? > > Only implement this for CONFIG_KVFREE_RCU_BATCHED as the tiny > implementation favors smaller memory footprint over performance. 
>
> Signed-off-by: Vlastimil Babka
> Reviewed-by: Suren Baghdasaryan
> ---
>  mm/slab.h        |   2 +
>  mm/slab_common.c |  24 ++++++++
>  mm/slub.c        | 165 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 189 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 8daaec53b6ecfc44171191d421adb12e5cba2c58..94e9959e1aefa350d3d74e3f5309fde7a5cf2ec8 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -459,6 +459,8 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
>         return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
>  }
>
> +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
> +
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>                          SLAB_CACHE_DMA32 | SLAB_PANIC | \
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index ceeefb287899a82f30ad79b403556001c1860311..9496176770ed47491e01ed78e060a74771d5541e 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1613,6 +1613,27 @@ static void kfree_rcu_work(struct work_struct *work)
>         kvfree_rcu_list(head);
>  }
>
> +static bool kfree_rcu_sheaf(void *obj)
> +{
> +       struct kmem_cache *s;
> +       struct folio *folio;
> +       struct slab *slab;
> +
> +       if (is_vmalloc_addr(obj))
> +               return false;
> +
> +       folio = virt_to_folio(obj);
> +       if (unlikely(!folio_test_slab(folio)))
> +               return false;
> +
> +       slab = folio_slab(folio);
> +       s = slab->slab_cache;
> +       if (s->cpu_sheaves)
> +               return __kfree_rcu_sheaf(s, obj);
> +
> +       return false;
> +}
> +
>  static bool
>  need_offload_krc(struct kfree_rcu_cpu *krcp)
>  {
> @@ -1957,6 +1978,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
>         if (!head)
>                 might_sleep();
>
> +       if (kfree_rcu_sheaf(ptr))
> +               return;
> +
>         // Queue the object but don't yet schedule the batch.
>         if (debug_rcu_head_queue(ptr)) {
>                 // Probable double kfree_rcu(), just leak.
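
[Editorial aside, not from the patch: a hypothetical sketch of which frees can
take the new dispatch above and which keep using the existing kvfree_rcu()
batching. struct item and the cache parameter are invented names.]

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

struct item {
	struct rcu_head rcu;
	int data;
};

/* 'cache' is assumed to be a sheaf-enabled kmem_cache */
static void free_examples(struct kmem_cache *cache)
{
	struct item *it = kmem_cache_alloc(cache, GFP_KERNEL);
	void *vbuf = vmalloc(PAGE_SIZE);

	/* slab object from a cache with cpu_sheaves: eligible for the rcu_free sheaf */
	if (it)
		kfree_rcu(it, rcu);

	/* vmalloc address: kfree_rcu_sheaf() returns false, existing machinery is used */
	if (vbuf)
		kvfree_rcu_mightsleep(vbuf);
}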
> diff --git a/mm/slub.c b/mm/slub.c
> index fa3a6329713a9f45b189f27d4b1b334b54589c38..83f4395267dccfbc144920baa7d0a85a27fbb1b4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -350,6 +350,8 @@ enum stat_item {
>         ALLOC_FASTPATH,         /* Allocation from cpu slab */
>         ALLOC_SLOWPATH,         /* Allocation by getting a new cpu slab */
>         FREE_PCS,               /* Free to percpu sheaf */
> +       FREE_RCU_SHEAF,         /* Free to rcu_free sheaf */
> +       FREE_RCU_SHEAF_FAIL,    /* Failed to free to a rcu_free sheaf */
>         FREE_FASTPATH,          /* Free to cpu slab */
>         FREE_SLOWPATH,          /* Freeing not to cpu slab */
>         FREE_FROZEN,            /* Freeing to frozen slab */
> @@ -442,6 +444,7 @@ struct slab_sheaf {
>                 struct rcu_head rcu_head;
>                 struct list_head barn_list;
>         };
> +       struct kmem_cache *cache;
>         unsigned int size;
>         void *objects[];
>  };
> @@ -450,6 +453,7 @@ struct slub_percpu_sheaves {
>         localtry_lock_t lock;
>         struct slab_sheaf *main; /* never NULL when unlocked */
>         struct slab_sheaf *spare; /* empty or full, may be NULL */
> +       struct slab_sheaf *rcu_free; /* for batching kfree_rcu() */
>         struct node_barn *barn;
>  };
>
> @@ -2461,6 +2465,8 @@ static struct slab_sheaf *alloc_empty_sheaf(struct kmem_cache *s, gfp_t gfp)
>         if (unlikely(!sheaf))
>                 return NULL;
>
> +       sheaf->cache = s;
> +
>         stat(s, SHEAF_ALLOC);
>
>         return sheaf;
> @@ -2585,6 +2591,24 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
>         sheaf->size = 0;
>  }
>
> +static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
> +                                    struct slab_sheaf *sheaf);
> +
> +static void rcu_free_sheaf_nobarn(struct rcu_head *head)
> +{
> +       struct slab_sheaf *sheaf;
> +       struct kmem_cache *s;
> +
> +       sheaf = container_of(head, struct slab_sheaf, rcu_head);
> +       s = sheaf->cache;
> +
> +       __rcu_free_sheaf_prepare(s, sheaf);
> +
> +       sheaf_flush_unused(s, sheaf);
> +
> +       free_empty_sheaf(s, sheaf);
> +}
> +
>  /*
>   * Caller needs to make sure migration is disabled in order to fully flush
>   * single cpu's sheaves
> @@ -2597,7 +2621,7 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
>  static void pcs_flush_all(struct kmem_cache *s)
>  {
>         struct slub_percpu_sheaves *pcs;
> -       struct slab_sheaf *spare;
> +       struct slab_sheaf *spare, *rcu_free;
>
>         localtry_lock(&s->cpu_sheaves->lock);
>         pcs = this_cpu_ptr(s->cpu_sheaves);
> @@ -2605,6 +2629,9 @@ static void pcs_flush_all(struct kmem_cache *s)
>         spare = pcs->spare;
>         pcs->spare = NULL;
>
> +       rcu_free = pcs->rcu_free;
> +       pcs->rcu_free = NULL;
> +
>         localtry_unlock(&s->cpu_sheaves->lock);
>
>         if (spare) {
> @@ -2612,6 +2639,9 @@ static void pcs_flush_all(struct kmem_cache *s)
>                 free_empty_sheaf(s, spare);
>         }
>
> +       if (rcu_free)
> +               call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn);
> +
>         sheaf_flush_main(s);
>  }
>
> @@ -2628,6 +2658,11 @@ static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu)
>                 free_empty_sheaf(s, pcs->spare);
>                 pcs->spare = NULL;
>         }
> +
> +       if (pcs->rcu_free) {
> +               call_rcu(&pcs->rcu_free->rcu_head, rcu_free_sheaf_nobarn);
> +               pcs->rcu_free = NULL;
> +       }
>  }
>
>  static void pcs_destroy(struct kmem_cache *s)
> @@ -2644,6 +2679,7 @@ static void pcs_destroy(struct kmem_cache *s)
>                         continue;
>
>                 WARN_ON(pcs->spare);
> +               WARN_ON(pcs->rcu_free);
>
>                 if (!WARN_ON(pcs->main->size)) {
>                         free_empty_sheaf(s, pcs->main);
> @@ -3707,7 +3743,7 @@ static bool has_pcs_used(int cpu, struct kmem_cache *s)
>
>         pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>
> -       return (pcs->spare || pcs->main->size);
> +       return (pcs->spare || pcs->rcu_free || pcs->main->size);
>  }
>
>  static void pcs_flush_all(struct kmem_cache *s);
>
> @@ -5240,6 +5276,122 @@ bool free_to_pcs(struct kmem_cache *s, void *object)
>         return true;
>  }
>
> +static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
> +                                    struct slab_sheaf *sheaf)
> +{
> +       bool init = slab_want_init_on_free(s);
> +       void **p = &sheaf->objects[0];
> +       unsigned int i = 0;
> +
> +       while (i < sheaf->size) {
> +               struct slab *slab = virt_to_slab(p[i]);
> +
> +               memcg_slab_free_hook(s, slab, p + i, 1);
> +               alloc_tagging_slab_free_hook(s, slab, p + i, 1);
> +
> +               if (unlikely(!slab_free_hook(s, p[i], init, false))) {
> +                       p[i] = p[--sheaf->size];
> +                       continue;
> +               }
> +
> +               i++;
> +       }
> +}
> +
> +static void rcu_free_sheaf(struct rcu_head *head)
> +{
> +       struct slab_sheaf *sheaf;
> +       struct node_barn *barn;
> +       struct kmem_cache *s;
> +
> +       sheaf = container_of(head, struct slab_sheaf, rcu_head);
> +
> +       s = sheaf->cache;
> +
> +       __rcu_free_sheaf_prepare(s, sheaf);
> +
> +       barn = get_node(s, numa_mem_id())->barn;
> +
> +       /* due to slab_free_hook() */
> +       if (unlikely(sheaf->size == 0))
> +               goto empty;
> +
> +       if (!barn_put_full_sheaf(barn, sheaf, false))
> +               return;
> +
> +       sheaf_flush_unused(s, sheaf);
> +
> +empty:
> +       if (!barn_put_empty_sheaf(barn, sheaf, false))
> +               return;
> +
> +       free_empty_sheaf(s, sheaf);
> +}
> +
> +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
> +{
> +       struct slub_percpu_sheaves *pcs;
> +       struct slab_sheaf *rcu_sheaf;
> +
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               goto fail;
> +
> +       pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> +       if (unlikely(!pcs->rcu_free)) {
> +
> +               struct slab_sheaf *empty;
> +
> +               empty = barn_get_empty_sheaf(pcs->barn);
> +
> +               if (empty) {
> +                       pcs->rcu_free = empty;
> +                       goto do_free;
> +               }
> +
> +               localtry_unlock(&s->cpu_sheaves->lock);
> +
> +               empty = alloc_empty_sheaf(s, GFP_NOWAIT);
> +
> +               if (!empty)
> +                       goto fail;
> +
> +               if (!localtry_trylock(&s->cpu_sheaves->lock))
> +                       goto fail;
> +
> +               pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> +               if (unlikely(pcs->rcu_free))
> +                       barn_put_empty_sheaf(pcs->barn, empty, true);
> +               else
> +                       pcs->rcu_free = empty;
> +       }
> +
> +do_free:
> +
> +       rcu_sheaf = pcs->rcu_free;
> +
> +       rcu_sheaf->objects[rcu_sheaf->size++] = obj;
> +
> +       if (likely(rcu_sheaf->size < s->sheaf_capacity)) {
> +               localtry_unlock(&s->cpu_sheaves->lock);
> +               stat(s, FREE_RCU_SHEAF);
> +               return true;
> +       }
> +
> +       pcs->rcu_free = NULL;
> +       localtry_unlock(&s->cpu_sheaves->lock);
> +
> +       call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);
> +
> +       stat(s, FREE_RCU_SHEAF);
> +       return true;

nit: I think the above code could be simplified to:

do_free:
        rcu_sheaf = pcs->rcu_free;
        rcu_sheaf->objects[rcu_sheaf->size++] = obj;

        if (likely(rcu_sheaf->size < s->sheaf_capacity))
                rcu_sheaf = NULL;
        else
                pcs->rcu_free = NULL;

        localtry_unlock(&s->cpu_sheaves->lock);
        stat(s, FREE_RCU_SHEAF);

        if (rcu_sheaf)
                call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);

        return true;

> +
> +fail:
> +       stat(s, FREE_RCU_SHEAF_FAIL);
> +       return false;
> +}
> +
>  /*
>   * Bulk free objects to the percpu sheaves.
>   * Unlike free_to_pcs() this includes the calls to all necessary hooks
> @@ -6569,6 +6721,11 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
>         struct kmem_cache_node *n;
>
>         flush_all_cpus_locked(s);
> +
> +       /* we might have rcu sheaves in flight */
> +       if (s->cpu_sheaves)
> +               rcu_barrier();
> +
>         /* Attempt to free all objects */
>         for_each_kmem_cache_node(s, node, n) {
>                 if (n->barn)
> @@ -7974,6 +8131,8 @@ STAT_ATTR(ALLOC_PCS, alloc_cpu_sheaf);
>  STAT_ATTR(ALLOC_FASTPATH, alloc_fastpath);
>  STAT_ATTR(ALLOC_SLOWPATH, alloc_slowpath);
>  STAT_ATTR(FREE_PCS, free_cpu_sheaf);
> +STAT_ATTR(FREE_RCU_SHEAF, free_rcu_sheaf);
> +STAT_ATTR(FREE_RCU_SHEAF_FAIL, free_rcu_sheaf_fail);
>  STAT_ATTR(FREE_FASTPATH, free_fastpath);
>  STAT_ATTR(FREE_SLOWPATH, free_slowpath);
>  STAT_ATTR(FREE_FROZEN, free_frozen);
> @@ -8069,6 +8228,8 @@ static struct attribute *slab_attrs[] = {
>         &alloc_fastpath_attr.attr,
>         &alloc_slowpath_attr.attr,
>         &free_cpu_sheaf_attr.attr,
> +       &free_rcu_sheaf_attr.attr,
> +       &free_rcu_sheaf_fail_attr.attr,
>         &free_fastpath_attr.attr,
>         &free_slowpath_attr.attr,
>         &free_frozen_attr.attr,
>
> --
> 2.48.1
>
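
[Editorial aside, not from the patch: with CONFIG_SLUB_STATS=y the two new
counters appear under /sys/kernel/slab/<cache>/, named after the STAT_ATTR()
entries above. A tiny userspace sketch for reading them; the "maple_node"
cache name is only an example.]

#include <stdio.h>

int main(void)
{
	const char *files[] = {
		"/sys/kernel/slab/maple_node/free_rcu_sheaf",
		"/sys/kernel/slab/maple_node/free_rcu_sheaf_fail",
	};
	char buf[256];

	for (int i = 0; i < 2; i++) {
		FILE *f = fopen(files[i], "r");

		if (!f)
			continue;	/* stat files absent without CONFIG_SLUB_STATS */
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", files[i], buf);
		fclose(f);
	}
	return 0;
}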