From: Suren Baghdasaryan <surenb@google.com>
Date: Sat, 22 Feb 2025 18:33:20 -0800
Subject: Re: [PATCH RFC v2 05/10] slab: switch percpu sheaves locking to localtry_lock
To: Vlastimil Babka
Cc: "Liam R. Howlett", Christoph Lameter, David Rientjes, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org
In-Reply-To: <20250214-slub-percpu-caches-v2-5-88592ee0966a@suse.cz>
References: <20250214-slub-percpu-caches-v2-0-88592ee0966a@suse.cz> <20250214-slub-percpu-caches-v2-5-88592ee0966a@suse.cz>
On Fri, Feb 14, 2025 at 8:27 AM Vlastimil Babka wrote:
>
> Instead of local_lock_irqsave(), use localtry_trylock() when potential
> callers include irq context, and localtry_lock() otherwise (such as when
> we already know the gfp flags allow blocking).
>
> This should reduce the locking (due to irq disabling/enabling) overhead.
> Failing to use percpu sheaves in an irq due to preempting an already
> locked user of sheaves should be rare so it's a favorable tradeoff.
>
> Signed-off-by: Vlastimil Babka

Reviewed-by: Suren Baghdasaryan

> ---
>  mm/slub.c | 122 ++++++++++++++++++++++++++++++++++++++------------------------
>  1 file changed, 76 insertions(+), 46 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 40175747212fefb27137309b27571abe8d0966e2..3d7345e7e938d53950ed0d6abe8eb0e93cf8f5b1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -450,7 +450,7 @@ struct slab_sheaf {
>  };
>
>  struct slub_percpu_sheaves {
> -       local_lock_t lock;
> +       localtry_lock_t lock;
>         struct slab_sheaf *main; /* never NULL when unlocked */
>         struct slab_sheaf *spare; /* empty or full, may be NULL */
>         struct slab_sheaf *rcu_free;
> @@ -2529,16 +2529,19 @@ static struct slab_sheaf *alloc_full_sheaf(struct kmem_cache *s, gfp_t gfp)
>
>  static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
>
> -static void sheaf_flush_main(struct kmem_cache *s)
> +/* returns true if at least partially flushed */
> +static bool sheaf_flush_main(struct kmem_cache *s)
>  {
>         struct slub_percpu_sheaves *pcs;
>         unsigned int batch, remaining;
>         void *objects[PCS_BATCH_MAX];
>         struct slab_sheaf *sheaf;
> -       unsigned long flags;
> +       bool ret = false;
>
>  next_batch:
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               return ret;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>         sheaf = pcs->main;
>
> @@ -2549,14 +2552,18 @@ static void sheaf_flush_main(struct kmem_cache *s)
>
>         remaining = sheaf->size;
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         __kmem_cache_free_bulk(s, batch, &objects[0]);
>
>         stat_add(s, SHEAF_FLUSH_MAIN, batch);
>
> +       ret = true;
> +
>         if (remaining)
>                 goto next_batch;
> +
> +       return ret;
>  }
>
>  static void sheaf_flush(struct kmem_cache *s, struct slab_sheaf *sheaf)
> @@ -2593,6 +2600,8 @@ static void rcu_free_sheaf_nobarn(struct rcu_head *head)
>   * Caller needs to make sure migration is disabled in order to fully flush
>   * single cpu's sheaves
>   *
> + * must not be called from an irq
> + *
>   * flushing operations are rare so let's keep it simple and flush to slabs
>   * directly, skipping the barn
>   */
> @@ -2600,9 +2609,8 @@ static void pcs_flush_all(struct kmem_cache *s)
>  {
>         struct slub_percpu_sheaves *pcs;
>         struct slab_sheaf *spare, *rcu_free;
> -       unsigned long flags;
>
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       localtry_lock(&s->cpu_sheaves->lock);
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         spare = pcs->spare;
> @@ -2611,7 +2619,7 @@ static void pcs_flush_all(struct kmem_cache *s)
>         rcu_free = pcs->rcu_free;
>         pcs->rcu_free = NULL;
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         if (spare) {
>                 sheaf_flush(s, spare);
> @@ -4554,10 +4562,11 @@ static __fastpath_inline
>  void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
>  {
>         struct slub_percpu_sheaves *pcs;
> -       unsigned long flags;
>         void *object;
>
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               return NULL;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(pcs->main->size == 0)) {
> @@ -4590,7 +4599,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
>                 }
>         }
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         if (!can_alloc)
>                 return NULL;
> @@ -4612,7 +4621,11 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
>         if (!full)
>                 return NULL;
>
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       /*
> +        * we can reach here only when gfpflags_allow_blocking
> +        * so this must not be an irq
> +        */
> +       localtry_lock(&s->cpu_sheaves->lock);
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         /*
> @@ -4646,7 +4659,7 @@ void *alloc_from_pcs(struct kmem_cache *s, gfp_t gfp)
>  do_alloc:
>         object = pcs->main->objects[--pcs->main->size];
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         stat(s, ALLOC_PCS);
>
> @@ -4658,12 +4671,13 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>  {
>         struct slub_percpu_sheaves *pcs;
>         struct slab_sheaf *main;
> -       unsigned long flags;
>         unsigned int allocated = 0;
>         unsigned int batch;
>
>  next_batch:
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               return allocated;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(pcs->main->size == 0)) {
> @@ -4683,7 +4697,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>                 goto do_alloc;
>         }
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         /*
>          * Once full sheaves in barn are depleted, let the bulk
> @@ -4701,7 +4715,7 @@ unsigned int alloc_from_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>         main->size -= batch;
>         memcpy(p, main->objects + main->size, batch * sizeof(void *));
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         stat_add(s, ALLOC_PCS, batch);
>
> @@ -5121,13 +5135,14 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   * The object is expected to have passed slab_free_hook() already.
>   */
>  static __fastpath_inline
> -void free_to_pcs(struct kmem_cache *s, void *object)
> +bool free_to_pcs(struct kmem_cache *s, void *object)
>  {
>         struct slub_percpu_sheaves *pcs;
> -       unsigned long flags;
>
>  restart:
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               return false;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(pcs->main->size == s->sheaf_capacity)) {
> @@ -5162,7 +5177,7 @@ void free_to_pcs(struct kmem_cache *s, void *object)
>                         struct slab_sheaf *to_flush = pcs->spare;
>
>                         pcs->spare = NULL;
> -                       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +                       localtry_unlock(&s->cpu_sheaves->lock);
>
>                         sheaf_flush(s, to_flush);
>                         empty = to_flush;
> @@ -5170,17 +5185,27 @@ void free_to_pcs(struct kmem_cache *s, void *object)
>                 }
>
>  alloc_empty:
> -               local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +               localtry_unlock(&s->cpu_sheaves->lock);
>
>                 empty = alloc_empty_sheaf(s, GFP_NOWAIT);
>
>                 if (!empty) {
> -                       sheaf_flush_main(s);
> -                       goto restart;
> +                       if (sheaf_flush_main(s))
> +                               goto restart;
> +                       else
> +                               return false;
>                 }
>
>  got_empty:
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock)) {
> +               struct node_barn *barn;
> +
> +               barn = get_node(s, numa_mem_id())->barn;
> +
> +               barn_put_empty_sheaf(barn, empty, true);
> +               return false;
> +       }
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         /*
> @@ -5209,9 +5234,11 @@ void free_to_pcs(struct kmem_cache *s, void *object)
>  do_free:
>         pcs->main->objects[pcs->main->size++] = object;
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         stat(s, FREE_PCS);
> +
> +       return true;
>  }
>
>  static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
> @@ -5270,9 +5297,10 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
>  {
>         struct slub_percpu_sheaves *pcs;
>         struct slab_sheaf *rcu_sheaf;
> -       unsigned long flags;
>
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               goto fail;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(!pcs->rcu_free)) {
> @@ -5286,16 +5314,16 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
>                 goto do_free;
>         }
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         empty = alloc_empty_sheaf(s, GFP_NOWAIT);
>
> -       if (!empty) {
> -               stat(s, FREE_RCU_SHEAF_FAIL);
> -               return false;
> -       }
> +       if (!empty)
> +               goto fail;
> +
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               goto fail;
>
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(pcs->rcu_free))
> @@ -5311,19 +5339,22 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
>         rcu_sheaf->objects[rcu_sheaf->size++] = obj;
>
>         if (likely(rcu_sheaf->size < s->sheaf_capacity)) {
> -               local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +               localtry_unlock(&s->cpu_sheaves->lock);
>                 stat(s, FREE_RCU_SHEAF);
>                 return true;
>         }
>
>         pcs->rcu_free = NULL;
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);
>
>         stat(s, FREE_RCU_SHEAF);
> -
>         return true;
> +
> +fail:
> +       stat(s, FREE_RCU_SHEAF_FAIL);
> +       return false;
>  }
>
>  /*
> @@ -5335,7 +5366,6 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>  {
>         struct slub_percpu_sheaves *pcs;
>         struct slab_sheaf *main;
> -       unsigned long flags;
>         unsigned int batch, i = 0;
>         bool init;
>
> @@ -5358,7 +5388,9 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>         }
>
>  next_batch:
> -       local_lock_irqsave(&s->cpu_sheaves->lock, flags);
> +       if (!localtry_trylock(&s->cpu_sheaves->lock))
> +               goto fallback;
> +
>         pcs = this_cpu_ptr(s->cpu_sheaves);
>
>         if (unlikely(pcs->main->size == s->sheaf_capacity)) {
> @@ -5389,13 +5421,13 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>                 }
>
>  no_empty:
> -               local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +               localtry_unlock(&s->cpu_sheaves->lock);
>
>                 /*
>                  * if we depleted all empty sheaves in the barn or there are too
>                  * many full sheaves, free the rest to slab pages
>                  */
> -
> +fallback:
>                 __kmem_cache_free_bulk(s, size, p);
>                 return;
>         }
> @@ -5407,7 +5439,7 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>         memcpy(main->objects + main->size, p, batch * sizeof(void *));
>         main->size += batch;
>
> -       local_unlock_irqrestore(&s->cpu_sheaves->lock, flags);
> +       localtry_unlock(&s->cpu_sheaves->lock);
>
>         stat_add(s, FREE_PCS, batch);
>
> @@ -5507,9 +5539,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>         if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>                 return;
>
> -       if (s->cpu_sheaves)
> -               free_to_pcs(s, object);
> -       else
> +       if (!s->cpu_sheaves || !free_to_pcs(s, object))
>                 do_slab_free(s, slab, object, object, 1, addr);
>  }
>
> @@ -6288,7 +6318,7 @@ static int init_percpu_sheaves(struct kmem_cache *s)
>
>         pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>
> -       local_lock_init(&pcs->lock);
> +       localtry_lock_init(&pcs->lock);
>
>         nid = cpu_to_mem(cpu);
>
>
> --
> 2.48.1
>