From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, Christoph Lameter, David Rientjes, Pekka Enberg,
	Joonsoo Kim
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Mike Galbraith,
	Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman,
	Vlastimil Babka
Subject: [PATCH v6 33/33] mm, slub: convert kmem_cpu_slab protection to local_lock
Date: Sat, 4 Sep 2021 12:50:03 +0200
Message-Id: <20210904105003.11688-34-vbabka@suse.cz>
In-Reply-To: <20210904105003.11688-1-vbabka@suse.cz>
References: <20210904105003.11688-1-vbabka@suse.cz>

Embed local_lock into struct kmem_cpu_slab and use the irq-safe versions of
local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that's
equivalent, with better lockdep visibility. On PREEMPT_RT that means better
preemption.

However, the cost on PREEMPT_RT is the loss of lockless fast paths which only
work with cpu freelist. Those are designed to detect and recover from being
preempted by other conflicting operations (both fast or slow path), but the
slow path operations assume they cannot be preempted by a fast path operation,
which is guaranteed naturally with disabled irqs. With local locks on
PREEMPT_RT, the fast paths now also need to take the local lock to avoid races.

In the allocation fastpath slab_alloc_node() we can just defer to the slowpath
__slab_alloc() which also works with cpu freelist, but under the local lock.
In the free fastpath do_slab_free() we have to add a new local lock protected
version of freeing to the cpu freelist, as the existing slowpath only works
with the page freelist.

Also update the comment about locking scheme in SLUB to reflect changes done
by this series.
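As a side illustration (not part of the diff below), the conversion pattern
applied here boils down to the following sketch using the <linux/local_lock.h>
API; the structure and function names (my_cpu_cache, my_cache, pop_object) are
made up for the example and are not SLUB code:

struct my_cpu_cache {
	void *freelist;		/* each free object stores the next pointer at its start */
	local_lock_t lock;	/* protects the fields of this struct */
};

static DEFINE_PER_CPU(struct my_cpu_cache, my_cache) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void *pop_object(void)
{
	unsigned long flags;
	void *object;

	/*
	 * Previously: local_irq_save(flags) / local_irq_restore(flags).
	 * On !PREEMPT_RT the local lock still disables irqs, so this is
	 * equivalent but visible to lockdep; on PREEMPT_RT it becomes a
	 * per-cpu sleeping lock that leaves irqs enabled.
	 */
	local_lock_irqsave(&my_cache.lock, flags);
	object = this_cpu_read(my_cache.freelist);
	if (object)
		this_cpu_write(my_cache.freelist, *(void **)object);
	local_unlock_irqrestore(&my_cache.lock, flags);

	return object;
}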
[ Mike Galbraith: use local_lock() without irq in PREEMPT_RT scope;
  debugging of RT crashes resulting in put_cpu_partial() locking changes ]
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slub_def.h |   6 ++
 mm/slub.c                | 146 +++++++++++++++++++++++++++++----------
 2 files changed, 117 insertions(+), 35 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index dcde82a4434c..85499f0586b0 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -10,6 +10,7 @@
 #include <linux/kfence.h>
 #include <linux/kobject.h>
 #include <linux/reciprocal_div.h>
+#include <linux/local_lock.h>
 
 enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
@@ -40,6 +41,10 @@ enum stat_item {
 	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	NR_SLUB_STAT_ITEMS };
 
+/*
+ * When changing the layout, make sure freelist and tid are still compatible
+ * with this_cpu_cmpxchg_double() alignment requirements.
+ */
 struct kmem_cache_cpu {
 	void **freelist;	/* Pointer to next available object */
 	unsigned long tid;	/* Globally unique transaction id */
@@ -47,6 +52,7 @@ struct kmem_cache_cpu {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	struct page *partial;	/* Partially allocated frozen slabs */
 #endif
+	local_lock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
 	unsigned stat[NR_SLUB_STAT_ITEMS];
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 38d4cc51e880..3d2025f7163b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -46,13 +46,21 @@
 /*
  * Lock order:
  *   1. slab_mutex (Global Mutex)
- *   2. node->list_lock
- *   3. slab_lock(page) (Only on some arches and for debugging)
+ *   2. node->list_lock (Spinlock)
+ *   3. kmem_cache->cpu_slab->lock (Local lock)
+ *   4. slab_lock(page) (Only on some arches or for debugging)
+ *   5. object_map_lock (Only for debugging)
 *
 *   slab_mutex
 *
 *   The role of the slab_mutex is to protect the list of all the slabs
 *   and to synchronize major metadata changes to slab cache structures.
+ *   Also synchronizes memory hotplug callbacks.
+ *
+ *   slab_lock
+ *
+ *   The slab_lock is a wrapper around the page lock, thus it is a bit
+ *   spinlock.
 *
 *   The slab_lock is only used for debugging and on arches that do not
 *   have the ability to do a cmpxchg_double. It only protects:
@@ -61,6 +69,8 @@
 *	C. page->objects	-> Number of objects in page
 *	D. page->frozen		-> frozen state
 *
+ *   Frozen slabs
+ *
 *   If a slab is frozen then it is exempt from list management. It is not
 *   on any list except per cpu partial list. The processor that froze the
 *   slab is the one who can perform list operations on the page. Other
@@ -68,6 +78,8 @@
 *   froze the slab is the only one that can retrieve the objects from the
 *   page's freelist.
 *
+ *   list_lock
+ *
 *   The list_lock protects the partial and full list on each node and
 *   the partial slab counter. If taken then no new slabs may be added or
 *   removed from the lists nor make the number of partial slabs be modified.
@@ -79,10 +91,36 @@
 *   slabs, operations can continue without any centralized lock. F.e.
 *   allocating a long series of objects that fill up slabs does not require
 *   the list lock.
- *   Interrupts are disabled during allocation and deallocation in order to
- *   make the slab allocator safe to use in the context of an irq. In addition
- *   interrupts are disabled to ensure that the processor does not change
- *   while handling per_cpu slabs, due to kernel preemption.
+ *
+ *   cpu_slab->lock local lock
+ *
+ *   This locks protect slowpath manipulation of all kmem_cache_cpu fields
+ *   except the stat counters. This is a percpu structure manipulated only by
+ *   the local cpu, so the lock protects against being preempted or interrupted
+ *   by an irq. Fast path operations rely on lockless operations instead.
+ *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ *   prevent the lockless operations), so fastpath operations also need to take
+ *   the lock and are no longer lockless.
+ *
+ *   lockless fastpaths
+ *
+ *   The fast path allocation (slab_alloc_node()) and freeing (do_slab_free())
+ *   are fully lockless when satisfied from the percpu slab (and when
+ *   cmpxchg_double is possible to use, otherwise slab_lock is taken).
+ *   They also don't disable preemption or migration or irqs. They rely on
+ *   the transaction id (tid) field to detect being preempted or moved to
+ *   another cpu.
+ *
+ *   irq, preemption, migration considerations
+ *
+ *   Interrupts are disabled as part of list_lock or local_lock operations, or
+ *   around the slab_lock operation, in order to make the slab allocator safe
+ *   to use in the context of an irq.
+ *
+ *   In addition, preemption (or migration on PREEMPT_RT) is disabled in the
+ *   allocation slowpath, bulk allocation, and put_cpu_partial(), so that the
+ *   local cpu doesn't change in the process and e.g. the kmem_cache_cpu pointer
+ *   doesn't have to be revalidated in each section protected by the local lock.
 *
 * SLUB assigns one slab for allocation to each processor.
 * Allocations only occur from these slabs called cpu slabs.
@@ -2250,9 +2288,13 @@ static inline void note_cmpxchg_failure(const char *n,
 static void init_kmem_cache_cpus(struct kmem_cache *s)
 {
 	int cpu;
+	struct kmem_cache_cpu *c;
 
-	for_each_possible_cpu(cpu)
-		per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
+	for_each_possible_cpu(cpu) {
+		c = per_cpu_ptr(s->cpu_slab, cpu);
+		local_lock_init(&c->lock);
+		c->tid = init_tid(cpu);
+	}
 }
 
 /*
@@ -2463,10 +2505,10 @@ static void unfreeze_partials(struct kmem_cache *s)
 	struct page *partial_page;
 	unsigned long flags;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	partial_page = this_cpu_read(s->cpu_slab->partial);
 	this_cpu_write(s->cpu_slab->partial, NULL);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (partial_page)
 		__unfreeze_partials(s, partial_page);
@@ -2499,7 +2541,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 	int pages = 0;
 	int pobjects = 0;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	oldpage = this_cpu_read(s->cpu_slab->partial);
 
@@ -2527,7 +2569,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 
 	this_cpu_write(s->cpu_slab->partial, page);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page_to_unfreeze) {
 		__unfreeze_partials(s, page_to_unfreeze);
@@ -2549,7 +2591,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 	struct page *page;
 	void *freelist;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	page = c->page;
 	freelist = c->freelist;
@@ -2558,7 +2600,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page) {
 		deactivate_slab(s, page, freelist);
@@ -2780,8 +2822,6 @@ static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
 * The page is still frozen if the return value is not NULL.
 *
 * If this function returns NULL then the page has been unfrozen.
- *
- * This function must be called with interrupt disabled.
 */
 static inline void *get_freelist(struct kmem_cache *s, struct page *page)
 {
@@ -2789,6 +2829,8 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page)
 	unsigned long counters;
 	void *freelist;
 
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
+
 	do {
 		freelist = page->freelist;
 		counters = page->counters;
@@ -2873,9 +2915,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto deactivate_slab;
 
 	/* must check again c->page in case we got preempted and it changed */
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(page != c->page)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
@@ -2886,7 +2928,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	if (!freelist) {
 		c->page = NULL;
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, DEACTIVATE_BYPASS);
 		goto new_slab;
 	}
@@ -2895,7 +2937,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 load_freelist:
 
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
 
 	/*
 	 * freelist is pointing to the list of objects to be used.
@@ -2905,39 +2947,39 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	return freelist;
 
 deactivate_slab:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (page != c->page) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
 	c->page = NULL;
 	c->freelist = NULL;
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	deactivate_slab(s, page, freelist);
 
 new_slab:
 
 	if (slub_percpu_partial(c)) {
-		local_irq_save(flags);
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->page)) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			goto reread_page;
 		}
 		if (unlikely(!slub_percpu_partial(c))) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			/* we were preempted and partial list got empty */
 			goto new_objects;
 		}
 
 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
@@ -2990,7 +3032,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 retry_load_page:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(c->page)) {
 		void *flush_freelist = c->freelist;
 		struct page *flush_page = c->page;
@@ -2999,7 +3041,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		c->freelist = NULL;
 		c->tid = next_tid(c->tid);
 
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 		deactivate_slab(s, flush_page, flush_freelist);
 
@@ -3118,7 +3160,15 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !page || !node_match(page, node))) {
+	/*
+	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
+	 * slowpath has taken the local_lock_irqsave(), it is not protected
+	 * against a fast path operation in an irq handler. So we need to take
+	 * the slow path which uses local_lock. It is still relatively fast if
+	 * there is a suitable cpu freelist.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
+	    unlikely(!object || !page || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
@@ -3378,6 +3428,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	barrier();
 
 	if (likely(page == c->page)) {
+#ifndef CONFIG_PREEMPT_RT
 		void **freelist = READ_ONCE(c->freelist);
 
 		set_freepointer(s, tail_obj, freelist);
@@ -3390,6 +3441,31 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 			note_cmpxchg_failure("slab_free", s, tid);
 			goto redo;
 		}
+#else /* CONFIG_PREEMPT_RT */
+		/*
+		 * We cannot use the lockless fastpath on PREEMPT_RT because if
+		 * a slowpath has taken the local_lock_irqsave(), it is not
+		 * protected against a fast path operation in an irq handler. So
+		 * we need to take the local_lock. We shouldn't simply defer to
+		 * __slab_free() as that wouldn't use the cpu freelist at all.
+		 */
+		void **freelist;
+
+		local_lock(&s->cpu_slab->lock);
+		c = this_cpu_ptr(s->cpu_slab);
+		if (unlikely(page != c->page)) {
+			local_unlock(&s->cpu_slab->lock);
+			goto redo;
+		}
+		tid = c->tid;
+		freelist = c->freelist;
+
+		set_freepointer(s, tail_obj, freelist);
+		c->freelist = head;
+		c->tid = next_tid(tid);
+
+		local_unlock(&s->cpu_slab->lock);
+#endif
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, head, tail_obj, cnt, addr);
@@ -3568,7 +3644,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * handlers invoking normal fastpath.
 	 */
 	c = slub_get_cpu_ptr(s->cpu_slab);
-	local_irq_disable();
+	local_lock_irq(&s->cpu_slab->lock);
 
 	for (i = 0; i < size; i++) {
 		void *object = kfence_alloc(s, s->object_size, flags);
@@ -3589,7 +3665,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			 */
 			c->tid = next_tid(c->tid);
 
-			local_irq_enable();
+			local_unlock_irq(&s->cpu_slab->lock);
 
 			/*
 			 * Invoking slow path likely have side-effect
@@ -3603,7 +3679,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			c = this_cpu_ptr(s->cpu_slab);
 			maybe_wipe_obj_freeptr(s, p[i]);
 
-			local_irq_disable();
+			local_lock_irq(&s->cpu_slab->lock);
 
 			continue; /* goto for-loop */
 		}
@@ -3612,7 +3688,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
-	local_irq_enable();
+	local_unlock_irq(&s->cpu_slab->lock);
 	slub_put_cpu_ptr(s->cpu_slab);
 
 	/*
-- 
2.33.0
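For reference, a minimal sketch of the tid-based lockless fastpath pattern
that the updated locking comment describes and that the !PREEMPT_RT branches
above keep using. All names here (demo_cpu_slab, demo_slab, demo_alloc) are
hypothetical, not SLUB code; the sketch assumes this_cpu_cmpxchg_double() is
available (the generic fallback exists on all arches) and that each free
object stores the pointer to the next free object at its start:

/*
 * freelist and tid must stay adjacent and suitably aligned for
 * this_cpu_cmpxchg_double(), mirroring the layout comment added to
 * struct kmem_cache_cpu above.
 */
struct demo_cpu_slab {
	void *freelist;		/* first free object */
	unsigned long tid;	/* transaction id, bumped on every cpu-slab change */
} __aligned(2 * sizeof(void *));

static DEFINE_PER_CPU(struct demo_cpu_slab, demo_slab);

static void *demo_alloc(void)
{
	void *object, *next;
	unsigned long tid;

redo:
	/* no locks, no irq disabling: snapshot tid and freelist */
	tid = this_cpu_read(demo_slab.tid);
	object = this_cpu_read(demo_slab.freelist);
	if (!object)
		return NULL;	/* a real allocator would take a slow path here */

	next = *(void **)object;
	/*
	 * Commit freelist and tid together; if we were preempted, migrated
	 * to another cpu, or interrupted by a conflicting operation, the
	 * tid no longer matches and we retry.
	 */
	if (!this_cpu_cmpxchg_double(demo_slab.freelist, demo_slab.tid,
				     object, tid,
				     next, tid + 1))
		goto redo;

	return object;
}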