From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Aug 2021 07:17:11 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Vlastimil Babka
Cc: Mike Galbraith, Qian Cai, Andrew Morton, Christoph Lameter,
	David Rientjes, Pekka Enberg, Joonsoo Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Sebastian Andrzej Siewior,
	Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Jann Horn
Subject: Re: [PATCH v4 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
Message-ID: <20210811141711.GA206897@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20210805152000.12817-1-vbabka@suse.cz>
	<20210805152000.12817-30-vbabka@suse.cz>
	<0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com>
	<2eb3cf340716c40f03a0a342ab40219b3d1de195.camel@gmx.de>
	<20210810203123.GB190765@paulmck-ThinkPad-P17-Gen-1>
	<20210810235336.GQ4126399@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <20210810235336.GQ4126399@paulmck-ThinkPad-P17-Gen-1>
McKenney" To: Vlastimil Babka Cc: Mike Galbraith , Qian Cai , Andrew Morton , Christoph Lameter , David Rientjes , Pekka Enberg , Joonsoo Kim , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sebastian Andrzej Siewior , Thomas Gleixner , Mel Gorman , Jesper Dangaard Brouer , Jann Horn Subject: Re: [PATCH v4 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Message-ID: <20210811141711.GA206897@paulmck-ThinkPad-P17-Gen-1> Reply-To: paulmck@kernel.org References: <20210805152000.12817-1-vbabka@suse.cz> <20210805152000.12817-30-vbabka@suse.cz> <0b36128c-3e12-77df-85fe-a153a714569b@quicinc.com> <2eb3cf340716c40f03a0a342ab40219b3d1de195.camel@gmx.de> <20210810203123.GB190765@paulmck-ThinkPad-P17-Gen-1> <20210810235336.GQ4126399@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline In-Reply-To: <20210810235336.GQ4126399@paulmck-ThinkPad-P17-Gen-1> Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=h5pe+3uj; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf22.hostedemail.com: domain of "SRS0=XOQD=NC=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org" designates 198.145.29.99 as permitted sender) smtp.mailfrom="SRS0=XOQD=NC=paulmck-ThinkPad-P17-Gen-1.home=paulmck@kernel.org" X-Stat-Signature: hft946x7fhyrun3aeceu749wjhekyo3n X-Rspamd-Queue-Id: 2760C3729 X-Rspamd-Server: rspam05 X-HE-Tag: 1628691433-133109 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Tue, Aug 10, 2021 at 04:53:36PM -0700, Paul E. McKenney wrote: > On Wed, Aug 11, 2021 at 12:36:00AM +0200, Vlastimil Babka wrote: > > On 8/10/2021 10:31 PM, Paul E. McKenney wrote: > > > On Tue, Aug 10, 2021 at 01:47:42PM +0200, Mike Galbraith wrote: > > >> On Tue, 2021-08-10 at 11:03 +0200, Vlastimil Babka wrote: > > >>> On 8/9/21 3:41 PM, Qian Cai wrote: > > >>>>> =A0 > > >>>>> +static DEFINE_MUTEX(flush_lock); > > >>>>> +static DEFINE_PER_CPU(struct slub_flush_work, slub_flush); > > >>>>> + > > >>>>> =A0static void flush_all(struct kmem_cache *s) > > >>>>> =A0{ > > >>>>> -=A0=A0=A0=A0=A0=A0=A0on_each_cpu_cond(has_cpu_slab, flush_cpu_= slab, s, 1); > > >>>>> +=A0=A0=A0=A0=A0=A0=A0struct slub_flush_work *sfw; > > >>>>> +=A0=A0=A0=A0=A0=A0=A0unsigned int cpu; > > >>>>> + > > >>>>> +=A0=A0=A0=A0=A0=A0=A0mutex_lock(&flush_lock); > > >>>> > > >>>> Vlastimil, taking the lock here could trigger a warning during m= emory offline/online due to the locking order: > > >>>> > > >>>> slab_mutex -> flush_lock > > >>>> > > >>>> [=A0=A0 91.374541] WARNING: possible circular locking dependency= detected > > >>>> [=A0=A0 91.381411] 5.14.0-rc5-next-20210809+ #84 Not tainted > > >>>> [=A0=A0 91.387149] ---------------------------------------------= --------- > > >>>> [=A0=A0 91.394016] lsbug/1523 is trying to acquire lock: > > >>>> [=A0=A0 91.399406] ffff800018e76530 (flush_lock){+.+.}-{3:3}, at= : flush_all+0x50/0x1c8 > > >>>> [=A0=A0 91.407425] > > >>>> =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 but task is already h= olding lock: > > >>>> [=A0=A0 91.414638] ffff800018e48468 (slab_mutex){+.+.}-{3:3}, at= : slab_memory_callback+0x44/0x280 > > >>>> [=A0=A0 91.423603] > > >>>> =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 which lock already de= pends on the new lock. 
> > >>>>
> > >>>
> > >>> OK, managed to reproduce in qemu and this fixes it for me on top of
> > >>> next-20210809. Could you test as well, as your testing might be more
> > >>> comprehensive? I will format it as a fixup for the proper patch in the series then.
> > >>
> > >> As it appeared it should, moving cpu_hotplug_lock outside slab_mutex in
> > >> kmem_cache_destroy() on top of that silenced the cpu offline gripe.
> > >
> > > And this one got rid of the remainder of the deadlock, but gets me the
> > > splat shown at the end of this message.  So some sort of middle ground
> > > may be needed.
> > >
> > > (Same reproducer as in my previous reply to Vlastimil.)
> > >
> > > 							Thanx, Paul
> > >
> > >> ---
> > >>  mm/slab_common.c |    2 ++
> > >>  mm/slub.c        |    2 +-
> > >>  2 files changed, 3 insertions(+), 1 deletion(-)
> > >>
> > >> --- a/mm/slab_common.c
> > >> +++ b/mm/slab_common.c
> > >> @@ -502,6 +502,7 @@ void kmem_cache_destroy(struct kmem_cach
> > >>  	if (unlikely(!s))
> > >>  		return;
> > >>
> > >> +	cpus_read_lock();
> > >>  	mutex_lock(&slab_mutex);
> > >>
> > >>  	s->refcount--;
> > >> @@ -516,6 +517,7 @@ void kmem_cache_destroy(struct kmem_cach
> > >>  	}
> > >>  out_unlock:
> > >>  	mutex_unlock(&slab_mutex);
> > >> +	cpus_read_unlock();
> > >>  }
> > >>  EXPORT_SYMBOL(kmem_cache_destroy);
> > >>
> > >> --- a/mm/slub.c
> > >> +++ b/mm/slub.c
> > >> @@ -4234,7 +4234,7 @@ int __kmem_cache_shutdown(struct kmem_ca
> > >>  	int node;
> > >>  	struct kmem_cache_node *n;
> > >>
> > >> -	flush_all(s);
> > >> +	flush_all_cpus_locked(s);
> > >>  	/* Attempt to free all objects */
> > >>  	for_each_kmem_cache_node(s, node, n) {
> > >>  		free_partial(s, n);
> > >
> > > [  602.539109] ------------[ cut here ]------------
> > > [  602.539804] WARNING: CPU: 3 PID: 88 at kernel/cpu.c:335 lockdep_assert_cpus_held+0x29/0x30
> >
> > So this says the assert failed and we don't have the cpus_read_lock(), right, but...
> >
> > > [  602.540940] Modules linked in:
> > > [  602.541377] CPU: 3 PID: 88 Comm: torture_shutdow Not tainted 5.14.0-rc5-next-20210809+ #3299
> > > [  602.542536] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+746+bbd5d70c 04/01/2014
> > > [  602.543786] RIP: 0010:lockdep_assert_cpus_held+0x29/0x30
> > > [  602.544524] Code: 00 83 3d 4d f1 a4 01 01 76 0a 8b 05 4d 23 a5 01 85 c0 75 01 c3 be ff ff ff ff 48 c7 c7 b0 86 66 a3 e8 9b 05 c9 00 85 c0 75 ea <0f> 0b c3 0f 1f 40 00 41 57 41 89 ff 41 56 4d 89 c6 41 55 49 89 cd
> > > [  602.547051] RSP: 0000:ffffb382802efdb8 EFLAGS: 00010246
> > > [  602.547783] RAX: 0000000000000000 RBX: ffffa23301a44000 RCX: 0000000000000001
> > > [  602.548764] RDX: 0000000000000001 RSI: ffffffffa335f5c0 RDI: ffffffffa33adbbf
> > > [  602.549747] RBP: ffffa23301a44000 R08: ffffa23302810000 R09: 974cf0ba5c48ad3c
> > > [  602.550727] R10: ffffb382802efe78 R11: 0000000000000001 R12: ffffa23301a44000
> > > [  602.551709] R13: 00000000000249c0 R14: 00000000ffffffff R15: 0000000fffffffe0
> > > [  602.552694] FS:  0000000000000000(0000) GS:ffffa2331f580000(0000) knlGS:0000000000000000
> > > [  602.553805] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > [  602.554606] CR2: 0000000000000000 CR3: 0000000017222000 CR4: 00000000000006e0
> > > [  602.555601] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > [  602.556590] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > > [  602.557585] Call Trace:
> > > [  602.557927]  flush_all_cpus_locked+0x29/0x140
> > > [  602.558535]  __kmem_cache_shutdown+0x26/0x200
> > > [  602.559145]  ? lock_is_held_type+0xd6/0x130
> > > [  602.559739]  ? torture_onoff+0x260/0x260
> > > [  602.560284]  kmem_cache_destroy+0x38/0x110
> >
> > It should have been taken here. I don't understand. It's as if only the
> > mm/slub.c was patched by Mike's patch, but mm/slab_common.c not?
>
> You know, you would think that I would have learned how to reliably
> apply a patch by now.  Apparently not this morning.
>
> Anyway, right in one!  I will try again with the full patch later.

And with both patches:

Tested-by: Paul E. McKenney <paulmck@kernel.org>

							Thanx, Paul

> > > [  602.560859]  rcu_torture_cleanup.cold.36+0x192/0x421
> > > [  602.561539]  ? wait_woken+0x60/0x60
> > > [  602.562035]  ? torture_onoff+0x260/0x260
> > > [  602.562591]  torture_shutdown+0xdd/0x1c0
> > > [  602.563131]  kthread+0x132/0x160
> > > [  602.563592]  ? set_kthread_struct+0x40/0x40
> > > [  602.564172]  ret_from_fork+0x22/0x30
> > > [  602.564696] irq event stamp: 1307
> > > [  602.565161] hardirqs last  enabled at (1315): [] __up_console_sem+0x4d/0x50
> > > [  602.566321] hardirqs last disabled at (1324): [] __up_console_sem+0x32/0x50
> > > [  602.567479] softirqs last  enabled at (1304): [] __do_softirq+0x311/0x473
> > > [  602.568616] softirqs last disabled at (1299): [] irq_exit_rcu+0xe8/0xf0
> > > [  602.569735] ---[ end trace 26fd643e1df331c9 ]---
> > >
> >
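For completeness, the combination just tested amounts to something like
the following. Again a rough sketch pieced together from the quoted
hunks; the body of flush_all_cpus_locked() is an assumption based on
this discussion rather than quoted code:

/* Callers must already hold cpu_hotplug_lock via cpus_read_lock(). */
static void flush_all_cpus_locked(struct kmem_cache *s)
{
	lockdep_assert_cpus_held();	/* the assert that fired above */

	mutex_lock(&flush_lock);
	/* queue and wait for the per-CPU flush works as in flush_all() */
	mutex_unlock(&flush_lock);
}

static void flush_all(struct kmem_cache *s)
{
	cpus_read_lock();
	flush_all_cpus_locked(s);
	cpus_read_unlock();
}

void kmem_cache_destroy(struct kmem_cache *s)
{
	if (unlikely(!s))
		return;

	/* Take cpu_hotplug_lock outside slab_mutex... */
	cpus_read_lock();
	mutex_lock(&slab_mutex);
	/*
	 * ...so that __kmem_cache_shutdown() -> flush_all_cpus_locked()
	 * runs with the hotplug lock held, satisfying the assertion.
	 */
	mutex_unlock(&slab_mutex);
	cpus_read_unlock();
}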