From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Date: Mon, 14 Jul 2025 11:46:14 -0700
Subject: Re: [PATCH v2 3/6] locking/local_lock: Introduce local_lock_lockdep_start/end()
To: Vlastimil Babka
Cc: Sebastian Andrzej Siewior, bpf, linux-mm, Harry Yoo, Shakeel Butt,
 Michal Hocko, Andrii Nakryiko, Kumar Kartikeya Dwivedi, Andrew Morton,
 Peter Zijlstra, Steven Rostedt, Johannes Weiner
References: <20250709015303.8107-1-alexei.starovoitov@gmail.com>
 <20250709015303.8107-4-alexei.starovoitov@gmail.com>
 <20250711075001.fnlMZfk6@linutronix.de>
 <1adbee35-6131-49de-835b-2c93aacfdd1e@suse.cz>
 <20250711151730.rz_TY1Qq@linutronix.de>
 <20250714110639.uOaKJEfL@linutronix.de>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jul 14, 2025 at 11:33 AM Vlastimil Babka wrote:
>
> On 7/14/25 19:52, Alexei Starovoitov wrote:
> > On Mon, Jul 14, 2025 at 4:06 AM Sebastian Andrzej Siewior wrote:
> >>
> >> On 2025-07-11 19:19:26 [-0700], Alexei Starovoitov wrote:
> >> > > If there is no parent check then we could do "normal lock" on both
> >> > > sides.
> >> >
> >> > How would ___slab_alloc() know whether there was a parent check or not?
> >> >
> >> > imo keeping local_lock_irqsave() as-is is cleaner,
> >> > since if there is no parent check lockdep will rightfully complain.
> >>
> >> what about this:
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 7e2ffe1d46c6c..3520d1c25c205 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -3693,6 +3693,34 @@ static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
> >>  	return freelist;
> >>  }
> >>
> >> +static void local_lock_cpu_slab(struct kmem_cache *s, const gfp_t gfp_flags,
> >> +				unsigned long *flags)
> >> +{
> >> +	bool allow_spin = gfpflags_allow_spinning(gfp_flags);
> >> +
> >> +	/*
> >> +	 * ___slab_alloc()'s caller is supposed to check whether
> >> +	 * kmem_cache::kmem_cache_cpu::lock can be acquired without a
> >> +	 * deadlock before invoking the function.
> >> +	 *
> >> +	 * On PREEMPT_RT an invocation is not possible from IRQ-off or
> >> +	 * preempt-disabled context. The lock will always be acquired and,
> >> +	 * if needed, the caller will block and sleep until the lock is
> >> +	 * available.
> >> +	 *
> >> +	 * On !PREEMPT_RT allocations from any context but NMI are safe. The
> >> +	 * lock is always acquired with interrupts disabled, meaning it is
> >> +	 * always possible to acquire it.
> >> +	 * In NMI context it is necessary to check whether the lock is
> >> +	 * already held. If it is not, it is safe to acquire it. The trylock
> >> +	 * semantic is used to tell lockdep that we don't spin. The BUG_ON()
> >> +	 * will not trigger if it is safe to acquire the lock.
> >> +	 *
> >> +	 */
> >> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && !allow_spin)
> >> +		BUG_ON(!local_trylock_irqsave(&s->cpu_slab->lock, *flags));
> >> +	else
> >> +		local_lock_irqsave(&s->cpu_slab->lock, *flags);
> >> +}
> >
> > the patch misses these two:
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 36779519b02c..2f30b85fbf68 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3260,7 +3260,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
> >  	unsigned long flags;
> >  	int slabs = 0;
> >
> > -	local_lock_irqsave(&s->cpu_slab->lock, flags);
> > +	local_lock_cpu_slab(s, 0, &flags);
> >
> >  	oldslab = this_cpu_read(s->cpu_slab->partial);
> >
> > @@ -4889,8 +4889,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
> >  		goto redo;
> >  	} else {
> > +		long flags;
> >  		/* Update the free list under the local lock */
> > -		local_lock(&s->cpu_slab->lock);
> > +		local_lock_cpu_slab(s, 0, &flags);
> >  		c = this_cpu_ptr(s->cpu_slab);
> >  		if (unlikely(slab != c->slab)) {
> >  			local_unlock(&s->cpu_slab->lock);
> >
> > I realized that the latter one was missing local_lock_lockdep_start/end()
> > in my patch as well, but that's secondary.
> >
> > So with the above it works on !RT,
> > but on RT lockdep complains as I explained earlier.
> >
> > With yours and above hunks applied here is full lockdep splat:
> >
> > [   39.819636] ============================================
> > [   39.819638] WARNING: possible recursive locking detected
> > [   39.819641] 6.16.0-rc5-00342-gc8aca7837440-dirty #54 Tainted: G           O
> > [   39.819645] --------------------------------------------
> > [   39.819646] page_alloc_kthr/2306 is trying to acquire lock:
> > [   39.819650] ff110001f5cbea88 ((&c->lock)){+.+.}-{3:3}, at: ___slab_alloc+0xb7/0xec0
> > [   39.819667]
> > [   39.819667] but task is already holding lock:
> > [   39.819668] ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at: ___slab_alloc+0xb7/0xec0
> > [   39.819677]
> > [   39.819677] other info that might help us debug this:
> > [   39.819678]  Possible unsafe locking scenario:
> > [   39.819678]
> > [   39.819679]        CPU0
> > [   39.819680]        ----
> > [   39.819681]   lock((&c->lock));
> > [   39.819684]   lock((&c->lock));
> > [   39.819687]
> > [   39.819687]  *** DEADLOCK ***
> > [   39.819687]
> > [   39.819687]  May be due to missing lock nesting notation
> > [   39.819687]
> > [   39.819689] 2 locks held by page_alloc_kthr/2306:
> > [   39.819691]  #0: ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at: ___slab_alloc+0xb7/0xec0
> > [   39.819700]  #1: ffffffff8588f3a0 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x197/0x250
> > [   39.819710]
> > [   39.819710] stack backtrace:
> > [   39.819714] CPU: 1 UID: 0 PID: 2306 Comm: page_alloc_kthr Tainted: G           O 6.16.0-rc5-00342-gc8aca7837440-dirty #54 PREEMPT_RT
> > [   39.819721] Tainted: [O]=OOT_MODULE
> > [   39.819723] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> > [   39.819726] Call Trace:
> > [   39.819729]  <TASK>
> > [   39.819734]  dump_stack_lvl+0x5b/0x80
> > [   39.819740]  print_deadlock_bug.cold+0xbd/0xca
> > [   39.819747]  __lock_acquire+0x12ad/0x2590
> > [   39.819753]  ? __lock_acquire+0x42b/0x2590
> > [   39.819758]  lock_acquire+0x133/0x2d0
> > [   39.819763]  ? ___slab_alloc+0xb7/0xec0
> > [   39.819769]  ? try_to_take_rt_mutex+0x624/0xfc0
> > [   39.819773]  ? __lock_acquire+0x42b/0x2590
> > [   39.819778]  rt_spin_lock+0x6f/0x250
>
> But why are we here in ___slab_alloc, trying to take the lock...
>
> > [   39.819783]  ? ___slab_alloc+0xb7/0xec0
> > [   39.819788]  ? rtlock_slowlock_locked+0x5c60/0x5c60
> > [   39.819792]  ? rtlock_slowlock_locked+0xc3/0x5c60
> > [   39.819798]  ___slab_alloc+0xb7/0xec0
> > [   39.819803]  ? __lock_acquire+0x42b/0x2590
> > [   39.819809]  ? my_debug_callback+0x20e/0x390 [bpf_testmod]
> > [   39.819826]  ? __lock_acquire+0x42b/0x2590
> > [   39.819830]  ? rt_read_unlock+0x2f0/0x2f0
> > [   39.819835]  ? my_debug_callback+0x20e/0x390 [bpf_testmod]
> > [   39.819844]  ? kmalloc_nolock_noprof+0x15a/0x430
> > [   39.819849]  kmalloc_nolock_noprof+0x15a/0x430
>
> When in patch 6/6 __slab_alloc() we should have bailed out via
>
> 	if (unlikely(!gfpflags_allow_spinning(gfpflags))) {
> +		if (local_lock_is_locked(&s->cpu_slab->lock)) {
> +			/*
> +			 * EBUSY is an internal signal to kmalloc_nolock() to
> +			 * retry a different bucket. It's not propagated
> +			 * to the caller.
> +			 */
> +			p = ERR_PTR(-EBUSY);
> +			goto out;
> +		}
>
> So it doesn't seem to me to be a lack of lockdep tricking, but that we
> reached something we should not have, because the avoidance based on
> local_lock_is_locked() above didn't work properly? At least if I read the
> splat and backtrace properly, it doesn't seem to suggest a theoretical
> scenario but that we really tried to lock something we already had locked.

It's not theoretical.
Such slab re-entrance can happen with a tracepoint:
slab -> some tracepoint -> bpf -> slab

I simulate it with a stress test:

+extern void (*debug_callback)(void);
+#define local_unlock_irqrestore(lock, flags) \
+	do { \
+		if (debug_callback) debug_callback(); \
+		__local_unlock_irqrestore(lock, flags); \
+	} while (0)

and debug_callback() calls kmalloc_nolock(random_size)
without any bpf to simplify testing.

> > [   39.819857]  my_debug_callback+0x20e/0x390 [bpf_testmod]
>
> What exactly did you instrument here?
>
> > [   39.819867]  ? page_alloc_kthread+0x320/0x320 [bpf_testmod]
> > [   39.819875]  ? lock_is_held_type+0x85/0xe0
> > [   39.819881]  ___slab_alloc+0x256/0xec0
>
> And here we took the lock originally?

Yes, but they are truly different local_locks belonging to different
kmalloc buckets, and local_lock_is_locked() is working. See in the splat:

> > [   39.819646] page_alloc_kthr/2306 is trying to acquire lock:
> > [   39.819650] ff110001f5cbea88 ((&c->lock)){+.+.}-{3:3}, at: ___slab_alloc+0xb7/0xec0
> > [   39.819667]
> > [   39.819667] but task is already holding lock:
> > [   39.819668] ff110001f5cbfe88 ((&c->lock)){+.+.}-{3:3}, at: ___slab_alloc+0xb7/0xec0

The addresses of the two locks are different and they belong to different
kmalloc buckets, but lockdep cannot tell this without an explicit
local_lock_lockdep_start(), since it tracks lock classes rather than
lock instances. This is the same thing I'm trying to explain in the
commit log.