From mboxrd@z Thu Jan  1 00:00:00 1970
From: Uladzislau Rezki <urezki@gmail.com>
Date: Tue, 25 Feb 2025 14:39:16 +0100
To: Vlastimil Babka
Cc: Keith Busch, "Paul E. McKenney", Joel Fernandes, Josh Triplett,
 Boqun Feng, Christoph Lameter, David Rientjes, Steven Rostedt,
 Mathieu Desnoyers, Lai Jiangshan, Zqiang, Julia Lawall, Jakub Kicinski,
 "Jason A. Donenfeld", Andrew Morton, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rcu@vger.kernel.org, Alexander Potapenko,
 Marco Elver, Dmitry Vyukov, kasan-dev@googlegroups.com, Jann Horn,
 Mateusz Guzik, linux-nvme@lists.infradead.org, leitao@debian.org
Subject: Re: [PATCH v2 6/7] mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy()
References: <20240807-b4-slab-kfree_rcu-destroy-v2-0-ea79102f428c@suse.cz>
 <20240807-b4-slab-kfree_rcu-destroy-v2-6-ea79102f428c@suse.cz>
 <2811463a-751f-4443-9125-02628dc315d9@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Tue, Feb 25, 2025 at 10:57:38AM +0100, Vlastimil Babka wrote:
> On 2/24/25 12:44, Uladzislau Rezki wrote:
> > On Fri, Feb 21, 2025 at 06:28:49PM +0100, Vlastimil Babka wrote:
> >> On 2/21/25 17:30, Keith Busch wrote:
> >> > On Wed, Aug 07, 2024 at 12:31:19PM +0200, Vlastimil Babka wrote:
> >> >> We would like to replace call_rcu() users with kfree_rcu() where the
> >> >> existing callback is just a kmem_cache_free(). However this causes
> >> >> issues when the cache can be destroyed (such as due to module unload).
> >> >>
> >> >> Currently such modules should be issuing rcu_barrier() before
> >> >> kmem_cache_destroy() to have their call_rcu() callbacks processed first.
> >> >> This barrier is however not sufficient for kfree_rcu() in flight due
> >> >> to the batching introduced by a35d16905efc ("rcu: Add basic support for
> >> >> kfree_rcu() batching").
> >> >>
> >> >> This is not a problem for kmalloc caches which are never destroyed, but
> >> >> since removing SLOB, kfree_rcu() is allowed also for any other cache,
> >> >> that might be destroyed.
> >> >>
> >> >> In order not to complicate the API, put the responsibility for handling
> >> >> outstanding kfree_rcu() in kmem_cache_destroy() itself.
> >> >> Use the newly introduced kvfree_rcu_barrier() to wait before
> >> >> destroying the cache. This is similar to how we issue rcu_barrier()
> >> >> for SLAB_TYPESAFE_BY_RCU caches, but has to be done earlier, as the
> >> >> latter only needs to wait for the empty slab pages to finish freeing,
> >> >> and not objects from the slab.
> >> >>
> >> >> Users of call_rcu() with arbitrary callbacks should still issue
> >> >> rcu_barrier() before destroying the cache and unloading the module, as
> >> >> kvfree_rcu_barrier() is not a superset of rcu_barrier() and the
> >> >> callbacks may be invoking module code or performing other actions that
> >> >> are necessary for a successful unload.
> >> >>
> >> >> Signed-off-by: Vlastimil Babka
> >> >> ---
> >> >>  mm/slab_common.c | 3 +++
> >> >>  1 file changed, 3 insertions(+)
> >> >>
> >> >> diff --git a/mm/slab_common.c b/mm/slab_common.c
> >> >> index c40227d5fa07..1a2873293f5d 100644
> >> >> --- a/mm/slab_common.c
> >> >> +++ b/mm/slab_common.c
> >> >> @@ -508,6 +508,9 @@ void kmem_cache_destroy(struct kmem_cache *s)
> >> >>  	if (unlikely(!s) || !kasan_check_byte(s))
> >> >>  		return;
> >> >>
> >> >> +	/* in-flight kfree_rcu()'s may include objects from our cache */
> >> >> +	kvfree_rcu_barrier();
> >> >> +
> >> >>  	cpus_read_lock();
> >> >>  	mutex_lock(&slab_mutex);
> >> >
> >> > This patch appears to be triggering a new warning in certain conditions
> >> > when tearing down an nvme namespace's block device. Stack trace is at
> >> > the end.
> >> >
> >> > The warning indicates that this shouldn't be called from a
> >> > WQ_MEM_RECLAIM workqueue. This workqueue is responsible for bringing up
> >> > and tearing down block devices, so this is a memory reclaim use AIUI.
> >> > I'm a bit confused why we can't tear down a disk from within a memory
> >> > reclaim workqueue. Is the recommended solution to simply remove the WQ
> >> > flag when creating the workqueue?
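As background for the scenario the commit message above addresses — a module doing kfree_rcu() on objects from its own cache and then destroying that cache on unload — a minimal sketch might look as follows. All names here (foo, foo_cache, foo_release) are hypothetical illustrations, not taken from the patch:

```c
/*
 * Hypothetical module illustrating the pattern the patch addresses.
 * Names are invented for illustration only.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	foo_cache = KMEM_CACHE(foo, 0);
	return foo_cache ? 0 : -ENOMEM;
}

static void foo_release(struct foo *f)
{
	/* Frees f back to foo_cache after a grace period; the free may
	 * still be batched and in flight when the module unloads. */
	kfree_rcu(f, rcu);
}

static void __exit foo_exit(void)
{
	/*
	 * Before the patch, even an explicit rcu_barrier() here was not
	 * sufficient for batched kfree_rcu() frees; with the patch,
	 * kmem_cache_destroy() waits internally via kvfree_rcu_barrier().
	 */
	kmem_cache_destroy(foo_cache);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");
```

This is kernel-module code, so it is a compile-time sketch rather than something runnable in a userspace harness.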
> >>
> >> I think it's reasonable to expect that a memory-reclaim-related action
> >> would destroy a kmem_cache. Mateusz's suggestion would work around the
> >> issue, but then we could get another surprising warning elsewhere. Also,
> >> making the kmem_cache destroys asynchronous can be tricky when a
> >> recreation happens immediately under the same name (implications for
> >> sysfs/debugfs etc.). We managed to make the destroying synchronous as
> >> part of this series and it would be great to keep it that way.
> >>
> >> > ------------[ cut here ]------------
> >> > workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
> >>
> >> Maybe kfree_rcu_work should instead be using a WQ_MEM_RECLAIM workqueue?
> >> It is, after all, freeing memory. Ulad, what do you think?
> >>
> > We reclaim memory, therefore WQ_MEM_RECLAIM seems to be what we need.
> > AFAIR, there is an extra rescuer worker, which can really help under a
> > low-memory condition so that we still make progress.
> >
> > Do we have a reproducer of the mentioned splat?
>
> I tried to create a kunit test for it, but it doesn't trigger anything.
> Maybe it's too simple, or racy, and thus we are not flushing any of the
> queues from kvfree_rcu_barrier()?
> See some comments below.
>
I will try to reproduce it today, but at first glance it should trigger
the warning.
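For readers unfamiliar with the splat quoted above: it comes from the workqueue flush-dependency check. A worker on a WQ_MEM_RECLAIM workqueue has a rescuer thread and is guaranteed to make forward progress under memory pressure, so it must not wait on work running on a queue without that guarantee. A condensed, paraphrased sketch of the rule (simplified field names, not a verbatim copy of check_flush_dependency() in kernel/workqueue.c):

```c
/*
 * Paraphrased sketch of the flush-dependency rule enforced in
 * kernel/workqueue.c. If the flushing context is itself a workqueue
 * worker whose queue is WQ_MEM_RECLAIM, the queue being flushed must
 * also be WQ_MEM_RECLAIM; otherwise the wait can deadlock during
 * reclaim, because the target queue has no rescuer.
 */
static void check_flush_dependency_sketch(struct workqueue_struct *target_wq)
{
	struct worker *worker = current_wq_worker();

	WARN_ONCE(worker &&
		  (worker->current_pwq->wq->flags & WQ_MEM_RECLAIM) &&
		  !(target_wq->flags & WQ_MEM_RECLAIM),
		  "workqueue: WQ_MEM_RECLAIM %s is flushing !WQ_MEM_RECLAIM %s\n",
		  worker->current_pwq->wq->name, target_wq->name);
}
```

Per the quoted warning, kfree_rcu_work runs on events_unbound, which lacks WQ_MEM_RECLAIM, while nvme_scan_work runs on the WQ_MEM_RECLAIM nvme-wq — exactly the combination this check flags.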
> ----8<----
> From 1e19ea78e7fe254034970f75e3b7cb705be50163 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka
> Date: Tue, 25 Feb 2025 10:51:28 +0100
> Subject: [PATCH] add test for kmem_cache_destroy in a workqueue
>
> ---
>  lib/slub_kunit.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index f11691315c2f..5fe9775fba05 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -6,6 +6,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "../mm/slab.h"
>
>  static struct kunit_resource resource;
> @@ -181,6 +182,52 @@ static void test_kfree_rcu(struct kunit *test)
>  	KUNIT_EXPECT_EQ(test, 0, slab_errors);
>  }
>
> +struct cache_destroy_work {
> +	struct work_struct work;
> +	struct kmem_cache *s;
> +};
> +
> +static void cache_destroy_workfn(struct work_struct *w)
> +{
> +	struct cache_destroy_work *cdw;
> +
> +	cdw = container_of(w, struct cache_destroy_work, work);
> +
> +	kmem_cache_destroy(cdw->s);
> +}
> +
> +static void test_kfree_rcu_wq_destroy(struct kunit *test)
> +{
> +	struct test_kfree_rcu_struct *p;
> +	struct cache_destroy_work cdw;
> +	struct workqueue_struct *wq;
> +	struct kmem_cache *s;
> +
> +	if (IS_BUILTIN(CONFIG_SLUB_KUNIT_TEST))
> +		kunit_skip(test, "can't do kfree_rcu() when test is built-in");
> +
> +	INIT_WORK_ONSTACK(&cdw.work, cache_destroy_workfn);
> +	wq = alloc_workqueue("test_kfree_rcu_destroy_wq",
> +			     WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
>
Maybe it is also worth adding WQ_HIGHPRI, to get ahead of other work?

> +	if (!wq)
> +		kunit_skip(test, "failed to alloc wq");
> +
> +	s = test_kmem_cache_create("TestSlub_kfree_rcu_wq_destroy",
> +				   sizeof(struct test_kfree_rcu_struct),
> +				   SLAB_NO_MERGE);
> +	p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kfree_rcu(p, rcu);
> +
> +	cdw.s = s;
> +	queue_work(wq, &cdw.work);
> +	msleep(1000);

I am not sure the msleep() is needed. On the other hand, it does nothing
harmful, if I am not missing anything.
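On that last point, flush_work() already blocks until the queued handler has completed, so the sleep can only delay the test, not change its outcome. A sketch of the canonical on-stack work pattern, based on the quoted test, with destroy_work_on_stack() added to pair with INIT_WORK_ONSTACK() (needed for CONFIG_DEBUG_OBJECTS_WORK correctness):

```c
/*
 * Sketch of the on-stack work pattern from the quoted test.
 * flush_work() returns only after cache_destroy_workfn() has run to
 * completion, so no msleep() is required for correctness.
 */
struct cache_destroy_work cdw;

INIT_WORK_ONSTACK(&cdw.work, cache_destroy_workfn);
cdw.s = s;

queue_work(wq, &cdw.work);        /* runs kmem_cache_destroy(s) on wq */
flush_work(&cdw.work);            /* waits for the handler to complete */
destroy_work_on_stack(&cdw.work); /* pairs with INIT_WORK_ONSTACK() */
```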
> +	flush_work(&cdw.work);
> +
> +	destroy_workqueue(wq);
> +
> +	KUNIT_EXPECT_EQ(test, 0, slab_errors);
> +}
> +
>  static void test_leak_destroy(struct kunit *test)
>  {
>  	struct kmem_cache *s = test_kmem_cache_create("TestSlub_leak_destroy",
> @@ -254,6 +301,7 @@ static struct kunit_case test_cases[] = {
>  	KUNIT_CASE(test_clobber_redzone_free),
>  	KUNIT_CASE(test_kmalloc_redzone_access),
>  	KUNIT_CASE(test_kfree_rcu),
> +	KUNIT_CASE(test_kfree_rcu_wq_destroy),
>  	KUNIT_CASE(test_leak_destroy),
>  	KUNIT_CASE(test_krealloc_redzone_zeroing),
>  	{}
> --
> 2.48.1
>

--
Uladzislau Rezki