From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiongwei Song <sxwjean@gmail.com>
Date: Mon, 22 Jul 2024 22:16:54 +0800
Subject: Re: [PATCH RFC 2/6] mm, slab: always maintain per-node slab and object count
To: Vlastimil Babka
Cc: "Paul E. McKenney", Joel Fernandes, Josh Triplett, Boqun Feng,
	Christoph Lameter, David Rientjes, Steven Rostedt, Mathieu Desnoyers,
	Lai Jiangshan, Zqiang, Julia Lawall, Jakub Kicinski,
	"Jason A. Donenfeld", "Uladzislau Rezki (Sony)", Andrew Morton,
	Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org
In-Reply-To: <20240715-b4-slab-kfree_rcu-destroy-v1-2-46b2984c2205@suse.cz>
References: <20240715-b4-slab-kfree_rcu-destroy-v1-0-46b2984c2205@suse.cz>
	<20240715-b4-slab-kfree_rcu-destroy-v1-2-46b2984c2205@suse.cz>
Content-Type: text/plain; charset="UTF-8"

Don't we need the following changes for this patch?

diff --git a/mm/slub.c b/mm/slub.c
index c1222467c346..e6beb6743342 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4967,9 +4967,9 @@ init_kmem_cache_node(struct kmem_cache_node *n)
 	n->nr_partial = 0;
 	spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
-#ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
 	atomic_long_set(&n->total_objects, 0);
+#ifdef CONFIG_SLUB_DEBUG
 	INIT_LIST_HEAD(&n->full);
 #endif
 }

Thanks,
Xiongwei

On Tue, Jul 16, 2024 at 4:29 AM Vlastimil Babka wrote:
>
> Currently SLUB counts per-node slabs and total objects only with
> CONFIG_SLUB_DEBUG, in order to minimize overhead. However, the detection
> in __kmem_cache_shutdown() of whether there are no outstanding objects
> relies on the per-node slab count (node_nr_slabs()), so it may be
> unreliable without CONFIG_SLUB_DEBUG. Thus we might fail to warn
> about such situations and instead destroy a cache while leaving its
> slab(s) around (due to a buggy slab user creating such a scenario, not
> in normal operation).
>
> We will also need node_nr_slabs() to be reliable in the following work
> to gracefully handle kmem_cache_destroy() with kfree_rcu() objects in
> flight. Thus make the counting of per-node slabs and objects
> unconditional.
>
> Note that CONFIG_SLUB_DEBUG is the default anyway, and the counting is
> done only when allocating or freeing a slab page, so even in
> !CONFIG_SLUB_DEBUG configs the overhead should be negligible.
>
> Signed-off-by: Vlastimil Babka
> ---
>  mm/slub.c | 49 +++++++++++++++++++++----------------------
>  1 file changed, 21 insertions(+), 28 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 829a1f08e8a2..aa4d80109c49 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -426,9 +426,9 @@ struct kmem_cache_node {
>  	spinlock_t list_lock;
>  	unsigned long nr_partial;
>  	struct list_head partial;
> -#ifdef CONFIG_SLUB_DEBUG
>  	atomic_long_t nr_slabs;
>  	atomic_long_t total_objects;
> +#ifdef CONFIG_SLUB_DEBUG
>  	struct list_head full;
>  #endif
>  };
> @@ -438,6 +438,26 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>  	return s->node[node];
>  }
>
> +static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
> +{
> +	return atomic_long_read(&n->nr_slabs);
> +}
> +
> +static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
> +{
> +	struct kmem_cache_node *n = get_node(s, node);
> +
> +	atomic_long_inc(&n->nr_slabs);
> +	atomic_long_add(objects, &n->total_objects);
> +}
> +static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
> +{
> +	struct kmem_cache_node *n = get_node(s, node);
> +
> +	atomic_long_dec(&n->nr_slabs);
> +	atomic_long_sub(objects, &n->total_objects);
> +}
> +
>  /*
>   * Iterator over all nodes. The body will be executed for each node that has
>   * a kmem_cache_node structure allocated (which is true for all online nodes)
> @@ -1511,26 +1531,6 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
>  	list_del(&slab->slab_list);
>  }
>
> -static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
> -{
> -	return atomic_long_read(&n->nr_slabs);
> -}
> -
> -static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
> -{
> -	struct kmem_cache_node *n = get_node(s, node);
> -
> -	atomic_long_inc(&n->nr_slabs);
> -	atomic_long_add(objects, &n->total_objects);
> -}
> -static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
> -{
> -	struct kmem_cache_node *n = get_node(s, node);
> -
> -	atomic_long_dec(&n->nr_slabs);
> -	atomic_long_sub(objects, &n->total_objects);
> -}
> -
>  /* Object debug checks for alloc/free paths */
>  static void setup_object_debug(struct kmem_cache *s, void *object)
>  {
> @@ -1871,13 +1871,6 @@ slab_flags_t kmem_cache_flags(slab_flags_t flags, const char *name)
>
>  #define disable_higher_order_debug 0
>
> -static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
> -	{ return 0; }
> -static inline void inc_slabs_node(struct kmem_cache *s, int node,
> -				  int objects) {}
> -static inline void dec_slabs_node(struct kmem_cache *s, int node,
> -				  int objects) {}
> -
>  #ifndef CONFIG_SLUB_TINY
>  static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
>  			       void **freelist, void *nextfree)
>
> --
> 2.45.2
>
>