From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 6 Dec 2023 18:35:42 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 10/21] mm/slab: move struct kmem_cache_cpu declaration to slub.c
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz> <20231120-slab-remove-slab-v2-10-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-10-9c9c70177183@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Mon, Nov 20, 2023 at 07:34:21PM +0100, Vlastimil Babka wrote:
> Nothing outside SLUB itself accesses the struct kmem_cache_cpu fields so
> it does not need to be declared in
> slub_def.h. This allows also to move
> enum stat_item.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  include/linux/slub_def.h | 54 ------------------------------------------------
>  mm/slub.c                | 54 ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 54 insertions(+), 54 deletions(-)
> 
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index deb90cf4bffb..a0229ea42977 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -12,60 +12,6 @@
>  #include
>  #include
>  
> -enum stat_item {
> -	ALLOC_FASTPATH,		/* Allocation from cpu slab */
> -	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
> -	FREE_FASTPATH,		/* Free to cpu slab */
> -	FREE_SLOWPATH,		/* Freeing not to cpu slab */
> -	FREE_FROZEN,		/* Freeing to frozen slab */
> -	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
> -	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
> -	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
> -	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
> -	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
> -	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
> -	FREE_SLAB,		/* Slab freed to the page allocator */
> -	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
> -	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
> -	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
> -	DEACTIVATE_TO_HEAD,	/* Cpu slab was moved to the head of partials */
> -	DEACTIVATE_TO_TAIL,	/* Cpu slab was moved to the tail of partials */
> -	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
> -	DEACTIVATE_BYPASS,	/* Implicit deactivation */
> -	ORDER_FALLBACK,		/* Number of times fallback was necessary */
> -	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
> -	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
> -	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
> -	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
> -	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
> -	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
> -	NR_SLUB_STAT_ITEMS
> -};
> -
> -#ifndef CONFIG_SLUB_TINY
> -/*
> - * When changing the layout, make sure freelist and tid are still compatible
> - * with this_cpu_cmpxchg_double() alignment requirements.
> - */
> -struct kmem_cache_cpu {
> -	union {
> -		struct {
> -			void **freelist;	/* Pointer to next available object */
> -			unsigned long tid;	/* Globally unique transaction id */
> -		};
> -		freelist_aba_t freelist_tid;
> -	};
> -	struct slab *slab;	/* The slab from which we are allocating */
> -#ifdef CONFIG_SLUB_CPU_PARTIAL
> -	struct slab *partial;	/* Partially allocated frozen slabs */
> -#endif
> -	local_lock_t lock;	/* Protects the fields above */
> -#ifdef CONFIG_SLUB_STATS
> -	unsigned stat[NR_SLUB_STAT_ITEMS];
> -#endif
> -};
> -#endif /* CONFIG_SLUB_TINY */
> -
>  #ifdef CONFIG_SLUB_CPU_PARTIAL
>  #define slub_percpu_partial(c)		((c)->partial)
>  
> diff --git a/mm/slub.c b/mm/slub.c
> index 3e01731783df..979932d046fd 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -330,6 +330,60 @@ static void debugfs_slab_add(struct kmem_cache *);
>  static inline void debugfs_slab_add(struct kmem_cache *s) { }
>  #endif
>  
> +enum stat_item {
> +	ALLOC_FASTPATH,		/* Allocation from cpu slab */
> +	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
> +	FREE_FASTPATH,		/* Free to cpu slab */
> +	FREE_SLOWPATH,		/* Freeing not to cpu slab */
> +	FREE_FROZEN,		/* Freeing to frozen slab */
> +	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
> +	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
> +	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
> +	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
> +	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
> +	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
> +	FREE_SLAB,		/* Slab freed to the page allocator */
> +	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
> +	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
> +	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
> +	DEACTIVATE_TO_HEAD,	/* Cpu slab was moved to the head of partials */
> +	DEACTIVATE_TO_TAIL,	/* Cpu slab was moved to the tail of partials */
> +	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
> +	DEACTIVATE_BYPASS,	/* Implicit deactivation */
> +	ORDER_FALLBACK,		/* Number of times fallback was necessary */
> +	CMPXCHG_DOUBLE_CPU_FAIL,/* Failures of this_cpu_cmpxchg_double */
> +	CMPXCHG_DOUBLE_FAIL,	/* Failures of slab freelist update */
> +	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
> +	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
> +	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
> +	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
> +	NR_SLUB_STAT_ITEMS
> +};
> +
> +#ifndef CONFIG_SLUB_TINY
> +/*
> + * When changing the layout, make sure freelist and tid are still compatible
> + * with this_cpu_cmpxchg_double() alignment requirements.
> + */
> +struct kmem_cache_cpu {
> +	union {
> +		struct {
> +			void **freelist;	/* Pointer to next available object */
> +			unsigned long tid;	/* Globally unique transaction id */
> +		};
> +		freelist_aba_t freelist_tid;
> +	};
> +	struct slab *slab;	/* The slab from which we are allocating */
> +#ifdef CONFIG_SLUB_CPU_PARTIAL
> +	struct slab *partial;	/* Partially allocated frozen slabs */
> +#endif
> +	local_lock_t lock;	/* Protects the fields above */
> +#ifdef CONFIG_SLUB_STATS
> +	unsigned int stat[NR_SLUB_STAT_ITEMS];
> +#endif
> +};
> +#endif /* CONFIG_SLUB_TINY */
> +
>  static inline void stat(const struct kmem_cache *s, enum stat_item si)
>  {
>  #ifdef CONFIG_SLUB_STATS

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 
> -- 
> 2.42.1