Date: Wed, 6 Dec 2023 18:45:19 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 11/21] mm/slab: move the rest of slub_def.h to mm/slab.h
Message-ID:
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz> <20231120-slab-remove-slab-v2-11-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-11-9c9c70177183@suse.cz>

On Mon, Nov 20, 2023 at 07:34:22PM +0100, Vlastimil Babka wrote:
> mm/slab.h is the only place to include include/linux/slub_def.h which
> has allowed switching between SLAB and SLUB. Now we can simply move the
> contents over and remove slub_def.h.
> 
> Use this opportunity to fix up some whitespace (alignment) issues.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  include/linux/slub_def.h | 150 -----------------------------------------------
>  mm/slab.h                | 138 ++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 137 insertions(+), 151 deletions(-)
> 
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> deleted file mode 100644
> index a0229ea42977..000000000000
> --- a/include/linux/slub_def.h
> +++ /dev/null
> @@ -1,150 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef _LINUX_SLUB_DEF_H
> -#define _LINUX_SLUB_DEF_H
> -
> -/*
> - * SLUB : A Slab allocator without object queues.
> - *
> - * (C) 2007 SGI, Christoph Lameter
> - */
> -#include
> -#include
> -#include
> -#include
> -
> -#ifdef CONFIG_SLUB_CPU_PARTIAL
> -#define slub_percpu_partial(c)		((c)->partial)
> -
> -#define slub_set_percpu_partial(c, p)	\
> -({					\
> -	slub_percpu_partial(c) = (p)->next;	\
> -})
> -
> -#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
> -#else
> -#define slub_percpu_partial(c)		NULL
> -
> -#define slub_set_percpu_partial(c, p)
> -
> -#define slub_percpu_partial_read_once(c)	NULL
> -#endif // CONFIG_SLUB_CPU_PARTIAL
> -
> -/*
> - * Word size structure that can be atomically updated or read and that
> - * contains both the order and the number of objects that a slab of the
> - * given order would contain.
> - */
> -struct kmem_cache_order_objects {
> -	unsigned int x;
> -};
> -
> -/*
> - * Slab cache management.
> - */
> -struct kmem_cache {
> -#ifndef CONFIG_SLUB_TINY
> -	struct kmem_cache_cpu __percpu *cpu_slab;
> -#endif
> -	/* Used for retrieving partial slabs, etc. */
> -	slab_flags_t flags;
> -	unsigned long min_partial;
> -	unsigned int size;	/* The size of an object including metadata */
> -	unsigned int object_size;/* The size of an object without metadata */
> -	struct reciprocal_value reciprocal_size;
> -	unsigned int offset;	/* Free pointer offset */
> -#ifdef CONFIG_SLUB_CPU_PARTIAL
> -	/* Number of per cpu partial objects to keep around */
> -	unsigned int cpu_partial;
> -	/* Number of per cpu partial slabs to keep around */
> -	unsigned int cpu_partial_slabs;
> -#endif
> -	struct kmem_cache_order_objects oo;
> -
> -	/* Allocation and freeing of slabs */
> -	struct kmem_cache_order_objects min;
> -	gfp_t allocflags;	/* gfp flags to use on each alloc */
> -	int refcount;		/* Refcount for slab cache destroy */
> -	void (*ctor)(void *);
> -	unsigned int inuse;		/* Offset to metadata */
> -	unsigned int align;		/* Alignment */
> -	unsigned int red_left_pad;	/* Left redzone padding size */
> -	const char *name;	/* Name (only for display!) */
> -	struct list_head list;	/* List of slab caches */
> -#ifdef CONFIG_SYSFS
> -	struct kobject kobj;	/* For sysfs */
> -#endif
> -#ifdef CONFIG_SLAB_FREELIST_HARDENED
> -	unsigned long random;
> -#endif
> -
> -#ifdef CONFIG_NUMA
> -	/*
> -	 * Defragmentation by allocating from a remote node.
> -	 */
> -	unsigned int remote_node_defrag_ratio;
> -#endif
> -
> -#ifdef CONFIG_SLAB_FREELIST_RANDOM
> -	unsigned int *random_seq;
> -#endif
> -
> -#ifdef CONFIG_KASAN_GENERIC
> -	struct kasan_cache kasan_info;
> -#endif
> -
> -#ifdef CONFIG_HARDENED_USERCOPY
> -	unsigned int useroffset;	/* Usercopy region offset */
> -	unsigned int usersize;		/* Usercopy region size */
> -#endif
> -
> -	struct kmem_cache_node *node[MAX_NUMNODES];
> -};
> -
> -#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
> -#define SLAB_SUPPORTS_SYSFS
> -void sysfs_slab_unlink(struct kmem_cache *);
> -void sysfs_slab_release(struct kmem_cache *);
> -#else
> -static inline void sysfs_slab_unlink(struct kmem_cache *s)
> -{
> -}
> -static inline void sysfs_slab_release(struct kmem_cache *s)
> -{
> -}
> -#endif
> -
> -void *fixup_red_left(struct kmem_cache *s, void *p);
> -
> -static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> -				void *x) {
> -	void *object = x - (x - slab_address(slab)) % cache->size;
> -	void *last_object = slab_address(slab) +
> -		(slab->objects - 1) * cache->size;
> -	void *result = (unlikely(object > last_object)) ?
> -		last_object : object;
> -
> -	result = fixup_red_left(cache, result);
> -	return result;
> -}
> -
> -/* Determine object index from a given position */
> -static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> -					  void *addr, void *obj)
> -{
> -	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> -				 cache->reciprocal_size);
> -}
> -
> -static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> -					const struct slab *slab, void *obj)
> -{
> -	if (is_kfence_address(obj))
> -		return 0;
> -	return __obj_to_index(cache, slab_address(slab), obj);
> -}
> -
> -static inline int objs_per_slab(const struct kmem_cache *cache,
> -				const struct slab *slab)
> -{
> -	return slab->objects;
> -}
> -#endif /* _LINUX_SLUB_DEF_H */
> diff --git a/mm/slab.h b/mm/slab.h
> index 014c36ea51fa..3a8d13c099fa 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -209,7 +209,143 @@ static inline size_t slab_size(const struct slab *slab)
>  	return PAGE_SIZE << slab_order(slab);
>  }
>  
> -#include
> +#include
> +#include
> +#include
> +#include
> +
> +#ifdef CONFIG_SLUB_CPU_PARTIAL
> +#define slub_percpu_partial(c)			((c)->partial)
> +
> +#define slub_set_percpu_partial(c, p)		\
> +({						\
> +	slub_percpu_partial(c) = (p)->next;	\
> +})
> +
> +#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
> +#else
> +#define slub_percpu_partial(c)			NULL
> +
> +#define slub_set_percpu_partial(c, p)
> +
> +#define slub_percpu_partial_read_once(c)	NULL
> +#endif // CONFIG_SLUB_CPU_PARTIAL
> +
> +/*
> + * Word size structure that can be atomically updated or read and that
> + * contains both the order and the number of objects that a slab of the
> + * given order would contain.
> + */
> +struct kmem_cache_order_objects {
> +	unsigned int x;
> +};
> +
> +/*
> + * Slab cache management.
> + */
> +struct kmem_cache {
> +#ifndef CONFIG_SLUB_TINY
> +	struct kmem_cache_cpu __percpu *cpu_slab;
> +#endif
> +	/* Used for retrieving partial slabs, etc. */
> +	slab_flags_t flags;
> +	unsigned long min_partial;
> +	unsigned int size;		/* Object size including metadata */
> +	unsigned int object_size;	/* Object size without metadata */
> +	struct reciprocal_value reciprocal_size;
> +	unsigned int offset;		/* Free pointer offset */
> +#ifdef CONFIG_SLUB_CPU_PARTIAL
> +	/* Number of per cpu partial objects to keep around */
> +	unsigned int cpu_partial;
> +	/* Number of per cpu partial slabs to keep around */
> +	unsigned int cpu_partial_slabs;
> +#endif
> +	struct kmem_cache_order_objects oo;
> +
> +	/* Allocation and freeing of slabs */
> +	struct kmem_cache_order_objects min;
> +	gfp_t allocflags;		/* gfp flags to use on each alloc */
> +	int refcount;			/* Refcount for slab cache destroy */
> +	void (*ctor)(void *object);	/* Object constructor */
> +	unsigned int inuse;		/* Offset to metadata */
> +	unsigned int align;		/* Alignment */
> +	unsigned int red_left_pad;	/* Left redzone padding size */
> +	const char *name;		/* Name (only for display!) */
> +	struct list_head list;		/* List of slab caches */
> +#ifdef CONFIG_SYSFS
> +	struct kobject kobj;		/* For sysfs */
> +#endif
> +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> +	unsigned long random;
> +#endif
> +
> +#ifdef CONFIG_NUMA
> +	/*
> +	 * Defragmentation by allocating from a remote node.
> +	 */
> +	unsigned int remote_node_defrag_ratio;
> +#endif
> +
> +#ifdef CONFIG_SLAB_FREELIST_RANDOM
> +	unsigned int *random_seq;
> +#endif
> +
> +#ifdef CONFIG_KASAN_GENERIC
> +	struct kasan_cache kasan_info;
> +#endif
> +
> +#ifdef CONFIG_HARDENED_USERCOPY
> +	unsigned int useroffset;	/* Usercopy region offset */
> +	unsigned int usersize;		/* Usercopy region size */
> +#endif
> +
> +	struct kmem_cache_node *node[MAX_NUMNODES];
> +};
> +
> +#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
> +#define SLAB_SUPPORTS_SYSFS
> +void sysfs_slab_unlink(struct kmem_cache *s);
> +void sysfs_slab_release(struct kmem_cache *s);
> +#else
> +static inline void sysfs_slab_unlink(struct kmem_cache *s) { }
> +static inline void sysfs_slab_release(struct kmem_cache *s) { }
> +#endif
> +
> +void *fixup_red_left(struct kmem_cache *s, void *p);
> +
> +static inline void *nearest_obj(struct kmem_cache *cache,
> +				const struct slab *slab, void *x)
> +{
> +	void *object = x - (x - slab_address(slab)) % cache->size;
> +	void *last_object = slab_address(slab) +
> +		(slab->objects - 1) * cache->size;
> +	void *result = (unlikely(object > last_object)) ? last_object : object;
> +
> +	result = fixup_red_left(cache, result);
> +	return result;
> +}
> +
> +/* Determine object index from a given position */
> +static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> +					  void *addr, void *obj)
> +{
> +	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> +				 cache->reciprocal_size);
> +}
> +
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> +					const struct slab *slab, void *obj)
> +{
> +	if (is_kfence_address(obj))
> +		return 0;
> +	return __obj_to_index(cache, slab_address(slab), obj);
> +}
> +
> +static inline int objs_per_slab(const struct kmem_cache *cache,
> +				const struct slab *slab)
> +{
> +	return slab->objects;
> +}
>  
>  #include
>  #include

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 
> -- 
> 2.42.1
> 
> 
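
P.S. For anyone reading along who is less familiar with these helpers:
nearest_obj() rounds an address inside a slab down to the start of the
object containing it, and obj_to_index() turns an object address into an
index by dividing its offset from the slab base by the object stride
(with the precomputed cache->reciprocal_size standing in for the division).
A minimal userspace sketch of the same arithmetic (hypothetical toy_*
names, plain division instead of reciprocal_divide(), and none of the
last_object clamp, red_left_pad fixup, or KFENCE/KASAN handling):

	/* Illustration only, not kernel code. */
	#include <stdint.h>
	#include <stdio.h>

	struct toy_cache {
		unsigned int size;	/* object stride including metadata */
	};

	/* Like nearest_obj(): round an address within the slab down to
	 * the start of the object that contains it. */
	static uintptr_t toy_nearest_obj(const struct toy_cache *c,
					 uintptr_t slab_base, uintptr_t x)
	{
		return x - (x - slab_base) % c->size;
	}

	/* Like __obj_to_index(): offset from the slab base divided by
	 * the object stride gives the object's index. */
	static unsigned int toy_obj_to_index(const struct toy_cache *c,
					     uintptr_t slab_base, uintptr_t obj)
	{
		return (unsigned int)((obj - slab_base) / c->size);
	}

	int main(void)
	{
		struct toy_cache c = { .size = 192 };
		uintptr_t base = 0x100000;
		uintptr_t p = base + 2 * 192 + 40;	/* points inside object 2 */

		printf("object start %#lx, index %u\n",
		       (unsigned long)toy_nearest_obj(&c, base, p),
		       toy_obj_to_index(&c, base, toy_nearest_obj(&c, base, p)));
		return 0;
	}

As I understand it, the reciprocal is precomputed at cache creation so the
real obj_to_index() avoids a hardware divide on hot paths such as
per-object accounting.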