From: Vlastimil Babka <vbabka@suse.cz>
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook, kasan-dev@googlegroups.com, cgroups@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 10/20] mm/slab: move the rest of slub_def.h to mm/slab.h
Date: Mon, 13 Nov 2023 20:13:51 +0100
Message-ID: <20231113191340.17482-32-vbabka@suse.cz>
In-Reply-To: <20231113191340.17482-22-vbabka@suse.cz>
References: <20231113191340.17482-22-vbabka@suse.cz>

mm/slab.h is the only place to include include/linux/slub_def.h, which has allowed switching between SLAB and SLUB. Now we can simply move the contents over and remove slub_def.h.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slub_def.h | 150 ---------------------------------------
 mm/slab.h                | 137 ++++++++++++++++++++++++++++++++++-
 2 files changed, 136 insertions(+), 151 deletions(-)
 delete mode 100644 include/linux/slub_def.h

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
deleted file mode 100644
index a0229ea42977..000000000000
--- a/include/linux/slub_def.h
+++ /dev/null
@@ -1,150 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_SLUB_DEF_H
-#define _LINUX_SLUB_DEF_H
-
-/*
- * SLUB : A Slab allocator without object queues.
- *
- * (C) 2007 SGI, Christoph Lameter
- */
-#include <linux/kfence.h>
-#include <linux/kobject.h>
-#include <linux/reciprocal_div.h>
-#include <linux/local_lock.h>
-
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-#define slub_percpu_partial(c)			((c)->partial)
-
-#define slub_set_percpu_partial(c, p)		\
-({						\
-	slub_percpu_partial(c) = (p)->next;	\
-})
-
-#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
-#else
-#define slub_percpu_partial(c)			NULL
-
-#define slub_set_percpu_partial(c, p)
-
-#define slub_percpu_partial_read_once(c)	NULL
-#endif // CONFIG_SLUB_CPU_PARTIAL
-
-/*
- * Word size structure that can be atomically updated or read and that
- * contains both the order and the number of objects that a slab of the
- * given order would contain.
- */
-struct kmem_cache_order_objects {
-	unsigned int x;
-};
-
-/*
- * Slab cache management.
- */
-struct kmem_cache {
-#ifndef CONFIG_SLUB_TINY
-	struct kmem_cache_cpu __percpu *cpu_slab;
-#endif
-	/* Used for retrieving partial slabs, etc. */
-	slab_flags_t flags;
-	unsigned long min_partial;
-	unsigned int size;	/* The size of an object including metadata */
-	unsigned int object_size;/* The size of an object without metadata */
-	struct reciprocal_value reciprocal_size;
-	unsigned int offset;	/* Free pointer offset */
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	/* Number of per cpu partial objects to keep around */
-	unsigned int cpu_partial;
-	/* Number of per cpu partial slabs to keep around */
-	unsigned int cpu_partial_slabs;
-#endif
-	struct kmem_cache_order_objects oo;
-
-	/* Allocation and freeing of slabs */
-	struct kmem_cache_order_objects min;
-	gfp_t allocflags;	/* gfp flags to use on each alloc */
-	int refcount;		/* Refcount for slab cache destroy */
-	void (*ctor)(void *);
-	unsigned int inuse;		/* Offset to metadata */
-	unsigned int align;		/* Alignment */
-	unsigned int red_left_pad;	/* Left redzone padding size */
-	const char *name;	/* Name (only for display!) */
-	struct list_head list;	/* List of slab caches */
-#ifdef CONFIG_SYSFS
-	struct kobject kobj;	/* For sysfs */
-#endif
-#ifdef CONFIG_SLAB_FREELIST_HARDENED
-	unsigned long random;
-#endif
-
-#ifdef CONFIG_NUMA
-	/*
-	 * Defragmentation by allocating from a remote node.
-	 */
-	unsigned int remote_node_defrag_ratio;
-#endif
-
-#ifdef CONFIG_SLAB_FREELIST_RANDOM
-	unsigned int *random_seq;
-#endif
-
-#ifdef CONFIG_KASAN_GENERIC
-	struct kasan_cache kasan_info;
-#endif
-
-#ifdef CONFIG_HARDENED_USERCOPY
-	unsigned int useroffset;	/* Usercopy region offset */
-	unsigned int usersize;		/* Usercopy region size */
-#endif
-
-	struct kmem_cache_node *node[MAX_NUMNODES];
-};
-
-#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
-#define SLAB_SUPPORTS_SYSFS
-void sysfs_slab_unlink(struct kmem_cache *);
-void sysfs_slab_release(struct kmem_cache *);
-#else
-static inline void sysfs_slab_unlink(struct kmem_cache *s)
-{
-}
-static inline void sysfs_slab_release(struct kmem_cache *s)
-{
-}
-#endif
-
-void *fixup_red_left(struct kmem_cache *s, void *p);
-
-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
-				void *x) {
-	void *object = x - (x - slab_address(slab)) % cache->size;
-	void *last_object = slab_address(slab) +
-		(slab->objects - 1) * cache->size;
-	void *result = (unlikely(object > last_object)) ?
-		last_object : object;
-
-	result = fixup_red_left(cache, result);
-	return result;
-}
-
-/* Determine object index from a given position */
-static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
-					  void *addr, void *obj)
-{
-	return reciprocal_divide(kasan_reset_tag(obj) - addr,
-				 cache->reciprocal_size);
-}
-
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
-{
-	if (is_kfence_address(obj))
-		return 0;
-	return __obj_to_index(cache, slab_address(slab), obj);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
-				const struct slab *slab)
-{
-	return slab->objects;
-}
-#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slab.h b/mm/slab.h
index 014c36ea51fa..6e76216ac74e 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -209,7 +209,142 @@ static inline size_t slab_size(const struct slab *slab)
 	return PAGE_SIZE << slab_order(slab);
 }
 
-#include <linux/slub_def.h>
+#include <linux/kfence.h>
+#include <linux/kobject.h>
+#include <linux/reciprocal_div.h>
+#include <linux/local_lock.h>
+
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+#define slub_percpu_partial(c)			((c)->partial)
+
+#define slub_set_percpu_partial(c, p)		\
+({						\
+	slub_percpu_partial(c) = (p)->next;	\
+})
+
+#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
+#else
+#define slub_percpu_partial(c)			NULL
+
+#define slub_set_percpu_partial(c, p)
+
+#define slub_percpu_partial_read_once(c)	NULL
+#endif // CONFIG_SLUB_CPU_PARTIAL
+
+/*
+ * Word size structure that can be atomically updated or read and that
+ * contains both the order and the number of objects that a slab of the
+ * given order would contain.
+ */
+struct kmem_cache_order_objects {
+	unsigned int x;
+};
+
+/*
+ * Slab cache management.
+ */
+struct kmem_cache {
+#ifndef CONFIG_SLUB_TINY
+	struct kmem_cache_cpu __percpu *cpu_slab;
+#endif
+	/* Used for retrieving partial slabs, etc. */
+	slab_flags_t flags;
+	unsigned long min_partial;
+	unsigned int size;	/* The size of an object including metadata */
+	unsigned int object_size;/* The size of an object without metadata */
+	struct reciprocal_value reciprocal_size;
+	unsigned int offset;	/* Free pointer offset */
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
+	/* Number of per cpu partial slabs to keep around */
+	unsigned int cpu_partial_slabs;
+#endif
+	struct kmem_cache_order_objects oo;
+
+	/* Allocation and freeing of slabs */
+	struct kmem_cache_order_objects min;
+	gfp_t allocflags;	/* gfp flags to use on each alloc */
+	int refcount;		/* Refcount for slab cache destroy */
+	void (*ctor)(void *object);	/* Object constructor */
+	unsigned int inuse;		/* Offset to metadata */
+	unsigned int align;		/* Alignment */
+	unsigned int red_left_pad;	/* Left redzone padding size */
+	const char *name;	/* Name (only for display!) */
+	struct list_head list;	/* List of slab caches */
+#ifdef CONFIG_SYSFS
+	struct kobject kobj;	/* For sysfs */
+#endif
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
+#ifdef CONFIG_NUMA
+	/*
+	 * Defragmentation by allocating from a remote node.
+	 */
+	unsigned int remote_node_defrag_ratio;
+#endif
+
+#ifdef CONFIG_SLAB_FREELIST_RANDOM
+	unsigned int *random_seq;
+#endif
+
+#ifdef CONFIG_KASAN_GENERIC
+	struct kasan_cache kasan_info;
+#endif
+
+#ifdef CONFIG_HARDENED_USERCOPY
+	unsigned int useroffset;	/* Usercopy region offset */
+	unsigned int usersize;		/* Usercopy region size */
+#endif
+
+	struct kmem_cache_node *node[MAX_NUMNODES];
+};
+
+#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
+#define SLAB_SUPPORTS_SYSFS
+void sysfs_slab_unlink(struct kmem_cache *s);
+void sysfs_slab_release(struct kmem_cache *s);
+#else
+static inline void sysfs_slab_unlink(struct kmem_cache *s) { }
+static inline void sysfs_slab_release(struct kmem_cache *s) { }
+#endif
+
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
+static inline void *nearest_obj(struct kmem_cache *cache,
+				const struct slab *slab, void *x) {
+	void *object = x - (x - slab_address(slab)) % cache->size;
+	void *last_object = slab_address(slab) +
+		(slab->objects - 1) * cache->size;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
+}
+
+/* Determine object index from a given position */
+static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
+					  void *addr, void *obj)
+{
+	return reciprocal_divide(kasan_reset_tag(obj) - addr,
+				 cache->reciprocal_size);
+}
+
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct slab *slab, void *obj)
+{
+	if (is_kfence_address(obj))
+		return 0;
+	return __obj_to_index(cache, slab_address(slab), obj);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
+{
+	return slab->objects;
+}
 
 #include <linux/memcontrol.h>
 #include <linux/fault-inject.h>
-- 
2.42.1