From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Tue, 21 Nov 2023 17:54:36 +0900
Subject: Re: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c
To: sxwjean@me.com
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev, corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20231120091214.150502-5-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com> <20231120091214.150502-5-sxwjean@me.com>

On Mon, Nov 20, 2023 at 6:13 PM wrote:
>
> From: Xiongwei Song
>
> Since the SLAB allocator has been removed, there are no users of the
> slab merge code other than SLUB. This commit is essentially a revert of
> commit 423c929cbbec ("mm/slab_common: commonize slab merge logic").
>
> Also change the prefix of all slab-merge-related functions, variables
> and definitions from "slab/SLAB" to "slub/SLUB".

Could you please elaborate a bit? I am not sure I understand what the
last two patches of this series are for.

- Why rename the variable/function/macro names?
- Why move the merge-related functions from slab_common.c to slub.c?
  (Merging slab_common.c and slub.c into a single file might make
  sense, but why move only some parts of one into the other?)
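For concreteness, the predicate being moved is small. Below is a
minimal userspace sketch of the per-candidate checks in
find_mergeable() (quoted in full further down); toy_cache, NEVER_MERGE,
MERGE_SAME and can_merge() are illustrative stand-ins, not kernel API:

#include <stdbool.h>
#include <stdio.h>

struct toy_cache {
	const char *name;
	unsigned int size;	/* object size incl. metadata, like s->size */
	unsigned int flags;
};

#define NEVER_MERGE 0x1		/* stand-in for the SLAB_NEVER_MERGE mask */
#define MERGE_SAME  0x2		/* stand-in for the SLAB_MERGE_SAME mask */

/* Mirrors the checks in find_mergeable()'s list_for_each_entry_reverse() loop. */
static bool can_merge(const struct toy_cache *s, unsigned int size,
		      unsigned int align, unsigned int flags)
{
	if (s->flags & NEVER_MERGE)
		return false;
	if (size > s->size)			/* candidate objects too small */
		return false;
	if ((flags & MERGE_SAME) != (s->flags & MERGE_SAME))
		return false;			/* cache semantics must match */
	if ((s->size & ~(align - 1)) != s->size)
		return false;			/* alignment incompatible */
	if (s->size - size >= sizeof(void *))
		return false;			/* would waste too much space */
	return true;
}

int main(void)
{
	struct toy_cache existing = { "existing-64", 64, MERGE_SAME };

	/* 60-byte objects with 8-byte alignment can reuse the 64-byte cache. */
	printf("60/8 -> %d\n", can_merge(&existing, 60, 8, MERGE_SAME));
	/* 40-byte objects would waste >= sizeof(void *) bytes, so they cannot. */
	printf("40/8 -> %d\n", can_merge(&existing, 40, 8, MERGE_SAME));
	return 0;
}

The kernel version additionally word-aligns the requested size and
recomputes align via calculate_alignment() before these checks; nothing
in the logic depends on which file it lives in.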
> Signed-off-by: Xiongwei Song
> ---
>  mm/slab.h        |   3 --
>  mm/slab_common.c |  98 ----------------------------------------------
>  mm/slub.c        | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 99 insertions(+), 102 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 8d20f8c6269d..cd52e705ce28 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -429,9 +429,6 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
>
>  unsigned int calculate_alignment(slab_flags_t flags,
>  		unsigned int align, unsigned int size);
> -int slab_unmergeable(struct kmem_cache *s);
> -struct kmem_cache *find_mergeable(unsigned size, unsigned align,
> -		slab_flags_t flags, const char *name, void (*ctor)(void *));
>  struct kmem_cache *
>  __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
>  		slab_flags_t flags, void (*ctor)(void *));
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 62eb77fdedf2..6960ae5c35ee 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -45,36 +45,6 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
>  static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
>  		    slab_caches_to_rcu_destroy_workfn);
>
> -/*
> - * Set of flags that will prevent slab merging
> - */
> -#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> -		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> -		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
> -
> -#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> -			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> -
> -/*
> - * Merge control. If this is set then no merging of slab caches will occur.
> - */
> -static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
> -
> -static int __init setup_slab_nomerge(char *str)
> -{
> -	slub_nomerge = true;
> -	return 1;
> -}
> -
> -static int __init setup_slab_merge(char *str)
> -{
> -	slub_nomerge = false;
> -	return 1;
> -}
> -
> -__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
> -__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
> -
>  /*
>   * Determine the size of a slab object
>   */
> @@ -130,74 +100,6 @@ unsigned int calculate_alignment(slab_flags_t flags,
>  	return ALIGN(align, sizeof(void *));
>  }
>
> -/*
> - * Find a mergeable slab cache
> - */
> -int slab_unmergeable(struct kmem_cache *s)
> -{
> -	if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
> -		return 1;
> -
> -	if (s->ctor)
> -		return 1;
> -
> -#ifdef CONFIG_HARDENED_USERCOPY
> -	if (s->usersize)
> -		return 1;
> -#endif
> -
> -	/*
> -	 * We may have set a slab to be unmergeable during bootstrap.
> -	 */
> -	if (s->refcount < 0)
> -		return 1;
> -
> -	return 0;
> -}
> -
> -struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> -		slab_flags_t flags, const char *name, void (*ctor)(void *))
> -{
> -	struct kmem_cache *s;
> -
> -	if (slub_nomerge)
> -		return NULL;
> -
> -	if (ctor)
> -		return NULL;
> -
> -	size = ALIGN(size, sizeof(void *));
> -	align = calculate_alignment(flags, align, size);
> -	size = ALIGN(size, align);
> -	flags = kmem_cache_flags(size, flags, name);
> -
> -	if (flags & SLAB_NEVER_MERGE)
> -		return NULL;
> -
> -	list_for_each_entry_reverse(s, &slab_caches, list) {
> -		if (slab_unmergeable(s))
> -			continue;
> -
> -		if (size > s->size)
> -			continue;
> -
> -		if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
> -			continue;
> -		/*
> -		 * Check if alignment is compatible.
> -		 * Courtesy of Adrian Drzewiecki
> -		 */
> -		if ((s->size & ~(align - 1)) != s->size)
> -			continue;
> -
> -		if (s->size - size >= sizeof(void *))
> -			continue;
> -
> -		return s;
> -	}
> -	return NULL;
> -}
> -
>  static struct kmem_cache *create_cache(const char *name,
>  		unsigned int object_size, unsigned int align,
>  		slab_flags_t flags, unsigned int useroffset,
> diff --git a/mm/slub.c b/mm/slub.c
> index ae1e6e635253..435d9ed140e4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -709,6 +709,104 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
>  	return false;
>  }
>
> +/*
> + * Set of flags that will prevent slab merging
> + */
> +#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
> +		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
> +		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
> +
> +#define SLUB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
> +			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
> +
> +/*
> + * Merge control. If this is set then no merging of slab caches will occur.
> + */
> +static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
> +
> +static int __init setup_slub_nomerge(char *str)
> +{
> +	slub_nomerge = true;
> +	return 1;
> +}
> +
> +static int __init setup_slub_merge(char *str)
> +{
> +	slub_nomerge = false;
> +	return 1;
> +}
> +
> +__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
> +__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
> +
> +/*
> + * Find a mergeable slab cache
> + */
> +static inline int slub_unmergeable(struct kmem_cache *s)
> +{
> +	if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
> +		return 1;
> +
> +	if (s->ctor)
> +		return 1;
> +
> +#ifdef CONFIG_HARDENED_USERCOPY
> +	if (s->usersize)
> +		return 1;
> +#endif
> +
> +	/*
> +	 * We may have set a slab to be unmergeable during bootstrap.
> +	 */
> +	if (s->refcount < 0)
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> +		slab_flags_t flags, const char *name, void (*ctor)(void *))
> +{
> +	struct kmem_cache *s;
> +
> +	if (slub_nomerge)
> +		return NULL;
> +
> +	if (ctor)
> +		return NULL;
> +
> +	size = ALIGN(size, sizeof(void *));
> +	align = calculate_alignment(flags, align, size);
> +	size = ALIGN(size, align);
> +	flags = kmem_cache_flags(size, flags, name);
> +
> +	if (flags & SLUB_NEVER_MERGE)
> +		return NULL;
> +
> +	list_for_each_entry_reverse(s, &slab_caches, list) {
> +		if (slub_unmergeable(s))
> +			continue;
> +
> +		if (size > s->size)
> +			continue;
> +
> +		if ((flags & SLUB_MERGE_SAME) != (s->flags & SLUB_MERGE_SAME))
> +			continue;
> +		/*
> +		 * Check if alignment is compatible.
> +		 * Courtesy of Adrian Drzewiecki
> +		 */
> +		if ((s->size & ~(align - 1)) != s->size)
> +			continue;
> +
> +		if (s->size - size >= sizeof(void *))
> +			continue;
> +
> +		return s;
> +	}
> +	return NULL;
> +}
> +
>  #ifdef CONFIG_SLUB_DEBUG
>  static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
>  static DEFINE_SPINLOCK(object_map_lock);
> @@ -6679,7 +6777,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
>  	int err;
>  	const char *name;
>  	struct kset *kset = cache_kset(s);
> -	int unmergeable = slab_unmergeable(s);
> +	int unmergeable = slub_unmergeable(s);
>
>  	if (!unmergeable && disable_higher_order_debug &&
>  	    (slub_debug & DEBUG_METADATA_FLAGS))
> --
> 2.34.1
>
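A side note on the boot parameters that move along with this code: the
slub_nomerge / slub_merge handling boils down to one flag parsed before
any caches are created. A minimal userspace analogue, with argv
standing in for the kernel command line (illustrative only, not the
kernel's __setup_param machinery):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Default mirrors !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT): merging on. */
static bool slub_nomerge;

/* Scan "command line" options before any cache-like setup happens. */
static void parse_early_options(int argc, char **argv)
{
	for (int i = 1; i < argc; i++) {
		if (!strcmp(argv[i], "slub_nomerge"))
			slub_nomerge = true;
		else if (!strcmp(argv[i], "slub_merge"))
			slub_nomerge = false;
	}
}

int main(int argc, char **argv)
{
	parse_early_options(argc, argv);
	printf("slab cache merging %s\n",
	       slub_nomerge ? "disabled" : "enabled");
	return 0;
}

On a running system the effect of merging is visible under
/sys/kernel/slab/, where merged caches show up as aliases of a single
cache directory.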