From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Wed, 22 Nov 2023 10:09:04 +0900
Subject: Re: [PATCH v5 2/9] slub: Change get_partial() interfaces to return slab
To: chengming.zhou@linux.dev
Cc: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, roman.gushchin@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Chengming Zhou
In-Reply-To: <20231102032330.1036151-3-chengming.zhou@linux.dev>
References: <20231102032330.1036151-1-chengming.zhou@linux.dev> <20231102032330.1036151-3-chengming.zhou@linux.dev>
On Thu, Nov 2, 2023 at 12:24 PM wrote:
>
> From: Chengming Zhou
>
> We need all get_partial() related interfaces to return a slab, instead
> of returning the freelist (or object).
>
> Use the partial_context.object to return back freelist or object for
> now.
> This patch shouldn't have any functional changes.
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: Chengming Zhou
> Reviewed-by: Vlastimil Babka
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  mm/slub.c | 63 +++++++++++++++++++++++++++++--------------------------
>  1 file changed, 33 insertions(+), 30 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0b0fdc8c189f..03384cd965c5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -204,9 +204,9 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
>
>  /* Structure holding parameters for get_partial() call chain */
>  struct partial_context {
> -        struct slab **slab;
>          gfp_t flags;
>          unsigned int orig_size;
> +        void *object;
>  };
>
>  static inline bool kmem_cache_debug(struct kmem_cache *s)
> @@ -2269,10 +2269,11 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags);
>  /*
>   * Try to allocate a partial slab from a specific node.
>   */
> -static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
> -                              struct partial_context *pc)
> +static struct slab *get_partial_node(struct kmem_cache *s,
> +                                     struct kmem_cache_node *n,
> +                                     struct partial_context *pc)
>  {
> -        struct slab *slab, *slab2;
> +        struct slab *slab, *slab2, *partial = NULL;
>          void *object = NULL;
>          unsigned long flags;
>          unsigned int partial_slabs = 0;
> @@ -2288,27 +2289,28 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>
>          spin_lock_irqsave(&n->list_lock, flags);
>          list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
> -                void *t;
> -
>                  if (!pfmemalloc_match(slab, pc->flags))
>                          continue;
>
>                  if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
>                          object = alloc_single_from_partial(s, n, slab,
>                                                          pc->orig_size);
> -                        if (object)
> +                        if (object) {
> +                                partial = slab;
> +                                pc->object = object;
>                                  break;
> +                        }
>                          continue;
>                  }
>
> -                t = acquire_slab(s, n, slab, object == NULL);
> -                if (!t)
> +                object = acquire_slab(s, n, slab, object == NULL);
> +                if (!object)
>                          break;
>
> -                if (!object) {
> -                        *pc->slab = slab;
> +                if (!partial) {
> +                        partial = slab;
> +                        pc->object = object;
>                          stat(s, ALLOC_FROM_PARTIAL);
> -                        object = t;
>                  } else {
>                          put_cpu_partial(s, slab, 0);
>                          stat(s, CPU_PARTIAL_NODE);
> @@ -2324,20 +2326,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>
>          }
>          spin_unlock_irqrestore(&n->list_lock, flags);
> -        return object;
> +        return partial;
>  }
>
>  /*
>   * Get a slab from somewhere. Search in increasing NUMA distances.
>   */
> -static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
> +static struct slab *get_any_partial(struct kmem_cache *s,
> +                                    struct partial_context *pc)
>  {
>  #ifdef CONFIG_NUMA
>          struct zonelist *zonelist;
>          struct zoneref *z;
>          struct zone *zone;
>          enum zone_type highest_zoneidx = gfp_zone(pc->flags);
> -        void *object;
> +        struct slab *slab;
>          unsigned int cpuset_mems_cookie;
>
>          /*
> @@ -2372,8 +2375,8 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
>
>                  if (n && cpuset_zone_allowed(zone, pc->flags) &&
>                                  n->nr_partial > s->min_partial) {
> -                        object = get_partial_node(s, n, pc);
> -                        if (object) {
> +                        slab = get_partial_node(s, n, pc);
> +                        if (slab) {
>                                  /*
>                                   * Don't check read_mems_allowed_retry()
>                                   * here - if mems_allowed was updated in
> @@ -2381,7 +2384,7 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
>                                   * between allocation and the cpuset
>                                   * update
>                                   */
> -                                return object;
> +                                return slab;
>                          }
>                  }
>          }
> @@ -2393,17 +2396,18 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
>  /*
>   * Get a partial slab, lock it and return it.
>   */
> -static void *get_partial(struct kmem_cache *s, int node, struct partial_context *pc)
> +static struct slab *get_partial(struct kmem_cache *s, int node,
> +                                struct partial_context *pc)
>  {
> -        void *object;
> +        struct slab *slab;
>          int searchnode = node;
>
>          if (node == NUMA_NO_NODE)
>                  searchnode = numa_mem_id();
>
> -        object = get_partial_node(s, get_node(s, searchnode), pc);
> -        if (object || node != NUMA_NO_NODE)
> -                return object;
> +        slab = get_partial_node(s, get_node(s, searchnode), pc);
> +        if (slab || node != NUMA_NO_NODE)
> +                return slab;
>
>          return get_any_partial(s, pc);
>  }
> @@ -3213,10 +3217,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  new_objects:
>
>          pc.flags = gfpflags;
> -        pc.slab = &slab;
>          pc.orig_size = orig_size;
> -        freelist = get_partial(s, node, &pc);
> -        if (freelist) {
> +        slab = get_partial(s, node, &pc);
> +        if (slab) {
> +                freelist = pc.object;
>                  if (kmem_cache_debug(s)) {
>                          /*
>                           * For debug caches here we had to go through
> @@ -3408,12 +3412,11 @@ static void *__slab_alloc_node(struct kmem_cache *s,
>          void *object;
>
>          pc.flags = gfpflags;
> -        pc.slab = &slab;
>          pc.orig_size = orig_size;
> -        object = get_partial(s, node, &pc);
> +        slab = get_partial(s, node, &pc);
>
> -        if (object)
> -                return object;
> +        if (slab)
> +                return pc.object;
>
>          slab = new_slab(s, gfpflags, node);
>          if (unlikely(!slab)) {

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>