From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Sun, 3 Dec 2023 15:53:19 +0900
Subject: Re: [PATCH v5 6/9] slub: Delay freezing of partial slabs
To: chengming.zhou@linux.dev
Cc: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    roman.gushchin@linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Chengming Zhou <zhouchengming@bytedance.com>
In-Reply-To: <20231102032330.1036151-7-chengming.zhou@linux.dev>
References: <20231102032330.1036151-1-chengming.zhou@linux.dev>
 <20231102032330.1036151-7-chengming.zhou@linux.dev>
On Thu, Nov 2, 2023 at 12:25 PM <chengming.zhou@linux.dev> wrote:
>
> From: Chengming Zhou <zhouchengming@bytedance.com>
>
> Currently we freeze slabs when moving them from the node partial list
> to the cpu partial list; this approach needs two cmpxchg_double
> operations:
>
> 1. freeze the slab (acquire_slab()) under the node list_lock
> 2. get_freelist() when the slab is picked for use in ___slab_alloc()
>
> Actually we don't need to freeze when moving slabs out of the node
> partial list; we can delay freezing until the slab's freelist is used
> in ___slab_alloc(), saving one cmpxchg_double().
>
> There are other benefits as well:
>
> - Moving slabs between the node partial list and the cpu partial list
>   becomes simpler, since we don't need to freeze or unfreeze at all.
>
> - Contention on the node list_lock is reduced, since we no longer
>   freeze any slab under it.
>
> We can do this because no concurrent path manipulates the partial
> slab list except __slab_free(), which is now serialized by
> slab_test_node_partial() under the list_lock.
>
> Since the slab returned by the get_partial() interfaces is no longer
> frozen and no freelist is returned in the partial_context, we need to
> use the newly introduced freeze_slab() to freeze it and get its
> freelist.
>
> Similarly, slabs on the cpu partial list are no longer frozen, so we
> need to freeze_slab() them before use.
>
> We can now delete acquire_slab() as it has become unused.
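(For anyone reviewing this without patch 5/9 at hand: freeze_slab() is,
as I read the series, roughly the following -- a simplified sketch of
the helper, not the exact code. It freezes an already-off-list slab and
takes its whole freelist with the single cmpxchg_double that remains on
this path.)

static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
{
        struct slab new;
        unsigned long counters;
        void *freelist;

        do {
                freelist = slab->freelist;
                counters = slab->counters;

                new.counters = counters;
                VM_BUG_ON(new.frozen);

                /* take all objects; old freelist becomes the cpu freelist */
                new.inuse = slab->objects;
                new.frozen = 1;

        } while (!slab_update_freelist(s, slab,
                                       freelist, counters,
                                       NULL, new.counters,
                                       "freeze_slab"));

        return freelist;
}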
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  mm/slub.c | 113 +++++++++++-------------------------------------------
>  1 file changed, 23 insertions(+), 90 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index edf567971679..bcb5b2c4e213 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2234,51 +2234,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
>         return object;
>  }
>
> -/*
> - * Remove slab from the partial list, freeze it and
> - * return the pointer to the freelist.
> - *
> - * Returns a list of objects or NULL if it fails.
> - */
> -static inline void *acquire_slab(struct kmem_cache *s,
> -               struct kmem_cache_node *n, struct slab *slab,
> -               int mode)

Nit: alloc_single_from_partial()'s comment still refers to acquire_slab().

> -{
> -       void *freelist;
> -       unsigned long counters;
> -       struct slab new;
> -
> -       lockdep_assert_held(&n->list_lock);
> -
> -       /*
> -        * Zap the freelist and set the frozen bit.
> -        * The old freelist is the list of objects for the
> -        * per cpu allocation list.
> -        */
> -       freelist = slab->freelist;
> -       counters = slab->counters;
> -       new.counters = counters;
> -       if (mode) {
> -               new.inuse = slab->objects;
> -               new.freelist = NULL;
> -       } else {
> -               new.freelist = freelist;
> -       }
> -
> -       VM_BUG_ON(new.frozen);
> -       new.frozen = 1;
> -
> -       if (!__slab_update_freelist(s, slab,
> -                       freelist, counters,
> -                       new.freelist, new.counters,
> -                       "acquire_slab"))
> -               return NULL;
> -
> -       remove_partial(n, slab);
> -       WARN_ON(!freelist);
> -       return freelist;
> -}
> -
>  #ifdef CONFIG_SLUB_CPU_PARTIAL
>  static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
>  #else
> @@ -2295,7 +2250,6 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>                                      struct partial_context *pc)
>  {
>         struct slab *slab, *slab2, *partial = NULL;
> -       void *object = NULL;
>         unsigned long flags;
>         unsigned int partial_slabs = 0;
>
> @@ -2314,7 +2268,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>                         continue;
>
>                 if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
> -                       object = alloc_single_from_partial(s, n, slab,
> +                       void *object = alloc_single_from_partial(s, n, slab,
>                                                         pc->orig_size);
>                         if (object) {
>                                 partial = slab;
> @@ -2324,13 +2278,10 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>                         continue;
>                 }
>
> -               object = acquire_slab(s, n, slab, object == NULL);
> -               if (!object)
> -                       break;
> +               remove_partial(n, slab);
>
>                 if (!partial) {
>                         partial = slab;
> -                       pc->object = object;
>                         stat(s, ALLOC_FROM_PARTIAL);
>                 } else {
>                         put_cpu_partial(s, slab, 0);
> @@ -2629,9 +2580,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
>         unsigned long flags = 0;
>
>         while (partial_slab) {
> -               struct slab new;
> -               struct slab old;
> -
>                 slab = partial_slab;
>                 partial_slab = slab->next;
>
> @@ -2644,23 +2592,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
>                         spin_lock_irqsave(&n->list_lock, flags);
>                 }
>
> -               do {
> -
> -                       old.freelist = slab->freelist;
> -                       old.counters = slab->counters;
> -                       VM_BUG_ON(!old.frozen);
> -
> -                       new.counters = old.counters;
> -                       new.freelist = old.freelist;
> -
> -                       new.frozen = 0;
> -
> -               } while (!__slab_update_freelist(s, slab,
> -                               old.freelist, old.counters,
> -                               new.freelist, new.counters,
> -                               "unfreezing slab"));
> -
> -               if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
> +               if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
>                         slab->next = slab_to_discard;
>                         slab_to_discard = slab;
>                 } else {
> @@ -3167,7 +3099,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>                 node = NUMA_NO_NODE;
>                 goto new_slab;
>         }
> -redo:
>
>         if (unlikely(!node_match(slab, node))) {
>                 /*
> @@ -3243,7 +3174,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>
>  new_slab:
>
> -       if (slub_percpu_partial(c)) {
> +#ifdef CONFIG_SLUB_CPU_PARTIAL
> +       while (slub_percpu_partial(c)) {
>                 local_lock_irqsave(&s->cpu_slab->lock, flags);
>                 if (unlikely(c->slab)) {
>                         local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> @@ -3255,12 +3187,22 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>                         goto new_objects;
>                 }
>
> -               slab = c->slab = slub_percpu_partial(c);
> +               slab = slub_percpu_partial(c);
>                 slub_set_percpu_partial(c, slab);
>                 local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>                 stat(s, CPU_PARTIAL_ALLOC);
> -               goto redo;
> +
> +               if (unlikely(!node_match(slab, node) ||
> +                            !pfmemalloc_match(slab, gfpflags))) {
> +                       slab->next = NULL;
> +                       __unfreeze_partials(s, slab);
> +                       continue;
> +               }
> +
> +               freelist = freeze_slab(s, slab);
> +               goto retry_load_slab;
>         }
> +#endif
>
>  new_objects:
>
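To restate my understanding of the new slow path above (a simplified
sketch of the control flow, with locking and stats elided -- not the
literal code):

        while ((slab = slub_percpu_partial(c))) {
                slub_set_percpu_partial(c, slab);       /* pop the head */

                if (!node_match(slab, node) ||
                    !pfmemalloc_match(slab, gfpflags)) {
                        /* unusable here: push it back to a node list */
                        slab->next = NULL;
                        __unfreeze_partials(s, slab);
                        continue;
                }

                /* the only cmpxchg_double left on this path */
                freelist = freeze_slab(s, slab);
                goto retry_load_slab;
        }

Because slabs taken off the percpu partial list are no longer frozen,
the node and pfmemalloc checks can run before any atomic operation, and
only a slab we actually keep pays for freeze_slab().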
> @@ -3268,8 +3210,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>         pc.orig_size = orig_size;
>         slab = get_partial(s, node, &pc);
>         if (slab) {
> -               freelist = pc.object;
>                 if (kmem_cache_debug(s)) {
> +                       freelist = pc.object;
>                         /*
>                          * For debug caches here we had to go through
>                          * alloc_single_from_partial() so just store the
> @@ -3281,6 +3223,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>                 return freelist;
>         }
>
> +       freelist = freeze_slab(s, slab);
>         goto retry_load_slab;
>  }
>
> @@ -3682,18 +3625,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>                 was_frozen = new.frozen;
>                 new.inuse -= cnt;
>                 if ((!new.inuse || !prior) && !was_frozen) {
> -
> -                       if (kmem_cache_has_cpu_partial(s) && !prior) {
> -
> -                               /*
> -                                * Slab was on no list before and will be
> -                                * partially empty
> -                                * We can defer the list move and instead
> -                                * freeze it.
> -                                */
> -                               new.frozen = 1;
> -
> -                       } else { /* Needs to be taken off a list */
> +                       /* Needs to be taken off a list */
> +                       if (!kmem_cache_has_cpu_partial(s) || prior) {
>
>                                 n = get_node(s, slab_nid(slab));
>                                 /*
> @@ -3723,9 +3656,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>                  * activity can be necessary.
>                  */
>                 stat(s, FREE_FROZEN);
> -       } else if (new.frozen) {
> +       } else if (kmem_cache_has_cpu_partial(s) && !prior) {
>                 /*
> -                * If we just froze the slab then put it onto the
> +                * If we started with a full slab then put it onto the
>                  * per cpu partial list.
>                  */
>                 put_cpu_partial(s, slab, 1);
> --

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks!

> 2.20.1
>
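P.S. Writing down the frozen-bit rules as I now understand them after
this series -- my own summary for other reviewers, not text from the
patch:

/*
 * - slab->frozen set: the slab is a cpu active slab (or about to
 *   become one); it is on no list and its freelist belongs to that
 *   cpu alone.
 * - on a node partial list: never frozen; __slab_free() recognizes
 *   this case via slab_test_node_partial() under n->list_lock.
 * - on a cpu partial list: not frozen either (new with this patch);
 *   it is frozen lazily by freeze_slab() in ___slab_alloc().
 */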