From: chengming.zhou@linux.dev
To: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, willy@infradead.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, chengming.zhou@linux.dev,
 Chengming Zhou <chengming.zhou@linux.dev>
Subject: [RFC PATCH v4 6/9] slub: Delay freezing of partial slabs
Date: Tue, 31 Oct 2023 14:07:38 +0000
Message-Id: <20231031140741.79387-7-chengming.zhou@linux.dev>
In-Reply-To: <20231031140741.79387-1-chengming.zhou@linux.dev>
References: <20231031140741.79387-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Chengming Zhou <chengming.zhou@linux.dev>

Currently we freeze slabs when moving them out of the node partial list
to the cpu partial list. This method needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked for use in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node
partial list; we can delay freezing until the slab's freelist is used in
___slab_alloc(), which saves one cmpxchg_double(). There are other
benefits as well:

 - Moving slabs between the node partial list and the cpu partial list
   becomes simpler, since we don't need to freeze or unfreeze at all.

 - Contention on the node list_lock is reduced, since we no longer
   freeze any slab under the node list_lock.
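For reference, the delayed freeze is a single cmpxchg_double done by
freeze_slab(), which an earlier patch in this series introduces. A
minimal sketch of its shape (slab_update_freelist() is the existing
cmpxchg_double wrapper; see that earlier patch for the exact code):

  /*
   * Freeze the partial slab and return the pointer to its freelist:
   * one cmpxchg_double both sets the frozen bit and takes the whole
   * freelist for the cpu slab.
   */
  static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
  {
  	struct slab new;
  	unsigned long counters;
  	void *freelist;

  	do {
  		freelist = slab->freelist;
  		counters = slab->counters;

  		new.counters = counters;
  		VM_BUG_ON(new.frozen);

  		/* All objects move to the per-cpu freelist */
  		new.inuse = slab->objects;
  		new.frozen = 1;

  	} while (!slab_update_freelist(s, slab,
  					freelist, counters,
  					NULL, new.counters,
  					"freeze_slab"));

  	return freelist;
  }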
We can achieve this because no concurrent path manipulates the partial
slab list except the __slab_free() path, which is now serialized by the
slab_test_node_partial() check under the list_lock (a sketch of that
check follows the patch below).

Since the slab returned by the get_partial() interfaces is no longer
frozen and no freelist is returned in the partial_context, we need to
use the newly introduced freeze_slab() to freeze it and get its
freelist. Similarly, slabs on the cpu partial list are no longer
frozen, so we need to freeze_slab() them before use.

We can now delete acquire_slab(), as it has become unused.

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 113 +++++++++++-------------------------------------------
 1 file changed, 23 insertions(+), 90 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index edf567971679..bcb5b2c4e213 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2234,51 +2234,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
 	return object;
 }
 
-/*
- * Remove slab from the partial list, freeze it and
- * return the pointer to the freelist.
- *
- * Returns a list of objects or NULL if it fails.
- */
-static inline void *acquire_slab(struct kmem_cache *s,
-		struct kmem_cache_node *n, struct slab *slab,
-		int mode)
-{
-	void *freelist;
-	unsigned long counters;
-	struct slab new;
-
-	lockdep_assert_held(&n->list_lock);
-
-	/*
-	 * Zap the freelist and set the frozen bit.
-	 * The old freelist is the list of objects for the
-	 * per cpu allocation list.
-	 */
-	freelist = slab->freelist;
-	counters = slab->counters;
-	new.counters = counters;
-	if (mode) {
-		new.inuse = slab->objects;
-		new.freelist = NULL;
-	} else {
-		new.freelist = freelist;
-	}
-
-	VM_BUG_ON(new.frozen);
-	new.frozen = 1;
-
-	if (!__slab_update_freelist(s, slab,
-			freelist, counters,
-			new.freelist, new.counters,
-			"acquire_slab"))
-		return NULL;
-
-	remove_partial(n, slab);
-	WARN_ON(!freelist);
-	return freelist;
-}
-
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
 #else
@@ -2295,7 +2250,6 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 		struct partial_context *pc)
 {
 	struct slab *slab, *slab2, *partial = NULL;
-	void *object = NULL;
 	unsigned long flags;
 	unsigned int partial_slabs = 0;
 
@@ -2314,7 +2268,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			continue;
 
 		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
-			object = alloc_single_from_partial(s, n, slab,
+			void *object = alloc_single_from_partial(s, n, slab,
 							pc->orig_size);
 			if (object) {
 				partial = slab;
@@ -2324,13 +2278,10 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 			continue;
 		}
 
-		object = acquire_slab(s, n, slab, object == NULL);
-		if (!object)
-			break;
+		remove_partial(n, slab);
 
 		if (!partial) {
 			partial = slab;
-			pc->object = object;
 			stat(s, ALLOC_FROM_PARTIAL);
 		} else {
 			put_cpu_partial(s, slab, 0);
@@ -2629,9 +2580,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;
 
 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;
 
@@ -2644,23 +2592,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}
 
-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3167,7 +3099,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		node = NUMA_NO_NODE;
 		goto new_slab;
 	}
-redo:
 
 	if (unlikely(!node_match(slab, node))) {
 		/*
@@ -3243,7 +3174,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 new_slab:
 
-	if (slub_percpu_partial(c)) {
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+	while (slub_percpu_partial(c)) {
 		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->slab)) {
 			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
@@ -3255,12 +3187,22 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			goto new_objects;
 		}
 
-		slab = c->slab = slub_percpu_partial(c);
+		slab = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, slab);
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
-		goto redo;
+
+		if (unlikely(!node_match(slab, node) ||
+			     !pfmemalloc_match(slab, gfpflags))) {
+			slab->next = NULL;
+			__unfreeze_partials(s, slab);
+			continue;
+		}
+
+		freelist = freeze_slab(s, slab);
+		goto retry_load_slab;
 	}
+#endif
 
 new_objects:
 
@@ -3268,8 +3210,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	pc.orig_size = orig_size;
 	slab = get_partial(s, node, &pc);
 	if (slab) {
-		freelist = pc.object;
 		if (kmem_cache_debug(s)) {
+			freelist = pc.object;
 			/*
 			 * For debug caches here we had to go through
 			 * alloc_single_from_partial() so just store the
@@ -3281,6 +3223,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			return freelist;
 		}
 
+		freelist = freeze_slab(s, slab);
 		goto retry_load_slab;
 	}
 
@@ -3682,18 +3625,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3723,9 +3656,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (new.frozen) {
+	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 		/*
-		 * If we just froze the slab then put it onto the
+		 * If we started with a full slab then put it onto the
 		 * per cpu partial list.
 		 */
 		put_cpu_partial(s, slab, 1);
-- 
2.20.1
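As referenced in the commit message, here is a simplified excerpt of how
__slab_free() coordinates with the unfrozen partial lists after this
series. The slab_test_node_partial() check was added by an earlier patch
in the series; variable names follow mm/slub.c, and surrounding code is
elided:

  	/* Inside the cmpxchg retry loop of __slab_free(): */
  	if ((!new.inuse || !prior) && !was_frozen) {
  		/* Needs to be taken off a list */
  		if (!kmem_cache_has_cpu_partial(s) || prior) {
  			n = get_node(s, slab_nid(slab));
  			spin_lock_irqsave(&n->list_lock, flags);
  			/*
  			 * Record, under list_lock, whether the slab is
  			 * still on the node partial list; get_partial_node()
  			 * may have taken it without freezing it.
  			 */
  			on_node_partial = slab_test_node_partial(slab);
  		}
  	}

  	/* ... after the freelist update succeeds ... */

  	/*
  	 * The slab was partially empty but not on the node partial list,
  	 * so an allocator already holds it (e.g. on a cpu partial list);
  	 * we must not touch any list, just drop the lock and return.
  	 */
  	if (prior && !on_node_partial) {
  		spin_unlock_irqrestore(&n->list_lock, flags);
  		return;
  	}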