From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, vbabka@suse.cz,
roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
chengming.zhou@linux.dev,
Chengming Zhou <zhouchengming@bytedance.com>
Subject: [RFC PATCH 2/5] slub: Don't manipulate slab list when used by cpu
Date: Tue, 17 Oct 2023 15:44:36 +0000
Message-ID: <20231017154439.3036608-3-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
From: Chengming Zhou <zhouchengming@bytedance.com>
In a following patch we will stop freezing the slab when moving it out
of the node partial list, so we can no longer rely on the frozen bit to
tell whether we should manipulate the slab list or not.
This patch uses the on_partial() helper introduced in the previous
patch, which checks slab->flags under the protection of the node
list_lock, so we can tell whether the slab is on the node partial list.
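(For reference, a minimal sketch of what such a helper could look like;
the real on_partial() is added in patch 1/5 of this series, and the
SF_NODE_PARTIAL flag name below is only an assumed placeholder, not the
actual implementation:)

/* Sketch only: report whether @slab currently sits on n's partial list. */
static inline bool on_partial(struct kmem_cache_node *n, struct slab *slab)
{
	/* slab->flags is only stable while n->list_lock is held. */
	lockdep_assert_held(&n->list_lock);
	return slab->flags & SF_NODE_PARTIAL;	/* hypothetical flag bit */
}

With a helper like this, __slab_free() can record on_node_partial right
after taking n->list_lock, as the hunk below does.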
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
mm/slub.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index e5356ad14951..27eac93baa13 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3636,6 +3636,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
unsigned long counters;
struct kmem_cache_node *n = NULL;
unsigned long flags;
+ bool on_node_partial;
stat(s, FREE_SLOWPATH);
@@ -3683,6 +3684,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
*/
spin_lock_irqsave(&n->list_lock, flags);
+ on_node_partial = on_partial(n, slab);
}
}
@@ -3711,6 +3713,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
return;
}
+ /*
+ * This slab was not on the node partial list and was not full either,
+ * in which case we shouldn't manipulate its list; just return early.
+ */
+ if (!on_node_partial && prior) {
+ spin_unlock_irqrestore(&n->list_lock, flags);
+ return;
+ }
+
if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
goto slab_empty;
--
2.40.1