From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, vbabka@suse.cz,
roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
willy@infradead.org, pcc@google.com, tytso@mit.edu,
maz@kernel.org, ruansy.fnst@fujitsu.com, vishal.moola@gmail.com,
lrh2000@pku.edu.cn, hughd@google.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
chengming.zhou@linux.dev,
Chengming Zhou <zhouchengming@bytedance.com>
Subject: [RFC PATCH v2 1/6] slub: Keep track of whether slab is on the per-node partial list
Date: Sat, 21 Oct 2023 14:43:12 +0000
Message-ID: <20231021144317.3400916-2-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
From: Chengming Zhou <zhouchengming@bytedance.com>
Currently we rely on the "frozen" bit to see if we should manipulate
the slab->slab_list, which will be changed in the following patch.

Instead, introduce another way to keep track of whether a slab is on
the per-node partial list: reuse the PG_workingset bit, which is
otherwise unused for slab pages. The non-atomic __folio_set_workingset()
and __folio_clear_workingset() variants are sufficient because the bit
is only changed while the per-node list_lock is held.
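For illustration, a minimal sketch of how the helpers are meant to be
used (the caller below is hypothetical and assumes "s" and "slab" are
in scope inside mm/slub.c; slab_test_node_partial() and
remove_partial() are the helpers touched by this patch):

	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	/* The flag is only stable while n->list_lock is held. */
	if (slab_test_node_partial(slab))
		remove_partial(n, slab);	/* also clears the flag */
	spin_unlock_irqrestore(&n->list_lock, flags);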
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
include/linux/page-flags.h | 2 ++
mm/slab.h | 19 +++++++++++++++++++
mm/slub.c | 3 +++
3 files changed, 24 insertions(+)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index a88e64acebfe..e8b1be71d722 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -478,6 +478,8 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
TESTCLEARFLAG(Active, active, PF_HEAD)
PAGEFLAG(Workingset, workingset, PF_HEAD)
TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
+ __SETPAGEFLAG(Workingset, workingset, PF_HEAD)
+ __CLEARPAGEFLAG(Workingset, workingset, PF_HEAD)
__PAGEFLAG(Slab, slab, PF_NO_TAIL)
PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */
diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..9cff64cae8de 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -193,6 +193,25 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
__folio_clear_active(slab_folio(slab));
}
+/*
+ * SLUB reuses the PG_workingset bit to keep track of whether a slab
+ * is on the per-node partial list.
+ */
+static inline bool slab_test_node_partial(const struct slab *slab)
+{
+ return folio_test_workingset((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_node_partial(struct slab *slab)
+{
+ __folio_set_workingset(slab_folio(slab));
+}
+
+static inline void slab_clear_node_partial(struct slab *slab)
+{
+ __folio_clear_workingset(slab_folio(slab));
+}
+
static inline void *slab_address(const struct slab *slab)
{
return folio_address(slab_folio(slab));
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..3fad4edca34b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2127,6 +2127,7 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
list_add_tail(&slab->slab_list, &n->partial);
else
list_add(&slab->slab_list, &n->partial);
+ slab_set_node_partial(slab);
}
static inline void add_partial(struct kmem_cache_node *n,
@@ -2141,6 +2142,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
{
lockdep_assert_held(&n->list_lock);
list_del(&slab->slab_list);
+ slab_clear_node_partial(slab);
n->nr_partial--;
}
@@ -4831,6 +4833,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
if (free == slab->objects) {
list_move(&slab->slab_list, &discard);
+ slab_clear_node_partial(slab);
n->nr_partial--;
dec_slabs_node(s, node, slab->objects);
} else if (free <= SHRINK_PROMOTE_MAX)
--
2.20.1