From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com,
	akpm@linux-foundation.org, vbabka@suse.cz,
	roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	chengming.zhou@linux.dev,
	Chengming Zhou <zhouchengming@bytedance.com>
Subject: [RFC PATCH v3 1/7] slub: Keep track of whether slub is on the per-node partial list
Date: Tue, 24 Oct 2023 09:33:39 +0000
Message-ID: <20231024093345.3676493-2-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou <zhouchengming@bytedance.com>

Currently we rely on the "frozen" bit to tell whether slab->slab_list
may be manipulated, but that will change in a following patch.

Instead, introduce another way to keep track of whether a slab is on
the per-node partial list: reuse the folio's PG_workingset bit.

We use __set_bit() and __clear_bit() directly instead of the atomic
versions for better performance; this is safe because the bit is only
changed while the per-node list_lock is held.
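
For illustration only (not part of the diff below), a minimal sketch of
the intended calling pattern; the helper name example_add_partial() is
made up, but n->list_lock, slab->slab_list and the new
slab_set_node_partial() helper are the real objects involved:

/*
 * Illustration only: the new bit helpers are meant to be called with
 * the per-node list_lock held, so the non-atomic __set_bit() and
 * __clear_bit() they use cannot race with each other.
 */
static void example_add_partial(struct kmem_cache_node *n, struct slab *slab)
{
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	list_add(&slab->slab_list, &n->partial);
	slab_set_node_partial(slab);	/* serialized by n->list_lock */
	n->nr_partial++;
	spin_unlock_irqrestore(&n->list_lock, flags);
}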

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/slab.h | 19 +++++++++++++++++++
 mm/slub.c |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..50522b688cfb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -193,6 +193,25 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
 	__folio_clear_active(slab_folio(slab));
 }
 
+/*
+ * SLUB reuses the PG_workingset bit to keep track of whether a slab
+ * is on the per-node partial list.
+ */
+static inline bool slab_test_node_partial(const struct slab *slab)
+{
+	return folio_test_workingset((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_node_partial(struct slab *slab)
+{
+	__set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
+static inline void slab_clear_node_partial(struct slab *slab)
+{
+	__clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
 static inline void *slab_address(const struct slab *slab)
 {
 	return folio_address(slab_folio(slab));
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..3fad4edca34b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2127,6 +2127,7 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
+	slab_set_node_partial(slab);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -2141,6 +2142,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 {
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
+	slab_clear_node_partial(slab);
 	n->nr_partial--;
 }
 
@@ -4831,6 +4833,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 
 			if (free == slab->objects) {
 				list_move(&slab->slab_list, &discard);
+				slab_clear_node_partial(slab);
 				n->nr_partial--;
 				dec_slabs_node(s, node, slab->objects);
 			} else if (free <= SHRINK_PROMOTE_MAX)
-- 
2.40.1



Thread overview: 27+ messages
2023-10-24  9:33 [RFC PATCH v3 0/7] slub: Delay freezing of CPU partial slabs chengming.zhou
2023-10-24  9:33 ` chengming.zhou [this message]
     [not found]   ` <6d054dbe-c90d-591d-11ca-b9ad3787683d@suse.cz>
2023-10-28  1:30     ` [RFC PATCH v3 1/7] slub: Keep track of whether slub is on the per-node partial list Chengming Zhou
2023-10-24  9:33 ` [RFC PATCH v3 2/7] slub: Prepare __slab_free() for unfrozen partial slab out of node " chengming.zhou
     [not found]   ` <43da5c9a-aeff-1bff-81a8-4611470c2514@suse.cz>
2023-10-28  1:35     ` Chengming Zhou
2023-10-24  9:33 ` [RFC PATCH v3 3/7] slub: Reflow ___slab_alloc() chengming.zhou
2023-10-24  9:33 ` [RFC PATCH v3 4/7] slub: Change get_partial() interfaces to return slab chengming.zhou
2023-10-30 16:55   ` Vlastimil Babka
2023-10-31  2:22     ` Chengming Zhou
2023-10-24  9:33 ` [RFC PATCH v3 5/7] slub: Introduce freeze_slab() chengming.zhou
2023-10-30 18:11   ` Vlastimil Babka
2023-10-24  9:33 ` [RFC PATCH v3 6/7] slub: Delay freezing of partial slabs chengming.zhou
2023-10-25  2:18   ` Chengming Zhou
2023-10-26  5:49   ` kernel test robot
2023-10-26  7:41     ` Chengming Zhou
2023-10-31  9:50   ` Vlastimil Babka
2023-10-24  9:33 ` [RFC PATCH v3 7/7] slub: Optimize deactivate_slab() chengming.zhou
2023-10-31 11:15   ` Vlastimil Babka
2023-10-31 11:41     ` Chengming Zhou
2023-10-27 17:57 ` [RFC PATCH v3 0/7] slub: Delay freezing of CPU partial slabs Christoph Lameter
2023-10-28  2:36   ` Chengming Zhou
2023-10-30 16:19     ` Vlastimil Babka
2023-10-31  2:29       ` Chengming Zhou
2023-10-30 19:25     ` Christoph Lameter
2023-10-31  2:50       ` Chengming Zhou
2023-10-31  3:47         ` Christoph Lameter
2023-10-31  4:57           ` Chengming Zhou
