From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru,
vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org,
brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu,
steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org,
yujie.liu@intel.com, gregkh@linuxfoundation.org,
muchun.song@linux.dev
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v6 43/45] mm: shrinker: make memcg slab shrink lockless
Date: Mon, 11 Sep 2023 17:44:42 +0800 [thread overview]
Message-ID: <20230911094444.68966-44-zhengqi.arch@bytedance.com> (raw)
In-Reply-To: <20230911094444.68966-1-zhengqi.arch@bytedance.com>
Like the global slab shrink, this commit uses the refcount+RCU method to
make the memcg slab shrink lockless.
Use the following script to stress-test slab shrink:
```
DIR="/root/shrinker/memcg/mnt"

do_create()
{
    mkdir -p /sys/fs/cgroup/memory/test
    echo 4G > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
    for i in `seq 0 $1`;
    do
        mkdir -p /sys/fs/cgroup/memory/test/$i;
        echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
        mkdir -p $DIR/$i;
    done
}

do_mount()
{
    for i in `seq $1 $2`;
    do
        mount -t tmpfs $i $DIR/$i;
    done
}

do_touch()
{
    for i in `seq $1 $2`;
    do
        echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
        dd if=/dev/zero of=$DIR/$i/file$i bs=1M count=1 &
    done
}

case "$1" in
  touch)
    do_touch $2 $3
    ;;
  test)
    do_create 4000
    do_mount 0 4000
    do_touch 0 3000
    ;;
  *)
    exit 1
    ;;
esac
```
Save the above script, then run the test and touch commands. We can then
use the following perf command to view hotspots:

perf top -U -F 999
1) Before applying this patchset:
33.15% [kernel] [k] down_read_trylock
25.38% [kernel] [k] shrink_slab
21.75% [kernel] [k] up_read
4.45% [kernel] [k] _find_next_bit
2.27% [kernel] [k] do_shrink_slab
1.80% [kernel] [k] intel_idle_irq
1.79% [kernel] [k] shrink_lruvec
0.67% [kernel] [k] xas_descend
0.41% [kernel] [k] mem_cgroup_iter
0.40% [kernel] [k] shrink_node
0.38% [kernel] [k] list_lru_count_one
2) After applying this patchset:
64.56% [kernel] [k] shrink_slab
12.18% [kernel] [k] do_shrink_slab
3.30% [kernel] [k] __rcu_read_unlock
2.61% [kernel] [k] shrink_lruvec
2.49% [kernel] [k] __rcu_read_lock
1.93% [kernel] [k] intel_idle_irq
0.89% [kernel] [k] shrink_node
0.81% [kernel] [k] mem_cgroup_iter
0.77% [kernel] [k] mem_cgroup_calculate_protection
0.66% [kernel] [k] list_lru_count_one
We can see that after the patchset the top perf hotspot is shrink_slab
itself rather than the rwsem primitives, which is what we expect.
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
mm/shrinker.c | 85 +++++++++++++++++++++++++++++++++++++++------------
1 file changed, 66 insertions(+), 19 deletions(-)
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 82dc61133c5b..ad64cac5248c 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -218,7 +218,6 @@ static int shrinker_memcg_alloc(struct shrinker *shrinker)
return -ENOSYS;
down_write(&shrinker_rwsem);
- /* This may call shrinker, so it must use down_read_trylock() */
id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
if (id < 0)
goto unlock;
@@ -252,10 +251,15 @@ static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
{
struct shrinker_info *info;
struct shrinker_info_unit *unit;
+ long nr_deferred;
- info = shrinker_info_protected(memcg, nid);
+ rcu_read_lock();
+ info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
unit = info->unit[shrinker_id_to_index(shrinker->id)];
- return atomic_long_xchg(&unit->nr_deferred[shrinker_id_to_offset(shrinker->id)], 0);
+ nr_deferred = atomic_long_xchg(&unit->nr_deferred[shrinker_id_to_offset(shrinker->id)], 0);
+ rcu_read_unlock();
+
+ return nr_deferred;
}
static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
@@ -263,10 +267,16 @@ static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
{
struct shrinker_info *info;
struct shrinker_info_unit *unit;
+ long nr_deferred;
- info = shrinker_info_protected(memcg, nid);
+ rcu_read_lock();
+ info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
unit = info->unit[shrinker_id_to_index(shrinker->id)];
- return atomic_long_add_return(nr, &unit->nr_deferred[shrinker_id_to_offset(shrinker->id)]);
+ nr_deferred =
+ atomic_long_add_return(nr, &unit->nr_deferred[shrinker_id_to_offset(shrinker->id)]);
+ rcu_read_unlock();
+
+ return nr_deferred;
}
void reparent_shrinker_deferred(struct mem_cgroup *memcg)
@@ -463,18 +473,54 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
if (!mem_cgroup_online(memcg))
return 0;
- if (!down_read_trylock(&shrinker_rwsem))
- return 0;
-
- info = shrinker_info_protected(memcg, nid);
+ /*
+ * lockless algorithm of memcg shrink.
+ *
+ * The shrinker_info may be freed asynchronously via RCU in the
+ * expand_one_shrinker_info(), so the rcu_read_lock() needs to be used
+ * to ensure the existence of the shrinker_info.
+ *
+ * The shrinker_info_unit is never freed unless its corresponding memcg
+ * is destroyed. Here we already hold the refcount of memcg, so the
+ * memcg will not be destroyed, and of course shrinker_info_unit will
+ * not be freed.
+ *
+ * So in the memcg shrink:
+ * step 1: use rcu_read_lock() to guarantee existence of the
+ * shrinker_info.
+ * step 2: after getting shrinker_info_unit we can safely release the
+ * RCU lock.
+ * step 3: traverse the bitmap and calculate shrinker_id
+ * step 4: use rcu_read_lock() to guarantee existence of the shrinker.
+ * step 5: use shrinker_id to find the shrinker, then use
+ * shrinker_try_get() to guarantee existence of the shrinker,
+ * then we can release the RCU lock to do do_shrink_slab() that
+ * may sleep.
+ * step 6: do shrinker_put() paired with step 5 to put the refcount,
+ * if the refcount reaches 0, then wake up the waiter in
+ * shrinker_free() by calling complete().
+ * Note: here is different from the global shrink, we don't
+ * need to acquire the RCU lock to guarantee existence of
+ * the shrinker, because we don't need to use this
+ * shrinker to traverse the next shrinker in the bitmap.
+ * step 7: we have already exited the read-side of rcu critical section
+ * before calling do_shrink_slab(), the shrinker_info may be
+ * released in expand_one_shrinker_info(), so go back to step 1
+ * to reacquire the shrinker_info.
+ */
+again:
+ rcu_read_lock();
+ info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
if (unlikely(!info))
goto unlock;
- for (; index < shrinker_id_to_index(info->map_nr_max); index++) {
+ if (index < shrinker_id_to_index(info->map_nr_max)) {
struct shrinker_info_unit *unit;
unit = info->unit[index];
+ rcu_read_unlock();
+
for_each_set_bit(offset, unit->map, SHRINKER_UNIT_BITS) {
struct shrink_control sc = {
.gfp_mask = gfp_mask,
@@ -484,12 +530,14 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
struct shrinker *shrinker;
int shrinker_id = calc_shrinker_id(index, offset);
+ rcu_read_lock();
shrinker = idr_find(&shrinker_idr, shrinker_id);
- if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
- if (!shrinker)
- clear_bit(offset, unit->map);
+ if (unlikely(!shrinker || !shrinker_try_get(shrinker))) {
+ clear_bit(offset, unit->map);
+ rcu_read_unlock();
continue;
}
+ rcu_read_unlock();
/* Call non-slab shrinkers even though kmem is disabled */
if (!memcg_kmem_online() &&
@@ -522,15 +570,14 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
set_shrinker_bit(memcg, nid, shrinker_id);
}
freed += ret;
-
- if (rwsem_is_contended(&shrinker_rwsem)) {
- freed = freed ? : 1;
- goto unlock;
- }
+ shrinker_put(shrinker);
}
+
+ index++;
+ goto again;
}
unlock:
- up_read(&shrinker_rwsem);
+ rcu_read_unlock();
return freed;
}
#else /* !CONFIG_MEMCG */
--
2.30.2