From: Vlastimil Babka <vbabka@suse.cz>
To: zangchunxin@bytedance.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Muchun Song <songmuchun@bytedance.com>
Subject: Re: [PATCH v2] mm/vmscan: fix infinite loop in drop_slab_node
Date: Wed, 9 Sep 2020 19:59:44 +0200
Message-ID: <16906d44-9e3c-76a1-f1a9-ced61e865467@suse.cz>
In-Reply-To: <20200909152047.27905-1-zangchunxin@bytedance.com>
On 9/9/20 5:20 PM, zangchunxin@bytedance.com wrote:
> From: Chunxin Zang <zangchunxin@bytedance.com>
>
> On our server there are about 10k memcgs on one machine, and they use
> memory very frequently. When I trigger drop_caches, the process loops
> forever in drop_slab_node.
>
> There are two reasons:
> 1. We have too many memcgs, so even if each memcg frees only one object,
> the total freed count is still bigger than 10.
>
> 2. A single pass over all memcgs takes a long time, so by the time it
> finishes, the memcgs visited first have accumulated freeable objects
> again and the next pass frees more than 10 objects as well.
>
> We can get the following info through 'ps':
>
> root:~# ps -aux | grep drop
> root 357956 ... R Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
> root 1771385 ... R Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
> root 1986319 ... R 18:56 117:27 echo 3 > /proc/sys/vm/drop_caches
> root 2002148 ... R Aug24 5720:39 echo 3 > /proc/sys/vm/drop_caches
> root 2564666 ... R 18:59 113:58 echo 3 > /proc/sys/vm/drop_caches
> root 2639347 ... R Sep03 2383:39 echo 3 > /proc/sys/vm/drop_caches
> root 3904747 ... R 03:35 993:31 echo 3 > /proc/sys/vm/drop_caches
> root 4016780 ... R Aug21 7882:18 echo 3 > /proc/sys/vm/drop_caches
>
> Using bpftrace to follow the 'freed' value in drop_slab_node:
>
> root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
> Attaching 1 probe...
> ^B^C
>
> @ret:
> [64, 128) 1 | |
> [128, 256) 28 | |
> [256, 512) 107 |@ |
> [512, 1K) 298 |@@@ |
> [1K, 2K) 613 |@@@@@@@ |
> [2K, 4K) 4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> [4K, 8K) 442 |@@@@@ |
> [8K, 16K) 299 |@@@ |
> [16K, 32K) 100 |@ |
> [32K, 64K) 139 |@ |
> [64K, 128K) 56 | |
> [128K, 256K) 26 | |
> [256K, 512K) 2 | |
>
> In the while loop we can check whether a fatal (TASK_KILLABLE) signal is
> pending and, if so, break out of the loop.
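
For anyone reading along, the loop being discussed looks roughly like this
in mm/vmscan.c (paraphrased from the current tree, so double-check against
the actual source); the "freed > 10" exit condition is what never becomes
false in the reported scenario:

void drop_slab_node(int nid)
{
        unsigned long freed;

        do {
                struct mem_cgroup *memcg = NULL;

                freed = 0;
                memcg = mem_cgroup_iter(NULL, NULL, NULL);
                do {
                        /* shrink the slab caches of every memcg on this node */
                        freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
                } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
        } while (freed > 10);   /* rarely false with ~10k busy memcgs */
}
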
That's definitely a good change, thanks. I would just maybe consider:
- Testing in the memcg iteration loop? If you have 10k memcgs as you mention,
it can still take a long time before the test is reached otherwise.
- Exiting also on other signals such as SIGABRT or SIGTERM? If I write to
drop_caches and decide it's taking too long, I would prefer to stop it with
ctrl-c rather than kill -9. I'm not sure whether the canonical way of testing
for that is if (signal_pending(current)) or something else.
- IMHO it's still worth bailing out in your scenario even without a signal,
e.g. by doubling the threshold on each pass (a rough, untested sketch
combining all three points follows below). But that can be a separate patch.
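
Untested sketch of all three points combined: test signal_pending() inside
the memcg walk (with mem_cgroup_iter_break() to drop the iterator reference
when bailing out early) and double the threshold on every pass so the loop
converges even without a signal:

void drop_slab_node(int nid)
{
        unsigned long freed;
        unsigned long threshold = 10;

        do {
                struct mem_cgroup *memcg = NULL;

                freed = 0;
                memcg = mem_cgroup_iter(NULL, NULL, NULL);
                do {
                        /* react to ctrl-c etc. even in the middle of the walk */
                        if (signal_pending(current)) {
                                mem_cgroup_iter_break(NULL, memcg);
                                return;
                        }
                        freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
                } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

                /* raise the bail-out threshold so we converge eventually */
                threshold <<= 1;
        } while (freed > threshold);
}

Completely untested, just to illustrate the idea; the exact growth of the
threshold is of course up for discussion.
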
Thanks!
> Signed-off-by: Chunxin Zang <zangchunxin@bytedance.com>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> Changelog in v2:
> 1) Break out of the loop when a fatal (TASK_KILLABLE) signal is pending.
>
> mm/vmscan.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b6d84326bdf2..c3ed8b45d264 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -704,6 +704,9 @@ void drop_slab_node(int nid)
> do {
> struct mem_cgroup *memcg = NULL;
>
> + if (fatal_signal_pending(current))
> + return;
> +
> freed = 0;
> memcg = mem_cgroup_iter(NULL, NULL, NULL);
> do {
>