From: Michal Hocko <mhocko@suse.com>
To: zangchunxin@bytedance.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: Re: [PATCH v2] mm/vmscan: fix infinite loop in drop_slab_node
Date: Mon, 14 Sep 2020 11:30:32 +0200
Message-ID: <20200914093032.GG16999@dhcp22.suse.cz>
In-Reply-To: <20200909152047.27905-1-zangchunxin@bytedance.com>

The subject is misleading because this patch doesn't fix an infinite
loop, right? It just allows userspace to interrupt the operation.

On Wed 09-09-20 23:20:47, zangchunxin@bytedance.com wrote:
> From: Chunxin Zang <zangchunxin@bytedance.com>
> 
> On our servers there are about 10k memcgs on a single machine, and they
> use memory very frequently. When I trigger drop_caches, the process
> loops indefinitely in drop_slab_node.

Is this really an infinite loop, or does it just take a long time to
process all the metadata in that setup? If this is really an infinite
loop then we should look into it. My current understanding is that the
operation would finish at some point; it just takes painfully long to
get there.
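
For reference, the loop in question looks roughly like this (simplified
from mm/vmscan.c of this era; details may vary between releases):

  void drop_slab_node(int nid)
  {
  	unsigned long freed;

  	do {
  		struct mem_cgroup *memcg = NULL;

  		freed = 0;
  		memcg = mem_cgroup_iter(NULL, NULL, NULL);
  		do {
  			/* ask all registered shrinkers to drop
  			 * objects for this memcg on this node */
  			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
  		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
  	} while (freed > 10);	/* exit only when a full pass frees <= 10 */
  }

The only exit condition is a complete pass over all memcgs that frees
at most 10 objects, which explains why a pass over ~10k busy memcgs can
take extremely long to satisfy.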

> There are two reasons:
> 1. We have too many memcgs. Even if each memcg frees only a single
>    object, the total across all of them is still bigger than 10.
> 
> 2. A single traversal of all memcgs takes a long time, so the memcgs
>    visited first have freed many objects again by the time the pass
>    completes. On the next traversal the freed count exceeds 10 once
>    more.
> 
> We can get the following info through 'ps':
> 
>   root:~# ps -aux | grep drop
>   root  357956 ... R    Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
>   root 1771385 ... R    Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
>   root 1986319 ... R    18:56 117:27 echo 3 > /proc/sys/vm/drop_caches
>   root 2002148 ... R    Aug24 5720:39 echo 3 > /proc/sys/vm/drop_caches
>   root 2564666 ... R    18:59 113:58 echo 3 > /proc/sys/vm/drop_caches
>   root 2639347 ... R    Sep03 2383:39 echo 3 > /proc/sys/vm/drop_caches
>   root 3904747 ... R    03:35 993:31 echo 3 > /proc/sys/vm/drop_caches
>   root 4016780 ... R    Aug21 7882:18 echo 3 > /proc/sys/vm/drop_caches
> 
> Using bpftrace to follow the 'freed' value in drop_slab_node:
> 
>   root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
>   Attaching 1 probe...
>   ^B^C
> 
>   @ret:
>   [64, 128)        1 |                                                    |
>   [128, 256)      28 |                                                    |
>   [256, 512)     107 |@                                                   |
>   [512, 1K)      298 |@@@                                                 |
>   [1K, 2K)       613 |@@@@@@@                                             |
>   [2K, 4K)      4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>   [4K, 8K)       442 |@@@@@                                               |
>   [8K, 16K)      299 |@@@                                                 |
>   [16K, 32K)     100 |@                                                   |
>   [32K, 64K)     139 |@                                                   |
>   [64K, 128K)     56 |                                                    |
>   [128K, 256K)    26 |                                                    |
>   [256K, 512K)     2 |                                                    |
> 
> In the while loop, we can check whether a fatal signal is pending and,
> if so, break out of the loop.

I would make it explicit that this is not fixing the above scenario. It
just helps to cancel the operation, which is a good thing in general.
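
For reference, fatal_signal_pending() is a cheap check; simplified from
include/linux/sched/signal.h (modulo version differences):

  /* true once SIGKILL has been queued for the task, which the kernel
   * also does when an unhandled fatal signal triggers a group exit */
  static inline int __fatal_signal_pending(struct task_struct *p)
  {
  	return unlikely(sigismember(&p->pending.signal, SIGKILL));
  }

  static inline int fatal_signal_pending(struct task_struct *p)
  {
  	return signal_pending(p) && __fatal_signal_pending(p);
  }

So a task that is already doomed to die stops burning CPU in the outer
loop on its next iteration instead of completing the whole drop.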
 
> Signed-off-by: Chunxin Zang <zangchunxin@bytedance.com>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

With updated changelog
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
> 	changelog in v2:
> 	1) Break out of the loop when a fatal signal is pending.
> 
>  mm/vmscan.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b6d84326bdf2..c3ed8b45d264 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -704,6 +704,9 @@ void drop_slab_node(int nid)
>  	do {
>  		struct mem_cgroup *memcg = NULL;
>  
> +		if (fatal_signal_pending(current))
> +			return;
> +
>  		freed = 0;
>  		memcg = mem_cgroup_iter(NULL, NULL, NULL);
>  		do {
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs


Thread overview: 14+ messages
2020-09-09 15:20 zangchunxin
2020-09-09 16:09 ` Chris Down
2020-09-09 17:59 ` Vlastimil Babka
2020-09-09 21:47   ` Chris Down
2020-09-09 21:52     ` Matthew Wilcox
2020-09-09 21:56       ` Chris Down
2020-09-09 22:13       ` Vlastimil Babka
2020-09-09 21:49   ` Matthew Wilcox
2020-09-14  9:30 ` Michal Hocko [this message]
2020-09-14 13:25   ` [External] " Chunxin Zang
2020-09-14 13:47     ` Michal Hocko
2020-09-14 15:02       ` Chunxin Zang
2020-09-14 15:17         ` Michal Hocko
2020-09-15  4:23           ` Chunxin Zang
