On Mon, Sep 14, 2020 at 5:30 PM Michal Hocko wrote:
> The subject is misleading because this patch doesn't fix an infinite
> loop, right? It just allows the userspace to interrupt the operation.

Yes; we are making a separate patch that follows Vlastimil's
recommendation: use a doubling of the threshold to end the loop.

On Thu, Sep 10, 2020 at 1:59 AM Vlastimil Babka wrote:
> > From: Chunxin Zang
> >
> ...
> - IMHO it's still worth to bail out in your scenario even without a
>   signal, e.g. by the doubling of threshold. But it can be a separate
>   patch.

Thanks!

...

On Wed 09-09-20 23:20:47, zangchunxin@bytedance.com wrote:
> > From: Chunxin Zang
> >
> > On our server, there are about 10k memcgs on one machine. They use
> > memory very frequently. When I trigger drop caches, the process
> > loops indefinitely in drop_slab_node.
>
> Is this really an infinite loop, or does it just take a lot of time to
> process all the metadata in that setup? If this is really an infinite
> loop then we should look at it. My current understanding is that the
> operation would finish at some point; it just takes painfully long to
> get there.

Yes, it really is an infinite loop. Every pass of the loop takes a lot
of time, and during that time the memcgs allocate and free memory, so
on the next pass the total 'freed' is always bigger than 10.

> > There are two reasons:
> >
> > 1. We have too many memcgs; even if only one object is freed per
> >    memcg, the sum of freed objects is bigger than 10.
> >
> > 2. We spend a lot of time traversing the memcgs once, so the memcgs
> >    traversed first have freed many objects again by the end of the
> >    pass. On the next traversal, the freed count is bigger than 10
> >    again.
> >
> > We can get the following info through 'ps':
> >
> > root:~# ps -aux | grep drop
> > root  357956 ... R Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
> > root 1771385 ... R Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
> > root 1986319 ... R 18:56     117:27 echo 3 > /proc/sys/vm/drop_caches
> > root 2002148 ... R Aug24    5720:39 echo 3 > /proc/sys/vm/drop_caches
> > root 2564666 ... R 18:59     113:58 echo 3 > /proc/sys/vm/drop_caches
> > root 2639347 ... R Sep03    2383:39 echo 3 > /proc/sys/vm/drop_caches
> > root 3904747 ... R 03:35     993:31 echo 3 > /proc/sys/vm/drop_caches
> > root 4016780 ... R Aug21    7882:18 echo 3 > /proc/sys/vm/drop_caches
> >
> > Use bpftrace to follow the 'freed' value in drop_slab_node:
> >
> > root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
> > Attaching 1 probe...
> > ^C
> >
> > @ret:
> > [64, 128)          1 |                                                    |
> > [128, 256)        28 |                                                    |
> > [256, 512)       107 |@                                                   |
> > [512, 1K)        298 |@@@                                                 |
> > [1K, 2K)         613 |@@@@@@@                                             |
> > [2K, 4K)        4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > [4K, 8K)         442 |@@@@@                                               |
> > [8K, 16K)        299 |@@@                                                 |
> > [16K, 32K)       100 |@                                                   |
> > [32K, 64K)       139 |@                                                   |
> > [64K, 128K)       56 |                                                    |
> > [128K, 256K)      26 |                                                    |
> > [256K, 512K)       2 |                                                    |
> >
> > In the while loop, we can check whether a fatal (TASK_KILLABLE)
> > signal is pending; if so, we should break out of the loop.
>
> I would make it explicit that this is not fixing the above scenario. It
> just helps to cancel the operation, which is a good thing in general.
>
> > Signed-off-by: Chunxin Zang
> > Signed-off-by: Muchun Song
>
> With the updated changelog:
> Acked-by: Michal Hocko
>
> > ---
> > changelog in v2:
> > 1) Break out of the loop when a fatal (TASK_KILLABLE) signal is
> >    pending.
> >
> >  mm/vmscan.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index b6d84326bdf2..c3ed8b45d264 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -704,6 +704,9 @@ void drop_slab_node(int nid)
> >  	do {
> >  		struct mem_cgroup *memcg = NULL;
> >
> > +		if (fatal_signal_pending(current))
> > +			return;
> > +
> >  		freed = 0;
> >  		memcg = mem_cgroup_iter(NULL, NULL, NULL);
> >  		do {
> > --
> > 2.11.0
> >

--
Best wishes
Chunxin
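
For reference, a minimal sketch of what the threshold-doubling
termination suggested by Vlastimil could look like on top of this
patch. This is illustrative and untested, not the actual follow-up
patch; the 'shift' counter and its placement are assumptions:

void drop_slab_node(int nid)
{
	unsigned long freed;
	int shift = 0;

	do {
		struct mem_cgroup *memcg = NULL;

		/* Let userspace interrupt the operation (this patch). */
		if (fatal_signal_pending(current))
			return;

		freed = 0;
		memcg = mem_cgroup_iter(NULL, NULL, NULL);
		do {
			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
		/*
		 * Sketch: the effective threshold doubles on each pass
		 * (freed must exceed 1, 2, 4, 8, ... to continue), so the
		 * loop terminates even when other tasks keep repopulating
		 * the slab caches concurrently.
		 */
	} while ((freed >> shift++) > 1);
}

Because 'shift' grows by one per pass, the loop is bounded to fewer than
BITS_PER_LONG passes even in the worst case, while a lightly loaded
system still exits as soon as a pass frees little memory.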