Subject: Re: INFO: rcu detected stall in shmem_fault
From: Tetsuo Handa
Date: Wed, 10 Oct 2018 22:10:31 +0900
To: Dmitry Vyukov, Michal Hocko
Cc: Sergey Senozhatsky, syzbot, Johannes Weiner, Andrew Morton, guro@fb.com, "Kirill A. Shutemov", LKML, Linux-MM, David Rientjes, syzkaller-bugs, Yang Shi, Sergey Senozhatsky, Petr Mladek
References: <000000000000dc48d40577d4a587@google.com> <201810100012.w9A0Cjtn047782@www262.sakura.ne.jp> <20181010085945.GC5873@dhcp22.suse.cz> <20181010113500.GH5873@dhcp22.suse.cz> <20181010114833.GB3949@tigerII.localdomain> <20181010122539.GI5873@dhcp22.suse.cz>

On 2018/10/10 21:36, Dmitry Vyukov wrote:
> On Wed, Oct 10, 2018 at 2:29 PM, Dmitry Vyukov wrote:
>> On Wed, Oct 10, 2018 at 2:25 PM, Michal Hocko wrote:
>>> On Wed 10-10-18 20:48:33, Sergey Senozhatsky wrote:
>>>> On (10/10/18 13:35), Michal Hocko wrote:
>>>>>> Just flooding out-of-memory messages can trigger RCU stall problems.
>>>>>> For example, a severe skbuff_head_cache or kmalloc-512 leak bug is causing
>>>>>
>>>>> [...]
>>>>>
>>>>> Quite some of them, indeed! I guess we want to rate-limit the output.
>>>>> What about the following?
>>>>
>>>> A bit unrelated, but while we are at it:
>>>>
>>>> I like it when we rate-limit printk-s that can lock up the system.
>>>> But it seems that the default rate-limit values are not always good enough;
>>>> DEFAULT_RATELIMIT_INTERVAL / DEFAULT_RATELIMIT_BURST can still be too
>>>> verbose. For instance, when we have a very slow IPMI-emulated serial
>>>> console -- e.g. a baud rate of 57600 -- DEFAULT_RATELIMIT_INTERVAL and
>>>> DEFAULT_RATELIMIT_BURST can add new OOM headers and backtraces faster
>>>> than we evict them.
>>>>
>>>> Does it sound reasonable to use larger-than-default rate limits
>>>> for printk-s in OOM print-outs? OOM reports tend to be somewhat large,
>>>> and the reported numbers are not always *very* unique.
>>>>
>>>> What do you think?
>>>
>>> I do not really care about the current interval/burst values. This change
>>> should be done separately, and ideally with some numbers.
>>
>> I think Sergey meant that this place may need to use
>> larger-than-default values because it prints lots of output per
>> instance (whereas the default limit is tuned more for cases that print
>> just one line).

Yes. The OOM killer tends to print a lot of messages (and I suspect that mutex_trylock(&oom_lock) makes it even worse by wasting additional CPU time through preemption).

>>
>> I've found at least one place that uses DEFAULT_RATELIMIT_INTERVAL*10:
>> https://elixir.bootlin.com/linux/latest/source/fs/btrfs/extent-tree.c#L8365
>> Probably we need something similar here.
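For reference, a larger-than-default rate limit along the lines of that btrfs call site could look like the untested sketch below (the oom_rs name and the dump_oom_report() call site are only my illustration, not existing code):

  /* Illustration only: rate-limit the whole OOM report, using a 10x
   * longer interval than the default, as the btrfs example does.
   */
  #include <linux/ratelimit.h>

  static DEFINE_RATELIMIT_STATE(oom_rs,
                                DEFAULT_RATELIMIT_INTERVAL * 10,
                                DEFAULT_RATELIMIT_BURST);

  static void dump_oom_report(void)
  {
          if (!__ratelimit(&oom_rs))
                  return; /* suppress the whole report, not single lines */
          /* ... print the OOM header, meminfo and backtraces here ... */
  }

But I'm not sure tuning interval/burst alone is the right fix.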
Since printk() is a significantly CPU-consuming operation, I think that what we need to guarantee is that the interval between the end of one OOM killer report and the beginning of the next one is large enough. For example, set up a timer with a 5-second timeout at the end of an OOM killer report, and check whether that timer has already fired at the beginning of the next report (a rough sketch follows at the end of this mail).

>
> In parallel with the kernel changes I've also made a change to
> syzkaller that (1) makes it not use oom_score_adj=-1000, as this hard
> killing limit looks like quite a risky thing, and (2) increases the memcg
> size beyond the expected KASAN quarantine size:
> https://github.com/google/syzkaller/commit/adedaf77a18f3d03d695723c86fc083c3551ff5b
> If this stops the flow of hang/stall reports, then we can just
> close all old reports as invalid.

I don't think so. Only this report was different from the others, because printk() in this report came from memcg OOM events without eligible tasks, whereas printk() in the others came from global OOM events triggered by a severe slab memory leak.
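Here is the sketch of the interval check I mentioned above (untested; it uses a jiffies comparison instead of an actual struct timer_list, which gives the same check, and all names here are made up):

  /* Illustration only: allow the next OOM report only if 5 seconds
   * have passed since the end of the previous one.
   */
  #include <linux/jiffies.h>

  static unsigned long last_oom_report_end; /* 0 == no report printed yet */

  static bool oom_report_allowed(void)
  {
          return !last_oom_report_end ||
                 time_after(jiffies, last_oom_report_end + 5 * HZ);
  }

  static void mark_oom_report_end(void)
  {
          last_oom_report_end = jiffies;
  }

Unlike interval/burst tuning, this measures from the end of the previous report, so a slow console cannot make reports overlap.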