From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: linux-mm@kvack.org, rientjes@google.com, akpm@linux-foundation.org
Subject: Re: [PATCH] mm,oom: Re-enable OOM killer using timeout.
Date: Thu, 21 Apr 2016 20:49:16 +0900
Message-ID: <201604212049.GFE34338.OQFOJSMOHFFLVt@I-love.SAKURA.ne.jp>
In-Reply-To: <20160420144758.GA7950@dhcp22.suse.cz>

Michal Hocko wrote:
> On Wed 20-04-16 06:55:42, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > > This patch adds a timeout for handling corner cases where a TIF_MEMDIE
> > > > thread gets stuck. Since the timeout is checked at oom_unkillable_task(),
> > > > oom_scan_process_thread() will not find the TIF_MEMDIE thread
> > > > (for the !oom_kill_allocating_task case) and oom_badness() will return 0
> > > > (for the oom_kill_allocating_task case).
> > > >
> > > > By applying this patch, the kernel will automatically press SysRq-f if
> > > > the OOM reaper cannot reap the victim's memory, and we will never
> > > > livelock on OOM as long as the OOM killer is called.
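
For reference, a minimal sketch of the mechanism described above (the
per-task timestamp field and the sysctl name are illustrative, not taken
from the actual patch):

/*
 * Sketch only: once a TIF_MEMDIE victim has been stuck longer than the
 * timeout, report it as unkillable so that the OOM scan stops waiting
 * for it and a new victim can be selected.
 */
static bool oom_victim_timed_out(struct task_struct *p)
{
	return test_tsk_thread_flag(p, TIF_MEMDIE) &&
		time_after(jiffies, p->memdie_start +	/* hypothetical field */
			msecs_to_jiffies(sysctl_oom_victim_timeout_ms));
}

/* called from oom_unkillable_task(): */
	if (oom_victim_timed_out(p))
		return true;	/* treat as unkillable; pick another victim */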
> > >
> > > Which will not guarantee anything as already pointed out several times
> > > before. So I think this is not really that useful. I have said it
> > > earlier and will repeat it again. Any timeout based solution which
> > > doesn't guarantee that the system will be in a consistent state (reboot,
> > > panic or kill all existing tasks) after the specified timeout is
> > > pointless.
> >
> > Triggering the reboot/panic is the worst action. Killing all existing tasks
> > is the next worst action. Thus, I prefer killing tasks one by one.
>
> killing a task by task doesn't guarantee any convergence to a usable
> state. If somebody really cares about these highly unlikely lockups
> I am pretty sure he would really appreciate having a _reliable_ and
> _guaranteed_ way out of that situation. Having a fuzzy mechanism that does
> something in the hope of resolving that state is just unhelpful.
Killing tasks one by one will eventually converge to a kernel panic.
And since we now have the OOM reaper, the chance of needing to kill the
next task is very low. Killing tasks one by one via a timeout is insurance
for the rare situation where the OOM reaper cannot reap the OOM-killed
thread's memory because mmap_sem is held for write. (If TIF_MEMDIE were
set on all OOM-killed thread groups, the OOM killer could converge to a
kernel panic more quickly by ignoring the remaining OOM-killed threads
sharing the same memory, but that is a different patch.)
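
To illustrate the window I mean (simplified from what the OOM reaper
does; not a literal copy of __oom_reap_task()):

static bool try_to_reap(struct mm_struct *mm)
{
	/*
	 * Fails whenever somebody (e.g. the victim itself, stuck in an
	 * allocation with the lock held for write) owns mmap_sem, which
	 * is exactly the case the timeout is meant to cover.
	 */
	if (!down_read_trylock(&mm->mmap_sem))
		return false;
	/* ... tear down the victim's private anonymous mappings ... */
	up_read(&mm->mmap_sem);
	return true;
}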
>
> If I was an admin and had a machine on the other side of the globe and
> that machine just locked up due to OOM, I would pretty much want to
> force a reboot, as my other means of fixing that situation would be
> pretty much close to zero otherwise.
I posted v2 of the patch, which also allows triggering a kernel panic via
the timeout.
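
Roughly, the v2 escalation looks like this (the knob name is illustrative,
not the actual patch):

	/* in the timed-out case above: */
	if (oom_victim_timed_out(p)) {
		if (sysctl_panic_on_oom_timeout)	/* hypothetical knob */
			panic("OOM victim stuck for too long");
		return true;	/* otherwise just move on to the next victim */
	}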
>
> > I'm OK with shortening the timeout like N (when waiting for the 1st victim)
> > + N/2 (the 2nd victim) + N/4 (the 3rd victim) + N/8 (the 4th victim) + ...,
> > but is it worth complicating such an unlikely path?
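
(For what it is worth, that series is geometric: N + N/2 + N/4 + ... < 2N,
so even with halving, the total wait before the final action is bounded by
twice the initial timeout.)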
>
> No it is not IMHO.
>
> > > I believe that the chances of the lockup are much less likely with the
> > > oom reaper and that we are not really urged to provide a new knob with a
> > > random semantic. If we really want to have a timeout based thing better
> > > make it behave reliably.
> >
> > The threshold an administrator can wait for varies. Some may want to
> > set a few seconds because of a 10-second /dev/watchdog timeout; others may
> > want to set one minute because they use no watchdog. Thus, I think we
> > should not hardcode the timeout.
>
> I guess you missed my point here. I didn't say this should be hardcoded
> in any way. I am just saying that if we really want to make timeout
> based decisions, we had better think about the semantics, and the mechanism
> should provide a reliable and deterministic means to resolve the problem.
I thought you did not like a tunable timeout because a tunable timeout
leads to a "what is the best duration" discussion.
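
For completeness, a tunable timeout would be a few lines of sysctl wiring,
e.g. (the knob name is illustrative):

/* Hypothetical /proc/sys/vm/oom_victim_timeout_ms knob. */
static unsigned long sysctl_oom_victim_timeout_ms;

static struct ctl_table vm_oom_table[] = {
	{
		.procname	= "oom_victim_timeout_ms",
		.data		= &sysctl_oom_victim_timeout_ms,
		.maxlen		= sizeof(sysctl_oom_victim_timeout_ms),
		.mode		= 0644,
		.proc_handler	= proc_doulongvec_minmax,
	},
	{ }
};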