From: Michal Hocko <mhocko@suse.cz>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Luigi Semenzato <semenzato@google.com>,
David Rientjes <rientjes@google.com>,
linux-mm@kvack.org, Greg Thelen <gthelen@google.com>,
Glauber Costa <glommer@gmail.com>, Mel Gorman <mgorman@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Rik van Riel <riel@redhat.com>, Joern Engel <joern@logfs.org>,
Hugh Dickins <hughd@google.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: user defined OOM policies
Date: Thu, 28 Nov 2013 12:36:41 +0100 [thread overview]
Message-ID: <20131128113641.GI2761@dhcp22.suse.cz> (raw)
In-Reply-To: <20131122180835.GO3556@cmpxchg.org>

On Fri 22-11-13 13:08:35, Johannes Weiner wrote:
> On Wed, Nov 20, 2013 at 11:03:33PM -0800, Luigi Semenzato wrote:
> > On Wed, Nov 20, 2013 at 7:36 PM, David Rientjes <rientjes@google.com> wrote:
> > > On Wed, 20 Nov 2013, Luigi Semenzato wrote:
> > >
> > >> Chrome OS uses a custom low-memory notification to minimize OOM kills.
> > >> When the notifier triggers, the Chrome browser tries to free memory,
> > >> including by shutting down processes, before the full OOM occurs. But
> > >> OOM kills cannot always be avoided, depending on the speed of
> > >> allocation and how much CPU the freeing tasks are able to use
> > >> (certainly they could be given higher priority, but it gets complex).
> > >>
> > >> We may end up using memcg so we can use the cgroup
> > >> memory.pressure_level file instead of our own notifier, but we have no
> > >> need for finer control over OOM kills beyond the very useful kill
> > >> priority. One process at a time is good enough for us.
> > >>
> > >
> > > Even with your own custom low-memory notifier or memory.pressure_level,
> > > it's still possible that all memory is depleted and you run into an oom
> > > kill before your userspace had a chance to wake up and prevent it. I
> > > think what you'll want is either your custom notifier or
> > > memory.pressure_level to do pre-oom freeing, with a fallback to a
> > > userspace oom handler that holds off kernel oom kills until userspace
> > > has done everything it can to free unneeded memory, perform any
> > > necessary logging, etc., over a grace period of
> > > memory.oom_delay_millisecs before the kernel eventually steps in and
> > > kills.
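
Side note: memory.oom_delay_millisecs is a proposed tunable, not
something mainline provides today. The notification half of such a
handler can already be built on the memcg v1 memory.oom_control file,
though. A minimal sketch, assuming a v1 hierarchy mounted under
/sys/fs/cgroup/memory and a hypothetical group "mygroup" (error
handling omitted):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define MEMCG "/sys/fs/cgroup/memory/mygroup"	/* hypothetical group */

int main(void)
{
	char buf[32];
	int ocfd = open(MEMCG "/memory.oom_control", O_RDONLY);
	int ecfd = open(MEMCG "/cgroup.event_control", O_WRONLY);
	int efd = eventfd(0, 0);
	uint64_t cnt;

	/* arm efd to fire when the group hits OOM */
	snprintf(buf, sizeof(buf), "%d %d", efd, ocfd);
	write(ecfd, buf, strlen(buf));

	for (;;) {
		read(efd, &cnt, sizeof(cnt));	/* blocks until OOM */
		/* free memory, pick and kill a victim, log, ... */
	}
}

Writing "1" to memory.oom_control additionally freezes the kernel
killer for the group while the handler decides - but with no grace
period at all, which is exactly the gap the proposed
memory.oom_delay_millisecs is meant to fill.
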
> >
> > Yes, I agree that we can't always prevent OOM situations, and in fact
> > we tolerate OOM kills, although they have a worse impact on the users
> > than controlled freeing does.
> >
> > Well, OK, here it goes. I hate to be a party-pooper, but the notion of
> > a user-level OOM handler scares me a bit, for various reasons.
> >
> > 1. Our custom notifier sends low-memory warnings well ahead of memory
> > depletion. If we don't have enough time to free memory even then,
> > what can a last-minute OOM handler do?
> >
> > 2. In addition to the time factor, it's not trivial to do anything,
> > including freeing memory, without allocating memory first, so we'll
> > need a reserve, but how much, and who is allowed to use it?
> >
> > 3. How does one select the OOM-handler timeout? If the freeing paths
> > in the code are swapped out, the time needed to bring them in can be
> > highly variable.
> >
> > 4. Why wouldn't the OOM-handler also do the killing itself? (Which is
> > essentially what we do.) Then all we need is a low-memory notifier
> > which can predict how quickly we'll run out of memory.
> >
> > 5. The use case mentioned earlier (the fact that the killing of one
> > process can make an entire group of processes useless) can be dealt
> > with using OOM priorities and user-level code.
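
The priority knob that exists today is /proc/<pid>/oom_score_adj; a
user-level policy daemon can bias victim selection with something as
small as this sketch (pids and values are illustrative):

#include <stdio.h>
#include <sys/types.h>

static void set_oom_score_adj(pid_t pid, int adj)
{
	char path[64];
	FILE *f;

	/* -1000 = never kill ... 1000 = kill first */
	snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)pid);
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "%d", adj);
		fclose(f);
	}
}

/* e.g. protect the coordinator, sacrifice its helpers first:
 *	set_oom_score_adj(coordinator_pid, -500);
 *	set_oom_score_adj(helper_pid, 300);
 */
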
>
> I would also be interested in the answers to all these questions.
>
> > I confess I am surprised that the OOM killer works as well as I think
> > it does. Adding a user-level component would bring a whole new level
> > of complexity to code that's already hard to fully comprehend, and
> > might not really address the fundamental issues.
>
> Agreed.
>
> OOM killing is supposed to be a last resort and should be avoided as
> much as possible. The situation is so precarious at this point that
> the thought of involving USERSPACE to fix it seems crazy to me.

Please remember that this discussion is about a user/admin-defined
policy for the OOM killer, not necessarily about userspace handling of
the global OOM. I am skeptical about a userspace handler as well, but I
admit that there might be use cases where it is doable. Let's focus,
though, on the proper interface for the policies (i.e. what kind of
action should be taken under OOM - kill a process, kill a group,
reboot, etc.).

> It would make much more sense to me to focus on early notifications
> and deal with looming situations while we still have the resources to
> do so.

We already have those, at least in the memcg world (vmpressure).
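
A minimal sketch of such a listener, assuming the v1 memory controller
is mounted at /sys/fs/cgroup/memory (the "low" level fires while
reclaim is still cheap; "medium" and "critical" come later):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define MEMCG "/sys/fs/cgroup/memory"	/* root memcg, v1 hierarchy */

int main(void)
{
	char buf[32];
	int plfd = open(MEMCG "/memory.pressure_level", O_RDONLY);
	int ecfd = open(MEMCG "/cgroup.event_control", O_WRONLY);
	int efd = eventfd(0, 0);
	uint64_t cnt;

	/* register for "low" pressure events on this group */
	snprintf(buf, sizeof(buf), "%d %d low", efd, plfd);
	write(ecfd, buf, strlen(buf));

	while (read(efd, &cnt, sizeof(cnt)) == sizeof(cnt)) {
		/* reclaim has started: begin freeing before OOM is close */
	}
	return 0;
}
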
> Before attempting to build a teleportation device in the kernel, maybe
> we should just stop painting ourselves into corners?
--
Michal Hocko
SUSE Labs