From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Paul Menage <menage@google.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>
Subject: Re: [discuss][memcg] oom-kill extension
Date: Wed, 29 Oct 2008 14:45:39 +0900
Message-ID: <20081029144539.b6c96cb8.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <6599ad830810282235w5ad7ff7cx4f8be4e1f58933a5@mail.gmail.com>
On Tue, 28 Oct 2008 22:35:21 -0700
"Paul Menage" <menage@google.com> wrote:
> On Tue, Oct 28, 2008 at 7:38 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > Under the memory resource controller (memcg), the oom-killer can be
> > invoked when a group reaches its limit and no memory can be reclaimed.
> >
> > In general, outside memcg, oom-kill (or panic) is the only way to recover
> > the system, because there is no available memory. But when an OOM occurs
> > under memcg, the group has merely hit its limit, and it seems we could do
> > something else.
> >
> > Does anyone have plan to enhance oom-kill ?
>
> We have an in-house implementation of a per-cgroup OOM handler that
> we've just ported from cpusets to cgroups. We were considering sending
> the patch in as a starting point for discussions - it's a bit of a
> kludge as it is.
>
Sounds interesting. (I won't ask for details now.)
> It's a standalone subsystem that can work with either the memory
> cgroup or with cpusets (where memory is constrained by numa nodes).
> The features are:
>
> - an oom.delay file that controls how long a thread will pause in the
> OOM killer waiting for a response from userspace (in milliseconds)
>
> - an oom.await file that a userspace handler can write a timeout value
> to, and be awoken either when a process in that cgroup enters the OOM
> killer, or the timeout expires.
>
> If a userspace thread catches and handles the OOM, the OOMing thread
> doesn't trigger a kill, but returns to alloc_pages to try again;
> alternatively userspace can cause the OOM killer to go ahead as
> normal.
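If I understand the interface correctly, a userspace handler loop could be sketched as below. The oom.delay/oom.await file names come from your description; the mount point, the default paths, and the exact wake-up semantics are my guesses:

```shell
#!/bin/sh
# Sketch of a userspace OOM-handler loop for the proposed oom subsystem.
# Assumptions: the cgroup fs is mounted at /dev/cgroup, and writing a
# timeout (in ms) to oom.await blocks until a task in the group enters
# the OOM killer or the timeout expires, as described above.

CG=${1:-/dev/cgroup/jobs}

# Let OOMing threads pause up to 10s waiting for us (oom.delay, in ms).
configure_delay() {
    echo 10000 > "$1/oom.delay"
}

# Block until an OOM event in the group or the given timeout (in ms).
await_oom() {
    echo "$2" > "$1/oom.await"
}

# Only run the loop if the interface actually exists on this system.
if [ -e "$CG/oom.await" ]; then
    configure_delay "$CG"
    while :; do
        await_oom "$CG" 60000
        # React here: raise the group's limit, pick a victim, or log.
        logger "oom event (or timeout) in $CG"
    done
fi
```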
>
Can userland identify the "bad process" within the group?
> We've found it works pretty successfully as a last-ditch notification
> to a daemon waiting in a system cgroup which can then expand the
> memory limits of the failing cgroup if necessary (potentially killing
> off processes from some other cgroup first if necessary to free up
> more memory).
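For the "expand the limits" path, the daemon's reaction could be as simple as the sketch below. memory.limit_in_bytes is the standard memcg interface file; the doubling policy and paths are only an illustration:

```shell
#!/bin/sh
# Sketch: on an OOM notification, double the failing group's memcg limit.
# memory.limit_in_bytes is the memcg knob for the hard limit; the policy
# here (simply doubling) is illustrative, not a recommendation.

expand_limit() {
    cgdir=$1
    cur=$(cat "$cgdir/memory.limit_in_bytes")
    # A real daemon would check global free memory first, and perhaps
    # shrink or kill in some other group before raising this one.
    echo $((cur * 2)) > "$cgdir/memory.limit_in_bytes"
}
```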
>
This is good news :)
> I'll try to get someone to send in the patch.
>
O.K., looking forward to seeing that.
Regards,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .