From: Michal Hocko <mhocko@suse.cz>
To: Marian Marinov <mm@1h.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Tejun Heo <tj@kernel.org>,
	linux-mm@kvack.org
Subject: Re: [RFC] oom, memcg: handle sysctl oom_kill_allocating_task while memcg oom happening
Date: Tue, 10 Jun 2014 13:52:54 +0200	[thread overview]
Message-ID: <20140610115254.GA25631@dhcp22.suse.cz> (raw)
In-Reply-To: <5396ED66.7090401@1h.com>

[More people to CC]
On Tue 10-06-14 14:35:02, Marian Marinov wrote:
> Hello,

Hi,

> A while back, in 2012, there was a request for this functionality:
>   oom, memcg: handle sysctl oom_kill_allocating_task while memcg oom
>   happening
>
> This is the thread: https://lkml.org/lkml/2012/10/16/168
>
> We now run several machines, each with around 10k processes, using
> containers.
>
> We regularly see an OOM from within a container that causes
> performance degradation.

What kind of performance degradation and which parts of the system are
affected?

The memcg OOM killer currently runs outside of any locks, so the only
bottleneck I can see is the per-memcg victim selection, which iterates
over all tasks in the group. Is that what is going on here?
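
For context, that victim selection is essentially a linear scan over
every task attached to the memcg. A simplified sketch of what the
3.12-era mem_cgroup_out_of_memory() boils down to (the hierarchy walk
and the child-preference heuristics are left out, and pick_memcg_victim
is just a made-up name for illustration):

/*
 * Simplified sketch, not the in-tree code: score every task in the
 * memcg and remember the worst offender.  With ~10k tasks per
 * container this scan is O(#tasks), but it runs without holding any
 * heavily contended lock, so it should not by itself stall the rest
 * of the system.
 */
static struct task_struct *pick_memcg_victim(struct mem_cgroup *memcg,
					     unsigned long totalpages)
{
	struct task_struct *task, *chosen = NULL;
	unsigned long chosen_points = 0;
	struct css_task_iter it;

	css_task_iter_start(&memcg->css, &it);
	while ((task = css_task_iter_next(&it))) {
		unsigned long points;

		points = oom_badness(task, memcg, NULL, totalpages);
		if (points > chosen_points) {
			chosen_points = points;
			chosen = task;
		}
	}
	css_task_iter_end(&it);

	return chosen;	/* the caller then calls oom_kill_process() on it */
}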

> We are running 3.12.20 with the following OOM configuration and memcg
> oom enabled:
> 
> vm.oom_dump_tasks = 0
> vm.oom_kill_allocating_task = 1
> vm.panic_on_oom = 0
> 
> When an OOM occurs we see very high loadavg numbers and the overall
> responsiveness of the machine degrades.

What is the system waiting for?
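
If that is not obvious from top(1), one quick way to see what the
loadavg is made of is to count tasks stuck in uninterruptible sleep,
which is what usually inflates the load without burning CPU. A rough
userspace sketch (nothing kernel-specific assumed; the state character
follows the parenthesised comm in /proc/<pid>/stat):

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	int dstate = 0;

	if (!proc) {
		perror("opendir(/proc)");
		return 1;
	}

	while ((de = readdir(proc))) {
		char path[64], buf[512];
		FILE *f;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;	/* not a pid directory */

		snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;	/* task already exited */

		if (fgets(buf, sizeof(buf), f)) {
			/* the state char follows "<pid> (<comm>) " */
			char *p = strrchr(buf, ')');

			if (p && p[1] == ' ' && p[2] == 'D') {
				dstate++;
				printf("D state: pid %s\n", de->d_name);
			}
		}
		fclose(f);
	}
	closedir(proc);

	printf("%d tasks in uninterruptible sleep\n", dstate);
	return 0;
}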

> During these OOM states the load of the machine gradually increases
> from 25 up to 120 over an interval of 10 minutes.
>
> Once we manually bring down the memory usage of a container (by
> killing some tasks), the load drops back to 25 within 5 to 7 minutes.

So the OOM killer is not able to find a victim to kill?

> I read the whole thread from 2012, but I do not see the expected
> behavior described by the people who commented on the issue.

Why do you think that killing the allocating task would be helpful in
your case?
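
For reference, on the global OOM path the sysctl short-circuits the
victim scan and kills the allocator directly. Roughly the relevant
branch of 3.12's out_of_memory() (paraphrased from memory, see
mm/oom_kill.c for the exact conditions); the 2012 patch wanted the
memcg OOM path to honour the same sysctl:

	/*
	 * Sketch of the global OOM path with
	 * vm.oom_kill_allocating_task set: skip the badness scan and
	 * kill the task that triggered the OOM, unless it is
	 * unkillable or opted out via oom_score_adj.
	 */
	if (sysctl_oom_kill_allocating_task && current->mm &&
	    !oom_unkillable_task(current, NULL, nodemask) &&
	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
		get_task_struct(current);
		oom_kill_process(current, gfp_mask, order, 0, totalpages,
				 NULL, nodemask,
				 "Out of memory (oom_kill_allocating_task)");
		goto out;
	}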

> In this case, with real-world usage for this patch, would it be
> considered for inclusion?

I would still prefer to fix the real issue, which is not yet clear from
your description.

-- 
Michal Hocko
SUSE Labs

