From: Michal Hocko <mhocko@suse.cz>
To: Richard Davies <richard@arachsys.com>
Cc: Dwight Engen <dwight.engen@oracle.com>,
Tim Hockin <thockin@google.com>,
Vladimir Davydov <vdavydov@parallels.com>,
David Rientjes <rientjes@google.com>,
Marian Marinov <mm@yuhu.biz>, Max Kellermann <mk@cm4all.com>,
Tim Hockin <thockin@hockin.org>,
Frederic Weisbecker <fweisbec@gmail.com>,
containers@lists.linux-foundation.org,
Serge Hallyn <serge.hallyn@ubuntu.com>,
Glauber Costa <glommer@parallels.com>,
linux-mm@kvack.org, William Dauchy <wdauchy@gmail.com>,
Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
cgroups@vger.kernel.org, Daniel Walsh <dwalsh@redhat.com>
Subject: Re: Protection against container fork bombs [WAS: Re: memcg with kmem limit doesn't recover after disk i/o causes limit to be hit]
Date: Tue, 29 Apr 2014 21:03:06 +0200
Message-ID: <20140429190306.GC25609@dhcp22.suse.cz>
In-Reply-To: <20140429183928.GF29606@alpha.arachsys.com>

On Tue 29-04-14 19:39:28, Richard Davies wrote:
> Michal Hocko wrote:
> > Richard Davies wrote:
> > > Dwight Engen wrote:
> > > > Is there a plan to separately account/limit stack pages vs kmem in
> > > > general? Richard would have to verify, but I suspect kmem is not
> > > > currently viable as a process limiter for him because
> > > > icache/dcache/stack are all accounted together.
> > >
> > > Certainly I would like to be able to limit container fork-bombs without
> > > limiting the amount of disk IO caching for processes in those containers.
> > >
> > > In my testing of kmem limits, I needed a limit of 256MB or lower to
> > > catch fork bombs early enough. I would definitely like more than 256MB of
> > > disk caching.
> > >
> > > So if we go the "working kmem" route, I would like to be able to specify a
> > > limit excluding disk cache.
> >
> > Page cache (which is probably what you mean by disk cache) is accounted
> > as userspace memory by the memory cgroup controller, not as kernel
> > memory, so you do not have to limit it.
>
> OK, that's helpful - thanks.
>
> As an aside, with the normal (non-kmem) cgroup controller, is there a way
> for me to exclude page cache and only limit the equivalent of the rss line
> in memory.stat?
No. The user-memory limit (memory.limit_in_bytes) always covers both
anonymous memory (the rss line in memory.stat) and page cache together;
memory.stat reports them separately, but there is no knob to limit rss on
its own.
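
For monitoring, though, the split is visible: memory.stat reports rss and
cache as separate counters even though a single limit covers both. A
minimal sketch in Python of reading that split, assuming the cgroup v1
memory controller is mounted at /sys/fs/cgroup/memory and a hypothetical
cgroup named container0:

# Sketch: read the rss/cache split from a memcg's memory.stat (cgroup v1).
# The mount point and cgroup name are assumptions for illustration only.
STAT_PATH = "/sys/fs/cgroup/memory/container0/memory.stat"

def read_memory_stat(path=STAT_PATH):
    """Return memory.stat as a dict mapping counter name to value."""
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

stats = read_memory_stat()
print("rss:   %d MiB" % (stats["rss"] // (1024 * 1024)))
print("cache: %d MiB" % (stats["cache"] // (1024 * 1024)))
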
> e.g. say I have a 256GB physical machine, running 200 containers, each with
> 1GB normal-mem limit (for running software) and 256MB kmem limit (to stop
> fork-bombs).
>
> The physical disk IO bandwidth is a shared resource between all the
> containers, so ideally I would like the kernel to use the remaining 56GB
> of RAM as shared page cache in whatever way best reduces physical IOPS,
> rather than having a per-container limit.
Then do not set any memory.limit_in_bytes. If there is memory pressure,
global reclaim will shrink all the containers proportionally, and page
cache will be the #1 target of that reclaim (but we are getting off-topic
here, I am afraid).
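
In concrete terms that would mean giving each container only a
kernel-memory limit and leaving the user-memory limit untouched. A rough
sketch in Python, again assuming cgroup v1, with illustrative paths and
cgroup names, and using the 256MB figure and 200-container scenario from
earlier in the thread:

import os

MEMCG_ROOT = "/sys/fs/cgroup/memory"   # assumed cgroup v1 mount point
KMEM_LIMIT = 256 * 1024 * 1024         # the 256MB limit discussed above

def setup_container_memcg(name, kmem_limit=KMEM_LIMIT):
    """Create a per-container memcg with only a kernel-memory limit set."""
    cg = os.path.join(MEMCG_ROOT, name)
    os.makedirs(cg, exist_ok=True)
    # Kernel memory (task structs, kernel stacks, ...) is what a fork bomb
    # consumes, so a kmem limit bounds it.  It has to be written while the
    # cgroup is still empty.
    with open(os.path.join(cg, "memory.kmem.limit_in_bytes"), "w") as f:
        f.write(str(kmem_limit))
    # memory.limit_in_bytes is deliberately left at its default (unlimited)
    # so the page cache of all containers shares the remaining RAM and
    # global reclaim trims it proportionally under pressure.

for i in range(200):                   # the 200-container scenario above
    setup_container_memcg("container%d" % i)

Container processes would then be attached by writing their PIDs to each
cgroup's cgroup.procs file, after the kmem limit is in place.
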
--
Michal Hocko
SUSE Labs