From: Johannes Weiner <hannes@cmpxchg.org>
To: Vladimir Davydov <vdavydov@parallels.com>
Cc: akpm@linux-foundation.org, mhocko@suse.cz, cl@linux.com,
glommer@gmail.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm 0/8] memcg: reparent kmem on css offline
Date: Tue, 8 Jul 2014 18:19:06 -0400 [thread overview]
Message-ID: <20140708221906.GC29639@cmpxchg.org> (raw)
In-Reply-To: <53BAD567.8060506@parallels.com>
On Mon, Jul 07, 2014 at 09:14:15PM +0400, Vladimir Davydov wrote:
> 07.07.2014 18:25, Johannes Weiner:
> >In addition, Tejun made offlined css iterable and split css_tryget()
> >and css_tryget_online(), which would allow memcg to pin the css until
> >the last charge is gone while continuing to iterate and reclaim it on
> >hierarchical pressure, even after it was offlined.
>
> One more question.
>
> With reparenting enabled, the number of cgroups (lruvecs) that must be
> iterated on global reclaim is bound by the number of live containers,
> while w/o reparenting it's practically unbound, isn't it? Won't it be
> the source of latency spikes?
It might deteriorate a little bit, but it is a self-correcting problem
as soon as memory pressure kicks in. Creating and destroying cgroups is
serialized at a global level, so I would expect the cost of doing that
at a high rate to become a problem before the css objects become an
issue for the reclaim scanner.
At some point we will probably have to make the global reclaim cgroup
walk in shrink_zone() intermittent, but I'm not aware of any problems
with it so far.
Thread overview: 17+ messages
2014-07-07 12:00 Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 1/8] memcg: add pointer from memcg_cache_params to owner cache Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 2/8] memcg: keep all children of each root cache on a list Vladimir Davydov
2014-07-07 15:24 ` Christoph Lameter
2014-07-07 15:45 ` Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 3/8] slab: guarantee unique kmem cache naming Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 4/8] slub: remove kmemcg id from create_unique_id Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 5/8] memcg: rework non-slab kmem pages charge path Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 6/8] memcg: introduce kmem context Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 7/8] memcg: move some kmem definitions upper Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 8/8] memcg: reparent kmem context on memcg offline Vladimir Davydov
2014-07-07 14:25 ` [PATCH -mm 0/8] memcg: reparent kmem on css offline Johannes Weiner
2014-07-07 15:40 ` Vladimir Davydov
2014-07-08 22:05 ` Johannes Weiner
2014-07-09 7:25 ` Vladimir Davydov
2014-07-07 17:14 ` Vladimir Davydov
2014-07-08 22:19 ` Johannes Weiner [this message]