From: Glauber Costa <glommer@parallels.com>
To: Suleiman Souhlal <suleiman@google.com>
Cc: Suleiman Souhlal <ssouhlal@freebsd.org>,
cgroups@vger.kernel.org, kamezawa.hiroyu@jp.fujitsu.com,
penberg@kernel.org, yinghan@google.com, hughd@google.com,
gthelen@google.com, linux-mm@kvack.org, devel@openvz.org
Subject: Re: [PATCH 02/10] memcg: Uncharge all kmem when deleting a cgroup.
Date: Wed, 29 Feb 2012 13:51:25 -0300
Message-ID: <4F4E578D.7080202@parallels.com>
In-Reply-To: <CABCjUKDYUwR9FsjFW_Ea30zbvFx80-ObuN92_cNcUfGjPqWJiQ@mail.gmail.com>
On 02/28/2012 09:24 PM, Suleiman Souhlal wrote:
> On Tue, Feb 28, 2012 at 11:00 AM, Glauber Costa <glommer@parallels.com> wrote:
>> On 02/27/2012 07:58 PM, Suleiman Souhlal wrote:
>>>
>>> A later patch will also use this to move the accounting to the root
>>> cgroup.
>>>
>>
>> Suleiman,
>>
>> Did you do any measurements to figure out how long it takes, on average,
>> for dangling caches to go away? Under memory pressure, let's say
>
> Unfortunately, I don't have any such measurements, other than a very artificial one:
>
> # mkdir /dev/cgroup/memory/c
> # echo 1073741824 > /dev/cgroup/memory/c/memory.limit_in_bytes
> # sync && echo 3 > /proc/sys/vm/drop_caches
> # echo $$ > /dev/cgroup/memory/c/tasks
> # find / > /dev/null
> # grep '(c)' /proc/slabinfo | wc -l
> 42
> # echo $$ > /dev/cgroup/memory/tasks
> # rmdir /dev/cgroup/memory/c
> # grep '(c)dead' /proc/slabinfo | wc -l
> 42
> # sleep 60 && sync && for i in `seq 1 1000`; do echo 3 > /proc/sys/vm/drop_caches; done
> # grep '(c)dead' /proc/slabinfo | wc -l
> 6
> # sleep 60&& grep '(c)dead' /proc/slabinfo | wc -l
> 5
> # sleep 60&& grep '(c)dead' /proc/slabinfo | wc -l
> 5
>
> (Note that this is without any per-memcg shrinking patch applied. With
> shrinking, things will be a bit better, because deleting the cgroup
> will force the dentries to get shrunk.)
>
> Some of these dead caches may take a long time to go away, but we
> haven't found them to be a problem for us, so far.
>
OK. When we start doing shrinking, however, I'd like to see a shrink step
performed before we destroy the memcg. That way we can at least reduce the
number of pages left lying around.
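
To make the sequence concrete, here is roughly where such a step would slot
into your test above. This is only an illustration: memory.force_empty today
reclaims user pages and knows nothing about kmem, so it is just a stand-in
for the per-memcg kmem shrink pass that doesn't exist yet.

# echo $$ > /dev/cgroup/memory/tasks
# echo 0 > /dev/cgroup/memory/c/memory.force_empty  # stand-in for a kmem-aware shrink
# rmdir /dev/cgroup/memory/c

With a real kmem-aware shrink in that slot, most of the pages backing the
soon-to-be-dead caches could already be freed by the time the rmdir happens.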
Thread overview:
2012-02-27 22:58 [PATCH 00/10] memcg: Kernel Memory Accounting Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 01/10] memcg: Kernel memory accounting infrastructure Suleiman Souhlal
2012-02-28 13:10 ` Glauber Costa
2012-02-29 0:37 ` Suleiman Souhlal
2012-02-28 13:11 ` Glauber Costa
2012-02-27 22:58 ` [PATCH 02/10] memcg: Uncharge all kmem when deleting a cgroup Suleiman Souhlal
2012-02-28 19:00 ` Glauber Costa
2012-02-29 0:24 ` Suleiman Souhlal
2012-02-29 16:51 ` Glauber Costa [this message]
2012-02-29 6:22 ` KAMEZAWA Hiroyuki
2012-02-29 19:00 ` Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 03/10] memcg: Reclaim when more than one page needed Suleiman Souhlal
2012-02-29 6:18 ` KAMEZAWA Hiroyuki
2012-02-27 22:58 ` [PATCH 04/10] memcg: Introduce __GFP_NOACCOUNT Suleiman Souhlal
2012-02-29 6:00 ` KAMEZAWA Hiroyuki
2012-02-29 16:53 ` Glauber Costa
2012-02-29 19:09 ` Suleiman Souhlal
2012-03-01 0:10 ` KAMEZAWA Hiroyuki
2012-03-01 0:24 ` Glauber Costa
2012-03-01 6:05 ` KAMEZAWA Hiroyuki
2012-03-03 14:22 ` Glauber Costa
2012-03-03 16:38 ` Suleiman Souhlal
2012-03-03 23:24 ` Glauber Costa
2012-03-04 0:10 ` Suleiman Souhlal
2012-03-06 10:36 ` Glauber Costa
2012-03-06 16:13 ` Suleiman Souhlal
2012-03-06 18:31 ` Glauber Costa
2012-02-27 22:58 ` [PATCH 05/10] memcg: Slab accounting Suleiman Souhlal
2012-02-28 13:24 ` Glauber Costa
2012-02-28 23:31 ` Suleiman Souhlal
2012-02-29 17:00 ` Glauber Costa
2012-02-27 22:58 ` [PATCH 06/10] memcg: Track all the memcg children of a kmem_cache Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 07/10] memcg: Stop res_counter underflows Suleiman Souhlal
2012-02-28 13:31 ` Glauber Costa
2012-02-28 23:07 ` Suleiman Souhlal
2012-02-29 17:05 ` Glauber Costa
2012-02-29 19:17 ` Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 08/10] memcg: Add CONFIG_CGROUP_MEM_RES_CTLR_KMEM_ACCT_ROOT Suleiman Souhlal
2012-02-28 13:34 ` Glauber Costa
2012-02-28 23:36 ` Suleiman Souhlal
2012-02-28 23:54 ` KAMEZAWA Hiroyuki
2012-02-29 17:09 ` Glauber Costa
2012-02-29 19:24 ` Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 09/10] memcg: Per-memcg memory.kmem.slabinfo file Suleiman Souhlal
2012-02-27 22:58 ` [PATCH 10/10] memcg: Document kernel memory accounting Suleiman Souhlal
2012-02-27 23:05 ` Randy Dunlap
2012-02-28 8:49 ` [PATCH 00/10] memcg: Kernel Memory Accounting Pekka Enberg
2012-02-28 22:12 ` Suleiman Souhlal
2012-02-28 13:03 ` Glauber Costa
2012-02-28 22:47 ` Suleiman Souhlal
2012-02-29 16:47 ` Glauber Costa
2012-02-29 19:28 ` Suleiman Souhlal