From: Glauber Costa <glommer@parallels.com>
To: Christoph Lameter <cl@linux.com>
Cc: Michal Hocko <mhocko@suse.cz>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, kamezawa.hiroyu@jp.fujitsu.com,
Tejun Heo <tj@kernel.org>, Li Zefan <lizefan@huawei.com>,
Greg Thelen <gthelen@google.com>,
Suleiman Souhlal <suleiman@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
devel@openvz.org, David Rientjes <rientjes@google.com>
Subject: Re: [PATCH v3 00/28] kmem limitation for memcg
Date: Tue, 29 May 2012 19:44:54 +0400 [thread overview]
Message-ID: <4FC4EEF6.2050204@parallels.com> (raw)
In-Reply-To: <alpine.DEB.2.00.1205290955270.4666@router.home>
On 05/29/2012 07:07 PM, Christoph Lameter wrote:
> On Mon, 28 May 2012, Glauber Costa wrote:
>
>>> It would be best to merge these with my patchset to extract common code
>>> from the allocators. The modifications of individual slab allocators would
>>> then be not necessary anymore and it would save us a lot of work.
>>>
>> Some of them would not, some of them would still be. But also please note that
>> the patches here that deal with differences between allocators are usually the
>> low hanging fruits compared to the rest.
>>
>> I agree that long term it not only better, but inevitable, if we are going to
>> merge both.
>>
>> But right now, I think we should agree with the implementation itself - so if
>> you have any comments on how I am handling these, I'd be happy to hear. Then
>> we can probably set up a tree that does both, or get your patches merged and
>> I'll rebase, etc.
>
> Just looked over the patchset and its quite intrusive.
Thank you very much, Christoph, appreciate it.
> I have never been
> fond of cgroups (IMHO hardware needs to be partitioned at physical
> boundaries) so I have not too much insight into what is going on in that
> area.
There is certainly a big market for that, and certainly a big market for
what we're doing as well. So there are users interested in container
technology, and I don't really see it as "partitioning here" vs
"partitioning there". They are just different approaches.
Moreover, not everyone using cgroups is doing containers. Some people
are isolating a single service, or a particular job.
I agree it is an intrusive change, but it used to be even more so. I did
my best to reduce its spread.
> The idea to just duplicate the caches leads to some weird stuff like the
> refcounting and the recovery of the arguments used during slab creation.
The refcounting is only needed so we can be sure the parent cache won't
go away while its child caches still exist. I can try to find a better
way to do that, specifically.
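To make the lifetime rule concrete, here is a minimal userspace sketch of what I mean (all names hypothetical, not the actual kernel code): each per-memcg child cache pins its parent, so the parent cannot be destroyed while any child remains.

```c
/* Hypothetical sketch of the parent/child cache lifetime rule:
 * each per-memcg child cache takes a reference on its parent,
 * so the parent cache cannot go away while children exist. */
struct cache {
	int refcount;          /* references pinning this cache */
	struct cache *parent;  /* 0 for a root cache */
};

static void cache_get(struct cache *c)
{
	c->refcount++;
}

/* Returns 1 when the last reference is dropped and the
 * cache may actually be destroyed. */
static int cache_put(struct cache *c)
{
	return --c->refcount == 0;
}

static void child_create(struct cache *child, struct cache *parent)
{
	child->parent = parent;
	child->refcount = 1;   /* held by the owning memcg */
	cache_get(parent);     /* keep the parent alive */
}

/* Drop the child; only then is the parent's pin released.
 * Returns 1 if this also allowed the parent to be destroyed. */
static int child_destroy(struct cache *child)
{
	if (cache_put(child))
		return cache_put(child->parent);
	return 0;
}
```

The point is only the ordering guarantee: the parent's destruction is deferred until the last child is gone, which is all the refcount buys us.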
>
> I think it may be simplest to only account for the pages used by a slab in
> a memcg. That code could be added to the functions in the slab allocators
> that interface with the page allocators. Those are not that performance
> critical and would do not much harm.
No, I don't think so. Well, accounting the page is easy, but when we do
a new allocation, we need to match a process to its corresponding page.
This will likely lead to flushing slub's internal per-cpu caches, for
instance, hurting performance. That is because once we allocate a
page, all objects on that page need to belong to the same cgroup.
Also, you talk about intrusiveness, but accounting pages is a lot more
intrusive, since you then need to know a lot about the internal
structure of each cache. Having the cache replicated has exactly the
effect of isolating it better.
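A minimal sketch of why replication sidesteps the page-matching problem (names hypothetical, not the actual patch code): each memcg gets its own copy of a cache, chosen once at allocation time. Since a slab page is only ever handed out by one cache, all objects on that page automatically belong to one cgroup, with no lookup inside the allocator's fast path.

```c
/* Hypothetical sketch of the cache-replication approach:
 * one copy of the cache per memcg, selected up front. */
#define MAX_MEMCG 8

struct kmem_cache_stub {
	int owner_memcg;  /* which cgroup this copy charges */
};

struct cache_family {
	/* index 0 is the root (unaccounted) cache */
	struct kmem_cache_stub *copies[MAX_MEMCG];
};

/* Pick the per-memcg copy; fall back to the root cache when the
 * cgroup has no copy (e.g. kmem accounting not enabled for it). */
static struct kmem_cache_stub *
cache_for(struct cache_family *f, int memcg_id)
{
	if (memcg_id >= 0 && memcg_id < MAX_MEMCG && f->copies[memcg_id])
		return f->copies[memcg_id];
	return f->copies[0];
}
```

With this shape, the allocator's internals never need to know about cgroups at all; the decision is made before the cache is even entered.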
I of course agree this is no walk in the park, but accounting something
that is internal to the cache, and that each cache uses and organizes
in its own private way, doesn't make it any better.
> If you need per object accounting then the cleanest solution would be to
> duplicate the per node arrays per memcg (or only the statistics) and have
> the kmem_cache structure only once in memory.
No, it's all per-page. Nothing here is per-object, maybe you
misunderstood something?
--