linux-mm.kvack.org archive mirror
From: Glauber Costa <glommer@parallels.com>
To: Sha Zhengju <handai.szj@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Michal Hocko <mhocko@suse.cz>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Cgroups <cgroups@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: Re: per-cpu statistics
Date: Mon, 4 Mar 2013 11:25:11 +0400	[thread overview]
Message-ID: <51344C57.7030807@parallels.com> (raw)
In-Reply-To: <CAFj3OHXJckvDPWSnq9R8nZ00Sb0Juxq9oCrGCBeO0UZmgH6OzQ@mail.gmail.com>

On 03/01/2013 05:48 PM, Sha Zhengju wrote:
> Hi Glauber,
> 
> Forgive me, I'm replying not because I know the reason for the current
> per-cpu implementation, but because I noticed you're mentioning something
> I'm also interested in. Below are the details.
> 
> 
> I'm not sure I fully understand your points; the root memcg already
> doesn't charge pages and only does some page stat accounting
> (CACHE/RSS/SWAP).
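
For context, the CACHE/RSS/SWAP accounting being talked about here follows
the usual per-cpu counter scheme: each update only touches the current CPU's
slot, and readers fold all the slots together. Very roughly the shape below,
as a simplified userspace model with made-up names, not the actual
memcontrol.c code:

/* Simplified userspace model of the per-cpu stat counter pattern; the
 * real kernel code uses this_cpu_add()/for_each_online_cpu(), and all
 * names below are invented for illustration. */
#include <stdio.h>

#define NR_CPUS 4

enum stat_index { STAT_CACHE, STAT_RSS, STAT_SWAP, NR_STATS };

struct memcg_model {
	/* one counter slot per cpu per stat, bumped without any shared lock */
	long count[NR_CPUS][NR_STATS];
};

/* hot path: touch only the current cpu's slot, so charging stays cheap */
static void memcg_stat_add(struct memcg_model *memcg, int cpu,
			   enum stat_index idx, long val)
{
	memcg->count[cpu][idx] += val;
}

/* read side: sum all per-cpu slots (comparatively expensive, but rare) */
static long memcg_stat_read(struct memcg_model *memcg, enum stat_index idx)
{
	long sum = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += memcg->count[cpu][idx];
	return sum;
}

int main(void)
{
	struct memcg_model root = { { { 0 } } };

	memcg_stat_add(&root, 0, STAT_RSS, 1);
	memcg_stat_add(&root, 2, STAT_RSS, 1);
	printf("RSS pages: %ld\n", memcg_stat_read(&root, STAT_RSS));
	return 0;
}

The trade-off is the same as in the kernel: updates are cheap and
contention-free, while every read has to walk all CPUs.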

Can you point me to the final commits for this in the tree? I am using
the latest git mm from mhocko and it is not entirely clear to me what
you are talking about.

>  Now I'm also trying to do some
> optimization specific to the overhead of root memcg stat accounting,
> and the first attempt is posted here:
> https://lkml.org/lkml/2013/1/2/71 . But it only covers
> FILE_MAPPED/DIRTY/WRITEBACK (I've added the last two kinds of accounting
> in that patchset). Michal Hocko accepted the approach (so did Kame) and
> suggested I handle all the stats the same way, including CACHE/RSS.
> But I do not handle anything related to the memcg LRU, where I notice
> you have done some work.
> 
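The bypass described above presumably boils down to an early exit for the
root group on the hot path, with root's numbers recovered some other way on
the read side. One plausible shape, continuing the toy model from earlier in
the thread (again, the names are invented, and whether the posted series
really does it this way is an assumption I haven't checked):

/* Continues the model above (reuses struct memcg_model, memcg_stat_add(),
 * memcg_stat_read()).  global_count[] stands in for the global vm
 * counters, which are maintained regardless of memcg; recovering root's
 * numbers from them is an assumption, not something taken from this
 * thread. */
#include <stdbool.h>

static long global_count[NR_STATS];

struct root_aware_memcg {
	struct memcg_model stats;
	bool is_root;
};

/* hot path: skip the per-memcg bookkeeping entirely for the root group */
static void memcg_stat_account(struct root_aware_memcg *memcg, int cpu,
			       enum stat_index idx, long val)
{
	global_count[idx] += val;	/* global stats are kept anyway */
	if (memcg->is_root)
		return;			/* bypass: nothing else to do */
	memcg_stat_add(&memcg->stats, cpu, idx, val);
}

/* read side: root's figures come from the global counters instead */
static long memcg_stat_read_bypass(struct root_aware_memcg *memcg,
				   enum stat_index idx)
{
	if (memcg->is_root)
		return global_count[idx];
	return memcg_stat_read(&memcg->stats, idx);
}
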
Yes, the LRU is a bit tricky, and it is what is keeping me from posting
the patchset I have. I haven't fully finished it, but I am on my way.


> It's possible that we may take different approaches to bypassing root
> memcg stat accounting. The next round of that part will be sent out in
> the following few days (I'm doing some tests now), and any comments and
> collaboration are welcome. (Glad to cc you, of course, if you're also
> interested in it. :) )
> 

I am interested, of course. As you know, I started working on this a
while ago and had to interrupt it for a while. I resumed it last week,
but if you have managed to merge something already, I'd be happy to rebase.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 6+ messages
2013-02-28  7:59 Glauber Costa
2013-03-01 13:48 ` Sha Zhengju
2013-03-04  7:25   ` Glauber Costa [this message]
2013-03-05  7:17     ` Sha Zhengju
2013-03-04  0:55 ` Kamezawa Hiroyuki
2013-03-04  1:01   ` Tejun Heo
