From: Sha Zhengju <handai.szj@gmail.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
Cgroups <cgroups@vger.kernel.org>,
Wu Fengguang <fengguang.wu@intel.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Sha Zhengju <handai.szj@taobao.com>, Mel Gorman <mgorman@suse.de>,
Greg Thelen <gthelen@google.com>,
Glauber Costa <glommer@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH V4 5/6] memcg: patch mem_cgroup_{begin,end}_update_page_stat() out if only root memcg exists
Date: Sat, 13 Jul 2013 12:15:39 +0800
Message-ID: <CAFj3OHU3UQ=25J=PMa5qRzkVejN10e92x=nEbQh2s08A8Od7Uw@mail.gmail.com>
In-Reply-To: <20130712132550.GD15307@dhcp22.suse.cz>
On 2013-7-12 at 9:25 PM, "Michal Hocko" <mhocko@suse.cz> wrote:
>
> On Fri 12-07-13 20:59:24, Sha Zhengju wrote:
> > Add cc to Glauber
> >
> > On Thu, Jul 11, 2013 at 10:56 PM, Michal Hocko <mhocko@suse.cz> wrote:
> > > On Sat 06-07-13 01:33:43, Sha Zhengju wrote:
> > >> From: Sha Zhengju <handai.szj@taobao.com>
> > >>
> > >> If memcg is enabled and no non-root memcg exists, all allocated
> > >> pages belong to root_mem_cgroup and will go through the root memcg
> > >> statistics routines. So, to reduce overhead after adding memcg
> > >> dirty/writeback accounting to hot paths, we use a jump label to
> > >> patch mem_cgroup_{begin,end}_update_page_stat() in or out when not
> > >> used.
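
(The jump-label pattern the changelog describes is roughly the following; a
simplified sketch of the idea, not the literal patch, using the
memcg_inuse_key name that comes up later in this thread:)

	/* armed once the first non-root memcg is created */
	extern struct static_key memcg_inuse_key;

	static inline void mem_cgroup_begin_update_page_stat(struct page *page,
						bool *locked, unsigned long *flags)
	{
		/* compiles to a nop while only root_mem_cgroup exists */
		if (static_key_false(&memcg_inuse_key))
			__mem_cgroup_begin_update_page_stat(page, locked, flags);
	}
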
> > >
> > > I do not think this is enough. How much do you save? One atomic read.
> > > This doesn't seem like a killer.
> > >
> > > I hoped we could simply not account at all and move counters to the
> > > root cgroup once the label gets enabled.
> >
> > I have thought of this approach before, but it would probably run into
> > another issue: each zone has a percpu stock named ->pageset to
> > optimize the increment and decrement operations, and I haven't figured
> > out a simpler and cheaper way to handle those stocked numbers when
> > moving the global counters to the root cgroup. Maybe we can just leave
> > them alone and afford the approximation?
>
> You can read per-cpu diffs during transition and tolerate small
> races. Or maybe simply summing NR_FILE_DIRTY for all zones would be
> sufficient.
Thanks, I'll have a try.
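Something along those lines, perhaps — a rough, untested sketch using the
existing vmstat helpers:

	unsigned long nr_dirty = 0;
	struct zone *zone;

	/*
	 * Fold the global dirty count into root_mem_cgroup once at the
	 * transition, tolerating the small per-cpu drift noted above.
	 * global_page_state(NR_FILE_DIRTY) computes the same sum.
	 */
	for_each_populated_zone(zone)
		nr_dirty += zone_page_state(zone, NR_FILE_DIRTY);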
>
> > Glauber has already done a lot of work here; in his previous patchset
> > he also tried to move some global stats to root
> > (http://comments.gmane.org/gmane.linux.kernel.cgroups/6291). May I
> > steal some of your ideas here, Glauber? :P
> >
> >
> > >
> > > Besides that, the current patch is racy. Consider what happens when:
> > >
> > > CPU A                                  CPU B
> > > mem_cgroup_begin_update_page_stat
> > >                                        arm_inuse_keys
> > >                                        mem_cgroup_move_account
> > >                                          mem_cgroup_move_account_page_stat
> > > mem_cgroup_end_update_page_stat
> > >
> > > The race window is small of course but it is there. I guess we need
> > > rcu_read_lock at least.
> >
> > Yes, you're right. I'm afraid we need to take care of the race in the
> > next updates as well. But mem_cgroup_begin/end_update_page_stat()
> > already take the rcu lock, so maybe here we only need a
> > synchronize_rcu() after changing memcg_inuse_key?
>
> Your patch doesn't take rcu_read_lock. synchronize_rcu might work, but I
> am still not sure this would help with the overhead, which IMHO comes
> from the accounting itself, not from the single atomic_read +
> rcu_read_lock on the hot path of mem_cgroup_{begin,end}_update_page_stat.
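
(For reference, that hot path is roughly the following in the current
include/linux/memcontrol.h, simplified:)

	static inline void mem_cgroup_begin_update_page_stat(struct page *page,
						bool *locked, unsigned long *flags)
	{
		if (mem_cgroup_disabled())
			return;
		rcu_read_lock();
		*locked = false;
		/* take the slow path only while a task move is in flight */
		if (atomic_read(&memcg_moving))
			__mem_cgroup_begin_update_page_stat(page, locked, flags);
	}
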
I meant that I'll try to zero out the accounting overhead in the next
version, but the race will probably also occur in that case.
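Concretely, I have something like this in mind for the slow path (a rough
sketch):

	/* on creation of the first non-root memcg */
	static_key_slow_inc(&memcg_inuse_key);
	/*
	 * Wait until every task that entered
	 * mem_cgroup_begin_update_page_stat() before the key flipped has
	 * left its RCU read-side section.
	 */
	synchronize_rcu();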
Thanks!
>
> [...]
> --
> Michal Hocko
> SUSE Labs