linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	"bsingharora@gmail.com" <bsingharora@gmail.com>,
	Ying Han <yinghan@google.com>
Subject: Re: [BUGFIX][PATCH v3] memcg: fix behavior of per cpu charge cache draining.
Date: Fri, 10 Jun 2011 17:39:58 +0900	[thread overview]
Message-ID: <20110610173958.d9ab901c.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20110610081218.GC4832@tiehlicka.suse.cz>

On Fri, 10 Jun 2011 10:12:19 +0200
Michal Hocko <mhocko@suse.cz> wrote:

> On Thu 09-06-11 09:30:45, KAMEZAWA Hiroyuki wrote:
> > From 0ebd8a90a91d50c512e7c63e5529a22e44e84c42 Mon Sep 17 00:00:00 2001
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > Date: Wed, 8 Jun 2011 13:51:11 +0900
> > Subject: [PATCH] Fix behavior of per-cpu charge cache draining in memcg.
> > 
> > For performance, memory cgroup caches some "charge" from res_counter
> > into per cpu cache. This works well but because it's cache,
> > it needs to be flushed in some cases. Typical cases are
> > 	1. when someone hit limit.
> > 	2. when rmdir() is called and the charges need to be 0.
> > 
> > But "1" has a problem.
> > 
> > Recently, on large SMP machines, we see many kworker runs because
> > of flushing memcg's cache. The bad things in the implementation are
> > 
> > a) it's called before calling try_to_free_mem_cgroup_pages(),
> >    so it's called immediately when a task hits the limit.
> >    (I thought it was better to avoid running into memory reclaim,
> >     but it was a wrong decision.)
> > 
> > b) The drain code is called even if a cpu's cache belongs to
> >    a memcg unrelated to the one which hits the limit.
> > 
> > This patch fixes a) and b) by
> > 
> > A) delaying the flush until one run of try_to_free_mem_cgroup_pages()
> >    has finished, which decreases the number of calls.
> > B) checking whether the percpu cache contains useful data.
> > plus
> > C) checking that asynchronous percpu draining is not running.
> > 
> > BTW, the reason this patch replaces the atomic_t counter with a mutex
> > is to guarantee that the memcg pointed to by stock->cached is
> > not destroyed while we check its css_id.
> > 
> > Reported-by: Ying Han <yinghan@google.com>
> > Reviewed-by: Michal Hocko <mhocko@suse.cz>
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > 
> > Changelog:
> >  - fixed typos.
> >  - fixed rcu_read_lock() and added strict mutual exclusion between
> >    asynchronous and synchronous flushing. It's required for the
> >    validity of the cached pointer.
> >  - add root_mem->use_hierarchy check.
> > ---
> >  mm/memcontrol.c |   54 +++++++++++++++++++++++++++++++++++-------------------
> >  1 files changed, 35 insertions(+), 19 deletions(-)
> > 
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index bd9052a..3baddcb 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> [...]
> >  static struct mem_cgroup_per_zone *
> >  mem_cgroup_zoneinfo(struct mem_cgroup *mem, int nid, int zid)
> > @@ -1670,8 +1670,6 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> >  		victim = mem_cgroup_select_victim(root_mem);
> >  		if (victim == root_mem) {
> >  			loop++;
> > -			if (loop >= 1)
> > -				drain_all_stock_async();
> >  			if (loop >= 2) {
> >  				/*
> >  				 * If we have not been able to reclaim
> > @@ -1723,6 +1721,7 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> >  				return total;
> >  		} else if (mem_cgroup_margin(root_mem))
> >  			return total;
> > +		drain_all_stock_async(root_mem);
> >  	}
> >  	return total;
> >  }
> 
> I still think that we pointlessly reclaim even though we could have a
> lot of pages pre-charged in the cache (the more CPUs we have the more
> significant this might be).

The more CPUs, the higher the cost of scanning each piece of per-cpu memory,
which causes cache misses.

I know the placement of drain_all_stock_async() is not a big problem on my
host, which has 2-socket/8-core cpus. But assuming a 1000+ cpu host,
"when you hit the limit, you'll see 1000*128 bytes of cache misses and, in
the bad case, need to call test_and_set for 1000+ cpus" doesn't seem like
much of a win.

If we implement "call-drain-only-on-nearby-cpus", I think we can call it
before calling try_to_free_mem_cgroup_pages(). I'll add it to my TODO list.

What do you think?

Thanks,
-Kame



Thread overview: 9+ messages
2011-06-09  0:30 KAMEZAWA Hiroyuki
2011-06-09  1:30 ` Daisuke Nishimura
2011-06-10  8:12 ` Michal Hocko
2011-06-10  8:39   ` KAMEZAWA Hiroyuki [this message]
2011-06-10  9:08     ` Michal Hocko
2011-06-10  9:59       ` KAMEZAWA Hiroyuki
2011-06-10 11:04         ` Michal Hocko
2011-06-10 12:24           ` Hiroyuki Kamezawa
2011-06-10 13:31             ` Michal Hocko
