From: Michal Hocko <mhocko@suse.cz>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: linux-mm@kvack.org, Balbir Singh <bsingharora@gmail.com>,
	Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] memcg: get rid of percpu_charge_mutex lock
Date: Thu, 21 Jul 2011 13:47:04 +0200	[thread overview]
Message-ID: <20110721114704.GC27855@tiehlicka.suse.cz> (raw)
In-Reply-To: <20110721193051.cd3266e5.kamezawa.hiroyu@jp.fujitsu.com>

On Thu 21-07-11 19:30:51, KAMEZAWA Hiroyuki wrote:
> On Thu, 21 Jul 2011 09:58:24 +0200
> Michal Hocko <mhocko@suse.cz> wrote:
> 
> > percpu_charge_mutex protects against multiple simultaneous drains
> > of the per-cpu charge caches because we might otherwise end up with
> > too many work items. At least this was the case until 26fe6168
> > (memcg: fix percpu cached charge draining frequency) when we
> > introduced a more targeted draining for the async mode.
> > Now that the sync draining is targeted as well we can safely remove
> > the mutex because we will not send more work items than the current
> > number of CPUs.
> > FLUSHING_CACHED_CHARGE protects against sending the same work item
> > multiple times and the stock->nr_pages == 0 check prevents us from
> > pointlessly sending work when there is obviously nothing to be done.
> > This is of course racy but we can live with it because the race
> > window is really small (we would have to see FLUSHING_CACHED_CHARGE
> > cleared while nr_pages is still non-zero).
> > The only remaining place where we can race is the synchronous mode,
> > where we rely on the FLUSHING_CACHED_CHARGE test; the bit might have
> > been set by another drainer on the same group, but we should wait in
> > that case as well.
> > 
> > Signed-off-by: Michal Hocko <mhocko@suse.cz>
> 
> A concern.
> 
> > ---
> >  mm/memcontrol.c |   12 ++----------
> >  1 files changed, 2 insertions(+), 10 deletions(-)
> > 
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 8180cd9..9d49a12 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2065,7 +2065,6 @@ struct memcg_stock_pcp {
> >  #define FLUSHING_CACHED_CHARGE	(0)
> >  };
> >  static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
> > -static DEFINE_MUTEX(percpu_charge_mutex);
> >  
> >  /*
> >   * Try to consume stocked charge on this cpu. If success, one page is consumed
> > @@ -2166,7 +2165,8 @@ static void drain_all_stock(struct mem_cgroup *root_mem, bool sync)
> >  
> >  	for_each_online_cpu(cpu) {
> >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> > -		if (test_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > +		if (root_mem == stock->cached &&
> > +				test_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> >  			flush_work(&stock->work);
> 
> This new check doesn't handle hierarchy, does it?
> css_is_ancestor() will be required if you do this check.

Yes, you are right. Will fix it. I will add a helper for the check.
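
Something along these lines (an untested sketch; the helper name is
tentative):

	/*
	 * Is mem the same group as root_mem or does it live somewhere
	 * in root_mem's hierarchy subtree?
	 */
	static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_mem,
			struct mem_cgroup *mem)
	{
		if (root_mem == mem)
			return true;
		/* without use_hierarchy a group has no subtree to speak of */
		if (!root_mem->use_hierarchy)
			return false;
		/* true if root_mem's css is an ancestor of mem's css */
		return css_is_ancestor(&mem->css, &root_mem->css);
	}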

> BTW, this change should be in other patch, I think.

I have put the change here intentionally: previously we were protected
by the lock, so we could not race with anybody else and the check was
not necessary.
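
With the mutex gone, the sync path would then wait for other drainers
roughly like this (again an untested sketch, using the helper sketched
above):

	int cpu;

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *mem = stock->cached;

		/* nothing cached for this cpu means nothing to wait for */
		if (!mem || !stock->nr_pages)
			continue;
		/* only wait for drains targeting our own hierarchy */
		if (mem_cgroup_same_or_subtree(root_mem, mem) &&
				test_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
			flush_work(&stock->work);
	}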

> 
> Thanks,
> -Kame

Thanks
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


Thread overview: 26+ messages
2011-07-21  9:41 [PATCH 0/4] memcg: cleanup per-cpu charge caches + fix unnecessary reclaim if there are still cached charges Michal Hocko
2011-07-21  7:38 ` [PATCH 1/4] memcg: do not try to drain per-cpu caches without pages Michal Hocko
2011-07-21 10:12   ` KAMEZAWA Hiroyuki
2011-07-21 11:36     ` Michal Hocko
2011-07-21 23:44       ` KAMEZAWA Hiroyuki
2011-07-22  9:19         ` Michal Hocko
2011-07-22  9:28           ` KAMEZAWA Hiroyuki
2011-07-22  9:58             ` Michal Hocko
2011-07-22 10:23               ` Michal Hocko
2011-07-21  7:50 ` [PATCH 2/4] memcg: unify sync and async per-cpu charge cache draining Michal Hocko
2011-07-21 10:25   ` KAMEZAWA Hiroyuki
2011-07-21 11:36     ` Michal Hocko
2011-07-21  7:58 ` [PATCH 3/4] memcg: get rid of percpu_charge_mutex lock Michal Hocko
2011-07-21 10:30   ` KAMEZAWA Hiroyuki
2011-07-21 11:47     ` Michal Hocko [this message]
2011-07-21 12:42       ` Michal Hocko
2011-07-21 23:49         ` KAMEZAWA Hiroyuki
2011-07-22  9:21           ` Michal Hocko
2011-07-22  0:27         ` Daisuke Nishimura
2011-07-22  9:41           ` Michal Hocko
2011-07-21  8:28 ` [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges Michal Hocko
2011-07-21 10:54   ` KAMEZAWA Hiroyuki
2011-07-21 12:30     ` Michal Hocko
2011-07-21 23:56       ` KAMEZAWA Hiroyuki
2011-07-22  0:18         ` KAMEZAWA Hiroyuki
2011-07-22  9:54         ` Michal Hocko
