Date: Mon, 15 Jun 2009 11:04:01 +0900
From: KAMEZAWA Hiroyuki
Subject: Re: Low overhead patches for the memory cgroup controller (v4)
Message-Id: <20090615110401.edb6355c.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090614183740.GD23577@balbir.in.ibm.com>
References: <20090515181639.GH4451@balbir.in.ibm.com>
	<20090518191107.8a7cc990.kamezawa.hiroyu@jp.fujitsu.com>
	<20090531235121.GA6120@balbir.in.ibm.com>
	<20090602085744.2eebf211.kamezawa.hiroyu@jp.fujitsu.com>
	<20090605053107.GF11755@balbir.in.ibm.com>
	<20090614183740.GD23577@balbir.in.ibm.com>
To: balbir@linux.vnet.ibm.com
Cc: Andrew Morton, "linux-mm@kvack.org", "nishimura@mxp.nes.nec.co.jp",
	"lizf@cn.fujitsu.com", "menage@google.com", KOSAKI Motohiro

On Mon, 15 Jun 2009 00:07:40 +0530
Balbir Singh wrote:

> Here is v4 of the patches, please review and comment.
>
> Feature: Remove the overhead associated with the root cgroup
>
> From: Balbir Singh
>
> Changelog v4 -> v3
> 1. Rebase to mmotm 9th June 2009
> 2. Remove PageCgroupRoot; we now have an account-LRU flag to indicate
>    that we do only accounting and no reclaim.
> 3. pcg_default_flags has been used again; since PCGF_ROOT is gone,
>    we set PCGF_ACCT_LRU only in mem_cgroup_add_lru_list
> 4. More LRU functions are aware of PageCgroupAcctLRU
>
> Changelog v3 -> v2
> 1. Rebase to mmotm 2nd June 2009
> 2. Test with some of the test cases recommended by Daisuke-san
>
> Changelog v2 -> v1
> 1. Rebase to latest mmotm
>
> This patch changes the memory cgroup and removes the overhead associated
> with accounting all pages in the root cgroup. As a side-effect, we can
> no longer set a memory hard limit in the root cgroup.
>
> A new flag to track whether the page has been accounted or not
> has been added as well. Flags are now set atomically for page_cgroup.
>
> Tests:
>
> Results (for v2)
>
> Obtained by
>
> 1. Using tmpfs for mounting the filesystem
> 2. Changing sync to be /bin/true (so that sync is not the bottleneck)
> 3. Using -s #cpus*40 -e #cpus*40
>
> Reaim
>                without patch    with patch
> AIM9           9532.48          9807.59
> dbase          19344.60         19285.71
> new_dbase      20101.65         20163.13
> shared         11827.77         11886.65
> compute        17317.38         17420.05
>

Hmm, how much overhead does this patch add for non-root cgroups?

It seems to be getting better in general, but I have a few suggestions
below.
> Signed-off-by: Balbir Singh
> ---
>
>  include/linux/page_cgroup.h |    5 ++++
>  mm/memcontrol.c             |   59 ++++++++++++++++++++++++++++++++++++-------
>  2 files changed, 54 insertions(+), 10 deletions(-)
>
>
> diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
> index 7339c7b..57c4d50 100644
> --- a/include/linux/page_cgroup.h
> +++ b/include/linux/page_cgroup.h
> @@ -26,6 +26,7 @@ enum {
>  	PCG_LOCK,  /* page cgroup is locked */
>  	PCG_CACHE, /* charged as cache */
>  	PCG_USED,  /* this object is in use. */
> +	PCG_ACCT_LRU, /* page has been accounted for */
>  };
>
>  #define TESTPCGFLAG(uname, lname)			\
> @@ -46,6 +47,10 @@ TESTPCGFLAG(Cache, CACHE)
>  TESTPCGFLAG(Used, USED)
>  CLEARPCGFLAG(Used, USED)
>
> +SETPCGFLAG(AcctLRU, ACCT_LRU)
> +CLEARPCGFLAG(AcctLRU, ACCT_LRU)
> +TESTPCGFLAG(AcctLRU, ACCT_LRU)
> +
>  static inline int page_cgroup_nid(struct page_cgroup *pc)
>  {
>  	return page_to_nid(pc->page);
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6ceb6f2..399d416 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -43,6 +43,7 @@
>
>  struct cgroup_subsys mem_cgroup_subsys __read_mostly;
>  #define MEM_CGROUP_RECLAIM_RETRIES	5
> +struct mem_cgroup *root_mem_cgroup __read_mostly;
>
>  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
>  /* Turned on only when memory cgroup is enabled && really_do_swap_account = 1 */
> @@ -219,6 +220,11 @@ static void mem_cgroup_get(struct mem_cgroup *mem);
>  static void mem_cgroup_put(struct mem_cgroup *mem);
>  static struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *mem);
>
> +static inline bool mem_cgroup_is_root(struct mem_cgroup *mem)
> +{
> +	return (mem == root_mem_cgroup);
> +}
> +
>  static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
>  					 struct page_cgroup *pc,
>  					 bool charge)
> @@ -378,15 +384,25 @@ void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru)
>  		return;
>  	pc = lookup_page_cgroup(page);
>  	/* can happen while we handle swapcache. */
> -	if (list_empty(&pc->lru) || !pc->mem_cgroup)
> +	mem = pc->mem_cgroup;
> +	if (!mem)
> +		return;
> +	if (mem_cgroup_is_root(mem)) {
> +		if (!PageCgroupAcctLRU(pc))
> +			return;
> +	} else if (list_empty(&pc->lru))
>  		return;
> +
>  	/*
>  	 * We don't check PCG_USED bit. It's cleared when the "page" is finally
>  	 * removed from global LRU.
>  	 */
>  	mz = page_cgroup_zoneinfo(pc);
> -	mem = pc->mem_cgroup;
>  	MEM_CGROUP_ZSTAT(mz, lru) -= 1;
> +	if (PageCgroupAcctLRU(pc)) {
> +		ClearPageCgroupAcctLRU(pc);
> +		return;
> +	}
>  	list_del_init(&pc->lru);
>  	return;
>  }

Looking through the whole code, PageCgroupAcctLRU() is meaningful only
when pc->mem_cgroup == root_mem_cgroup, right?

I wonder whether making PageCgroupAcctLRU() always meaningful and removing
all of the !list_empty(&pc->lru) checks is the way to go. If we do so,
this function can be written as
==
	if (!PageCgroupAcctLRU(pc))
		return;
	mem = pc->mem_cgroup;
	mz = page_cgroup_zoneinfo(pc);
	MEM_CGROUP_ZSTAT(mz, lru) -= 1;
	ClearPageCgroupAcctLRU(pc);
	/* We don't maintain LRU for root cgroup. Global LRU works for us. */
	if (!mem_cgroup_is_root(mem))
		list_del_init(&pc->lru);
==
This seems much more straightforward.

> @@ -410,8 +426,8 @@ void mem_cgroup_rotate_lru_list(struct page *page, enum lru_list lru)
>  	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
>  	 */
>  	smp_rmb();
> -	/* unused page is not rotated. */
> -	if (!PageCgroupUsed(pc))
> +	/* unused or root page is not rotated. */
> +	if (!PageCgroupUsed(pc) || PageCgroupAcctLRU(pc))
>  		return;
>  	mz = page_cgroup_zoneinfo(pc);
>  	list_move(&pc->lru, &mz->lists[lru]);
> @@ -435,6 +451,10 @@ void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
>
>  	mz = page_cgroup_zoneinfo(pc);
>  	MEM_CGROUP_ZSTAT(mz, lru) += 1;
> +	if (mem_cgroup_is_root(pc->mem_cgroup)) {
> +		SetPageCgroupAcctLRU(pc);
> +		return;
> +	}
>  	list_add(&pc->lru, &mz->lists[lru]);
>  }

With the above (my) rule, this would become
==
	SetPageCgroupAcctLRU(pc);
	if (!mem_cgroup_is_root(pc->mem_cgroup))
		list_add(&pc->lru, &mz->lists[lru]);
==

> @@ -445,12 +465,15 @@ void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
>   * it again. This function is only used to charge SwapCache. It's done under
>   * lock_page and expected that zone->lru_lock is never held.
>   */
> -static void mem_cgroup_lru_del_before_commit_swapcache(struct page *page)
> +static void mem_cgroup_lru_del_before_commit_swapcache(struct page *page,
> +						struct page_cgroup *pc)
>  {
>  	unsigned long flags;
>  	struct zone *zone = page_zone(page);
> -	struct page_cgroup *pc = lookup_page_cgroup(page);
>
> +	if (!pc->mem_cgroup ||
> +		(!PageCgroupAcctLRU(pc) && mem_cgroup_is_root(pc->mem_cgroup)))
> +		return;

The PageCgroupAcctLRU() check here is done without zone->lru_lock held, so
testing the flag is racy. Considering how "pagevec" works, this race window
tends to be big.
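For example, doing the test only after taking zone->lru_lock would avoid the
race. Just an untested sketch of the structure I mean (assuming the existing
"if (!PageCgroupUsed(pc)) mem_cgroup_del_lru_list(...)" body of this function
stays as it is):
==
	spin_lock_irqsave(&zone->lru_lock, flags);
	/* test pc->flags only while zone->lru_lock is held */
	if (pc->mem_cgroup &&
	    (PageCgroupAcctLRU(pc) || !mem_cgroup_is_root(pc->mem_cgroup)) &&
	    !PageCgroupUsed(pc))
		mem_cgroup_del_lru_list(page, page_lru(page));
	spin_unlock_irqrestore(&zone->lru_lock, flags);
==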
>  	spin_lock_irqsave(&zone->lru_lock, flags);
>  	/*
>  	 * Forget old LRU when this page_cgroup is *not* used. This Used bit
> @@ -461,12 +484,15 @@ static void mem_cgroup_lru_del_before_commit_swapcache(struct page *page)
>  	spin_unlock_irqrestore(&zone->lru_lock, flags);
>  }
>
> -static void mem_cgroup_lru_add_after_commit_swapcache(struct page *page)
> +static void mem_cgroup_lru_add_after_commit_swapcache(struct page *page,
> +						struct page_cgroup *pc)
>  {
>  	unsigned long flags;
>  	struct zone *zone = page_zone(page);
> -	struct page_cgroup *pc = lookup_page_cgroup(page);
>
> +	if (!pc->mem_cgroup ||
> +		(!PageCgroupAcctLRU(pc) && mem_cgroup_is_root(pc->mem_cgroup)))
> +		return;

Same comment as above.

>  	spin_lock_irqsave(&zone->lru_lock, flags);
>  	/* link when the page is linked to LRU but page_cgroup isn't */
>  	if (PageLRU(page) && list_empty(&pc->lru))
> @@ -478,8 +504,13 @@ static void mem_cgroup_lru_add_after_commit_swapcache(struct page *page)
>  void mem_cgroup_move_lists(struct page *page,
>  			   enum lru_list from, enum lru_list to)
>  {
> +	struct page_cgroup *pc = lookup_page_cgroup(page);
>  	if (mem_cgroup_disabled())
>  		return;
> +	smp_rmb();
> +	if (!pc->mem_cgroup ||
> +		(!PageCgroupAcctLRU(pc) && mem_cgroup_is_root(pc->mem_cgroup)))
> +		return;
>  	mem_cgroup_del_lru_list(page, from);
>  	mem_cgroup_add_lru_list(page, to);
>  }

Here, too.

> @@ -1114,6 +1145,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
>  		css_put(&mem->css);
>  		return;
>  	}
> +
>  	pc->mem_cgroup = mem;
>  	smp_wmb();
>  	pc->flags = pcg_default_flags[ctype];
> @@ -1418,9 +1450,10 @@ __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *ptr,
>  	if (!ptr)
>  		return;
>  	pc = lookup_page_cgroup(page);
> -	mem_cgroup_lru_del_before_commit_swapcache(page);
> +	smp_rmb();
> +	mem_cgroup_lru_del_before_commit_swapcache(page, pc);
>  	__mem_cgroup_commit_charge(ptr, pc, ctype);
> -	mem_cgroup_lru_add_after_commit_swapcache(page);
> +	mem_cgroup_lru_add_after_commit_swapcache(page, pc);

Why this change? When you add a memory barrier, please add a comment
explaining it.
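If the intent is to pair with the smp_wmb() in __mem_cgroup_commit_charge(),
something in the style of the existing comment in mem_cgroup_rotate_lru_list()
would do. A sketch (the wording is only my guess at your intent; please state
the real reason):
==
	pc = lookup_page_cgroup(page);
	/*
	 * pc->mem_cgroup is written before pc->flags in
	 * __mem_cgroup_commit_charge() (with smp_wmb() in between),
	 * so order the reads of pc->flags and pc->mem_cgroup here.
	 */
	smp_rmb();
	mem_cgroup_lru_del_before_commit_swapcache(page, pc);
==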
>  	/*
>  	 * Now swap is on-memory. This means this page may be
>  	 * counted both as mem and swap....
> @@ -2055,6 +2088,10 @@ static int mem_cgroup_write(struct cgroup *cont, struct cftype *cft,
>  	name = MEMFILE_ATTR(cft->private);
>  	switch (name) {
>  	case RES_LIMIT:
> +		if (mem_cgroup_is_root(memcg)) { /* Can't set limit on root */
> +			ret = -EINVAL;
> +			break;
> +		}

Could you add the corresponding Documentation update in the next post?

>  		/* This function does all necessary parse...reuse it */
>  		ret = res_counter_memparse_write_strategy(buffer, &val);
>  		if (ret)
> @@ -2521,6 +2558,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
>  	if (cont->parent == NULL) {
>  		enable_swap_cgroup();
>  		parent = NULL;
> +		root_mem_cgroup = mem;
>  	} else {
>  		parent = mem_cgroup_from_cont(cont->parent);
>  		mem->use_hierarchy = parent->use_hierarchy;
> @@ -2549,6 +2587,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
>  	return &mem->css;
>  free_out:
>  	__mem_cgroup_free(mem);
> +	root_mem_cgroup = NULL;
>  	return ERR_PTR(error);
>  }
>

Could you start a new thread for the next post? Once I read this and mark it
from unread to read, it sinks deep into the old part of my mail tree ;)

Regards,
-Kame

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org