From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 14 May 2009 09:08:02 +0900
From: KAMEZAWA Hiroyuki
Subject: Re: [RFC] Low overhead patches for the memory resource controller
Message-Id: <20090514090802.c5ac2246.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090513153218.GQ13394@balbir.in.ibm.com>
References: <20090513153218.GQ13394@balbir.in.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
To: balbir@linux.vnet.ibm.com
Cc: "linux-mm@kvack.org", Andrew Morton, "nishimura@mxp.nes.nec.co.jp",
	"lizf@cn.fujitsu.com", KOSAKI Motohiro

On Wed, 13 May 2009 21:02:18 +0530
Balbir Singh wrote:

> Important: Not for inclusion, for discussion only
> 
> I've been experimenting with a version of the patches below. They add
> a PCGF_ROOT flag for tracking pages belonging to the root cgroup and
> disable LRU manipulation for them.
> 
> Caveats:
> 
> 1. I've not checked accounting; accounting might be broken
> 2. I've not yet made the root cgroup non-limitable; we need to disable
>    hard limits once we agree to go with this
> 
> Tests
> 
> Quick tests show an improvement with AIM9
> 
>               mmotm+patch    mmotm-08-may-2009
> AIM9          1338.57        1338.17
> Dbase         18034.16       16021.58
> New Dbase     18482.24       16518.54
> Shared        9935.98        8882.11
> Compute       16619.81       15226.13
> 
> Comments on the approach much appreciated
> 
> Feature: Remove the overhead associated with the root cgroup
> 
> From: Balbir Singh
> 
> This patch changes the memory cgroup and removes the overhead associated
> with accounting all pages in the root cgroup. As a side-effect, we can
> no longer set a memory hard limit in the root cgroup.
> 
> A new flag is used to track page_cgroup associated with the root cgroup
> pages.

Hmm? How about ignoring memcg completely when the thread belongs to the
root cgroup, rather than this halfway method?

Thanks,
-Kame

> ---
> 
>  include/linux/page_cgroup.h |    5 +++++
>  mm/memcontrol.c             |   23 +++++++++++++++++------
>  mm/page_cgroup.c            |    1 -
>  3 files changed, 22 insertions(+), 7 deletions(-)
> 
> 
> diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
> index 7339c7b..9c88e85 100644
> --- a/include/linux/page_cgroup.h
> +++ b/include/linux/page_cgroup.h
> @@ -26,6 +26,7 @@ enum {
>  	PCG_LOCK,  /* page cgroup is locked */
>  	PCG_CACHE, /* charged as cache */
>  	PCG_USED,  /* this object is in use. */
> +	PCG_ROOT,  /* page belongs to root cgroup */
>  };
>  
>  #define TESTPCGFLAG(uname, lname)			\
> @@ -46,6 +47,10 @@ TESTPCGFLAG(Cache, CACHE)
>  TESTPCGFLAG(Used, USED)
>  CLEARPCGFLAG(Used, USED)
>  
> +SETPCGFLAG(Root, ROOT)
> +CLEARPCGFLAG(Root, ROOT)
> +TESTPCGFLAG(Root, ROOT)
> +
>  static inline int page_cgroup_nid(struct page_cgroup *pc)
>  {
>  	return page_to_nid(pc->page);
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9712ef7..2750bed 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -43,6 +43,7 @@
>  
>  struct cgroup_subsys mem_cgroup_subsys __read_mostly;
>  #define MEM_CGROUP_RECLAIM_RETRIES	5
> +struct mem_cgroup *root_mem_cgroup __read_mostly;
>  
>  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
>  /* Turned on only when memory cgroup is enabled && really_do_swap_account = 0 */
> @@ -196,6 +197,7 @@ enum charge_type {
>  #define PCGF_CACHE	(1UL << PCG_CACHE)
>  #define PCGF_USED	(1UL << PCG_USED)
>  #define PCGF_LOCK	(1UL << PCG_LOCK)
> +#define PCGF_ROOT	(1UL << PCG_ROOT)
>  static const unsigned long
>  pcg_default_flags[NR_CHARGE_TYPE] = {
>  	PCGF_CACHE | PCGF_USED | PCGF_LOCK, /* File Cache */
> @@ -422,6 +424,8 @@ void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru)
>  	/* can happen while we handle swapcache. */
>  	if (list_empty(&pc->lru) || !pc->mem_cgroup)
>  		return;
> +	if (PageCgroupRoot(pc))
> +		return;
>  	/*
>  	 * We don't check PCG_USED bit. It's cleared when the "page" is finally
>  	 * removed from global LRU.
> @@ -452,8 +456,8 @@ void mem_cgroup_rotate_lru_list(struct page *page, enum lru_list lru)
>  	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
>  	 */
>  	smp_rmb();
> -	/* unused page is not rotated. */
> -	if (!PageCgroupUsed(pc))
> +	/* unused or root page is not rotated. */
> +	if (!PageCgroupUsed(pc) || PageCgroupRoot(pc))
>  		return;
>  	mz = page_cgroup_zoneinfo(pc);
>  	list_move(&pc->lru, &mz->lists[lru]);
> @@ -472,7 +476,7 @@ void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru)
>  	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
>  	 */
>  	smp_rmb();
> -	if (!PageCgroupUsed(pc))
> +	if (!PageCgroupUsed(pc) || PageCgroupRoot(pc))
>  		return;
>  
>  	mz = page_cgroup_zoneinfo(pc);
> @@ -1114,9 +1118,12 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
>  		css_put(&mem->css);
>  		return;
>  	}
> -	pc->mem_cgroup = mem;
> -	smp_wmb();
> -	pc->flags = pcg_default_flags[ctype];
> +	if (mem != root_mem_cgroup) {
> +		pc->mem_cgroup = mem;
> +		smp_wmb();
> +		pc->flags = pcg_default_flags[ctype];
> +	} else
> +		SetPageCgroupRoot(pc);
>  
>  	mem_cgroup_charge_statistics(mem, pc, true);
>  
> @@ -1521,6 +1528,8 @@ __mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype)
>  	mem_cgroup_charge_statistics(mem, pc, false);
>  
>  	ClearPageCgroupUsed(pc);
> +	if (mem == root_mem_cgroup)
> +		ClearPageCgroupRoot(pc);
>  	/*
>  	 * pc->mem_cgroup is not cleared here. It will be accessed when it's
>  	 * freed from LRU. This is safe because uncharged page is expected not
> @@ -2504,6 +2513,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
>  	if (cont->parent == NULL) {
>  		enable_swap_cgroup();
>  		parent = NULL;
> +		root_mem_cgroup = mem;
>  	} else {
>  		parent = mem_cgroup_from_cont(cont->parent);
>  		mem->use_hierarchy = parent->use_hierarchy;
> @@ -2532,6 +2542,7 @@ mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
>  	return &mem->css;
>  free_out:
>  	__mem_cgroup_free(mem);
> +	root_mem_cgroup = NULL;
>  	return ERR_PTR(error);
>  }
>  
> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> index 09b73c5..6145ff6 100644
> --- a/mm/page_cgroup.c
> +++ b/mm/page_cgroup.c
> @@ -276,7 +276,6 @@ void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat)
>  
>  #endif
>  
> -
>  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
>  
>  static DEFINE_MUTEX(swap_cgroup_mutex);
> 
> -- 
> 	Thanks!
> 	Balbir

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org