linux-mm.kvack.org archive mirror
* [RFC][PATCH 0/2] memcg: hierarchy support (v3)
@ 2008-06-04  4:58 KAMEZAWA Hiroyuki
From: KAMEZAWA Hiroyuki @ 2008-06-04  4:58 UTC
  To: linux-mm; +Cc: LKML, balbir, menage, xemul, yamamoto

Hi, this is the third version.

Although the code changes are small, the whole _tone_ of the code has
changed. I'm not in a hurry; any comments are welcome.

Based on 2.6.26-rc2-mm1 plus the memcg patches in the -mm queue.

Changes from v2:
 - Named the policy "HardWall".
 - Rewrote the code for readability and renamed some functions.
 - Added documentation text.
 - Added support for the hierarchy_model parameter.
   For now, no_hierarchy and hardwall_hierarchy are implemented.

HardWall Policy:
  - Designed for strict resource isolation under a hierarchy.
    Automatic load balancing between cgroups can break users'
    assumptions even when it is implemented very well.
  - The parent commits resource for all of its children:
      parent->usage = resource used by itself + resource moved to children.
    Of course, parent->limit >= parent->usage.
  - When a child's limit is set, the resource moves from the parent.
  - There is no automatic resource moving between parent and child.
    (A short C sketch of these rules follows.)
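
To make the accounting rule concrete, here is a minimal userspace
sketch of the resize path. The structure layout and the
hardwall_resize() helper are illustrative stand-ins for this
discussion, not code taken from the patch itself:

#include <stdio.h>

/*
 * Illustrative userspace sketch of the HardWall resize rule.
 * hardwall_resize() and this struct are made up for explanation,
 * not taken from the actual res_counter patch.
 */
struct res_counter {
	unsigned long usage;	/* own use + resource moved to children */
	unsigned long limit;
	struct res_counter *parent;
};

/* Move resource between parent and child when the child's limit changes. */
static int hardwall_resize(struct res_counter *child, unsigned long new_limit)
{
	struct res_counter *parent = child->parent;

	if (new_limit > child->limit) {
		/* Growing: the extra amount is charged to the parent. */
		unsigned long delta = new_limit - child->limit;

		if (parent && parent->usage + delta > parent->limit)
			return -1;	/* parent has no spare resource */
		if (parent)
			parent->usage += delta;
	} else {
		/* Shrinking: the freed amount returns to the parent. */
		unsigned long delta = child->limit - new_limit;

		if (child->usage > new_limit)
			return -1;	/* child already uses more than that */
		if (parent)
			parent->usage -= delta;
	}
	child->limit = new_limit;
	return 0;
}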

Example)
  1) Assume a cgroup with a 1GB limit (no tasks belong to it yet).
     - group A limit=1G, usage=0M.

  2) Create groups B and C under A.
     - group A limit=1G, usage=0M.
          - group B limit=0M, usage=0M.
          - group C limit=0M, usage=0M.

  3) Increase group B's limit to 300M.
     - group A limit=1G, usage=300M.
          - group B limit=300M, usage=0M.
          - group C limit=0M, usage=0M.

  4) Increase group C's limit to 500M.
     - group A limit=1G, usage=800M.
          - group B limit=300M, usage=0M.
          - group C limit=500M, usage=0M.

  5) Reduce group B's limit to 100M.
     - group A limit=1G, usage=600M.
          - group B limit=100M, usage=0M.
          - group C limit=500M, usage=0M.
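
Appended to the illustrative sketch above, this drives steps 1)
through 5) and reproduces the usage numbers in the walkthrough
(units are MB, so the 1G limit is 1024):

/* Demo driving the illustrative sketch above through the example. */
int main(void)
{
	struct res_counter a = { .usage = 0, .limit = 1024, .parent = NULL };
	struct res_counter b = { .usage = 0, .limit = 0, .parent = &a };
	struct res_counter c = { .usage = 0, .limit = 0, .parent = &a };

	hardwall_resize(&b, 300);	/* step 3: a.usage becomes 300 */
	hardwall_resize(&c, 500);	/* step 4: a.usage becomes 800 */
	hardwall_resize(&b, 100);	/* step 5: a.usage becomes 600 */

	/* prints "A: usage=600 limit=1024" */
	printf("A: usage=%lu limit=%lu\n", a.usage, a.limit);
	return 0;
}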


Thanks,
-Kame



Thread overview: 33+ messages
2008-06-04  4:58 [RFC][PATCH 0/2] memcg: hierarchy support (v3) KAMEZAWA Hiroyuki
2008-06-04  5:01 ` [RFC][PATCH 1/2] memcg: res_counter hierarchy KAMEZAWA Hiroyuki
2008-06-04  6:54   ` Li Zefan
2008-06-04  7:03     ` KAMEZAWA Hiroyuki
2008-06-04  7:20   ` YAMAMOTO Takashi
2008-06-04  7:32     ` KAMEZAWA Hiroyuki
2008-06-04  8:59   ` Paul Menage
2008-06-04  9:18     ` KAMEZAWA Hiroyuki
2008-06-09  9:48   ` Balbir Singh
2008-06-09 10:20     ` KAMEZAWA Hiroyuki
2008-06-09 10:37       ` Balbir Singh
2008-06-09 12:02       ` kamezawa.hiroyu
2008-06-11 23:24   ` Randy Dunlap
2008-06-12  4:59     ` KAMEZAWA Hiroyuki
2008-06-04  5:03 ` [RFC][PATCH 2/2] memcg: hardwall hierarchy for memcg KAMEZAWA Hiroyuki
2008-06-04  6:42   ` Li Zefan
2008-06-04  6:54     ` KAMEZAWA Hiroyuki
2008-06-04  8:59   ` Paul Menage
2008-06-04  9:26     ` KAMEZAWA Hiroyuki
2008-06-04 12:53       ` Daisuke Nishimura
2008-06-04 12:32   ` Daisuke Nishimura
2008-06-05  0:04     ` KAMEZAWA Hiroyuki
2008-06-09 10:56   ` Balbir Singh
2008-06-09 12:09   ` kamezawa.hiroyu
2008-06-11 23:24   ` Randy Dunlap
2008-06-12  5:00     ` KAMEZAWA Hiroyuki
2008-06-04  8:59 ` [RFC][PATCH 0/2] memcg: hierarchy support (v3) Paul Menage
2008-06-04  9:15   ` KAMEZAWA Hiroyuki
2008-06-04  9:15     ` Paul Menage
2008-06-04  9:31       ` KAMEZAWA Hiroyuki
2008-06-09  9:30 ` Balbir Singh
2008-06-09  9:55   ` KAMEZAWA Hiroyuki
2008-06-09 10:33     ` Balbir Singh
