linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Greg Thelen <gthelen@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	containers@lists.osdl.org, Andrea Righi <arighi@develer.com>,
	Balbir Singh <balbir@linux.vnet.ibm.com>,
	Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
	Minchan Kim <minchan.kim@gmail.com>,
	Ciju Rajan K <ciju@linux.vnet.ibm.com>,
	David Rientjes <rientjes@google.com>
Subject: [RFC][PATCH 2/2] memcg: move_account optimization by reducing locks (Re: [PATCH v3 04/11] memcg: add lock to synchronize page accounting and migration)
Date: Tue, 19 Oct 2010 13:45:41 +0900	[thread overview]
Message-ID: <20101019134541.455eeaba.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20101019134308.3fe81638.kamezawa.hiroyu@jp.fujitsu.com>

From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Reduce locking at account moving.

The patch "memcg: add lock to synchronize page accounting and migration" added
a new lock and doubled the locking cost. This patch reduces that cost.

When moving charges by scanning a page table, we do all the work under
pte_lock. This means we can never race with "uncharge", and because of that
we can skip lock_page_cgroup() in that situation.
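
As an illustrative user-space analogue (not kernel code; every name below is a
stand-in for its kernel counterpart), the conditional-locking idea looks like
this: when the caller passes stable == true, an outer lock -- pte_lock in this
patch -- already pins the used bit and pc->mem_cgroup, so the per-page lock
can be skipped.

```c
/* Sketch of the conditional-locking pattern; names are stand-ins,
 * not kernel API. */
#include <pthread.h>
#include <stdbool.h>

struct page_cgroup {
    pthread_mutex_t lock; /* stands in for lock_page_cgroup() */
    int mem_cgroup;       /* id of the owning group */
    bool used;            /* stands in for the PCG_USED bit */
};

/* Returns 0 on success, -1 if the page no longer belongs to 'from'. */
static int move_account(struct page_cgroup *pc, int from, int to, bool stable)
{
    int ret = -1;

    /* Lock only when the caller cannot guarantee stability. */
    if (!stable)
        pthread_mutex_lock(&pc->lock);
    if (pc->used && pc->mem_cgroup == from) {
        pc->mem_cgroup = to;  /* the actual charge move */
        ret = 0;
    }
    if (!stable)
        pthread_mutex_unlock(&pc->lock);
    return ret;
}
```

The fast path pays nothing for the skipped lock; correctness rests entirely on
the outer lock held by the caller, which is why the real patch documents that
precondition in a comment.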

The cost of moving an 8GB anon process
==
[mmotm-1013]
Before:
	real    0m0.792s
	user    0m0.000s
	sys     0m0.780s

[dirty-limit v3 patch]
	real    0m0.854s
	user    0m0.000s
	sys     0m0.842s

[get/put optimization]
	real    0m0.757s
	user    0m0.000s
	sys     0m0.746s

[this patch]
	real    0m0.732s
	user    0m0.000s
	sys     0m0.721s

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 Documentation/cgroups/memory.txt |   23 ++++++++++++++++++++++-
 mm/memcontrol.c                  |   29 ++++++++++++++++++++++-------
 2 files changed, 44 insertions(+), 8 deletions(-)

Index: dirty_limit_new/mm/memcontrol.c
===================================================================
--- dirty_limit_new.orig/mm/memcontrol.c
+++ dirty_limit_new/mm/memcontrol.c
@@ -2386,7 +2386,6 @@ static void __mem_cgroup_move_account(st
 {
 	VM_BUG_ON(from == to);
 	VM_BUG_ON(PageLRU(pc->page));
-	VM_BUG_ON(!PageCgroupLocked(pc));
 	VM_BUG_ON(!PageCgroupUsed(pc));
 	VM_BUG_ON(pc->mem_cgroup != from);
 
@@ -2424,19 +2423,32 @@ static void __mem_cgroup_move_account(st
  * __mem_cgroup_move_account()
  */
 static int mem_cgroup_move_account(struct page_cgroup *pc,
-		struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge)
+		struct mem_cgroup *from, struct mem_cgroup *to,
+		bool uncharge, bool stable)
 {
 	int ret = -EINVAL;
 	unsigned long flags;
-
-	lock_page_cgroup(pc);
+	/*
+	 * When stable==true, some lock (page_table_lock etc.) prevents
+	 * modification of the PCG_USED bit, and pc->mem_cgroup can never
+	 * become invalid. IOW, there is no race with charge/uncharge.
+	 * From another point of view, there may still be races with code
+	 * which accesses pc->mem_cgroup under lock_page_cgroup(), but such
+	 * code will see either the old or the new value, and neither value
+	 * is ever invalid while lock_page_cgroup() is held. So there is no
+	 * problem in skipping lock_page_cgroup() when we can.
+	 */
+	if (!stable)
+		lock_page_cgroup(pc);
 	if (PageCgroupUsed(pc) && pc->mem_cgroup == from) {
 		move_lock_page_cgroup(pc, &flags);
 		__mem_cgroup_move_account(pc, from, to, uncharge);
 		move_unlock_page_cgroup(pc, &flags);
 		ret = 0;
 	}
-	unlock_page_cgroup(pc);
+	if (!stable)
+		unlock_page_cgroup(pc);
 	/*
 	 * check events
 	 */
@@ -2474,7 +2486,7 @@ static int mem_cgroup_move_parent(struct
 	if (ret || !parent)
 		goto put_back;
 
-	ret = mem_cgroup_move_account(pc, child, parent, true);
+	ret = mem_cgroup_move_account(pc, child, parent, true, false);
 	if (ret)
 		mem_cgroup_cancel_charge(parent);
 put_back:
@@ -5156,6 +5168,7 @@ retry:
 		struct page *page;
 		struct page_cgroup *pc;
 		swp_entry_t ent;
+		bool mapped = false;
 
 		if (!mc.precharge)
 			break;
@@ -5163,12 +5176,14 @@ retry:
 		type = is_target_pte_for_mc(vma, addr, ptent, &target);
 		switch (type) {
 		case MC_TARGET_PAGE:
+			mapped = true;
+			/* Fall Through */
 		case MC_TARGET_UNMAPPED_PAGE:
 			page = target.page;
 			if (!isolate_lru_page(page)) {
 				pc = lookup_page_cgroup(page);
 				if (!mem_cgroup_move_account(pc, mc.from,
-						mc.to, false)) {
+						mc.to, false, mapped)) {
 					mc.precharge--;
 					/* we uncharge from mc.from later. */
 					mc.moved_charge++;
Index: dirty_limit_new/Documentation/cgroups/memory.txt
===================================================================
--- dirty_limit_new.orig/Documentation/cgroups/memory.txt
+++ dirty_limit_new/Documentation/cgroups/memory.txt
@@ -637,7 +637,28 @@ memory cgroup.
       | page_mapcount(page) > 1). You must enable Swap Extension(see 2.4) to
       | enable move of swap charges.
 
-8.3 TODO
+8.3 Implementation Detail
+
+  At moving, we need to take care of races. At first glance, there are
+  several sources of races when we overwrite pc->mem_cgroup.
+  - charge/uncharge
+  - file stat (dirty, writeback, etc.) accounting
+  - LRU add/remove
+
+  Against charge/uncharge, we do all "move" under pte_lock. So, if we move
+  charges of a mapped page, we don't need extra locks. If it is not mapped,
+  we need to take lock_page_cgroup().
+
+  Against file-stat accounting, we need some locks. The current implementation
+  uses 2-level locking: one light-weight, the other heavy.
+  The light-weight scheme uses a per-cpu counter. If someone is moving a
+  charge from a mem_cgroup, the per-cpu "caution" counter is incremented and
+  file-stat updates will use the heavy lock. This heavy lock is a special lock
+  for move_charge and ensures mutual exclusion when accessing pc->mem_cgroup.
+
+  Against LRU, we do isolate_lru_page() before move_account().
+
+8.4 TODO
 
 - Implement madvise(2) to let users decide the vma to be moved or not to be
   moved.
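
Outside the patch proper, a rough user-space sketch of the two-level scheme
described in section 8.3 (assumptions, not kernel API): movers raise a
"caution" counter, and stat updaters fall back to the heavy lock while it is
raised. The kernel additionally has to close the window where the counter
rises right after the fast-path check; that detail is omitted here.

```c
/* Two-level locking sketch: fast path when no move is in flight,
 * heavy lock otherwise. All names here are illustrative. */
#include <pthread.h>
#include <stdatomic.h>

static atomic_int moving_account;  /* the "caution" counter */
static pthread_mutex_t move_lock = PTHREAD_MUTEX_INITIALIZER; /* heavy lock */
static long file_dirty;            /* an example file-stat counter */

static void begin_move(void) { atomic_fetch_add(&moving_account, 1); }
static void end_move(void)   { atomic_fetch_sub(&moving_account, 1); }

static void account_dirty(long delta)
{
    if (atomic_load(&moving_account) == 0) {
        /* Fast path: no move in flight; in the kernel this is a cheap
         * per-cpu update with no shared lock taken. */
        file_dirty += delta;
        return;
    }
    /* Slow path: a mover may be rewriting pc->mem_cgroup; serialize
     * against it with the heavy lock. */
    pthread_mutex_lock(&move_lock);
    file_dirty += delta;
    pthread_mutex_unlock(&move_lock);
}
```

The point of the split is that moves are rare, so almost all stat updates stay
on the cheap path; only while a move is actually in progress does anyone pay
for the heavy lock.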

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

