From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: balbir@linux.vnet.ibm.com, "xemul@openvz.org" <xemul@openvz.org>,
"hugh@veritas.com" <hugh@veritas.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
menage@google.com
Subject: [RFC] [PATCH 3/9] memcg: move_account between groups
Date: Thu, 11 Sep 2008 20:14:51 +0900 [thread overview]
Message-ID: <20080911201451.6aecd29a.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20080911200855.94d33d3b.kamezawa.hiroyu@jp.fujitsu.com>
This patch provides a function to move the accounting of a page between
mem_cgroups.
This move of a page_cgroup is done under the following conditions:
 - the page is locked.
 - the lru_lock of both the source and destination mem_cgroup is held.
Therefore, a routine which touches pc->mem_cgroup without holding lock_page()
should confirm that pc->mem_cgroup is still valid. Typical code looks like the
following.
(while the page is not under lock_page())
	mem = pc->mem_cgroup;
	mz = page_cgroup_zoneinfo(pc);
	spin_lock_irqsave(&mz->lru_lock, flags);
	if (pc->mem_cgroup == mem)
		..... /* some list handling */
	spin_unlock_irqrestore(&mz->lru_lock, flags);
If you find a page_cgroup on a mem_cgroup's LRU under mz->lru_lock, you don't
have to worry about anything.
Changelog: (v2) -> (v3)
 - added lock_page_cgroup().
 - split out from the new-force-empty patch.
 - added how-to-use text.
 - fixed a race in __mem_cgroup_uncharge_common().
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/memcontrol.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 71 insertions(+), 3 deletions(-)
Index: mmtom-2.6.27-rc5+/mm/memcontrol.c
===================================================================
--- mmtom-2.6.27-rc5+.orig/mm/memcontrol.c
+++ mmtom-2.6.27-rc5+/mm/memcontrol.c
@@ -428,6 +428,7 @@ int task_in_mem_cgroup(struct task_struc
void mem_cgroup_move_lists(struct page *page, enum lru_list lru)
{
struct page_cgroup *pc;
+ struct mem_cgroup *mem;
struct mem_cgroup_per_zone *mz;
unsigned long flags;
@@ -446,9 +447,14 @@ void mem_cgroup_move_lists(struct page *
pc = page_get_page_cgroup(page);
if (pc) {
+ mem = pc->mem_cgroup;
mz = page_cgroup_zoneinfo(pc);
spin_lock_irqsave(&mz->lru_lock, flags);
- __mem_cgroup_move_lists(pc, lru);
+ /*
+ * check against the race with move_account.
+ */
+ if (likely(mem == pc->mem_cgroup))
+ __mem_cgroup_move_lists(pc, lru);
spin_unlock_irqrestore(&mz->lru_lock, flags);
}
unlock_page_cgroup(page);
@@ -569,6 +575,67 @@ unsigned long mem_cgroup_isolate_pages(u
return nr_taken;
}
+/**
+ * mem_cgroup_move_account - move account of the page
+ * @page ... the page being moved.
+ * @pc ... page_cgroup of the page.
+ * @from ... mem_cgroup the page is moved from.
+ * @to ... mem_cgroup the page is moved to.
+ *
+ * The caller must ensure the following:
+ * 1. lock the page by lock_page().
+ * 2. disable irq.
+ * 3. lru_lock of old mem_cgroup should be held.
+ * 4. pc is guaranteed to be valid and on mem_cgroup's LRU.
+ *
+ * Because we cannot call try_to_free_page() here, the caller must guarantee
+ * that this move never fails. Currently this is called only against the
+ * root cgroup, which has no resource limit.
+ * Returns 0 on success, 1 on failure.
+ */
+int mem_cgroup_move_account(struct page *page, struct page_cgroup *pc,
+ struct mem_cgroup *from, struct mem_cgroup *to)
+{
+ struct mem_cgroup_per_zone *from_mz, *to_mz;
+ int nid, zid;
+ int ret = 1;
+
+ VM_BUG_ON(!irqs_disabled());
+ VM_BUG_ON(!PageLocked(page));
+
+ nid = page_to_nid(page);
+ zid = page_zonenum(page);
+ from_mz = mem_cgroup_zoneinfo(from, nid, zid);
+ to_mz = mem_cgroup_zoneinfo(to, nid, zid);
+
+ if (res_counter_charge(&to->res, PAGE_SIZE)) {
+ /* Now we assume the destination has no limit, so no failure here. */
+ return ret;
+ }
+ if (try_lock_page_cgroup(page))
+ return ret;
+
+ if (page_get_page_cgroup(page) != pc)
+ goto out;
+
+ if (spin_trylock(&to_mz->lru_lock)) {
+ __mem_cgroup_remove_list(from_mz, pc);
+ css_put(&from->css);
+ res_counter_uncharge(&from->res, PAGE_SIZE);
+ pc->mem_cgroup = to;
+ css_get(&to->css);
+ __mem_cgroup_add_list(to_mz, pc);
+ ret = 0;
+ spin_unlock(&to_mz->lru_lock);
+ } else {
+ res_counter_uncharge(&to->res, PAGE_SIZE);
+ }
+out:
+ unlock_page_cgroup(page);
+
+ return ret;
+}
+
/*
* Charge the memory controller for page usage.
* Return
@@ -761,16 +828,24 @@ __mem_cgroup_uncharge_common(struct page
if ((ctype == MEM_CGROUP_CHARGE_TYPE_MAPPED)
&& ((PageCgroupCache(pc) || page_mapped(page))))
goto unlock;
-
+retry:
+ mem = pc->mem_cgroup;
mz = page_cgroup_zoneinfo(pc);
spin_lock_irqsave(&mz->lru_lock, flags);
+ if (ctype == MEM_CGROUP_CHARGE_TYPE_MAPPED &&
+ unlikely(mem != pc->mem_cgroup)) {
+ /* MAPPED accounting can be done without lock_page();
+ check for a race with mem_cgroup_move_account(). */
+ spin_unlock_irqrestore(&mz->lru_lock, flags);
+ goto retry;
+ }
__mem_cgroup_remove_list(mz, pc);
spin_unlock_irqrestore(&mz->lru_lock, flags);
page_assign_page_cgroup(page, NULL);
unlock_page_cgroup(page);
- mem = pc->mem_cgroup;
+
res_counter_uncharge(&mem->res, PAGE_SIZE);
css_put(&mem->css);
--