From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: "linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: [BUGFIX][PATCH -mm] fix bad call of memcg_oom_recover at cancel move.
Date: Thu, 17 Jun 2010 17:20:34 +0900
Message-ID: <20100617172034.00ea8835.kamezawa.hiroyu@jp.fujitsu.com>
When cgroup_cancel_attach() is called via cgroup_attach_task(),
mem_cgroup_clear_mc() can be called even when no migration has been
set up. In that case, mc.to and mc.from are both NULL.
But memcg-clean-up-waiting-move-acct-v2.patch doesn't handle this
correctly and passes NULL to memcg_oom_recover().
Fix it.
BUG: unable to handle kernel paging request at 000000000000114c
IP: [<ffffffff81153bb9>] memcg_oom_recover+0x9/0x30
PGD 61ce4b067 PUD 613ea0067 PMD 0
Oops: 0000 [#1] SMP
<snip>
Call Trace:
[<ffffffff81155359>] mem_cgroup_clear_mc+0x119/0x1c0
[<ffffffff811554de>] mem_cgroup_cancel_attach+0xe/0x10
[<ffffffff810b619c>] cgroup_attach_task+0x26c/0x2c0
[<ffffffff810b6257>] cgroup_tasks_write+0x67/0x1c0
[<ffffffff81121555>] ? might_fault+0xa5/0xb0
[<ffffffff8112150c>] ? might_fault+0x5c/0xb0
[<ffffffff810b40a2>] cgroup_file_write+0x2d2/0x330
[<ffffffff81093aa2>] ? print_lock_contention_bug+0x22/0xf0
[<ffffffff81259fef>] ? security_file_permission+0x1f/0x80
[<ffffffff8115d998>] vfs_write+0xc8/0x190
[<ffffffff8115e3a1>] sys_write+0x51/0x90
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Code: 20 48 39 43 20 41 bc f0 ff ff ff 75 c7 45 88 ae 48 11 00 00 45 31 e4 eb bb 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 0f 1f 44 00 00 <8b> 87 4c 11 00 00 85 c0 75 05 c9 c3 0f 1f 00 48 89 f9 31 d2 be
RIP [<ffffffff81153bb9>] memcg_oom_recover+0x9/0x30
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/memcontrol.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
Index: mmotm-2.6.35-0611/mm/memcontrol.c
===================================================================
--- mmotm-2.6.35-0611.orig/mm/memcontrol.c
+++ mmotm-2.6.35-0611/mm/memcontrol.c
@@ -4485,8 +4485,10 @@ static void mem_cgroup_clear_mc(void)
mc.to = NULL;
mc.moving_task = NULL;
spin_unlock(&mc.lock);
- memcg_oom_recover(from);
- memcg_oom_recover(to);
+ if (from)
+ memcg_oom_recover(from);
+ if (to)
+ memcg_oom_recover(to);
wake_up_all(&mc.waitq);
}
--