From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail172.messagelabs.com (mail172.messagelabs.com [216.82.254.3]) by kanga.kvack.org (Postfix) with ESMTP id 7A2036B003D for ; Wed, 4 Feb 2009 01:43:00 -0500 (EST)
Received: from d23relay01.au.ibm.com (d23relay01.au.ibm.com [202.81.31.243]) by e23smtp01.au.ibm.com (8.13.1/8.13.1) with ESMTP id n146gdQE014818 for ; Wed, 4 Feb 2009 17:42:39 +1100
Received: from d23av03.au.ibm.com (d23av03.au.ibm.com [9.190.234.97]) by d23relay01.au.ibm.com (8.13.8/8.13.8/NCO v9.1) with ESMTP id n146hB5e323936 for ; Wed, 4 Feb 2009 17:43:12 +1100
Received: from d23av03.au.ibm.com (loopback [127.0.0.1]) by d23av03.au.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id n146grWw011602 for ; Wed, 4 Feb 2009 17:42:54 +1100
Date: Wed, 4 Feb 2009 12:12:49 +0530
From: Balbir Singh
Subject: Re: [-mm patch] Show memcg information during OOM (v3)
Message-ID: <20090204064249.GC4456@balbir.in.ibm.com>
Reply-To: balbir@linux.vnet.ibm.com
References: <20090203172135.GF918@balbir.in.ibm.com> <4988E727.8030807@cn.fujitsu.com> <20090204033750.GB4456@balbir.in.ibm.com> <20090204142455.83c38ad6.kamezawa.hiroyu@jp.fujitsu.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <20090204142455.83c38ad6.kamezawa.hiroyu@jp.fujitsu.com>
Sender: owner-linux-mm@kvack.org
To: KAMEZAWA Hiroyuki
Cc: Li Zefan , Andrew Morton , "linux-kernel@vger.kernel.org" , "nishimura@mxp.nes.nec.co.jp" , "linux-mm@kvack.org" 
List-ID: 

* KAMEZAWA Hiroyuki [2009-02-04 14:24:55]:

> On Wed, 4 Feb 2009 09:07:50 +0530
> Balbir Singh wrote:
> 
> > > > +}
> > > > +
> > > > #endif /* CONFIG_CGROUP_MEM_CONT */
> > > > 
> > > > +void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
> > > > +{
> > > > +	struct cgroup *task_cgrp;
> > > > +	struct cgroup *mem_cgrp;
> > > > +	/*
> > > > +	 * Need a buffer on stack, can't rely on allocations.
> > > > +	 * The code relies
> > >
> > > I think it's in .bss section, but not on stack, and it's better to explain why
> > > the static buffer is safe in the comment.
> > >
> >
> > Yes, it is no longer on stack, in the original patch it was. I'll send
> > an updated patch
> >
> In the newest mmotm, OOM kill message is following.
> ==
> Feb 4 13:16:28 localhost kernel: [ 249.338911] malloc2 invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
> Feb 4 13:16:28 localhost kernel: [ 249.339018] malloc2 cpuset=/ mems_allowed=0
> Feb 4 13:16:28 localhost kernel: [ 249.339023] Pid: 3459, comm: malloc2 Not tainted 2.6.29-rc3-mm1 #1
> Feb 4 13:16:28 localhost kernel: [ 249.339185] Call Trace:
> Feb 4 13:16:28 localhost kernel: [ 249.339202] [] ? _spin_unlock+0x26/0x2a
> Feb 4 13:16:28 localhost kernel: [ 249.339210] [] oom_kill_process+0x99/0x272
> Feb 4 13:16:28 localhost kernel: [ 249.339214] [] ? select_bad_process+0x9d/0xfa
> Feb 4 13:16:28 localhost kernel: [ 249.339219] [] mem_cgroup_out_of_memory+0x65/0x82
> Feb 4 13:16:28 localhost kernel: [ 249.339224] [] __mem_cgroup_try_charge+0x14c/0x196
> Feb 4 13:16:28 localhost kernel: [ 249.339229] [] mem_cgroup_charge_common+0x47/0x72
> Feb 4 13:16:28 localhost kernel: [ 249.339234] [] mem_cgroup_newpage_charge+0x3e/0x4f
> Feb 4 13:16:28 localhost kernel: [ 249.339239] [] handle_mm_fault+0x214/0x761
> Feb 4 13:16:28 localhost kernel: [ 249.339244] [] do_page_fault+0x248/0x25f
> Feb 4 13:16:28 localhost kernel: [ 249.339249] [] page_fault+0x1f/0x30
> Feb 4 13:16:28 localhost kernel: [ 249.339260] Task in /group_A/01 killed as a result of limit of /group_A
> Feb 4 13:16:28 localhost kernel: [ 249.339264] memory: usage 39168kB, limit 40960kB, failcnt 1
> Feb 4 13:16:28 localhost kernel: [ 249.339266] memory+swap: usage 40960kB, limit 40960kB, failcnt 15
> ==
> Task in /group_A/01 is killed by mem+swap limit of /group_A.
>
> Yeah, very nice look :) thank you.
>

Welcome! Thanks for the good suggestion earlier.
> BTW, I wonder can't we show the path of mount point ?
> /group_A/01 is /cgroup/group_A/01 and /group_A/ is /cgroup/group_A/ on this system.
> Very difficult ?
>

No, it is not very difficult; we just need to append the mount point.
The reason for not doing it is consistency with the output of
/proc/<pid>/cgroup and other places where cgroup_path() prints the path
relative to the mount point. Since we are talking about memory, the
administrator should know where it is mounted. Do you strongly feel the
need to add the mount point? My concern is consistency with other cgroup
output (look at /proc/sched_debug, for example).

-- 
	Balbir

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org