From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Subject: [PATCH v2] Show kernel stack usage to /proc/meminfo and OOM log
Date: Wed, 1 Jul 2009 10:37:09 +0900 (JST)
Message-Id: <20090701103622.85CD.A69D9226@jp.fujitsu.com>
In-Reply-To: <20090701082531.85C2.A69D9226@jp.fujitsu.com>
References: <20090701082531.85C2.A69D9226@jp.fujitsu.com>
To: Christoph Lameter
Cc: kosaki.motohiro@jp.fujitsu.com, Minchan Kim, Johannes Weiner, David Howells, riel@redhat.com, Andrew Morton, LKML, peterz@infradead.org, tytso@mit.edu, linux-mm@kvack.org, elladan@eskimo.com, npiggin@suse.de, "Barnes, Jesse"

Subject: [PATCH] Show kernel stack usage to /proc/meminfo and OOM log

If the system has many threads, their kernel stacks consume a significant amount of memory, and today that memory is not accounted anywhere. Large amounts of unaccounted memory make memory-related problems harder to analyze, so accounting kernel stack usage is useful.
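For illustration only (not part of the patch): account_kernel_stack() below charges THREAD_SIZE / PAGE_SIZE pages per task, so the aggregate cost can be estimated as in this sketch. The numbers are assumptions, not measurements: an 8 kB THREAD_SIZE and 4 kB PAGE_SIZE (typical x86 defaults) and a hypothetical 10,000-thread workload.

/* sketch: back-of-the-envelope arithmetic for kernel stack accounting */
#include <stdio.h>

int main(void)
{
	const unsigned long thread_size = 8192;   /* assumed THREAD_SIZE */
	const unsigned long page_size   = 4096;   /* assumed PAGE_SIZE */
	const unsigned long nr_threads  = 10000;  /* hypothetical workload */

	unsigned long pages_per_task = thread_size / page_size;   /* 2 pages */
	unsigned long kb = nr_threads * pages_per_task * page_size / 1024;

	/* 10000 threads * 2 pages * 4 kB = ~80000 kB of kernel stacks */
	printf("KernelStack (estimated): %lu kB\n", kb);
	return 0;
}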
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 fs/proc/meminfo.c      |    2 ++
 include/linux/mmzone.h |    3 ++-
 kernel/fork.c          |   12 ++++++++++++
 mm/page_alloc.c        |    6 ++++--
 mm/vmstat.c            |    1 +
 5 files changed, 21 insertions(+), 3 deletions(-)

Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -85,6 +85,7 @@ static int meminfo_proc_show(struct seq_
 		"SReclaimable:   %8lu kB\n"
 		"SUnreclaim:     %8lu kB\n"
 		"PageTables:     %8lu kB\n"
+		"KernelStack:    %8lu kB\n"
 #ifdef CONFIG_QUICKLIST
 		"Quicklists:     %8lu kB\n"
 #endif
@@ -129,6 +130,7 @@ static int meminfo_proc_show(struct seq_
 		K(global_page_state(NR_SLAB_RECLAIMABLE)),
 		K(global_page_state(NR_SLAB_UNRECLAIMABLE)),
 		K(global_page_state(NR_PAGETABLE)),
+		K(global_page_state(NR_KERNEL_STACK)),
 #ifdef CONFIG_QUICKLIST
 		K(quicklist_total_size()),
 #endif
Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -94,10 +94,11 @@ enum zone_stat_item {
 	NR_SLAB_RECLAIMABLE,
 	NR_SLAB_UNRECLAIMABLE,
 	NR_PAGETABLE,		/* used for pagetables */
+	NR_KERNEL_STACK,
+	/* Second 128 byte cacheline */
 	NR_UNSTABLE_NFS,	/* NFS unstable pages */
 	NR_BOUNCE,
 	NR_VMSCAN_WRITE,
-	/* Second 128 byte cacheline */
 	NR_WRITEBACK_TEMP,	/* Writeback using temporary buffers */
 #ifdef CONFIG_NUMA
 	NUMA_HIT,		/* allocated in intended node */
Index: b/kernel/fork.c
===================================================================
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -137,9 +137,18 @@ struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
+static void account_kernel_stack(struct thread_info *ti, int on)
+{
+	struct zone *zone = page_zone(virt_to_page(ti));
+	int pages = THREAD_SIZE / PAGE_SIZE;
+
+	mod_zone_page_state(zone, NR_KERNEL_STACK, on ? pages : -pages);
+}
+
 void free_task(struct task_struct *tsk)
 {
 	prop_local_destroy_single(&tsk->dirties);
+	account_kernel_stack(tsk->stack, 0);
 	free_thread_info(tsk->stack);
 	rt_mutex_debug_task_free(tsk);
 	ftrace_graph_exit_task(tsk);
@@ -255,6 +264,9 @@ static struct task_struct *dup_task_stru
 	tsk->btrace_seq = 0;
 #endif
 	tsk->splice_pipe = NULL;
+
+	account_kernel_stack(ti, 1);
+
 	return tsk;
 
 out:
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2119,7 +2119,8 @@ void show_free_areas(void)
 		" inactive_file:%lu"
 		" unevictable:%lu"
 		" dirty:%lu writeback:%lu unstable:%lu\n"
-		" free:%lu slab:%lu mapped:%lu pagetables:%lu bounce:%lu\n",
+		" free:%lu slab:%lu mapped:%lu pagetables:%lu bounce:%lu\n"
+		" kernel_stack:%lu\n",
 		global_page_state(NR_ACTIVE_ANON),
 		global_page_state(NR_ACTIVE_FILE),
 		global_page_state(NR_INACTIVE_ANON),
@@ -2133,7 +2134,8 @@ void show_free_areas(void)
 		global_page_state(NR_SLAB_UNRECLAIMABLE),
 		global_page_state(NR_FILE_MAPPED),
 		global_page_state(NR_PAGETABLE),
-		global_page_state(NR_BOUNCE));
+		global_page_state(NR_BOUNCE),
+		global_page_state(NR_KERNEL_STACK));
 
 	for_each_populated_zone(zone) {
 		int i;
Index: b/mm/vmstat.c
===================================================================
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -639,6 +639,7 @@ static const char * const vmstat_text[]
 	"nr_slab_reclaimable",
 	"nr_slab_unreclaimable",
 	"nr_page_table_pages",
+	"nr_kernel_stack",
 	"nr_unstable",
 	"nr_bounce",
 	"nr_vmscan_write",
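Also for illustration (not part of the patch): a minimal user-space sketch that reads the new KernelStack field from /proc/meminfo on a kernel with this patch applied.

/* sketch: print the KernelStack line exported by this patch */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];
	unsigned long kb;

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* matches "KernelStack:     1632 kB" style output */
		if (sscanf(line, "KernelStack: %lu kB", &kb) == 1) {
			printf("kernel stacks: %lu kB\n", kb);
			break;
		}
	}
	fclose(f);
	return 0;
}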