linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	pbadari@us.ibm.com, jblunck@suse.de, taka@valinux.co.jp,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	npiggin@suse.de
Subject: [PATCH 1/2] memcg: avoid unnecessary system-wide-oom-killer
Date: Fri, 21 Nov 2008 19:01:52 +0900	[thread overview]
Message-ID: <20081121190152.fa6843fb.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20081121185829.e04c8116.kamezawa.hiroyu@jp.fujitsu.com>

Current mmotm has a new OOM function, pagefault_out_of_memory().
It was added to select a bad process rather than killing current.

When a memcg hits its limit and invokes OOM at page fault time, this
handler is called and system-wide OOM handling happens.
(That means the kernel panics if panic_on_oom is set.)

To avoid that overkill, check the memcg's recent behavior before
starting a system-wide OOM.
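
The gate works by remembering when the per-cgroup OOM killer last fired
(mem->last_oom_jiffies in the patch) and treating a page-fault OOM that
arrives within HZ/10 of that as already handled.  The stand-alone C
sketch below only illustrates that idea; every name in it is invented
for the demo and it is not kernel code:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static struct timespec last_group_oom;		/* analogue of mem->last_oom_jiffies */

/* called when the per-group OOM killer runs */
static void note_group_oom(void)
{
	clock_gettime(CLOCK_MONOTONIC, &last_group_oom);
}

/* analogue of mem_cgroup_oom_called(): true if the group OOM'd very recently */
static bool group_oom_called_recently(void)
{
	struct timespec now;
	long delta_ms;

	clock_gettime(CLOCK_MONOTONIC, &now);
	delta_ms = (now.tv_sec - last_group_oom.tv_sec) * 1000 +
		   (now.tv_nsec - last_group_oom.tv_nsec) / 1000000;
	return delta_ms < 100;			/* the patch uses time_before(jiffies, last_oom_jiffies + HZ/10) */
}

int main(void)
{
	note_group_oom();			/* the group hit its limit and OOM-killed */
	if (group_oom_called_recently())
		printf("skip system-wide OOM, the group handled it already\n");
	else
		printf("fall through to system-wide OOM handling\n");
	return 0;
}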

This patch also fixes charging to guarantee "don't account against a
process with TIF_MEMDIE". This is necessary for smooth OOM handling.
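
The second change lets __mem_cgroup_try_charge() return success without
a memcg when the caller already has TIF_MEMDIE set, which is why the
callers in the diff now check the returned pointer as well as the
return value ("if (ret || !mem)").  Here is a small stand-alone C sketch
of that contract; all names are invented for the demo and it is not
kernel code:

#include <stdbool.h>
#include <stdio.h>

struct group { long usage, limit; };

static bool task_is_dying;			/* stands in for TIF_MEMDIE */

/* returns 0 on success; *grp may be NULL when accounting was skipped */
static int try_charge(struct group *candidate, struct group **grp)
{
	if (task_is_dying) {
		*grp = NULL;			/* don't account, let the task exit quickly */
		return 0;
	}
	if (candidate->usage + 1 > candidate->limit)
		return -1;			/* over limit: the real code reclaims/OOMs here */
	candidate->usage++;
	*grp = candidate;
	return 0;
}

int main(void)
{
	struct group g = { .usage = 0, .limit = 8 };
	struct group *grp;

	task_is_dying = true;			/* pretend the OOM killer already picked us */
	if (try_charge(&g, &grp) || !grp)	/* mirrors "if (ret || !mem)" in the patch */
		printf("charge skipped or failed; nothing to commit\n");
	return 0;
}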

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

---
 include/linux/memcontrol.h |    6 ++++++
 mm/memcontrol.c            |   33 +++++++++++++++++++++++++++++----
 mm/oom_kill.c              |    8 ++++++++
 3 files changed, 43 insertions(+), 4 deletions(-)

Index: mmotm-2.6.28-Nov20/include/linux/memcontrol.h
===================================================================
--- mmotm-2.6.28-Nov20.orig/include/linux/memcontrol.h
+++ mmotm-2.6.28-Nov20/include/linux/memcontrol.h
@@ -95,6 +95,8 @@ static inline bool mem_cgroup_disabled(v
 	return false;
 }
 
+extern bool mem_cgroup_oom_called(struct task_struct *task);
+
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 struct mem_cgroup;
 
@@ -227,6 +229,10 @@ static inline bool mem_cgroup_disabled(v
 {
 	return true;
 }
+static inline bool mem_cgroup_oom_called(struct task_struct *task)
+{
+	return false;
+}
 #endif /* CONFIG_CGROUP_MEM_CONT */
 
 #endif /* _LINUX_MEMCONTROL_H */
Index: mmotm-2.6.28-Nov20/mm/oom_kill.c
===================================================================
--- mmotm-2.6.28-Nov20.orig/mm/oom_kill.c
+++ mmotm-2.6.28-Nov20/mm/oom_kill.c
@@ -560,6 +560,13 @@ void pagefault_out_of_memory(void)
 		/* Got some memory back in the last second. */
 		return;
 
+	/*
+	 * If this is from memcg, the OOM killer has already been invoked
+	 * there, so it is not worth going system-wide OOM.
+	 */
+	if (mem_cgroup_oom_called(current))
+		goto rest_and_return;
+
 	if (sysctl_panic_on_oom)
 		panic("out of memory from page fault. panic_on_oom is selected.\n");
 
@@ -571,6 +578,7 @@ void pagefault_out_of_memory(void)
 	 * Give "p" a good chance of killing itself before we
 	 * retry to allocate memory.
 	 */
+rest_and_return:
 	if (!test_thread_flag(TIF_MEMDIE))
 		schedule_timeout_uninterruptible(1);
 }
Index: mmotm-2.6.28-Nov20/mm/memcontrol.c
===================================================================
--- mmotm-2.6.28-Nov20.orig/mm/memcontrol.c
+++ mmotm-2.6.28-Nov20/mm/memcontrol.c
@@ -153,7 +153,7 @@ struct mem_cgroup {
 	 * Should the accounting and control be hierarchical, per subtree?
 	 */
 	bool use_hierarchy;
-
+	unsigned long	last_oom_jiffies;
 	int		obsolete;
 	atomic_t	refcnt;
 	/*
@@ -618,6 +618,22 @@ static int mem_cgroup_hierarchical_recla
 	return ret;
 }
 
+bool mem_cgroup_oom_called(struct task_struct *task)
+{
+	bool ret = false;
+	struct mem_cgroup *mem;
+	struct mm_struct *mm;
+
+	rcu_read_lock();
+	mm = task->mm;
+	if (!mm)
+		mm = &init_mm;
+	mem = mem_cgroup_from_task(rcu_dereference(mm->owner));
+	if (mem && time_before(jiffies, mem->last_oom_jiffies + HZ/10))
+		ret = true;
+	rcu_read_unlock();
+	return ret;
+}
 /*
  * Unlike exported interface, "oom" parameter is added. if oom==true,
  * oom-killer can be invoked.
@@ -629,6 +645,13 @@ static int __mem_cgroup_try_charge(struc
 	struct mem_cgroup *mem, *mem_over_limit;
 	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct res_counter *fail_res;
+
+	if (unlikely(test_thread_flag(TIF_MEMDIE))) {
+		/* Don't account this! */
+		*memcg = NULL;
+		return 0;
+	}
+
 	/*
 	 * We always charge the cgroup the mm_struct belongs to.
 	 * The mm_struct's mem_cgroup changes on task migration if the
@@ -699,8 +722,10 @@ static int __mem_cgroup_try_charge(struc
 			continue;
 
 		if (!nr_retries--) {
-			if (oom)
+			if (oom) {
 				mem_cgroup_out_of_memory(mem, gfp_mask);
+				mem->last_oom_jiffies = jiffies;
+			}
 			goto nomem;
 		}
 	}
@@ -837,7 +862,7 @@ static int mem_cgroup_move_parent(struct
 
 
 	ret = __mem_cgroup_try_charge(NULL, gfp_mask, &parent, false);
-	if (ret)
+	if (ret || !parent)
 		return ret;
 
 	if (!get_page_unless_zero(page))
@@ -888,7 +913,7 @@ static int mem_cgroup_charge_common(stru
 
 	mem = memcg;
 	ret = __mem_cgroup_try_charge(mm, gfp_mask, &mem, true);
-	if (ret)
+	if (ret || !mem)
 		return ret;
 
 	__mem_cgroup_commit_charge(mem, pc, ctype);



Thread overview: 21+ messages
2008-11-14 10:12 [PATCH 0/9] memcg updates (14/Nov/2008) KAMEZAWA Hiroyuki
2008-11-14 10:14 ` [PATCH 1/9] memcg: memory hotpluf fix for notifier callback KAMEZAWA Hiroyuki
2008-11-14 10:15 ` [PATCH 2/9] memcg : reduce size of mem_cgroup by using nr_cpu_ids KAMEZAWA Hiroyuki
2008-11-14 10:16 ` [PATCH 3/9] memcg: new force_empty to free pages under group KAMEZAWA Hiroyuki
2008-11-14 10:17 ` [PATCH 4/9] memcg: handle swap caches KAMEZAWA Hiroyuki
2008-11-14 10:18 ` [PATCH 5/9] memcg : mem+swap controller Kconfig KAMEZAWA Hiroyuki
2008-11-14 10:18 ` [PATCH 6/9] memcg : swap cgroup for remembering usage KAMEZAWA Hiroyuki
2008-11-14 10:19 ` [PATCH 7/9] memcg : mem+swap controlelr core KAMEZAWA Hiroyuki
2008-11-21  2:40   ` Li Zefan
2008-11-21  2:44     ` KAMEZAWA Hiroyuki
2008-11-21  9:58     ` [PATCH 0/2] memcg: fix oom handling KAMEZAWA Hiroyuki
2008-11-21 10:01       ` KAMEZAWA Hiroyuki [this message]
2008-11-21 10:03       ` [PATCH 2/2] memcg: fix reclaim result checks KAMEZAWA Hiroyuki
2008-11-22  2:16       ` [PATCH 0/2] memcg: fix oom handling Li Zefan
2008-11-14 10:20 ` [PATCH 8/9] memcg : synchronized LRU KAMEZAWA Hiroyuki
2008-11-14 10:21 ` [PATCH 9/9] memcg : add mem_cgroup_disabled() KAMEZAWA Hiroyuki
2008-11-14 11:33 ` [PATCH 0/9] memcg updates (14/Nov/2008) Balbir Singh
2008-11-15  3:00 ` KAMEZAWA Hiroyuki
2008-11-15  7:25   ` Balbir Singh
2008-11-15  9:16     ` KAMEZAWA Hiroyuki
2008-11-15  9:19       ` Balbir Singh
