* [RFC][PATCH 0/3] memcg: oom notifier et al. (v3)
@ 2010-03-11 7:53 KAMEZAWA Hiroyuki
2010-03-11 7:55 ` [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue KAMEZAWA Hiroyuki
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-11 7:53 UTC (permalink / raw)
To: linux-mm; +Cc: linux-kernel, nishimura, balbir, kirill
Updated against mmotm-Mar9.
This patch set adds the following features:
- a wake-up filter for memcg's oom waitqueue.
- an oom-kill notifier for memcg.
- an oom-kill disable knob for memcg.
Major changes since the previous version:
- added a filter to the wakeup queue.
- use dedicated functions and logic rather than reusing the threshold code.
- some minor fixes.
If the oom-killer is disabled, all tasks under the memcg will sleep in
memcg_oom_waitq. What users can do while the memcg oom-killer is disabled:
- enlarge the limit.
- kill some task. ---(*)
- move some task to another cgroup. (with account migration)
(This patchset doesn't handle the case where account migration isn't enabled.)
The benefit of (*) is that the user can save information about all tasks
before killing, and can take a coredump (via gcore) of the troublesome process.
I'm now wondering when to drop the RFC tag...but I think this will not
conflict much with the dirty_ratio patch set.
If any code is unclear, feel free to ask me.
Thanks,
-Kame
* [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue
2010-03-11 7:53 [RFC][PATCH 0/3] memcg: oom notifier et al. (v3) KAMEZAWA Hiroyuki
@ 2010-03-11 7:55 ` KAMEZAWA Hiroyuki
2010-03-12 2:30 ` Daisuke Nishimura
2010-03-11 7:57 ` [RFC][PATCH 2/3] memcg: oom notifier KAMEZAWA Hiroyuki
2010-03-11 7:58 ` [RFC][PATCH 3/3] memcg: oom kill disable and stop and go hooks KAMEZAWA Hiroyuki
2 siblings, 1 reply; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-11 7:55 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: linux-mm, linux-kernel, nishimura, balbir, kirill
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
memcg's oom waitqueue is a system-wide wait_queue (for handling hierarchy),
so it's better to add a custom wake function and do the filtering in the
wake-up path.
This patch adds a filtering feature for waking up oom-waiters.
Hierarchy is properly handled.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/memcontrol.c | 61 ++++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 44 insertions(+), 17 deletions(-)
Index: mmotm-2.6.34-Mar9/mm/memcontrol.c
===================================================================
--- mmotm-2.6.34-Mar9.orig/mm/memcontrol.c
+++ mmotm-2.6.34-Mar9/mm/memcontrol.c
@@ -1293,14 +1293,54 @@ static void mem_cgroup_oom_unlock(struct
static DEFINE_MUTEX(memcg_oom_mutex);
static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
+struct oom_wait_info {
+ struct mem_cgroup *mem;
+ wait_queue_t wait;
+};
+
+static int memcg_oom_wake_function(wait_queue_t *wait,
+ unsigned mode, int sync, void *arg)
+{
+ struct mem_cgroup *wake_mem = (struct mem_cgroup *)arg;
+ struct oom_wait_info *oom_wait_info;
+
+ /* both of oom_wait_info->mem and wake_mem are stable under us */
+ oom_wait_info = container_of(wait, struct oom_wait_info, wait);
+
+ if (oom_wait_info->mem == wake_mem)
+ goto wakeup;
+ /* if no hierarchy, no match */
+ if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
+ return 0;
+ /* check hierarchy */
+ if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
+ !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
+ return 0;
+
+wakeup:
+ return autoremove_wake_function(wait, mode, sync, arg);
+}
+
+static void memcg_wakeup_oom(struct mem_cgroup *mem)
+{
+ /* for filtering, pass "mem" as argument. */
+ __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, mem);
+}
+
/*
* try to call OOM killer. returns false if we should exit memory-reclaim loop.
*/
bool mem_cgroup_handle_oom(struct mem_cgroup *mem, gfp_t mask)
{
- DEFINE_WAIT(wait);
+ struct oom_wait_info owait;
bool locked;
+ owait.mem = mem;
+ owait.wait.flags = 0;
+ owait.wait.func = memcg_oom_wake_function;
+ owait.wait.private = current;
+ INIT_LIST_HEAD(&owait.wait.task_list);
+
/* At first, try to OOM lock hierarchy under mem.*/
mutex_lock(&memcg_oom_mutex);
locked = mem_cgroup_oom_lock(mem);
@@ -1310,31 +1350,18 @@ bool mem_cgroup_handle_oom(struct mem_cg
* under OOM is always welcomed, use TASK_KILLABLE here.
*/
if (!locked)
- prepare_to_wait(&memcg_oom_waitq, &wait, TASK_KILLABLE);
+ prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
mutex_unlock(&memcg_oom_mutex);
if (locked)
mem_cgroup_out_of_memory(mem, mask);
else {
schedule();
- finish_wait(&memcg_oom_waitq, &wait);
+ finish_wait(&memcg_oom_waitq, &owait.wait);
}
mutex_lock(&memcg_oom_mutex);
mem_cgroup_oom_unlock(mem);
- /*
- * Here, we use global waitq .....more fine grained waitq ?
- * Assume following hierarchy.
- * A/
- * 01
- * 02
- * assume OOM happens both in A and 01 at the same time. Tthey are
- * mutually exclusive by lock. (kill in 01 helps A.)
- * When we use per memcg waitq, we have to wake up waiters on A and 02
- * in addtion to waiters on 01. We use global waitq for avoiding mess.
- * It will not be a big problem.
- * (And a task may be moved to other groups while it's waiting for OOM.)
- */
- wake_up_all(&memcg_oom_waitq);
+ memcg_wakeup_oom(mem);
mutex_unlock(&memcg_oom_mutex);
if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
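The mechanism above is a standard kernel pattern: embed a wait_queue_t in a
larger structure so the custom wake function can recover its container with
container_of() and decide whether this waiter matches the memcg that is waking
up. A minimal userspace sketch of the same container_of filtering pattern (all
names below are illustrative, not from the patch):

#include <stddef.h>
#include <stdio.h>

/* userspace stand-ins for the kernel types used in the patch */
struct wait_entry {                     /* plays the role of wait_queue_t */
	const char *task_name;
};

struct oom_wait_info {                  /* mirrors the patch's oom_wait_info */
	int memcg_id;                   /* stands in for struct mem_cgroup * */
	struct wait_entry wait;         /* embedded entry, as in the patch */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/*
 * Like memcg_oom_wake_function(): given only the embedded wait entry and
 * the id of the memcg that triggered the wakeup, decide whether to wake
 * this particular waiter.
 */
static int oom_wake_filter(struct wait_entry *wait, int wake_memcg_id)
{
	struct oom_wait_info *info =
		container_of(wait, struct oom_wait_info, wait);
	return info->memcg_id == wake_memcg_id;
}

int main(void)
{
	struct oom_wait_info a = { .memcg_id = 1, .wait = { "task-a" } };
	struct oom_wait_info b = { .memcg_id = 2, .wait = { "task-b" } };
	struct wait_entry *queue[] = { &a.wait, &b.wait };
	int i;

	/* "wake" memcg 1: only task-a passes the filter */
	for (i = 0; i < 2; i++)
		printf("%s: %s\n", queue[i]->task_name,
		       oom_wake_filter(queue[i], 1) ? "woken" : "kept waiting");
	return 0;
}

(The real patch additionally consults use_hierarchy and css_is_ancestor(), so
a wakeup in one memcg also wakes waiters in related cgroups of the same
hierarchy.)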
* [RFC][PATCH 2/3] memcg: oom notifier
2010-03-11 7:53 [RFC][PATCH 0/3] memcg: oom notifier et al. (v3) KAMEZAWA Hiroyuki
2010-03-11 7:55 ` [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue KAMEZAWA Hiroyuki
@ 2010-03-11 7:57 ` KAMEZAWA Hiroyuki
2010-03-11 14:47 ` Kirill A. Shutemov
2010-03-11 7:58 ` [RFC][PATCH 3/3] memcg: oom kill disable and stop and go hooks KAMEZAWA Hiroyuki
2 siblings, 1 reply; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-11 7:57 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: linux-mm, linux-kernel, nishimura, balbir, kirill
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Considering containers and other resource management software in userland,
event notification of OOM in memcg should be implemented.
memcg already has a "threshold" notifier which uses eventfd, so we can
make use of that infrastructure for oom notification.
This patch adds an oom notification eventfd callback for memcg. The usage
is very similar to the threshold notifier, but the control file is
memory.oom_control and no arguments other than the eventfd are required.
% cgroup_event_notifier /cgroup/A/memory.oom_control dummy
(About cgroup_event_notifier, see Documentation/cgroup/)
TODO:
- add a knob to disable oom-kill under a memcg.
- add read/write functions to oom_control
Changelog: 20100309
- split out from the threshold functions. use a list rather than an array.
- moved everything inside the mutex.
Changelog: 20100304
- renewed implementation.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
Documentation/cgroups/memory.txt | 20 +++++++
mm/memcontrol.c | 105 ++++++++++++++++++++++++++++++++++++---
2 files changed, 116 insertions(+), 9 deletions(-)
Index: mmotm-2.6.34-Mar9/mm/memcontrol.c
===================================================================
--- mmotm-2.6.34-Mar9.orig/mm/memcontrol.c
+++ mmotm-2.6.34-Mar9/mm/memcontrol.c
@@ -149,6 +149,7 @@ struct mem_cgroup_threshold {
u64 threshold;
};
+/* For threshold */
struct mem_cgroup_threshold_ary {
/* An array index points to threshold just below usage. */
atomic_t current_threshold;
@@ -157,8 +158,14 @@ struct mem_cgroup_threshold_ary {
/* Array of thresholds */
struct mem_cgroup_threshold entries[0];
};
+/* for OOM */
+struct mem_cgroup_eventfd_list {
+ struct list_head list;
+ struct eventfd_ctx *eventfd;
+};
static void mem_cgroup_threshold(struct mem_cgroup *mem);
+static void mem_cgroup_oom_notify(struct mem_cgroup *mem);
/*
* The memory controller data structure. The memory controller controls both
@@ -220,6 +227,9 @@ struct mem_cgroup {
/* thresholds for mem+swap usage. RCU-protected */
struct mem_cgroup_threshold_ary *memsw_thresholds;
+ /* For oom notifier event fd */
+ struct list_head oom_notify;
+
/*
* Should we move charges of a task when a task is moved into this
* mem_cgroup ? And what type of charges should we move ?
@@ -282,9 +292,12 @@ enum charge_type {
/* for encoding cft->private value on file */
#define _MEM (0)
#define _MEMSWAP (1)
+#define _OOM_TYPE (2)
#define MEMFILE_PRIVATE(x, val) (((x) << 16) | (val))
#define MEMFILE_TYPE(val) (((val) >> 16) & 0xffff)
#define MEMFILE_ATTR(val) ((val) & 0xffff)
+/* Used for OOM notifier */
+#define OOM_CONTROL (0)
/*
* Reclaim flags for mem_cgroup_hierarchical_reclaim
@@ -1351,6 +1364,8 @@ bool mem_cgroup_handle_oom(struct mem_cg
*/
if (!locked)
prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
+ else
+ mem_cgroup_oom_notify(mem);
mutex_unlock(&memcg_oom_mutex);
if (locked)
@@ -3398,8 +3413,22 @@ static int compare_thresholds(const void
return _a->threshold - _b->threshold;
}
-static int mem_cgroup_register_event(struct cgroup *cgrp, struct cftype *cft,
- struct eventfd_ctx *eventfd, const char *args)
+static int mem_cgroup_oom_notify_cb(struct mem_cgroup *mem, void *data)
+{
+ struct mem_cgroup_eventfd_list *ev;
+
+ list_for_each_entry(ev, &mem->oom_notify, list)
+ eventfd_signal(ev->eventfd, 1);
+ return 0;
+}
+
+static void mem_cgroup_oom_notify(struct mem_cgroup *mem)
+{
+ mem_cgroup_walk_tree(mem, NULL, mem_cgroup_oom_notify_cb);
+}
+
+static int mem_cgroup_usage_register_event(struct cgroup *cgrp,
+ struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
@@ -3483,8 +3512,8 @@ unlock:
return ret;
}
-static int mem_cgroup_unregister_event(struct cgroup *cgrp, struct cftype *cft,
- struct eventfd_ctx *eventfd)
+static int mem_cgroup_usage_unregister_event(struct cgroup *cgrp,
+ struct cftype *cft, struct eventfd_ctx *eventfd)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
@@ -3568,13 +3597,66 @@ unlock:
return ret;
}
+static int mem_cgroup_oom_register_event(struct cgroup *cgrp,
+ struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
+ struct mem_cgroup_eventfd_list *event;
+ int type = MEMFILE_TYPE(cft->private);
+ int ret = -ENOMEM;
+
+ BUG_ON(type != _OOM_TYPE);
+
+ mutex_lock(&memcg_oom_mutex);
+
+ /* Allocate memory for new array of thresholds */
+ event = kmalloc(sizeof(*event), GFP_KERNEL);
+ if (!event)
+ goto unlock;
+ /* Add new threshold */
+ event->eventfd = eventfd;
+ list_add(&event->list, &memcg->oom_notify);
+
+ /* already in OOM ? */
+ if (atomic_read(&memcg->oom_lock))
+ eventfd_signal(eventfd, 1);
+ ret = 0;
+unlock:
+ mutex_unlock(&memcg_oom_mutex);
+
+ return ret;
+}
+
+static int mem_cgroup_oom_unregister_event(struct cgroup *cgrp,
+ struct cftype *cft, struct eventfd_ctx *eventfd)
+{
+ struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
+ struct mem_cgroup_eventfd_list *ev, *tmp;
+ int type = MEMFILE_TYPE(cft->private);
+
+ BUG_ON(type != _OOM_TYPE);
+
+ mutex_lock(&memcg_oom_mutex);
+
+ list_for_each_entry_safe(ev, tmp, &mem->oom_notify, list) {
+ if (ev->eventfd == eventfd) {
+ list_del(&ev->list);
+ kfree(ev);
+ }
+ }
+
+ mutex_unlock(&memcg_oom_mutex);
+
+ return 0;
+}
+
static struct cftype mem_cgroup_files[] = {
{
.name = "usage_in_bytes",
.private = MEMFILE_PRIVATE(_MEM, RES_USAGE),
.read_u64 = mem_cgroup_read,
- .register_event = mem_cgroup_register_event,
- .unregister_event = mem_cgroup_unregister_event,
+ .register_event = mem_cgroup_usage_register_event,
+ .unregister_event = mem_cgroup_usage_unregister_event,
},
{
.name = "max_usage_in_bytes",
@@ -3623,6 +3705,12 @@ static struct cftype mem_cgroup_files[]
.read_u64 = mem_cgroup_move_charge_read,
.write_u64 = mem_cgroup_move_charge_write,
},
+ {
+ .name = "oom_control",
+ .register_event = mem_cgroup_oom_register_event,
+ .unregister_event = mem_cgroup_oom_unregister_event,
+ .private = MEMFILE_PRIVATE(_OOM_TYPE, OOM_CONTROL),
+ },
};
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
@@ -3631,8 +3719,8 @@ static struct cftype memsw_cgroup_files[
.name = "memsw.usage_in_bytes",
.private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
.read_u64 = mem_cgroup_read,
- .register_event = mem_cgroup_register_event,
- .unregister_event = mem_cgroup_unregister_event,
+ .register_event = mem_cgroup_usage_register_event,
+ .unregister_event = mem_cgroup_usage_unregister_event,
},
{
.name = "memsw.max_usage_in_bytes",
@@ -3876,6 +3964,7 @@ mem_cgroup_create(struct cgroup_subsys *
}
mem->last_scanned_child = 0;
spin_lock_init(&mem->reclaim_param_lock);
+ INIT_LIST_HEAD(&mem->oom_notify);
if (parent)
mem->swappiness = get_swappiness(parent);
Index: mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
===================================================================
--- mmotm-2.6.34-Mar9.orig/Documentation/cgroups/memory.txt
+++ mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
@@ -184,6 +184,9 @@ limits on the root cgroup.
Note2: When panic_on_oom is set to "2", the whole system will panic.
+When an oom event notifier is registered, an event will be delivered.
+(See the oom_control section.)
+
2. Locking
The memory controller uses the following hierarchy
@@ -488,7 +491,22 @@ threshold in any direction.
It's applicable for root and non-root cgroup.
-10. TODO
+10. OOM Control
+
+The memory controller implements an oom notifier using the cgroup
+notification API (see cgroups.txt). It allows registering multiple oom
+notification deliveries; each registered eventfd is signalled when an oom
+happens.
+
+To register a notifier, an application needs to:
+ - create an eventfd using eventfd(2)
+ - open the memory.oom_control file
+ - write a string like "<event_fd> <fd of memory.oom_control>" to
+   cgroup.event_control
+
+The application will be notified through the eventfd when an oom happens.
+OOM notification doesn't work for the root cgroup.
+
+
+11. TODO
1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first
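For reference, the registration described in section 10 can also be done
directly from C, without the cgroup_event_notifier helper. A minimal sketch,
assuming the memcg is mounted at /cgroup/A (the path is an example, and error
handling is abbreviated):

#include <sys/eventfd.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	uint64_t count;

	/* 1. create an eventfd */
	int efd = eventfd(0, 0);
	/* 2. open memory.oom_control of the target cgroup */
	int ofd = open("/cgroup/A/memory.oom_control", O_RDONLY);
	/* 3. write "<event_fd> <fd of memory.oom_control>" to
	 *    cgroup.event_control of the same cgroup */
	int cfd = open("/cgroup/A/cgroup.event_control", O_WRONLY);

	snprintf(buf, sizeof(buf), "%d %d", efd, ofd);
	if (efd < 0 || ofd < 0 || cfd < 0 ||
	    write(cfd, buf, strlen(buf)) < 0) {
		perror("oom_control registration");
		return 1;
	}

	/* block until an oom happens in the cgroup */
	read(efd, &count, sizeof(count));
	printf("memcg OOM notification received (count=%llu)\n",
	       (unsigned long long)count);
	return 0;
}

Run it from outside the watched cgroup; once an OOM fires in /cgroup/A, the
read() on the eventfd returns.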
* [RFC][PATCH 3/3] memcg: oom kill disable and stop and go hooks.
2010-03-11 7:53 [RFC][PATCH 0/3] memcg: oom notifier et al. (v3) KAMEZAWA Hiroyuki
2010-03-11 7:55 ` [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue KAMEZAWA Hiroyuki
2010-03-11 7:57 ` [RFC][PATCH 2/3] memcg: oom notifier KAMEZAWA Hiroyuki
@ 2010-03-11 7:58 ` KAMEZAWA Hiroyuki
2 siblings, 0 replies; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-11 7:58 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: linux-mm, linux-kernel, nishimura, balbir, kirill
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
This adds a feature to disable the oom-killer for a memcg. If it is
disabled, tasks under the memcg will, of course, stop (sleep).
But now we have an oom notifier for memcg, and the world around the
memcg is not under out-of-memory: a memcg's out-of-memory just means
the memcg hit its limit. So an administrator or a management daemon
can recover the situation by:
- killing some process
- enlarging the limit, adding more swap.
- migrating some tasks
- removing file cache on tmpfs (difficult ?)
TODO:
more polish, and finding races.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
Documentation/cgroups/memory.txt | 19 ++++++
mm/memcontrol.c | 113 ++++++++++++++++++++++++++++++++-------
2 files changed, 113 insertions(+), 19 deletions(-)
Index: mmotm-2.6.34-Mar9/mm/memcontrol.c
===================================================================
--- mmotm-2.6.34-Mar9.orig/mm/memcontrol.c
+++ mmotm-2.6.34-Mar9/mm/memcontrol.c
@@ -235,7 +235,8 @@ struct mem_cgroup {
* mem_cgroup ? And what type of charges should we move ?
*/
unsigned long move_charge_at_immigrate;
-
+ /* Disable OOM killer */
+ unsigned long oom_kill_disable;
/*
* percpu counter.
*/
@@ -1340,20 +1341,26 @@ static void memcg_wakeup_oom(struct mem_
__wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, mem);
}
+static void memcg_oom_recover(struct mem_cgroup *mem)
+{
+ if (mem->oom_kill_disable && atomic_read(&mem->oom_lock))
+ memcg_wakeup_oom(mem);
+}
+
/*
* try to call OOM killer. returns false if we should exit memory-reclaim loop.
*/
bool mem_cgroup_handle_oom(struct mem_cgroup *mem, gfp_t mask)
{
struct oom_wait_info owait;
- bool locked;
+ bool locked, need_to_kill;
owait.mem = mem;
owait.wait.flags = 0;
owait.wait.func = memcg_oom_wake_function;
owait.wait.private = current;
INIT_LIST_HEAD(&owait.wait.task_list);
-
+ need_to_kill = true;
/* At first, try to OOM lock hierarchy under mem.*/
mutex_lock(&memcg_oom_mutex);
locked = mem_cgroup_oom_lock(mem);
@@ -1362,15 +1369,17 @@ bool mem_cgroup_handle_oom(struct mem_cg
* accounting. So, UNINTERRUPTIBLE is appropriate. But SIGKILL
* under OOM is always welcomed, use TASK_KILLABLE here.
*/
- if (!locked)
- prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
- else
+ prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
+ if (!locked || mem->oom_kill_disable)
+ need_to_kill = false;
+ if (locked)
mem_cgroup_oom_notify(mem);
mutex_unlock(&memcg_oom_mutex);
- if (locked)
+ if (need_to_kill) {
+ finish_wait(&memcg_oom_waitq, &owait.wait);
mem_cgroup_out_of_memory(mem, mask);
- else {
+ } else {
schedule();
finish_wait(&memcg_oom_waitq, &owait.wait);
}
@@ -2162,15 +2171,6 @@ __do_uncharge(struct mem_cgroup *mem, co
/* If swapout, usage of swap doesn't decrease */
if (!do_swap_account || ctype == MEM_CGROUP_CHARGE_TYPE_SWAPOUT)
uncharge_memsw = false;
- /*
- * do_batch > 0 when unmapping pages or inode invalidate/truncate.
- * In those cases, all pages freed continously can be expected to be in
- * the same cgroup and we have chance to coalesce uncharges.
- * But we do uncharge one by one if this is killed by OOM(TIF_MEMDIE)
- * because we want to do uncharge as soon as possible.
- */
- if (!current->memcg_batch.do_batch || test_thread_flag(TIF_MEMDIE))
- goto direct_uncharge;
batch = ¤t->memcg_batch;
/*
@@ -2181,6 +2181,17 @@ __do_uncharge(struct mem_cgroup *mem, co
if (!batch->memcg)
batch->memcg = mem;
/*
+ * do_batch > 0 when unmapping pages or inode invalidate/truncate.
+ * In those cases, all pages freed continuously can be expected to be in
+ * the same cgroup and we have chance to coalesce uncharges.
+ * But we do uncharge one by one if this is killed by OOM(TIF_MEMDIE)
+ * because we want to do uncharge as soon as possible.
+ */
+
+ if (!batch->do_batch || test_thread_flag(TIF_MEMDIE))
+ goto direct_uncharge;
+
+ /*
* In typical case, batch->memcg == mem. This means we can
* merge a series of uncharges to an uncharge of res_counter.
* If not, we uncharge res_counter ony by one.
@@ -2196,6 +2207,8 @@ direct_uncharge:
res_counter_uncharge(&mem->res, PAGE_SIZE);
if (uncharge_memsw)
res_counter_uncharge(&mem->memsw, PAGE_SIZE);
+ if (unlikely(batch->memcg != mem))
+ memcg_oom_recover(mem);
return;
}
@@ -2332,6 +2345,7 @@ void mem_cgroup_uncharge_end(void)
res_counter_uncharge(&batch->memcg->res, batch->bytes);
if (batch->memsw_bytes)
res_counter_uncharge(&batch->memcg->memsw, batch->memsw_bytes);
+ memcg_oom_recover(batch->memcg);
/* forget this pointer (for sanity check) */
batch->memcg = NULL;
}
@@ -2568,10 +2582,11 @@ static int mem_cgroup_resize_limit(struc
unsigned long long val)
{
int retry_count;
- u64 memswlimit;
+ u64 memswlimit, memlimit;
int ret = 0;
int children = mem_cgroup_count_children(memcg);
u64 curusage, oldusage;
+ int enlarge;
/*
* For keeping hierarchical_reclaim simple, how long we should retry
@@ -2582,6 +2597,7 @@ static int mem_cgroup_resize_limit(struc
oldusage = res_counter_read_u64(&memcg->res, RES_USAGE);
+ enlarge = 0;
while (retry_count) {
if (signal_pending(current)) {
ret = -EINTR;
@@ -2599,6 +2615,11 @@ static int mem_cgroup_resize_limit(struc
mutex_unlock(&set_limit_mutex);
break;
}
+
+ memlimit = res_counter_read_u64(&memcg->res, RES_LIMIT);
+ if (memlimit < val)
+ enlarge = 1;
+
ret = res_counter_set_limit(&memcg->res, val);
if (!ret) {
if (memswlimit == val)
@@ -2620,6 +2641,8 @@ static int mem_cgroup_resize_limit(struc
else
oldusage = curusage;
}
+ if (!ret && enlarge)
+ memcg_oom_recover(memcg);
return ret;
}
@@ -2628,9 +2651,10 @@ static int mem_cgroup_resize_memsw_limit
unsigned long long val)
{
int retry_count;
- u64 memlimit, oldusage, curusage;
+ u64 memlimit, memswlimit, oldusage, curusage;
int children = mem_cgroup_count_children(memcg);
int ret = -EBUSY;
+ int enlarge = 0;
/* see mem_cgroup_resize_res_limit */
retry_count = children * MEM_CGROUP_RECLAIM_RETRIES;
@@ -2652,6 +2676,9 @@ static int mem_cgroup_resize_memsw_limit
mutex_unlock(&set_limit_mutex);
break;
}
+ memswlimit = res_counter_read_u64(&memcg->memsw, RES_LIMIT);
+ if (memswlimit < val)
+ enlarge = 1;
ret = res_counter_set_limit(&memcg->memsw, val);
if (!ret) {
if (memlimit == val)
@@ -2674,6 +2701,8 @@ static int mem_cgroup_resize_memsw_limit
else
oldusage = curusage;
}
+ if (!ret && enlarge)
+ memcg_oom_recover(memcg);
return ret;
}
@@ -2865,6 +2894,7 @@ move_account:
if (ret)
break;
}
+ memcg_oom_recover(mem);
/* it seems parent cgroup doesn't have enough mem */
if (ret == -ENOMEM)
goto try_to_free;
@@ -3650,6 +3680,46 @@ static int mem_cgroup_oom_unregister_eve
return 0;
}
+static int mem_cgroup_oom_control_read(struct cgroup *cgrp,
+ struct cftype *cft, struct cgroup_map_cb *cb)
+{
+ struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
+
+ cb->fill(cb, "oom_kill_disable", mem->oom_kill_disable);
+
+ if (atomic_read(&mem->oom_lock))
+ cb->fill(cb, "under_oom", 1);
+ else
+ cb->fill(cb, "under_oom", 0);
+ return 0;
+}
+
+/* set or clear oom_kill_disable; allowed only at the top of a sub-hierarchy */
+static int mem_cgroup_oom_control_write(struct cgroup *cgrp,
+ struct cftype *cft, u64 val)
+{
+ struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
+ struct mem_cgroup *parent;
+
+ /* cannot set to root cgroup and only 0 and 1 are allowed */
+ if (!cgrp->parent || !((val == 0) || (val == 1)))
+ return -EINVAL;
+
+ parent = mem_cgroup_from_cont(cgrp->parent);
+
+ cgroup_lock();
+ /* oom-kill-disable is a flag for subhierarchy. */
+ if ((parent->use_hierarchy) ||
+ (mem->use_hierarchy && !list_empty(&cgrp->children))) {
+ cgroup_unlock();
+ return -EINVAL;
+ }
+ mem->oom_kill_disable = val;
+ cgroup_unlock();
+ return 0;
+}
+
static struct cftype mem_cgroup_files[] = {
{
.name = "usage_in_bytes",
@@ -3707,6 +3777,8 @@ static struct cftype mem_cgroup_files[]
},
{
.name = "oom_control",
+ .read_map = mem_cgroup_oom_control_read,
+ .write_u64 = mem_cgroup_oom_control_write,
.register_event = mem_cgroup_oom_register_event,
.unregister_event = mem_cgroup_oom_unregister_event,
.private = MEMFILE_PRIVATE(_OOM_TYPE, OOM_CONTROL),
@@ -3946,6 +4018,7 @@ mem_cgroup_create(struct cgroup_subsys *
} else {
parent = mem_cgroup_from_cont(cont->parent);
mem->use_hierarchy = parent->use_hierarchy;
+ mem->oom_kill_disable = parent->oom_kill_disable;
}
if (parent && parent->use_hierarchy) {
@@ -4240,6 +4313,7 @@ static void mem_cgroup_clear_mc(void)
if (mc.precharge) {
__mem_cgroup_cancel_charge(mc.to, mc.precharge);
mc.precharge = 0;
+ memcg_oom_recover(mc.to);
}
/*
* we didn't uncharge from mc.from at mem_cgroup_move_account(), so
@@ -4248,6 +4322,7 @@ static void mem_cgroup_clear_mc(void)
if (mc.moved_charge) {
__mem_cgroup_cancel_charge(mc.from, mc.moved_charge);
mc.moved_charge = 0;
+ memcg_oom_recover(mc.from);
}
/* we must fixup refcnts and charges */
if (mc.moved_swap) {
Index: mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
===================================================================
--- mmotm-2.6.34-Mar9.orig/Documentation/cgroups/memory.txt
+++ mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
@@ -493,6 +493,8 @@ It's applicable for root and non-root cg
10. OOM Control
+The memory.oom_control file is used for OOM notification and other controls.
+
The memory controller implements an oom notifier using the cgroup
notification API (see cgroups.txt). It allows registering multiple oom
notification deliveries; each registered eventfd is signalled when an oom
happens.
@@ -505,6 +507,23 @@ To register a notifier, an application needs to
The application will be notified through the eventfd when an oom happens.
OOM notification doesn't work for the root cgroup.
+You can disable the oom-killer by writing "1" to the memory.oom_control
+file, like this:
+ # echo 1 > memory.oom_control
+
+This operation is only allowed for the top cgroup of a sub-hierarchy.
+If the oom-killer is disabled, tasks under the cgroup will hang/sleep
+on the memcg's oom waitqueue when they request accountable memory.
+To make them run again, you have to relax the memcg's oom situation:
+ * enlarge the limit
+ * kill some tasks.
+ * move some tasks to another group with account migration.
+Then, the stopped tasks will run again.
+
+Reading the file shows the current OOM status:
+ oom_kill_disable 0 or 1 (if 1, the oom-killer is disabled)
+ under_oom        0 or 1 (if 1, the memcg is under OOM and tasks may
+                  be stopped.)
11. TODO
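Putting this section and section 10 together, a management daemon for an
oom-kill-disabled memcg would typically block on the registered eventfd and
then perform one of the recovery actions listed above. A rough sketch,
assuming the eventfd was set up as in section 10 and that growing the limit
by 50% is an acceptable policy (both are assumptions for illustration):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Wait for one OOM event on efd, then enlarge memory.limit_in_bytes.
 * On the kernel side, the enlargement makes memcg_oom_recover() wake
 * the tasks sleeping in the oom waitqueue, so they retry their charges
 * against the new limit.
 */
static int handle_oom(int efd, const char *limit_file)
{
	uint64_t count;
	unsigned long long limit;
	char buf[32];
	ssize_t n;
	int fd;

	if (read(efd, &count, sizeof(count)) != sizeof(count))
		return -1;                      /* blocks until an oom fires */

	fd = open(limit_file, O_RDONLY);        /* read the current limit */
	if (fd < 0)
		return -1;
	n = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (n <= 0)
		return -1;
	buf[n] = '\0';
	limit = strtoull(buf, NULL, 10);

	fd = open(limit_file, O_WRONLY);        /* write the enlarged limit */
	if (fd < 0)
		return -1;
	n = snprintf(buf, sizeof(buf), "%llu", limit + limit / 2);
	n = write(fd, buf, n);
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(int argc, char **argv)
{
	/* argv[1]: eventfd inherited from the registration in section 10,
	 * argv[2]: path to the cgroup's memory.limit_in_bytes */
	if (argc != 3)
		return 1;
	while (handle_oom(atoi(argv[1]), argv[2]) == 0)
		;
	return 0;
}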
* Re: [RFC][PATCH 2/3] memcg: oom notifier
2010-03-11 7:57 ` [RFC][PATCH 2/3] memcg: oom notifier KAMEZAWA Hiroyuki
@ 2010-03-11 14:47 ` Kirill A. Shutemov
2010-03-11 23:54 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 10+ messages in thread
From: Kirill A. Shutemov @ 2010-03-11 14:47 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: linux-mm, linux-kernel, nishimura, balbir
On Thu, Mar 11, 2010 at 9:57 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
> Considering containers and other resource management software in userland,
> event notification of OOM in memcg should be implemented.
> memcg already has a "threshold" notifier which uses eventfd, so we can
> make use of that infrastructure for oom notification.
>
> This patch adds an oom notification eventfd callback for memcg. The usage
> is very similar to the threshold notifier, but the control file is
> memory.oom_control and no arguments other than the eventfd are required.
>
> % cgroup_event_notifier /cgroup/A/memory.oom_control dummy
> (About cgroup_event_notifier, see Documentation/cgroup/)
>
> TODO:
> - add a knob to disable oom-kill under a memcg.
> - add read/write functions to oom_control
>
> Changelog: 20100309
> - split out from the threshold functions. use a list rather than an array.
> - moved everything inside the mutex.
> Changelog: 20100304
> - renewed implementation.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Looks great! Two remarks below.
Reviewed-by: Kirill A. Shutemov <kirill@shutemov.name>
> ---
> Documentation/cgroups/memory.txt | 20 +++++++
> mm/memcontrol.c | 105 ++++++++++++++++++++++++++++++++++++---
> 2 files changed, 116 insertions(+), 9 deletions(-)
>
> Index: mmotm-2.6.34-Mar9/mm/memcontrol.c
> ===================================================================
> --- mmotm-2.6.34-Mar9.orig/mm/memcontrol.c
> +++ mmotm-2.6.34-Mar9/mm/memcontrol.c
> @@ -149,6 +149,7 @@ struct mem_cgroup_threshold {
> u64 threshold;
> };
>
> +/* For threshold */
> struct mem_cgroup_threshold_ary {
> /* An array index points to threshold just below usage. */
> atomic_t current_threshold;
> @@ -157,8 +158,14 @@ struct mem_cgroup_threshold_ary {
> /* Array of thresholds */
> struct mem_cgroup_threshold entries[0];
> };
> +/* for OOM */
> +struct mem_cgroup_eventfd_list {
> + struct list_head list;
> + struct eventfd_ctx *eventfd;
> +};
>
> static void mem_cgroup_threshold(struct mem_cgroup *mem);
> +static void mem_cgroup_oom_notify(struct mem_cgroup *mem);
>
> /*
> * The memory controller data structure. The memory controller controls both
> @@ -220,6 +227,9 @@ struct mem_cgroup {
> /* thresholds for mem+swap usage. RCU-protected */
> struct mem_cgroup_threshold_ary *memsw_thresholds;
>
> + /* For oom notifier event fd */
> + struct list_head oom_notify;
> +
> /*
> * Should we move charges of a task when a task is moved into this
> * mem_cgroup ? And what type of charges should we move ?
> @@ -282,9 +292,12 @@ enum charge_type {
> /* for encoding cft->private value on file */
> #define _MEM (0)
> #define _MEMSWAP (1)
> +#define _OOM_TYPE (2)
> #define MEMFILE_PRIVATE(x, val) (((x) << 16) | (val))
> #define MEMFILE_TYPE(val) (((val) >> 16) & 0xffff)
> #define MEMFILE_ATTR(val) ((val) & 0xffff)
> +/* Used for OOM notifier */
> +#define OOM_CONTROL (0)
>
> /*
> * Reclaim flags for mem_cgroup_hierarchical_reclaim
> @@ -1351,6 +1364,8 @@ bool mem_cgroup_handle_oom(struct mem_cg
> */
> if (!locked)
> prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
> + else
> + mem_cgroup_oom_notify(mem);
> mutex_unlock(&memcg_oom_mutex);
>
> if (locked)
> @@ -3398,8 +3413,22 @@ static int compare_thresholds(const void
> return _a->threshold - _b->threshold;
> }
>
> -static int mem_cgroup_register_event(struct cgroup *cgrp, struct cftype *cft,
> - struct eventfd_ctx *eventfd, const char *args)
> +static int mem_cgroup_oom_notify_cb(struct mem_cgroup *mem, void *data)
> +{
> + struct mem_cgroup_eventfd_list *ev;
> +
> + list_for_each_entry(ev, &mem->oom_notify, list)
> + eventfd_signal(ev->eventfd, 1);
> + return 0;
> +}
> +
> +static void mem_cgroup_oom_notify(struct mem_cgroup *mem)
> +{
> + mem_cgroup_walk_tree(mem, NULL, mem_cgroup_oom_notify_cb);
> +}
> +
> +static int mem_cgroup_usage_register_event(struct cgroup *cgrp,
> + struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
> {
> struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
> @@ -3483,8 +3512,8 @@ unlock:
> return ret;
> }
>
> -static int mem_cgroup_unregister_event(struct cgroup *cgrp, struct cftype *cft,
> - struct eventfd_ctx *eventfd)
> +static int mem_cgroup_usage_unregister_event(struct cgroup *cgrp,
> + struct cftype *cft, struct eventfd_ctx *eventfd)
> {
> struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
> @@ -3568,13 +3597,66 @@ unlock:
> return ret;
> }
>
> +static int mem_cgroup_oom_register_event(struct cgroup *cgrp,
> + struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
> +{
> + struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> + struct mem_cgroup_eventfd_list *event;
> + int type = MEMFILE_TYPE(cft->private);
> + int ret = -ENOMEM;
> +
> + BUG_ON(type != _OOM_TYPE);
> +
> + mutex_lock(&memcg_oom_mutex);
> +
> + /* Allocate memory for new array of thresholds */
Irrelevant comment?
> + event = kmalloc(sizeof(*event), GFP_KERNEL);
> + if (!event)
> + goto unlock;
> + /* Add new threshold */
Ditto.
> + event->eventfd = eventfd;
> + list_add(&event->list, &memcg->oom_notify);
> +
> + /* already in OOM ? */
> + if (atomic_read(&memcg->oom_lock))
> + eventfd_signal(eventfd, 1);
> + ret = 0;
> +unlock:
> + mutex_unlock(&memcg_oom_mutex);
> +
> + return ret;
> +}
> +
> +static int mem_cgroup_oom_unregister_event(struct cgroup *cgrp,
> + struct cftype *cft, struct eventfd_ctx *eventfd)
> +{
> + struct mem_cgroup *mem = mem_cgroup_from_cont(cgrp);
> + struct mem_cgroup_eventfd_list *ev, *tmp;
> + int type = MEMFILE_TYPE(cft->private);
> +
> + BUG_ON(type != _OOM_TYPE);
> +
> + mutex_lock(&memcg_oom_mutex);
> +
> + list_for_each_entry_safe(ev, tmp, &mem->oom_notify, list) {
> + if (ev->eventfd == eventfd) {
> + list_del(&ev->list);
> + kfree(ev);
> + }
> + }
> +
> + mutex_unlock(&memcg_oom_mutex);
> +
> + return 0;
> +}
> +
> static struct cftype mem_cgroup_files[] = {
> {
> .name = "usage_in_bytes",
> .private = MEMFILE_PRIVATE(_MEM, RES_USAGE),
> .read_u64 = mem_cgroup_read,
> - .register_event = mem_cgroup_register_event,
> - .unregister_event = mem_cgroup_unregister_event,
> + .register_event = mem_cgroup_usage_register_event,
> + .unregister_event = mem_cgroup_usage_unregister_event,
> },
> {
> .name = "max_usage_in_bytes",
> @@ -3623,6 +3705,12 @@ static struct cftype mem_cgroup_files[]
> .read_u64 = mem_cgroup_move_charge_read,
> .write_u64 = mem_cgroup_move_charge_write,
> },
> + {
> + .name = "oom_control",
> + .register_event = mem_cgroup_oom_register_event,
> + .unregister_event = mem_cgroup_oom_unregister_event,
> + .private = MEMFILE_PRIVATE(_OOM_TYPE, OOM_CONTROL),
> + },
> };
>
> #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
> @@ -3631,8 +3719,8 @@ static struct cftype memsw_cgroup_files[
> .name = "memsw.usage_in_bytes",
> .private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
> .read_u64 = mem_cgroup_read,
> - .register_event = mem_cgroup_register_event,
> - .unregister_event = mem_cgroup_unregister_event,
> + .register_event = mem_cgroup_usage_register_event,
> + .unregister_event = mem_cgroup_usage_unregister_event,
> },
> {
> .name = "memsw.max_usage_in_bytes",
> @@ -3876,6 +3964,7 @@ mem_cgroup_create(struct cgroup_subsys *
> }
> mem->last_scanned_child = 0;
> spin_lock_init(&mem->reclaim_param_lock);
> + INIT_LIST_HEAD(&mem->oom_notify);
>
> if (parent)
> mem->swappiness = get_swappiness(parent);
> Index: mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
> ===================================================================
> --- mmotm-2.6.34-Mar9.orig/Documentation/cgroups/memory.txt
> +++ mmotm-2.6.34-Mar9/Documentation/cgroups/memory.txt
> @@ -184,6 +184,9 @@ limits on the root cgroup.
>
> Note2: When panic_on_oom is set to "2", the whole system will panic.
>
> +When an oom event notifier is registered, an event will be delivered.
> +(See the oom_control section.)
> +
> 2. Locking
>
> The memory controller uses the following hierarchy
> @@ -488,7 +491,22 @@ threshold in any direction.
>
> It's applicable for root and non-root cgroup.
>
> -10. TODO
> +10. OOM Control
> +
> +The memory controller implements an oom notifier using the cgroup
> +notification API (see cgroups.txt). It allows registering multiple oom
> +notification deliveries; each registered eventfd is signalled when an oom
> +happens.
> +
> +To register a notifier, an application needs to:
> + - create an eventfd using eventfd(2)
> + - open the memory.oom_control file
> + - write a string like "<event_fd> <fd of memory.oom_control>" to
> +   cgroup.event_control
> +
> +The application will be notified through the eventfd when an oom happens.
> +OOM notification doesn't work for the root cgroup.
> +
> +
> +11. TODO
>
> 1. Add support for accounting huge pages (as a separate controller)
> 2. Make per-cgroup scanner reclaim not-shared pages first
>
>
* Re: [RFC][PATCH 2/3] memcg: oom notifier
2010-03-11 14:47 ` Kirill A. Shutemov
@ 2010-03-11 23:54 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-11 23:54 UTC (permalink / raw)
To: Kirill A. Shutemov; +Cc: linux-mm, linux-kernel, nishimura, balbir
Thank you.
On Thu, 11 Mar 2010 16:47:00 +0200
"Kirill A. Shutemov" <kirill@shutemov.name> wrote:
> On Thu, Mar 11, 2010 at 9:57 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > /*
> > * Should we move charges of a task when a task is moved into this
> > * mem_cgroup ? And what type of charges should we move ?
> > @@ -282,9 +292,12 @@ enum charge_type {
> > /* for encoding cft->private value on file */
> > #define _MEM (0)
> > #define _MEMSWAP (1)
> > +#define _OOM_TYPE (2)
> > #define MEMFILE_PRIVATE(x, val) (((x) << 16) | (val))
> > #define MEMFILE_TYPE(val) (((val) >> 16) & 0xffff)
> > #define MEMFILE_ATTR(val) ((val) & 0xffff)
> > +/* Used for OOM notifier */
> > +#define OOM_CONTROL (0)
> >
> > /*
> > * Reclaim flags for mem_cgroup_hierarchical_reclaim
> > @@ -1351,6 +1364,8 @@ bool mem_cgroup_handle_oom(struct mem_cg
> > */
> > if (!locked)
> > prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
> > + else
> > + mem_cgroup_oom_notify(mem);
> > mutex_unlock(&memcg_oom_mutex);
> >
> > if (locked)
> > @@ -3398,8 +3413,22 @@ static int compare_thresholds(const void
> > return _a->threshold - _b->threshold;
> > }
> >
> > -static int mem_cgroup_register_event(struct cgroup *cgrp, struct cftype *cft,
> > - struct eventfd_ctx *eventfd, const char *args)
> > +static int mem_cgroup_oom_notify_cb(struct mem_cgroup *mem, void *data)
> > +{
> > + struct mem_cgroup_eventfd_list *ev;
> > +
> > + list_for_each_entry(ev, &mem->oom_notify, list)
> > + eventfd_signal(ev->eventfd, 1);
> > + return 0;
> > +}
> > +
> > +static void mem_cgroup_oom_notify(struct mem_cgroup *mem)
> > +{
> > + mem_cgroup_walk_tree(mem, NULL, mem_cgroup_oom_notify_cb);
> > +}
> > +
> > +static int mem_cgroup_usage_register_event(struct cgroup *cgrp,
> > + struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
> > {
> > struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> > struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
> > @@ -3483,8 +3512,8 @@ unlock:
> > return ret;
> > }
> >
> > -static int mem_cgroup_unregister_event(struct cgroup *cgrp, struct cftype *cft,
> > - struct eventfd_ctx *eventfd)
> > +static int mem_cgroup_usage_unregister_event(struct cgroup *cgrp,
> > + struct cftype *cft, struct eventfd_ctx *eventfd)
> > {
> > struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> > struct mem_cgroup_threshold_ary *thresholds, *thresholds_new;
> > @@ -3568,13 +3597,66 @@ unlock:
> > return ret;
> > }
> >
> > +static int mem_cgroup_oom_register_event(struct cgroup *cgrp,
> > + struct cftype *cft, struct eventfd_ctx *eventfd, const char *args)
> > +{
> > + struct mem_cgroup *memcg = mem_cgroup_from_cont(cgrp);
> > + struct mem_cgroup_eventfd_list *event;
> > + int type = MEMFILE_TYPE(cft->private);
> > + int ret = -ENOMEM;
> > +
> > + BUG_ON(type != _OOM_TYPE);
> > +
> > + mutex_lock(&memcg_oom_mutex);
> > +
> > + /* Allocate memory for new array of thresholds */
>
> Irrelevant comment?
>
> > + event = kmalloc(sizeof(*event), GFP_KERNEL);
> > + if (!event)
> > + goto unlock;
> > + /* Add new threshold */
>
> Ditto.
>
Ah... sorry for the garbage. I'll clean these up.
Thanks,
-Kame
* Re: [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue
2010-03-11 7:55 ` [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue KAMEZAWA Hiroyuki
@ 2010-03-12 2:30 ` Daisuke Nishimura
2010-03-12 2:38 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 10+ messages in thread
From: Daisuke Nishimura @ 2010-03-12 2:30 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: linux-mm, linux-kernel, balbir, kirill, Daisuke Nishimura
On Thu, 11 Mar 2010 16:55:59 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
> memcg's oom waitqueue is a system-wide wait_queue (for handling hierarchy),
> so it's better to add a custom wake function and do the filtering in the
> wake-up path.
>
> This patch adds a filtering feature for waking up oom-waiters.
> Hierarchy is properly handled.
>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ---
> mm/memcontrol.c | 61 ++++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 44 insertions(+), 17 deletions(-)
>
> Index: mmotm-2.6.34-Mar9/mm/memcontrol.c
> ===================================================================
> --- mmotm-2.6.34-Mar9.orig/mm/memcontrol.c
> +++ mmotm-2.6.34-Mar9/mm/memcontrol.c
> @@ -1293,14 +1293,54 @@ static void mem_cgroup_oom_unlock(struct
> static DEFINE_MUTEX(memcg_oom_mutex);
> static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
>
> +struct oom_wait_info {
> + struct mem_cgroup *mem;
> + wait_queue_t wait;
> +};
> +
> +static int memcg_oom_wake_function(wait_queue_t *wait,
> + unsigned mode, int sync, void *arg)
> +{
> + struct mem_cgroup *wake_mem = (struct mem_cgroup *)arg;
> + struct oom_wait_info *oom_wait_info;
> +
> + /* both of oom_wait_info->mem and wake_mem are stable under us */
> + oom_wait_info = container_of(wait, struct oom_wait_info, wait);
> +
> + if (oom_wait_info->mem == wake_mem)
> + goto wakeup;
> + /* if no hierarchy, no match */
> + if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
> + return 0;
> + /* check hierarchy */
> + if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
> + !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
> + return 0;
> +
I think these conditions are wrong.
This can wake up tasks in oom_wait_info->mem when:
00/ <- wake_mem: use_hierarchy == false
aa/ <- oom_wait_info->mem: use_hierarchy == true;
It should be:
if ((oom_wait_info->mem->use_hierarchy &&
     css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css)) ||
    (wake_mem->use_hierarchy &&
     css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css)))
	goto wakeup;
return 0;
But I like the goal of this patch.
Thanks,
Daisuke Nishimura.
> +wakeup:
> + return autoremove_wake_function(wait, mode, sync, arg);
> +}
> +
> +static void memcg_wakeup_oom(struct mem_cgroup *mem)
> +{
> + /* for filtering, pass "mem" as argument. */
> + __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, mem);
> +}
> +
> /*
> * try to call OOM killer. returns false if we should exit memory-reclaim loop.
> */
> bool mem_cgroup_handle_oom(struct mem_cgroup *mem, gfp_t mask)
> {
> - DEFINE_WAIT(wait);
> + struct oom_wait_info owait;
> bool locked;
>
> + owait.mem = mem;
> + owait.wait.flags = 0;
> + owait.wait.func = memcg_oom_wake_function;
> + owait.wait.private = current;
> + INIT_LIST_HEAD(&owait.wait.task_list);
> +
> /* At first, try to OOM lock hierarchy under mem.*/
> mutex_lock(&memcg_oom_mutex);
> locked = mem_cgroup_oom_lock(mem);
> @@ -1310,31 +1350,18 @@ bool mem_cgroup_handle_oom(struct mem_cg
> * under OOM is always welcomed, use TASK_KILLABLE here.
> */
> if (!locked)
> - prepare_to_wait(&memcg_oom_waitq, &wait, TASK_KILLABLE);
> + prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
> mutex_unlock(&memcg_oom_mutex);
>
> if (locked)
> mem_cgroup_out_of_memory(mem, mask);
> else {
> schedule();
> - finish_wait(&memcg_oom_waitq, &wait);
> + finish_wait(&memcg_oom_waitq, &owait.wait);
> }
> mutex_lock(&memcg_oom_mutex);
> mem_cgroup_oom_unlock(mem);
> - /*
> - * Here, we use global waitq .....more fine grained waitq ?
> - * Assume following hierarchy.
> - * A/
> - * 01
> - * 02
> - * assume OOM happens both in A and 01 at the same time. Tthey are
> - * mutually exclusive by lock. (kill in 01 helps A.)
> - * When we use per memcg waitq, we have to wake up waiters on A and 02
> - * in addtion to waiters on 01. We use global waitq for avoiding mess.
> - * It will not be a big problem.
> - * (And a task may be moved to other groups while it's waiting for OOM.)
> - */
> - wake_up_all(&memcg_oom_waitq);
> + memcg_wakeup_oom(mem);
> mutex_unlock(&memcg_oom_mutex);
>
> if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
>
* Re: [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue
2010-03-12 2:30 ` Daisuke Nishimura
@ 2010-03-12 2:38 ` KAMEZAWA Hiroyuki
2010-03-12 2:54 ` Daisuke Nishimura
0 siblings, 1 reply; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-12 2:38 UTC (permalink / raw)
To: Daisuke Nishimura; +Cc: linux-mm, linux-kernel, balbir, kirill
On Fri, 12 Mar 2010 11:30:28 +0900
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
> On Thu, 11 Mar 2010 16:55:59 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > + /* check hierarchy */
> > + if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
> > + !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
> > + return 0;
> > +
> I think these conditions are wrong.
> This can wake up tasks in oom_wait_info->mem when:
>
> 00/ <- wake_mem: use_hierarchy == false
> aa/ <- oom_wait_info->mem: use_hierarchy == true;
>
Hmm. I think this line bails out of the case above.
> + if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
> + return 0;
No ?
Thanks,
-Kame
> It should be:
>
> if((oom_wait_info->mem->use_hierarchy &&
> css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css)) ||
> (wake_mem->use_hierarchy &&
> css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css)))
> goto wakeup;
>
> return 0;
>
> But I like the goal of this patch.
>
> Thanks,
> Daisuke Nishimura.
>
> > +wakeup:
> > + return autoremove_wake_function(wait, mode, sync, arg);
> > +}
> > +
> > +static void memcg_wakeup_oom(struct mem_cgroup *mem)
> > +{
> > + /* for filtering, pass "mem" as argument. */
> > + __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, mem);
> > +}
> > +
> > /*
> > * try to call OOM killer. returns false if we should exit memory-reclaim loop.
> > */
> > bool mem_cgroup_handle_oom(struct mem_cgroup *mem, gfp_t mask)
> > {
> > - DEFINE_WAIT(wait);
> > + struct oom_wait_info owait;
> > bool locked;
> >
> > + owait.mem = mem;
> > + owait.wait.flags = 0;
> > + owait.wait.func = memcg_oom_wake_function;
> > + owait.wait.private = current;
> > + INIT_LIST_HEAD(&owait.wait.task_list);
> > +
> > /* At first, try to OOM lock hierarchy under mem.*/
> > mutex_lock(&memcg_oom_mutex);
> > locked = mem_cgroup_oom_lock(mem);
> > @@ -1310,31 +1350,18 @@ bool mem_cgroup_handle_oom(struct mem_cg
> > * under OOM is always welcomed, use TASK_KILLABLE here.
> > */
> > if (!locked)
> > - prepare_to_wait(&memcg_oom_waitq, &wait, TASK_KILLABLE);
> > + prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
> > mutex_unlock(&memcg_oom_mutex);
> >
> > if (locked)
> > mem_cgroup_out_of_memory(mem, mask);
> > else {
> > schedule();
> > - finish_wait(&memcg_oom_waitq, &wait);
> > + finish_wait(&memcg_oom_waitq, &owait.wait);
> > }
> > mutex_lock(&memcg_oom_mutex);
> > mem_cgroup_oom_unlock(mem);
> > - /*
> > - * Here, we use global waitq .....more fine grained waitq ?
> > - * Assume following hierarchy.
> > - * A/
> > - * 01
> > - * 02
> > - * assume OOM happens both in A and 01 at the same time. Tthey are
> > - * mutually exclusive by lock. (kill in 01 helps A.)
> > - * When we use per memcg waitq, we have to wake up waiters on A and 02
> > - * in addtion to waiters on 01. We use global waitq for avoiding mess.
> > - * It will not be a big problem.
> > - * (And a task may be moved to other groups while it's waiting for OOM.)
> > - */
> > - wake_up_all(&memcg_oom_waitq);
> > + memcg_wakeup_oom(mem);
> > mutex_unlock(&memcg_oom_mutex);
> >
> > if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
> >
>
* Re: [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue
2010-03-12 2:38 ` KAMEZAWA Hiroyuki
@ 2010-03-12 2:54 ` Daisuke Nishimura
2010-03-12 3:03 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 10+ messages in thread
From: Daisuke Nishimura @ 2010-03-12 2:54 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: linux-mm, linux-kernel, balbir, kirill, Daisuke Nishimura
On Fri, 12 Mar 2010 11:38:38 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Fri, 12 Mar 2010 11:30:28 +0900
> Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
>
> > On Thu, 11 Mar 2010 16:55:59 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > + /* check hierarchy */
> > > + if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
> > > + !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
> > > + return 0;
> > > +
> > I think these conditions are wrong.
> > This can wake up tasks in oom_wait_info->mem when:
> >
> > 00/ <- wake_mem: use_hierarchy == false
> > aa/ <- oom_wait_info->mem: use_hierarchy == true;
> >
> Hmm. I think this line bails out above case.
>
> > + if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
> > + return 0;
>
> No ?
>
Oops! You're right. I misunderstood the code.
Then, this patch looks good to me.
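A toy, runnable model of the patch's predicate shows the same thing; the
struct, the names, and the hard-coded parent/child relation below are
illustrative only, not from the patch:

#include <stdbool.h>
#include <stdio.h>

/* toy model: ancestry is given directly instead of via css_is_ancestor() */
struct memcg { const char *name; bool use_hierarchy; };

/* in the example discussed above, 00/ is the parent of aa/ */
static bool is_ancestor(const struct memcg *a, const struct memcg *b)
{
	return a->name[0] == '0' && b->name[0] == 'a';
}

static bool should_wake(const struct memcg *waiter, const struct memcg *waker)
{
	if (waiter == waker)
		return true;
	/* the patch's "if no hierarchy, no match" check: both must use it */
	if (!waiter->use_hierarchy || !waker->use_hierarchy)
		return false;
	return is_ancestor(waiter, waker) || is_ancestor(waker, waiter);
}

int main(void)
{
	struct memcg root = { "00/", false };   /* wake_mem, no hierarchy */
	struct memcg child = { "aa/", true };   /* oom_wait_info->mem */

	/* prints "not woken": the use_hierarchy check rejects this case */
	printf("aa/ waiter on wake from 00/: %s\n",
	       should_wake(&child, &root) ? "woken" : "not woken");
	return 0;
}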
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Thanks,
Daisuke Nishimura.
* Re: [RFC][PATCH 1/3] memcg: wake up filter in oom waitqueue
2010-03-12 2:54 ` Daisuke Nishimura
@ 2010-03-12 3:03 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 10+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-03-12 3:03 UTC (permalink / raw)
To: Daisuke Nishimura; +Cc: linux-mm, linux-kernel, balbir, kirill
On Fri, 12 Mar 2010 11:54:29 +0900
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
> On Fri, 12 Mar 2010 11:38:38 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > On Fri, 12 Mar 2010 11:30:28 +0900
> > Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
> >
> > > On Thu, 11 Mar 2010 16:55:59 +0900, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > > > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > > + /* check hierarchy */
> > > > + if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
> > > > + !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
> > > > + return 0;
> > > > +
> > > I think these conditions are wrong.
> > > This can wake up tasks in oom_wait_info->mem when:
> > >
> > > 00/ <- wake_mem: use_hierarchy == false
> > > aa/ <- oom_wait_info->mem: use_hierarchy == true;
> > >
> > Hmm. I think this line bails out above case.
> >
> > > + if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
> > > + return 0;
> >
> > No ?
> >
> Oops! you're right. I misunderstood the code.
>
> Then, this patch looks good to me.
>
> Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
>
Thank you very much!
-Kame