* [PATCH -mmotm 0/4] cleanups/fixes for memory cgroup
From: Daisuke Nishimura @ 2008-12-08 1:58 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
Hi.
These are some cleanup and bug-fix patches I currently have for the memory cgroup.
Patches:
[1/4] memcg: don't trigger oom at page migration
[2/4] memcg: remove mem_cgroup_try_charge
[3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach
[4/4] memcg: change try_to_free_pages to hierarchical_reclaim
There is no special meaning to the patch order, except that patch 2 depends on patch 1.
Thanks,
Daisuke Nishimura.
* [PATCH -mmotm 1/4] memcg: don't trigger oom at page migration
From: Daisuke Nishimura @ 2008-12-08 2:02 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
I think triggering the OOM killer at mem_cgroup_prepare_migration() is a bit of overkill.
Returning -ENOMEM is enough for mem_cgroup_prepare_migration();
the caller handles that case anyway.
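In other words, the change just flips the 'oom' argument (a sketch based on the hunk below and on the wrapper removed in patch 2/4, which simply passed true):

    /* before: via the wrapper, which always allows the OOM killer */
    ret = mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem);
            /* == __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, true) */

    /* after: oom == false, so a failed charge simply returns -ENOMEM */
    ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, false);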
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
---
mm/memcontrol.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a4854a7..0683459 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1331,7 +1331,7 @@ int mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr)
unlock_page_cgroup(pc);
if (mem) {
- ret = mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem);
+ ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, false);
css_put(&mem->css);
}
*ptr = mem;
* [PATCH -mmotm 2/4] memcg: remove mem_cgroup_try_charge
From: Daisuke Nishimura @ 2008-12-08 2:03 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
After the previous patch, mem_cgroup_try_charge() is no longer used by anyone,
so we can remove it.
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
include/linux/memcontrol.h | 8 --------
mm/memcontrol.c | 21 +--------------------
2 files changed, 1 insertions(+), 28 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8752052..74c4009 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -40,8 +40,6 @@ struct mm_struct;
extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
/* for swap handling */
-extern int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **ptr);
extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t mask, struct mem_cgroup **ptr);
extern void mem_cgroup_commit_charge_swapin(struct page *page,
@@ -135,12 +133,6 @@ static inline int mem_cgroup_cache_charge(struct page *page,
return 0;
}
-static inline int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **ptr)
-{
- return 0;
-}
-
static inline int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t gfp_mask, struct mem_cgroup **ptr)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0683459..9877b03 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -809,27 +809,8 @@ nomem:
return -ENOMEM;
}
-/**
- * mem_cgroup_try_charge - get charge of PAGE_SIZE.
- * @mm: an mm_struct which is charged against. (when *memcg is NULL)
- * @gfp_mask: gfp_mask for reclaim.
- * @memcg: a pointer to memory cgroup which is charged against.
- *
- * charge against memory cgroup pointed by *memcg. if *memcg == NULL, estimated
- * memory cgroup from @mm is got and stored in *memcg.
- *
- * Returns 0 if success. -ENOMEM at failure.
- * This call can invoke OOM-Killer.
- */
-
-int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t mask, struct mem_cgroup **memcg)
-{
- return __mem_cgroup_try_charge(mm, mask, memcg, true);
-}
-
/*
- * commit a charge got by mem_cgroup_try_charge() and makes page_cgroup to be
+ * commit a charge got by __mem_cgroup_try_charge() and makes page_cgroup to be
* USED state. If already USED, uncharge and return.
*/
* [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach
From: Daisuke Nishimura @ 2008-12-08 2:05 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
OTOH, the page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
which may eventually call mem_cgroup_out_of_memory(), and mem_cgroup_out_of_memory()
calls cgroup_lock().
This means cgroup_lock() can be called under down_read(mm->mmap_sem).
If these two paths race, a deadlock can happen.
This patch avoids the deadlock by:
- removing cgroup_lock() from mem_cgroup_out_of_memory().
- defining a new mutex (memcg_tasklist) and using it to serialize mem_cgroup_move_task()
  (the ->attach handler of the memory cgroup) against mem_cgroup_out_of_memory().
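To make the two lock orderings explicit (an illustrative comment only, not code from the patch):

    /*
     * attach path                           page fault path
     * -----------                           ---------------
     * cgroup_mutex (held by cgroup core)    down_read(&mm->mmap_sem)
     *   cpuset_attach()                       mem_cgroup_try_charge_xxx()
     *     mpol_rebind_mm()                      mem_cgroup_out_of_memory()
     *       down_write(&mm->mmap_sem)             cgroup_lock()  [cgroup_mutex]
     *
     * One side takes cgroup_mutex and then waits for mmap_sem, the other
     * holds mmap_sem and then waits for cgroup_mutex; if the two race,
     * each blocks on the lock the other holds (ABBA deadlock).
     */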
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu,com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
---
mm/memcontrol.c | 5 +++++
mm/oom_kill.c | 2 --
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9877b03..fec4fc3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -51,6 +51,7 @@ static int really_do_swap_account __initdata = 1; /* for remember boot option*/
#define do_swap_account (0)
#endif
+static DEFINE_MUTEX(memcg_tasklist); /* can be hold under cgroup_mutex */
/*
* Statistics for memory cgroup.
@@ -797,7 +798,9 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
if (!nr_retries--) {
if (oom) {
+ mutex_lock(&memcg_tasklist);
mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
+ mutex_unlock(&memcg_tasklist);
mem_over_limit->last_oom_jiffies = jiffies;
}
goto nomem;
@@ -2173,10 +2176,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
struct cgroup *old_cont,
struct task_struct *p)
{
+ mutex_lock(&memcg_tasklist);
/*
* FIXME: It's better to move charges of this process from old
* memcg to new memcg. But it's just on TODO-List now.
*/
+ mutex_unlock(&memcg_tasklist);
}
struct cgroup_subsys mem_cgroup_subsys = {
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index fd150e3..40ba050 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -429,7 +429,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
unsigned long points = 0;
struct task_struct *p;
- cgroup_lock();
read_lock(&tasklist_lock);
retry:
p = select_bad_process(&points, mem);
@@ -444,7 +443,6 @@ retry:
goto retry;
out:
read_unlock(&tasklist_lock);
- cgroup_unlock();
}
#endif
* [PATCH -mmotm 4/4] memcg: change try_to_free_pages to hierarchical_reclaim
From: Daisuke Nishimura @ 2008-12-08 2:08 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
mem_cgroup_hierarchical_reclaim() now works properly even when !use_hierarchy
(thanks to memcg-hierarchy-avoid-unnecessary-reclaim.patch), so it should be used
instead of try_to_free_mem_cgroup_pages() in most cases.
The only exception is force_empty, where the group has no children.
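As a sketch, every converted caller goes from the four-argument direct reclaim call to the three-argument hierarchical one (the 'noswap' name is assumed here; in the hunks below the flag is passed literally as true/false):

    /* before */
    progress = try_to_free_mem_cgroup_pages(mem, gfp_mask, noswap,
                                            get_swappiness(mem));

    /* after: reclaim from the whole hierarchy under 'mem', or from 'mem'
     * alone when !use_hierarchy */
    progress = mem_cgroup_hierarchical_reclaim(mem, gfp_mask, noswap);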
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
---
mm/memcontrol.c | 12 ++++--------
1 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fec4fc3..b2b5c57 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1400,8 +1400,7 @@ int mem_cgroup_shrink_usage(struct mm_struct *mm, gfp_t gfp_mask)
rcu_read_unlock();
do {
- progress = try_to_free_mem_cgroup_pages(mem, gfp_mask, true,
- get_swappiness(mem));
+ progress = mem_cgroup_hierarchical_reclaim(mem, gfp_mask, true);
progress += mem_cgroup_check_under_limit(mem);
} while (!progress && --retry);
@@ -1468,10 +1467,8 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
if (!ret)
break;
- progress = try_to_free_mem_cgroup_pages(memcg,
- GFP_KERNEL,
- false,
- get_swappiness(memcg));
+ progress = mem_cgroup_hierarchical_reclaim(memcg, GFP_KERNEL,
+ false);
if (!progress) retry_count--;
}
@@ -1515,8 +1512,7 @@ int mem_cgroup_resize_memsw_limit(struct mem_cgroup *memcg,
break;
oldusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
- try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL, true,
- get_swappiness(memcg));
+ mem_cgroup_hierarchical_reclaim(memcg, GFP_KERNEL, true);
curusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
if (curusage >= oldusage)
retry_count--;
* Re: [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach
From: Daisuke Nishimura @ 2008-12-08 2:40 UTC
To: Andrew Morton
Cc: LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki, Pavel Emelyanov,
Li Zefan, Paul Menage, nishimura
On Mon, 8 Dec 2008 11:05:11 +0900, Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> wrote:
> mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
> This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
>
> OTOH, the page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
> which may eventually call mem_cgroup_out_of_memory(), and mem_cgroup_out_of_memory()
> calls cgroup_lock().
> This means cgroup_lock() can be called under down_read(mm->mmap_sem).
>
> If these two paths race, a deadlock can happen.
>
> This patch avoids the deadlock by:
> - removing cgroup_lock() from mem_cgroup_out_of_memory().
> - defining a new mutex (memcg_tasklist) and using it to serialize mem_cgroup_move_task()
>   (the ->attach handler of the memory cgroup) against mem_cgroup_out_of_memory().
>
> Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu,com>
Ooops, Kamezawa-san's address was invalid...
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Sorry.
Daisuke Nishimura.
* Re: [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach
From: Paul Menage @ 2008-12-09 6:41 UTC
To: Daisuke Nishimura
Cc: Andrew Morton, LKML, linux-mm, Balbir Singh, KAMEZAWA Hiroyuki,
Pavel Emelyanov, Li Zefan
On Sun, Dec 7, 2008 at 6:05 PM, Daisuke Nishimura
<nishimura@mxp.nes.nec.co.jp> wrote:
> mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
> This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
>
> OTOH, the page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
> which may eventually call mem_cgroup_out_of_memory(), and mem_cgroup_out_of_memory()
> calls cgroup_lock().
> This means cgroup_lock() can be called under down_read(mm->mmap_sem).
We should probably try to get cgroup_lock() out of the cpuset code
that calls mpol_rebind_mm() as well.
Paul