* [PATCH 0/4] memcg: four fixes to current next
@ 2011-12-29 0:17 Hugh Dickins
2011-12-29 0:20 ` [PATCH 1/4] memcg: fix split_huge_page_refcounts Hugh Dickins
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Hugh Dickins @ 2011-12-29 0:17 UTC
To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, linux-mm
Here are four memcg fixes to mmotm/next, based on 3.2.0-rc6-next-20111222
minus Mel's 11/11 "mm: isolate pages for immediate reclaim on their own LRU"
and its two corrections - as I already reported, that soon generates memcg
accounting problems of a similar kind to those fixed in 1/4 here.
[PATCH 1/4] memcg: fix split_huge_page_refcounts
[PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge
[PATCH 3/4] memcg: fix page migration to reset_owner
[PATCH 4/4] memcg: fix mem_cgroup_print_bad_page
mm/huge_memory.c | 10 ----------
mm/memcontrol.c | 33 ++++++---------------------------
mm/migrate.c | 2 ++
mm/swap.c | 29 +++++++++++++++++++----------
4 files changed, 27 insertions(+), 47 deletions(-)
Hugh
* [PATCH 1/4] memcg: fix split_huge_page_refcounts
2011-12-29 0:17 [PATCH 0/4] memcg: four fixes to current next Hugh Dickins
@ 2011-12-29 0:20 ` Hugh Dickins
2012-01-05 5:55 ` KAMEZAWA Hiroyuki
2011-12-29 0:21 ` [PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge Hugh Dickins
` (2 subsequent siblings)
3 siblings, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2011-12-29 0:20 UTC
To: Andrew Morton
Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
Andrea Arcangeli, Shaohua Li, David Rientjes, linux-mm
This patch started off as a cleanup: __split_huge_page_refcount() has to
cope with two scenarios, when the hugepage being split is already on LRU,
and when it is not; but why does it have to split that accounting across
three different sites? Consolidate it in lru_add_page_tail(), handling
evictable and unevictable alike, and use standard add_page_to_lru_list()
when accounting is needed (when the head is not yet on LRU).
But a recent regression in -next (I guess the removal of the PageCgroupAcctLRU
test from mem_cgroup_split_huge_fixup()) makes this now a necessary fix:
under load, the MEM_CGROUP_ZSTAT count was wrapping to a huge number,
messing up reclaim calculations and causing a freeze at rmdir of cgroup.
Add a VM_BUG_ON to mem_cgroup_lru_del_list() when we're about to wrap
that count - this has not been the only such incident. Document that
lru_add_page_tail() is for Transparent HugePages by #ifdef around it.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
I think this is a fix to
memcg: simplify LRU handling by new rule
but I've not tried applying immediately after that one,
just on top of next minus Mel's 11/11.
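For anyone who wants to see the arithmetic of that wrap outside the kernel,
here is a stand-alone sketch; it is not part of the patch, all names are
invented, and it assumes a 64-bit unsigned long and an HPAGE_PMD_NR of 512.
Subtracting more from the per-zone LRU count than it holds turns it into an
enormous number, which is the condition the new VM_BUG_ON in
mem_cgroup_lru_del_list() is there to catch before the subtraction happens.

#include <stdio.h>

#define NR_SUBPAGES 512				/* HPAGE_PMD_NR on x86_64 */

int main(void)
{
	/* per-zone LRU count for one memcg: only the head page was counted */
	unsigned long zstat = 1;
	unsigned long delta = NR_SUBPAGES - 1;

	/* roughly the check the patch adds before the subtraction */
	if (zstat < delta)
		fprintf(stderr, "would underflow: %lu - %lu\n", zstat, delta);

	/* without that check the unsigned count silently wraps... */
	zstat -= delta;

	/* ...to nearly 2^64, wrecking any reclaim calculation that reads it */
	printf("wrapped count: %lu\n", zstat);
	return 0;
}
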
mm/huge_memory.c | 10 ----------
mm/memcontrol.c | 12 ++----------
mm/swap.c | 29 +++++++++++++++++++----------
3 files changed, 21 insertions(+), 30 deletions(-)
--- mmotm.orig/mm/huge_memory.c 2011-12-22 02:53:31.884041564 -0800
+++ mmotm/mm/huge_memory.c 2011-12-28 12:53:23.416367861 -0800
@@ -1229,7 +1229,6 @@ static void __split_huge_page_refcount(s
{
int i;
struct zone *zone = page_zone(page);
- int zonestat;
int tail_count = 0;
/* prevent PageLRU to go away from under us, and freeze lru stats */
@@ -1317,15 +1316,6 @@ static void __split_huge_page_refcount(s
__dec_zone_page_state(page, NR_ANON_TRANSPARENT_HUGEPAGES);
__mod_zone_page_state(zone, NR_ANON_PAGES, HPAGE_PMD_NR);
- /*
- * A hugepage counts for HPAGE_PMD_NR pages on the LRU statistics,
- * so adjust those appropriately if this page is on the LRU.
- */
- if (PageLRU(page)) {
- zonestat = NR_LRU_BASE + page_lru(page);
- __mod_zone_page_state(zone, zonestat, -(HPAGE_PMD_NR-1));
- }
-
ClearPageCompound(page);
compound_unlock(page);
spin_unlock_irq(&zone->lru_lock);
--- mmotm.orig/mm/memcontrol.c 2011-12-22 02:53:31.892041564 -0800
+++ mmotm/mm/memcontrol.c 2011-12-28 12:53:23.420367847 -0800
@@ -1076,6 +1076,7 @@ void mem_cgroup_lru_del_list(struct page
VM_BUG_ON(!memcg);
mz = page_cgroup_zoneinfo(memcg, page);
/* huge page split is done under lru_lock. so, we have no races. */
+ VM_BUG_ON(MEM_CGROUP_ZSTAT(mz, lru) < (1 << compound_order(page)));
MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
}
@@ -2468,9 +2469,7 @@ static void __mem_cgroup_commit_charge(s
void mem_cgroup_split_huge_fixup(struct page *head)
{
struct page_cgroup *head_pc = lookup_page_cgroup(head);
- struct mem_cgroup_per_zone *mz;
struct page_cgroup *pc;
- enum lru_list lru;
int i;
if (mem_cgroup_disabled())
@@ -2481,15 +2480,8 @@ void mem_cgroup_split_huge_fixup(struct
smp_wmb();/* see __commit_charge() */
pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
}
- /*
- * Tail pages will be added to LRU.
- * We hold lru_lock,then,reduce counter directly.
- */
- lru = page_lru(head);
- mz = page_cgroup_zoneinfo(head_pc->mem_cgroup, head);
- MEM_CGROUP_ZSTAT(mz, lru) -= HPAGE_PMD_NR - 1;
}
-#endif
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/**
* mem_cgroup_move_account - move account of the page
--- mmotm.orig/mm/swap.c 2011-12-28 12:32:02.764338005 -0800
+++ mmotm/mm/swap.c 2011-12-28 12:53:23.420367847 -0800
@@ -650,6 +650,7 @@ void __pagevec_release(struct pagevec *p
EXPORT_SYMBOL(__pagevec_release);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* used by __split_huge_page_refcount() */
void lru_add_page_tail(struct zone* zone,
struct page *page, struct page *page_tail)
@@ -666,8 +667,6 @@ void lru_add_page_tail(struct zone* zone
SetPageLRU(page_tail);
if (page_evictable(page_tail, NULL)) {
- struct lruvec *lruvec;
-
if (PageActive(page)) {
SetPageActive(page_tail);
active = 1;
@@ -677,18 +676,28 @@ void lru_add_page_tail(struct zone* zone
lru = LRU_INACTIVE_ANON;
}
update_page_reclaim_stat(zone, page_tail, file, active);
- lruvec = mem_cgroup_lru_add_list(zone, page_tail, lru);
- if (likely(PageLRU(page)))
- list_add(&page_tail->lru, page->lru.prev);
- else
- list_add(&page_tail->lru, lruvec->lists[lru].prev);
- __mod_zone_page_state(zone, NR_LRU_BASE + lru,
- hpage_nr_pages(page_tail));
} else {
SetPageUnevictable(page_tail);
- add_page_to_lru_list(zone, page_tail, LRU_UNEVICTABLE);
+ lru = LRU_UNEVICTABLE;
+ }
+
+ if (likely(PageLRU(page)))
+ list_add_tail(&page_tail->lru, &page->lru);
+ else {
+ struct list_head *list_head;
+ /*
+ * Head page has not yet been counted, as an hpage,
+ * so we must account for each subpage individually.
+ *
+ * Use the standard add function to put page_tail on the list,
+ * but then correct its position so they all end up in order.
+ */
+ add_page_to_lru_list(zone, page_tail, lru);
+ list_head = page_tail->lru.prev;
+ list_move_tail(&page_tail->lru, list_head);
}
}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
static void ____pagevec_lru_add_fn(struct page *page, void *arg)
{
* [PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge
2011-12-29 0:17 [PATCH 0/4] memcg: four fixes to current next Hugh Dickins
2011-12-29 0:20 ` [PATCH 1/4] memcg: fix split_huge_page_refcounts Hugh Dickins
@ 2011-12-29 0:21 ` Hugh Dickins
2012-01-05 5:56 ` KAMEZAWA Hiroyuki
2011-12-29 0:23 ` [PATCH 3/4] memcg: fix page migration to reset_owner Hugh Dickins
2011-12-29 0:26 ` [PATCH 4/4] memcg: fix mem_cgroup_print_bad_page Hugh Dickins
3 siblings, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2011-12-29 0:21 UTC
To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, linux-mm
There is one way out of __mem_cgroup_try_charge() which claims success
but still leaves memcg NULL, causing oops thereafter: make sure that
it is set to root_mem_cgroup in this case.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
Fix to memcg: return -EINTR at bypassing try_charge()
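As an aside (not part of the patch): a compressed user-space sketch of the
bug's shape, with invented names. lookup() stands in for
mem_cgroup_from_task(), which may legitimately return NULL when mm->owner
has gone away; the point of the fix is that the bypass-charging early exit
must hand the caller a valid pointer, the root group, rather than reporting
success while leaving it NULL.

#include <stdio.h>

struct group { int is_root; };
static struct group root_group = { 1 };

/* stand-in for mem_cgroup_from_task(): may return NULL */
static struct group *lookup(void)
{
	return NULL;			/* e.g. mm->owner has already exited */
}

static int try_charge(struct group **out)
{
	struct group *g = lookup();

	if (!g)
		g = &root_group;	/* the fix: fall back before the early exit */
	if (g->is_root) {
		*out = g;		/* previously the caller could be left with NULL */
		return 0;		/* "success" */
	}
	/* ... the real charging path would go here ... */
	*out = g;
	return 0;
}

int main(void)
{
	struct group *g = NULL;

	try_charge(&g);
	printf("charged against %s group\n", g->is_root ? "root" : "some other");
	return 0;
}
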
mm/memcontrol.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- mmotm.orig/mm/memcontrol.c 2011-12-28 12:53:23.420367847 -0800
+++ mmotm/mm/memcontrol.c 2011-12-28 14:41:19.803018025 -0800
@@ -2263,7 +2263,9 @@ again:
* task-struct. So, mm->owner can be NULL.
*/
memcg = mem_cgroup_from_task(p);
- if (!memcg || mem_cgroup_is_root(memcg)) {
+ if (!memcg)
+ memcg = root_mem_cgroup;
+ if (mem_cgroup_is_root(memcg)) {
rcu_read_unlock();
goto done;
}
* [PATCH 3/4] memcg: fix page migration to reset_owner
2011-12-29 0:17 [PATCH 0/4] memcg: four fixes to current next Hugh Dickins
2011-12-29 0:20 ` [PATCH 1/4] memcg: fix split_huge_page_refcounts Hugh Dickins
2011-12-29 0:21 ` [PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge Hugh Dickins
@ 2011-12-29 0:23 ` Hugh Dickins
2012-01-05 6:00 ` KAMEZAWA Hiroyuki
2011-12-29 0:26 ` [PATCH 4/4] memcg: fix mem_cgroup_print_bad_page Hugh Dickins
3 siblings, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2011-12-29 0:23 UTC
To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, linux-mm
Usually, migration pages coming to unmap_and_move()'s putback_lru_page()
have been charged and have pc->mem_cgroup set; but there are several ways
in which a freshly allocated uncharged page can get there, oopsing when
added to LRU. Call mem_cgroup_reset_owner() immediately after allocating.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
Fix N to
memcg: clear pc->mem_cgorup if necessary.
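The shape of the problem in stand-alone form (not part of the patch; all
names invented): think of owner as pc->mem_cgroup, release() as
putback_lru_page(), and migrate_one() as the early-exit paths of
unmap_and_move(). Every exit path hands the new page to code that trusts
its owner field, so the one place that covers them all is immediately after
allocation.

#include <stdio.h>
#include <stdlib.h>

struct page_like {
	void *owner;	/* stale until set: mimics an uncharged pc->mem_cgroup */
};

/* stand-in for putback_lru_page(): trusts p->owner unconditionally */
static void release(struct page_like *p)
{
	printf("releasing page, owner %p\n", p->owner);
	free(p);
}

static int migrate_one(int src_already_freed)
{
	struct page_like *newpage = malloc(sizeof(*newpage));

	if (!newpage)
		return -1;

	newpage->owner = NULL;	/* the fix: reset before any exit path runs */

	if (src_already_freed) {
		/* early exit: nothing was ever charged, yet release() runs */
		release(newpage);
		return 0;
	}
	/* ... a normal migration would charge newpage and set owner here ... */
	release(newpage);
	return 0;
}

int main(void)
{
	return migrate_one(1);
}
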
mm/migrate.c | 2 ++
1 file changed, 2 insertions(+)
--- mmotm.orig/mm/migrate.c 2011-12-22 02:53:31.900041565 -0800
+++ mmotm/mm/migrate.c 2011-12-28 14:52:37.243034125 -0800
@@ -841,6 +841,8 @@ static int unmap_and_move(new_page_t get
if (!newpage)
return -ENOMEM;
+ mem_cgroup_reset_owner(newpage);
+
if (page_count(page) == 1) {
/* page was freed from under us. So we are done. */
goto out;
* [PATCH 4/4] memcg: fix mem_cgroup_print_bad_page
2011-12-29 0:17 [PATCH 0/4] memcg: four fixes to current next Hugh Dickins
` (2 preceding siblings ...)
2011-12-29 0:23 ` [PATCH 3/4] memcg: fix page migration to reset_owner Hugh Dickins
@ 2011-12-29 0:26 ` Hugh Dickins
2012-01-05 6:01 ` KAMEZAWA Hiroyuki
3 siblings, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2011-12-29 0:26 UTC
To: Andrew Morton
Cc: Daisuke Nishimura, KAMEZAWA Hiroyuki, Johannes Weiner,
Michal Hocko, linux-mm
If DEBUG_VM, mem_cgroup_print_bad_page() is called whenever bad_page()
shows a "Bad page state" message, removes page from circulation, adds a
taint and continues. This is at a very low level, often when a spinlock
is held (sometimes when page table lock is held, for example).
We want to recover from this badness, not make it worse: we must not
kmalloc memory here, we must not do a cgroup path lookup via dubious
pointers. No doubt that code was useful to debug a particular case
at one time, and may be again, but take it out of the mainline kernel.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
This goes back to 2.6.39; but it is under DEBUG_VM, so probably
doesn't need Cc stable.
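A note on why the removed code is dangerous rather than merely ugly:
kmalloc(GFP_KERNEL) may sleep, and building a cgroup path means chasing
pc->mem_cgroup, which on a page already known to be in a bad state may be
pointing anywhere. The sketch below (invented names, not the kernel
function) shows the only pattern that is safe in a context that may hold
spinlocks: print the raw values already in hand and resolve nothing.

#include <stdio.h>

struct bad_page_info {
	unsigned long flags;
	void *group;		/* possibly dangling by the time we report */
};

static void print_bad_state(const struct bad_page_info *pc)
{
	/*
	 * What the patch removes would instead allocate a buffer (which can
	 * sleep) and walk pc->group to build a path (which can dereference
	 * freed memory).  Here we only report what we already hold.
	 */
	fprintf(stderr, "pc:%p flags:%lx group:%p\n",
		(void *)pc, pc->flags, pc->group);
}

int main(void)
{
	struct bad_page_info pc = { 0xbad, NULL };

	print_bad_state(&pc);
	return 0;
}
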
mm/memcontrol.c | 17 +----------------
1 file changed, 1 insertion(+), 16 deletions(-)
--- mmotm.orig/mm/memcontrol.c 2011-12-28 14:41:19.803018025 -0800
+++ mmotm/mm/memcontrol.c 2011-12-28 15:07:26.887055270 -0800
@@ -3369,23 +3369,8 @@ void mem_cgroup_print_bad_page(struct pa
pc = lookup_page_cgroup_used(page);
if (pc) {
- int ret = -1;
- char *path;
-
- printk(KERN_ALERT "pc:%p pc->flags:%lx pc->mem_cgroup:%p",
+ printk(KERN_ALERT "pc:%p pc->flags:%lx pc->mem_cgroup:%p\n",
pc, pc->flags, pc->mem_cgroup);
-
- path = kmalloc(PATH_MAX, GFP_KERNEL);
- if (path) {
- rcu_read_lock();
- ret = cgroup_path(pc->mem_cgroup->css.cgroup,
- path, PATH_MAX);
- rcu_read_unlock();
- }
-
- printk(KERN_CONT "(%s)\n",
- (ret < 0) ? "cannot get the path" : path);
- kfree(path);
}
}
#endif
* Re: [PATCH 1/4] memcg: fix split_huge_page_refcounts
2011-12-29 0:20 ` [PATCH 1/4] memcg: fix split_huge_page_refcounts Hugh Dickins
@ 2012-01-05 5:55 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 9+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05 5:55 UTC
To: Hugh Dickins
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Andrea Arcangeli,
Shaohua Li, David Rientjes, linux-mm
On Wed, 28 Dec 2011 16:20:25 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:
> This patch started off as a cleanup: __split_huge_page_refcount() has to
> cope with two scenarios, when the hugepage being split is already on LRU,
> and when it is not; but why does it have to split that accounting across
> three different sites? Consolidate it in lru_add_page_tail(), handling
> evictable and unevictable alike, and use standard add_page_to_lru_list()
> when accounting is needed (when the head is not yet on LRU).
>
> But a recent regression in -next (I guess the removal of the PageCgroupAcctLRU
> test from mem_cgroup_split_huge_fixup()) makes this now a necessary fix:
> under load, the MEM_CGROUP_ZSTAT count was wrapping to a huge number,
> messing up reclaim calculations and causing a freeze at rmdir of cgroup.
>
> Add a VM_BUG_ON to mem_cgroup_lru_del_list() when we're about to wrap
> that count - this has not been the only such incident. Document that
> lru_add_page_tail() is for Transparent HugePages by #ifdef around it.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
seems saner.
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Thank you.
* Re: [PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge
2011-12-29 0:21 ` [PATCH 2/4] memcg: fix NULL mem_cgroup_try_charge Hugh Dickins
@ 2012-01-05 5:56 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 9+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05 5:56 UTC
To: Hugh Dickins; +Cc: Andrew Morton, Johannes Weiner, Michal Hocko, linux-mm
On Wed, 28 Dec 2011 16:21:57 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:
> There is one way out of __mem_cgroup_try_charge() which claims success
> but still leaves memcg NULL, causing oops thereafter: make sure that
> it is set to root_mem_cgroup in this case.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ---
> Fix to memcg: return -EINTR at bypassing try_charge()
>
> mm/memcontrol.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> --- mmotm.orig/mm/memcontrol.c 2011-12-28 12:53:23.420367847 -0800
> +++ mmotm/mm/memcontrol.c 2011-12-28 14:41:19.803018025 -0800
> @@ -2263,7 +2263,9 @@ again:
> * task-struct. So, mm->owner can be NULL.
> */
> memcg = mem_cgroup_from_task(p);
> - if (!memcg || mem_cgroup_is_root(memcg)) {
> + if (!memcg)
> + memcg = root_mem_cgroup;
> + if (mem_cgroup_is_root(memcg)) {
> rcu_read_unlock();
> goto done;
> }
>
* Re: [PATCH 3/4] memcg: fix page migration to reset_owner
2011-12-29 0:23 ` [PATCH 3/4] memcg: fix page migration to reset_owner Hugh Dickins
@ 2012-01-05 6:00 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 9+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05 6:00 UTC
To: Hugh Dickins; +Cc: Andrew Morton, Johannes Weiner, Michal Hocko, linux-mm
On Wed, 28 Dec 2011 16:23:29 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:
> Usually, migration pages coming to unmap_and_move()'s putback_lru_page()
> have been charged and have pc->mem_cgroup set; but there are several ways
> in which a freshly allocated uncharged page can get there, oopsing when
> added to LRU. Call mem_cgroup_reset_owner() immediately after allocating.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
Ah, ok. It calls putback_lru_page()...
Thank you very much!
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ---
> Fix N to
> memcg: clear pc->mem_cgorup if necessary.
>
> mm/migrate.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> --- mmotm.orig/mm/migrate.c 2011-12-22 02:53:31.900041565 -0800
> +++ mmotm/mm/migrate.c 2011-12-28 14:52:37.243034125 -0800
> @@ -841,6 +841,8 @@ static int unmap_and_move(new_page_t get
> if (!newpage)
> return -ENOMEM;
>
> + mem_cgroup_reset_owner(newpage);
> +
> if (page_count(page) == 1) {
> /* page was freed from under us. So we are done. */
> goto out;
>
* Re: [PATCH 4/4] memcg: fix mem_cgroup_print_bad_page
2011-12-29 0:26 ` [PATCH 4/4] memcg: fix mem_cgroup_print_bad_page Hugh Dickins
@ 2012-01-05 6:01 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 9+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05 6:01 UTC
To: Hugh Dickins
Cc: Andrew Morton, Daisuke Nishimura, Johannes Weiner, Michal Hocko,
linux-mm
On Wed, 28 Dec 2011 16:26:02 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:
> If DEBUG_VM, mem_cgroup_print_bad_page() is called whenever bad_page()
> shows a "Bad page state" message, removes page from circulation, adds a
> taint and continues. This is at a very low level, often when a spinlock
> is held (sometimes when page table lock is held, for example).
>
> We want to recover from this badness, not make it worse: we must not
> kmalloc memory here, we must not do a cgroup path lookup via dubious
> pointers. No doubt that code was useful to debug a particular case
> at one time, and may be again, but take it out of the mainline kernel.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>