* [mmotm][PATCH 0/4] request for patch replacement
@ 2008-12-02 4:17 KAMEZAWA Hiroyuki
2008-12-02 4:18 ` [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch KAMEZAWA Hiroyuki
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:17 UTC (permalink / raw)
To: akpm; +Cc: hugh, kamezawa.hiroyu, linux-mm, balbir, nishimura
Hi, I'm sorry for asking this.

Please drop memcg-fix-gfp_mask-of-callers-of-charge.patch.
It got a NACK: http://marc.info/?l=linux-kernel&m=122817796729117&w=2

To drop memcg-fix-gfp_mask-of-callers-of-charge.patch, some hunks in the
following patches have to be fixed. With it dropped, all gfp masks go back to
GFP_KERNEL. I'll reconsider how to handle this later.

Replacements for the following 4 patches are sent as replies to this mail.
==
memcg-simple-migration-handling.patch
memcg-handle-swap-caches.patch
memcg-memswap-controller-core.patch
memcg-memswap-controller-core-make-resize-limit-hold-mutex.patch
Fortunately, there were not as many hunks to fix as expected.
Regards,
-Kame
* [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch
2008-12-02 4:17 [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
@ 2008-12-02 4:18 ` KAMEZAWA Hiroyuki
2008-12-02 4:35 ` Balbir Singh
2008-12-02 4:19 ` [mmotm][PATCH 2/4] replacement-for-memcg-handle-swap-caches.patch KAMEZAWA Hiroyuki
` (3 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:18 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, balbir, nishimura
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Currently, management of "charge" under page migration is done in the
following manner (assume page contents are migrated from oldpage to newpage):

before
 - "newpage" is charged before migration.
at success
 - "oldpage" is uncharged somewhere (unmap, radix-tree replacement).
at failure
 - "newpage" is uncharged.
 - "oldpage" is charged again if necessary. (*1)

But (*1) is not reliable, because the charge is done with GFP_ATOMIC and
may fail.

This patch changes the behavior to the following, using charge/commit/cancel
operations:

before
 - charge PAGE_SIZE (no target page yet).
at success
 - commit the charge against "newpage".
at failure
 - commit the charge against "oldpage".
   (The PCG_USED bit effectively avoids double-counting.)
 - if "oldpage" is already obsolete, cancel the charge of PAGE_SIZE.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
include/linux/memcontrol.h | 19 ++-----
mm/memcontrol.c | 109 +++++++++++++++++++++------------------------
mm/migrate.c | 42 +++++------------
3 files changed, 74 insertions(+), 96 deletions(-)
Index: mmotm-2.6.28-Nov30/include/linux/memcontrol.h
===================================================================
--- mmotm-2.6.28-Nov30.orig/include/linux/memcontrol.h
+++ mmotm-2.6.28-Nov30/include/linux/memcontrol.h
@@ -29,8 +29,6 @@ struct mm_struct;
extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
-extern int mem_cgroup_charge_migrate_fixup(struct page *page,
- struct mm_struct *mm, gfp_t gfp_mask);
/* for swap handling */
extern int mem_cgroup_try_charge(struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **ptr);
@@ -60,8 +58,9 @@ extern struct mem_cgroup *mem_cgroup_fro
((cgroup) == mem_cgroup_from_task((mm)->owner))
extern int
-mem_cgroup_prepare_migration(struct page *page, struct page *newpage);
-extern void mem_cgroup_end_migration(struct page *page);
+mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr);
+extern void mem_cgroup_end_migration(struct mem_cgroup *mem,
+ struct page *oldpage, struct page *newpage);
/*
* For memory reclaim.
@@ -94,12 +93,6 @@ static inline int mem_cgroup_cache_charg
return 0;
}
-static inline int mem_cgroup_charge_migrate_fixup(struct page *page,
- struct mm_struct *mm, gfp_t gfp_mask)
-{
- return 0;
-}
-
static inline int mem_cgroup_try_charge(struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **ptr)
{
@@ -144,12 +137,14 @@ static inline int task_in_mem_cgroup(str
}
static inline int
-mem_cgroup_prepare_migration(struct page *page, struct page *newpage)
+mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr)
{
return 0;
}
-static inline void mem_cgroup_end_migration(struct page *page)
+static inline void mem_cgroup_end_migration(struct mem_cgroup *mem,
+ struct page *oldpage,
+ struct page *newpage)
{
}
Index: mmotm-2.6.28-Nov30/mm/memcontrol.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/memcontrol.c
+++ mmotm-2.6.28-Nov30/mm/memcontrol.c
@@ -627,34 +627,6 @@ int mem_cgroup_newpage_charge(struct pag
MEM_CGROUP_CHARGE_TYPE_MAPPED, NULL);
}
-/*
- * same as mem_cgroup_newpage_charge(), now.
- * But what we assume is different from newpage, and this is special case.
- * treat this in special function. easy for maintenance.
- */
-
-int mem_cgroup_charge_migrate_fixup(struct page *page,
- struct mm_struct *mm, gfp_t gfp_mask)
-{
- if (mem_cgroup_subsys.disabled)
- return 0;
-
- if (PageCompound(page))
- return 0;
-
- if (page_mapped(page) || (page->mapping && !PageAnon(page)))
- return 0;
-
- if (unlikely(!mm))
- mm = &init_mm;
-
- return mem_cgroup_charge_common(page, mm, gfp_mask,
- MEM_CGROUP_CHARGE_TYPE_MAPPED, NULL);
-}
-
-
-
-
int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask)
{
@@ -697,7 +669,6 @@ int mem_cgroup_cache_charge(struct page
MEM_CGROUP_CHARGE_TYPE_SHMEM, NULL);
}
-
void mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *ptr)
{
struct page_cgroup *pc;
@@ -782,13 +753,13 @@ void mem_cgroup_uncharge_cache_page(stru
}
/*
- * Before starting migration, account against new page.
+ * Before starting migration, account PAGE_SIZE to mem_cgroup that the old
+ * page belongs to.
*/
-int mem_cgroup_prepare_migration(struct page *page, struct page *newpage)
+int mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr)
{
struct page_cgroup *pc;
struct mem_cgroup *mem = NULL;
- enum charge_type ctype = MEM_CGROUP_CHARGE_TYPE_MAPPED;
int ret = 0;
if (mem_cgroup_subsys.disabled)
@@ -799,41 +770,67 @@ int mem_cgroup_prepare_migration(struct
if (PageCgroupUsed(pc)) {
mem = pc->mem_cgroup;
css_get(&mem->css);
- if (PageCgroupCache(pc)) {
- if (page_is_file_cache(page))
- ctype = MEM_CGROUP_CHARGE_TYPE_CACHE;
- else
- ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM;
- }
}
unlock_page_cgroup(pc);
+
if (mem) {
- ret = mem_cgroup_charge_common(newpage, NULL, GFP_KERNEL,
- ctype, mem);
+ ret = mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem);
css_put(&mem->css);
}
+ *ptr = mem;
return ret;
}
-/* remove redundant charge if migration failed*/
-void mem_cgroup_end_migration(struct page *newpage)
+ /* remove redundant charge if migration failed*/
+void mem_cgroup_end_migration(struct mem_cgroup *mem,
+ struct page *oldpage, struct page *newpage)
{
+ struct page *target, *unused;
+ struct page_cgroup *pc;
+ enum charge_type ctype;
+
+ if (!mem)
+ return;
+
+ /* at migration success, oldpage->mapping is NULL. */
+ if (oldpage->mapping) {
+ target = oldpage;
+ unused = NULL;
+ } else {
+ target = newpage;
+ unused = oldpage;
+ }
+
+ if (PageAnon(target))
+ ctype = MEM_CGROUP_CHARGE_TYPE_MAPPED;
+ else if (page_is_file_cache(target))
+ ctype = MEM_CGROUP_CHARGE_TYPE_CACHE;
+ else
+ ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM;
+
+ /* unused page is not on radix-tree now. */
+ if (unused && ctype != MEM_CGROUP_CHARGE_TYPE_MAPPED)
+ __mem_cgroup_uncharge_common(unused, ctype);
+
+ pc = lookup_page_cgroup(target);
/*
- * At success, page->mapping is not NULL.
- * special rollback care is necessary when
- * 1. at migration failure. (newpage->mapping is cleared in this case)
- * 2. the newpage was moved but not remapped again because the task
- * exits and the newpage is obsolete. In this case, the new page
- * may be a swapcache. So, we just call mem_cgroup_uncharge_page()
- * always for avoiding mess. The page_cgroup will be removed if
- * unnecessary. File cache pages is still on radix-tree. Don't
- * care it.
+ * __mem_cgroup_commit_charge() check PCG_USED bit of page_cgroup.
+ * So, double-counting is effectively avoided.
+ */
+ __mem_cgroup_commit_charge(mem, pc, ctype);
+
+ /*
+ * Both of oldpage and newpage are still under lock_page().
+ * Then, we don't have to care about race in radix-tree.
+ * But we have to be careful that this page is unmapped or not.
+ *
+ * There is a case for !page_mapped(). At the start of
+ * migration, oldpage was mapped. But now, it's zapped.
+ * But we know *target* page is not freed/reused under us.
+ * mem_cgroup_uncharge_page() does all necessary checks.
*/
- if (!newpage->mapping)
- __mem_cgroup_uncharge_common(newpage,
- MEM_CGROUP_CHARGE_TYPE_FORCE);
- else if (PageAnon(newpage))
- mem_cgroup_uncharge_page(newpage);
+ if (ctype == MEM_CGROUP_CHARGE_TYPE_MAPPED)
+ mem_cgroup_uncharge_page(target);
}
/*
Index: mmotm-2.6.28-Nov30/mm/migrate.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/migrate.c
+++ mmotm-2.6.28-Nov30/mm/migrate.c
@@ -121,20 +121,6 @@ static void remove_migration_pte(struct
if (!is_migration_entry(entry) || migration_entry_to_page(entry) != old)
goto out;
- /*
- * Yes, ignore the return value from a GFP_ATOMIC mem_cgroup_charge.
- * Failure is not an option here: we're now expected to remove every
- * migration pte, and will cause crashes otherwise. Normally this
- * is not an issue: mem_cgroup_prepare_migration bumped up the old
- * page_cgroup count for safety, that's now attached to the new page,
- * so this charge should just be another incrementation of the count,
- * to keep in balance with rmap.c's mem_cgroup_uncharging. But if
- * there's been a force_empty, those reference counts may no longer
- * be reliable, and this charge can actually fail: oh well, we don't
- * make the situation any worse by proceeding as if it had succeeded.
- */
- mem_cgroup_charge_migrate_fixup(new, mm, GFP_ATOMIC);
-
get_page(new);
pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
if (is_write_migration_entry(entry))
@@ -378,9 +364,6 @@ static void migrate_page_copy(struct pag
anon = PageAnon(page);
page->mapping = NULL;
- if (!anon) /* This page was removed from radix-tree. */
- mem_cgroup_uncharge_cache_page(page);
-
/*
* If any waiters have accumulated on the new page then
* wake them up.
@@ -613,6 +596,7 @@ static int unmap_and_move(new_page_t get
struct page *newpage = get_new_page(page, private, &result);
int rcu_locked = 0;
int charge = 0;
+ struct mem_cgroup *mem;
if (!newpage)
return -ENOMEM;
@@ -622,24 +606,26 @@ static int unmap_and_move(new_page_t get
goto move_newpage;
}
- charge = mem_cgroup_prepare_migration(page, newpage);
- if (charge == -ENOMEM) {
- rc = -ENOMEM;
- goto move_newpage;
- }
/* prepare cgroup just returns 0 or -ENOMEM */
- BUG_ON(charge);
-
rc = -EAGAIN;
+
if (!trylock_page(page)) {
if (!force)
goto move_newpage;
lock_page(page);
}
+ /* charge against new page */
+ charge = mem_cgroup_prepare_migration(page, &mem);
+ if (charge == -ENOMEM) {
+ rc = -ENOMEM;
+ goto unlock;
+ }
+ BUG_ON(charge);
+
if (PageWriteback(page)) {
if (!force)
- goto unlock;
+ goto uncharge;
wait_on_page_writeback(page);
}
/*
@@ -692,7 +678,9 @@ static int unmap_and_move(new_page_t get
rcu_unlock:
if (rcu_locked)
rcu_read_unlock();
-
+uncharge:
+ if (!charge)
+ mem_cgroup_end_migration(mem, page, newpage);
unlock:
unlock_page(page);
@@ -708,8 +696,6 @@ unlock:
}
move_newpage:
- if (!charge)
- mem_cgroup_end_migration(newpage);
/*
* Move the new page to the LRU. If migration was not successful
* [mmotm][PATCH 2/4] replacement-for-memcg-handle-swap-caches.patch
2008-12-02 4:17 [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
2008-12-02 4:18 ` [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch KAMEZAWA Hiroyuki
@ 2008-12-02 4:19 ` KAMEZAWA Hiroyuki
2008-12-02 4:20 ` [mmotm][PATCH 3/4] replacement-for-memcg-memswap-controller-core.patch KAMEZAWA Hiroyuki
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:19 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, balbir, nishimura
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
SwapCache support for the memory resource controller (memcg).

Before the mem+swap controller, memcg itself has to handle SwapCache
properly. This is cut out from that work.

In current memcg, SwapCache is simply leaked and a user can create tons of
SwapCache. This is an accounting leak and should be handled.

SwapCache accounting is done as follows:

charge (anon)
 - charged when it's mapped.
   (Because of readahead, charging at add_to_swap_cache() is not sane.)
uncharge (anon)
 - uncharged when it's dropped from swapcache and fully unmapped.
   This means it is not uncharged at unmap time.
   Note: deletion from swap cache at swap-in is done after rmap information
   is established.
charge (shmem)
 - charged at swap-in. This prevents charging at add_to_page_cache().
uncharge (shmem)
 - uncharged when it's dropped from swapcache and not on shmem's
   radix-tree.

At migration, the check against the 'old page' is modified to handle shmem.

Compared with the old version that was discussed (and caused trouble), we now
have the advantages of
 - the PCG_USED bit.
 - simple migration handling.
So the situation is much easier than several months ago, maybe.
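
The uncharge rules above boil down to one question: is the swapcache page
still in use? Just for illustration, that check could be written as a helper
like the one below. This is only a sketch; no such helper exists in the
patch, where the test is open-coded in the MEM_CGROUP_CHARGE_TYPE_SWAPOUT
case of __mem_cgroup_uncharge_common() in the hunk that follows.

	/* sketch only: "in use" means the SwapCache page must stay charged */
	static bool swapcache_page_still_in_use(struct page *page)
	{
		if (PageAnon(page))
			/* anon: in use as long as somebody still maps it */
			return page_mapped(page);
		/* shmem: in use as long as it is still on shmem's radix-tree */
		return page->mapping && !page_is_file_cache(page);
	}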
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Documentation/controllers/memory.txt | 5 ++
include/linux/swap.h | 16 ++++++++
mm/memcontrol.c | 67 +++++++++++++++++++++++++++++++----
mm/shmem.c | 17 +++++++-
mm/swap_state.c | 1
5 files changed, 98 insertions(+), 8 deletions(-)
Index: mmotm-2.6.28-Nov30/Documentation/controllers/memory.txt
===================================================================
--- mmotm-2.6.28-Nov30.orig/Documentation/controllers/memory.txt
+++ mmotm-2.6.28-Nov30/Documentation/controllers/memory.txt
@@ -137,6 +137,11 @@ behind this approach is that a cgroup th
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).
+Exception: When you do swapoff and make swapped-out pages of shmem(tmpfs) to
+be backed into memory in force, charges for pages are accounted against the
+caller of swapoff rather than the users of shmem.
+
+
2.4 Reclaim
Each cgroup maintains a per cgroup LRU that consists of an active
Index: mmotm-2.6.28-Nov30/include/linux/swap.h
===================================================================
--- mmotm-2.6.28-Nov30.orig/include/linux/swap.h
+++ mmotm-2.6.28-Nov30/include/linux/swap.h
@@ -336,6 +336,22 @@ static inline void disable_swap_token(vo
put_swap_token(swap_token_mm);
}
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+extern int mem_cgroup_cache_charge_swapin(struct page *page,
+ struct mm_struct *mm, gfp_t mask, bool locked);
+extern void mem_cgroup_uncharge_swapcache(struct page *page);
+#else
+static inline
+int mem_cgroup_cache_charge_swapin(struct page *page,
+ struct mm_struct *mm, gfp_t mask, bool locked)
+{
+ return 0;
+}
+static inline void mem_cgroup_uncharge_swapcache(struct page *page)
+{
+}
+#endif
+
#else /* CONFIG_SWAP */
#define nr_swap_pages 0L
Index: mmotm-2.6.28-Nov30/mm/memcontrol.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/memcontrol.c
+++ mmotm-2.6.28-Nov30/mm/memcontrol.c
@@ -21,6 +21,7 @@
#include <linux/memcontrol.h>
#include <linux/cgroup.h>
#include <linux/mm.h>
+#include <linux/pagemap.h>
#include <linux/smp.h>
#include <linux/page-flags.h>
#include <linux/backing-dev.h>
@@ -139,6 +140,7 @@ enum charge_type {
MEM_CGROUP_CHARGE_TYPE_MAPPED,
MEM_CGROUP_CHARGE_TYPE_SHMEM, /* used by page migration of shmem */
MEM_CGROUP_CHARGE_TYPE_FORCE, /* used by force_empty */
+ MEM_CGROUP_CHARGE_TYPE_SWAPOUT, /* for accounting swapcache */
NR_CHARGE_TYPE,
};
@@ -780,6 +782,33 @@ int mem_cgroup_cache_charge(struct page
MEM_CGROUP_CHARGE_TYPE_SHMEM, NULL);
}
+#ifdef CONFIG_SWAP
+int mem_cgroup_cache_charge_swapin(struct page *page,
+ struct mm_struct *mm, gfp_t mask, bool locked)
+{
+ int ret = 0;
+
+ if (mem_cgroup_subsys.disabled)
+ return 0;
+ if (unlikely(!mm))
+ mm = &init_mm;
+ if (!locked)
+ lock_page(page);
+ /*
+ * If not locked, the page can be dropped from SwapCache until
+ * we reach here.
+ */
+ if (PageSwapCache(page)) {
+ ret = mem_cgroup_charge_common(page, mm, mask,
+ MEM_CGROUP_CHARGE_TYPE_SHMEM, NULL);
+ }
+ if (!locked)
+ unlock_page(page);
+
+ return ret;
+}
+#endif
+
void mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *ptr)
{
struct page_cgroup *pc;
@@ -817,6 +846,9 @@ __mem_cgroup_uncharge_common(struct page
if (mem_cgroup_subsys.disabled)
return;
+ if (PageSwapCache(page))
+ return;
+
/*
* Check if our page_cgroup is valid
*/
@@ -825,12 +857,26 @@ __mem_cgroup_uncharge_common(struct page
return;
lock_page_cgroup(pc);
- if ((ctype == MEM_CGROUP_CHARGE_TYPE_MAPPED && page_mapped(page))
- || !PageCgroupUsed(pc)) {
- /* This happens at race in zap_pte_range() and do_swap_page()*/
- unlock_page_cgroup(pc);
- return;
+
+ if (!PageCgroupUsed(pc))
+ goto unlock_out;
+
+ switch (ctype) {
+ case MEM_CGROUP_CHARGE_TYPE_MAPPED:
+ if (page_mapped(page))
+ goto unlock_out;
+ break;
+ case MEM_CGROUP_CHARGE_TYPE_SWAPOUT:
+ if (!PageAnon(page)) { /* Shared memory */
+ if (page->mapping && !page_is_file_cache(page))
+ goto unlock_out;
+ } else if (page_mapped(page)) /* Anon */
+ goto unlock_out;
+ break;
+ default:
+ break;
}
+
ClearPageCgroupUsed(pc);
mem = pc->mem_cgroup;
@@ -844,6 +890,10 @@ __mem_cgroup_uncharge_common(struct page
css_put(&mem->css);
return;
+
+unlock_out:
+ unlock_page_cgroup(pc);
+ return;
}
void mem_cgroup_uncharge_page(struct page *page)
@@ -863,6 +913,11 @@ void mem_cgroup_uncharge_cache_page(stru
__mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_CACHE);
}
+void mem_cgroup_uncharge_swapcache(struct page *page)
+{
+ __mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_SWAPOUT);
+}
+
/*
* Before starting migration, account PAGE_SIZE to mem_cgroup that the old
* page belongs to.
@@ -920,7 +975,7 @@ void mem_cgroup_end_migration(struct mem
ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM;
/* unused page is not on radix-tree now. */
- if (unused && ctype != MEM_CGROUP_CHARGE_TYPE_MAPPED)
+ if (unused)
__mem_cgroup_uncharge_common(unused, ctype);
pc = lookup_page_cgroup(target);
Index: mmotm-2.6.28-Nov30/mm/shmem.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/shmem.c
+++ mmotm-2.6.28-Nov30/mm/shmem.c
@@ -920,8 +920,11 @@ found:
error = 1;
if (!inode)
goto out;
- /* Precharge page using GFP_KERNEL while we can wait */
- error = mem_cgroup_cache_charge(page, current->mm, GFP_KERNEL);
+ /*
+ * Charged back to the user(not to caller) when swap account is used.
+ */
+ error = mem_cgroup_cache_charge_swapin(page,
+ current->mm, GFP_KERNEL, true);
if (error)
goto out;
error = radix_tree_preload(GFP_KERNEL);
@@ -1258,6 +1261,16 @@ repeat:
goto repeat;
}
wait_on_page_locked(swappage);
+ /*
+ * We want to avoid charge at add_to_page_cache().
+ * charge against this swap cache here.
+ */
+ if (mem_cgroup_cache_charge_swapin(swappage,
+ current->mm, gfp, false)) {
+ page_cache_release(swappage);
+ error = -ENOMEM;
+ goto failed;
+ }
page_cache_release(swappage);
goto repeat;
}
Index: mmotm-2.6.28-Nov30/mm/swap_state.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/swap_state.c
+++ mmotm-2.6.28-Nov30/mm/swap_state.c
@@ -118,6 +118,7 @@ void __delete_from_swap_cache(struct pag
total_swapcache_pages--;
__dec_zone_page_state(page, NR_FILE_PAGES);
INC_CACHE_INFO(del_total);
+ mem_cgroup_uncharge_swapcache(page);
}
/**
* [mmotm][PATCH 3/4] replacement-for-memcg-memswap-controller-core.patch
2008-12-02 4:17 [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
2008-12-02 4:18 ` [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch KAMEZAWA Hiroyuki
2008-12-02 4:19 ` [mmotm][PATCH 2/4] replacement-for-memcg-handle-swap-caches.patch KAMEZAWA Hiroyuki
@ 2008-12-02 4:20 ` KAMEZAWA Hiroyuki
2008-12-02 4:21 ` [mmotm][PATCH 4/4] replacement-for-memcg-memswap-controller-core-make-resize-limit-hold-mutex.patch KAMEZAWA Hiroyuki
2008-12-03 7:49 ` [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
4 siblings, 0 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:20 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, balbir, nishimura
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
This patch implements a per-cgroup limit on the usage of memory+swap. Because
SwapCache exists, care is taken to avoid double counting a swap cache page
and its swap entry.

The mem+swap controller works as follows:
 - memory usage is limited by memory.limit_in_bytes.
 - memory + swap usage is limited by memory.memsw.limit_in_bytes.

This has the following benefit:
 - A user can limit the total resource usage of mem+swap.
   Without this, because the memory resource controller doesn't take care of
   swap usage, a process can exhaust all of swap (by a memory leak, for
   example). We can avoid that case.

Also, swap is a shared resource, but it cannot be reclaimed (it does not go
back to memory) until it is used again. This characteristic can be trouble
when memory is divided into parts by cpuset or memcg.

Assume group A and group B. After some applications have run, the system can
end up as:
 Group A -- very large free memory space, but occupying 99% of swap.
 Group B -- under memory shortage, but unable to use swap... it's nearly full.
The ability to set an appropriate swap limit for each group is required.

Maybe someone wonders "why not just swap, rather than mem+swap?":
 - The global LRU (kswapd) can swap out arbitrary pages. Swap-out just moves
   the account from memory to swap; there is no change in the usage of
   mem+swap.
In other words, when we want to limit the usage of swap without affecting the
global LRU, a mem+swap limit is better than just limiting swap.

The accounting target information is stored in swap_cgroup, which is a
per-swap-entry record.

Charging is done as follows (see also the sketch below):

map
 - charge page and memsw.
unmap
 - uncharge page/memsw if it is not SwapCache.
swap-out (__delete_from_swap_cache)
 - uncharge page.
 - record the mem_cgroup information in swap_cgroup.
swap-in (do_swap_page)
 - charged as page and memsw.
   The record in swap_cgroup is cleared and the memsw accounting is
   decremented to avoid double counting.
swap-free (swap_free())
 - if the swap entry is freed, memsw is uncharged by PAGE_SIZE.
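
To make the flow above concrete, here is a sketch of the accounting steps
over the life of one anonymous page (illustration only; the helpers are the
ones added or changed by this patch, but this is not a literal call sequence
from any single function):

	/* map: the page fault charges both counters of the owning memcg */
	res_counter_charge(&mem->res, PAGE_SIZE);
	res_counter_charge(&mem->memsw, PAGE_SIZE);

	/* swap-out: __delete_from_swap_cache() drops the "res" charge and
	 * remembers the owner in swap_cgroup; the "memsw" charge remains. */
	mem_cgroup_uncharge_swapcache(page, ent);

	/* swap-in: do_swap_page() charges the page back to the recorded
	 * owner, and the commit drops the extra memsw charge still held by
	 * the swap entry, so mem+swap is not counted twice. */
	mem_cgroup_try_charge_swapin(mm, page, GFP_KERNEL, &ptr);
	mem_cgroup_commit_charge_swapin(page, ptr);

	/* swap-free: if the entry dies without being swapped in,
	 * swap_entry_free() drops the remaining memsw charge. */
	mem_cgroup_uncharge_swap(ent);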
There are people who work in never-swap environments and consider swap to be
something bad. For such people, this mem+swap controller extension is just
overhead, and that overhead can be avoided by a config or boot option.
(See Kconfig; the details are not in this patch.)

TODO:
 - Maybe more optimization can be done in the swap-in path (but it is not
   very safe). We just do simple accounting at this stage.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Documentation/controllers/memory.txt | 29 ++
include/linux/memcontrol.h | 11 -
include/linux/swap.h | 14 +
mm/memcontrol.c | 372 +++++++++++++++++++++++++++++++----
mm/memory.c | 3
mm/swap_state.c | 5
mm/swapfile.c | 11 -
mm/vmscan.c | 6
8 files changed, 403 insertions(+), 48 deletions(-)
Index: mmotm-2.6.28-Nov30/Documentation/controllers/memory.txt
===================================================================
--- mmotm-2.6.28-Nov30.orig/Documentation/controllers/memory.txt
+++ mmotm-2.6.28-Nov30/Documentation/controllers/memory.txt
@@ -137,12 +137,32 @@ behind this approach is that a cgroup th
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).
-Exception: When you do swapoff and make swapped-out pages of shmem(tmpfs) to
+Exception: If CONFIG_CGROUP_CGROUP_MEM_RES_CTLR_SWAP is not used..
+When you do swapoff and make swapped-out pages of shmem(tmpfs) to
be backed into memory in force, charges for pages are accounted against the
caller of swapoff rather than the users of shmem.
-2.4 Reclaim
+2.4 Swap Extension (CONFIG_CGROUP_MEM_RES_CTLR_SWAP)
+Swap Extension allows you to record charge for swap. A swapped-in page is
+charged back to original page allocator if possible.
+
+When swap is accounted, following files are added.
+ - memory.memsw.usage_in_bytes.
+ - memory.memsw.limit_in_bytes.
+
+usage of mem+swap is limited by memsw.limit_in_bytes.
+
+Note: why 'mem+swap' rather than swap.
+The global LRU(kswapd) can swap out arbitrary pages. Swap-out means
+to move account from memory to swap...there is no change in usage of
+mem+swap.
+
+In other words, when we want to limit the usage of swap without affecting
+global LRU, mem+swap limit is better than just limiting swap from OS point
+of view.
+
+2.5 Reclaim
Each cgroup maintains a per cgroup LRU that consists of an active
and inactive list. When a cgroup goes over its limit, we first try
@@ -246,6 +266,11 @@ Such charges are freed(at default) or mo
both of RSS and CACHES are moved to parent.
If both of them are busy, rmdir() returns -EBUSY. See 5.1 Also.
+Charges recorded in swap information is not updated at removal of cgroup.
+Recorded information is discarded and a cgroup which uses swap (swapcache)
+will be charged as a new owner of it.
+
+
5. Misc. interfaces.
5.1 force_empty
Index: mmotm-2.6.28-Nov30/include/linux/memcontrol.h
===================================================================
--- mmotm-2.6.28-Nov30.orig/include/linux/memcontrol.h
+++ mmotm-2.6.28-Nov30/include/linux/memcontrol.h
@@ -32,6 +32,8 @@ extern int mem_cgroup_newpage_charge(str
/* for swap handling */
extern int mem_cgroup_try_charge(struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **ptr);
+extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
+ struct page *page, gfp_t mask, struct mem_cgroup **ptr);
extern void mem_cgroup_commit_charge_swapin(struct page *page,
struct mem_cgroup *ptr);
extern void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *ptr);
@@ -80,7 +82,6 @@ extern long mem_cgroup_calc_reclaim(stru
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
extern int do_swap_account;
#endif
-
#else /* CONFIG_CGROUP_MEM_RES_CTLR */
struct mem_cgroup;
@@ -97,7 +98,13 @@ static inline int mem_cgroup_cache_charg
}
static inline int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **ptr)
+ gfp_t gfp_mask, struct mem_cgroup **ptr)
+{
+ return 0;
+}
+
+static inline int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
+ struct page *page, gfp_t gfp_mask, struct mem_cgroup **ptr)
{
return 0;
}
Index: mmotm-2.6.28-Nov30/include/linux/swap.h
===================================================================
--- mmotm-2.6.28-Nov30.orig/include/linux/swap.h
+++ mmotm-2.6.28-Nov30/include/linux/swap.h
@@ -214,7 +214,7 @@ static inline void lru_cache_add_active_
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask);
extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
- gfp_t gfp_mask);
+ gfp_t gfp_mask, bool noswap);
extern int __isolate_lru_page(struct page *page, int mode, int file);
extern unsigned long shrink_all_memory(unsigned long nr_pages);
extern int vm_swappiness;
@@ -339,7 +339,7 @@ static inline void disable_swap_token(vo
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
extern int mem_cgroup_cache_charge_swapin(struct page *page,
struct mm_struct *mm, gfp_t mask, bool locked);
-extern void mem_cgroup_uncharge_swapcache(struct page *page);
+extern void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent);
#else
static inline
int mem_cgroup_cache_charge_swapin(struct page *page,
@@ -347,7 +347,15 @@ int mem_cgroup_cache_charge_swapin(struc
{
return 0;
}
-static inline void mem_cgroup_uncharge_swapcache(struct page *page)
+static inline void
+mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
+{
+}
+#endif
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
+extern void mem_cgroup_uncharge_swap(swp_entry_t ent);
+#else
+static inline void mem_cgroup_uncharge_swap(swp_entry_t ent)
{
}
#endif
Index: mmotm-2.6.28-Nov30/mm/memcontrol.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/memcontrol.c
+++ mmotm-2.6.28-Nov30/mm/memcontrol.c
@@ -132,12 +132,18 @@ struct mem_cgroup {
*/
struct res_counter res;
/*
+ * the counter to account for mem+swap usage.
+ */
+ struct res_counter memsw;
+ /*
* Per cgroup active and inactive list, similar to the
* per zone LRU lists.
*/
struct mem_cgroup_lru_info info;
int prev_priority; /* for recording reclaim priority */
+ int obsolete;
+ atomic_t refcnt;
/*
* statistics. This must be placed at the end of memcg.
*/
@@ -167,6 +173,17 @@ pcg_default_flags[NR_CHARGE_TYPE] = {
0, /* FORCE */
};
+
+/* for encoding cft->private value on file */
+#define _MEM (0)
+#define _MEMSWAP (1)
+#define MEMFILE_PRIVATE(x, val) (((x) << 16) | (val))
+#define MEMFILE_TYPE(val) (((val) >> 16) & 0xffff)
+#define MEMFILE_ATTR(val) ((val) & 0xffff)
+
+static void mem_cgroup_get(struct mem_cgroup *mem);
+static void mem_cgroup_put(struct mem_cgroup *mem);
+
/*
* Always modified under lru lock. Then, not necessary to preempt_disable()
*/
@@ -485,7 +502,8 @@ unsigned long mem_cgroup_isolate_pages(u
* oom-killer can be invoked.
*/
static int __mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **memcg, bool oom)
+ gfp_t gfp_mask, struct mem_cgroup **memcg,
+ bool oom)
{
struct mem_cgroup *mem;
int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
@@ -513,12 +531,25 @@ static int __mem_cgroup_try_charge(struc
css_get(&mem->css);
}
-
- while (unlikely(res_counter_charge(&mem->res, PAGE_SIZE))) {
+ while (1) {
+ int ret;
+ bool noswap = false;
+
+ ret = res_counter_charge(&mem->res, PAGE_SIZE);
+ if (likely(!ret)) {
+ if (!do_swap_account)
+ break;
+ ret = res_counter_charge(&mem->memsw, PAGE_SIZE);
+ if (likely(!ret))
+ break;
+ /* mem+swap counter fails */
+ res_counter_uncharge(&mem->res, PAGE_SIZE);
+ noswap = true;
+ }
if (!(gfp_mask & __GFP_WAIT))
goto nomem;
- if (try_to_free_mem_cgroup_pages(mem, gfp_mask))
+ if (try_to_free_mem_cgroup_pages(mem, gfp_mask, noswap))
continue;
/*
@@ -527,8 +558,13 @@ static int __mem_cgroup_try_charge(struc
* moved to swap cache or just unmapped from the cgroup.
* Check the limit again to see if the reclaim reduced the
* current usage of the cgroup before giving up
+ *
*/
- if (res_counter_check_under_limit(&mem->res))
+ if (!do_swap_account &&
+ res_counter_check_under_limit(&mem->res))
+ continue;
+ if (do_swap_account &&
+ res_counter_check_under_limit(&mem->memsw))
continue;
if (!nr_retries--) {
@@ -582,6 +618,8 @@ static void __mem_cgroup_commit_charge(s
if (unlikely(PageCgroupUsed(pc))) {
unlock_page_cgroup(pc);
res_counter_uncharge(&mem->res, PAGE_SIZE);
+ if (do_swap_account)
+ res_counter_uncharge(&mem->memsw, PAGE_SIZE);
css_put(&mem->css);
return;
}
@@ -646,6 +684,8 @@ static int mem_cgroup_move_account(struc
__mem_cgroup_remove_list(from_mz, pc);
css_put(&from->css);
res_counter_uncharge(&from->res, PAGE_SIZE);
+ if (do_swap_account)
+ res_counter_uncharge(&from->memsw, PAGE_SIZE);
pc->mem_cgroup = to;
css_get(&to->css);
__mem_cgroup_add_list(to_mz, pc, false);
@@ -692,8 +732,11 @@ static int mem_cgroup_move_parent(struct
/* drop extra refcnt */
css_put(&parent->css);
/* uncharge if move fails */
- if (ret)
+ if (ret) {
res_counter_uncharge(&parent->res, PAGE_SIZE);
+ if (do_swap_account)
+ res_counter_uncharge(&parent->memsw, PAGE_SIZE);
+ }
return ret;
}
@@ -791,7 +834,34 @@ int mem_cgroup_cache_charge(struct page
MEM_CGROUP_CHARGE_TYPE_SHMEM, NULL);
}
+int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
+ struct page *page,
+ gfp_t mask, struct mem_cgroup **ptr)
+{
+ struct mem_cgroup *mem;
+ swp_entry_t ent;
+
+ if (mem_cgroup_subsys.disabled)
+ return 0;
+
+ if (!do_swap_account)
+ goto charge_cur_mm;
+
+ ent.val = page_private(page);
+
+ mem = lookup_swap_cgroup(ent);
+ if (!mem || mem->obsolete)
+ goto charge_cur_mm;
+ *ptr = mem;
+ return __mem_cgroup_try_charge(NULL, mask, ptr, true);
+charge_cur_mm:
+ if (unlikely(!mm))
+ mm = &init_mm;
+ return __mem_cgroup_try_charge(mm, mask, ptr, true);
+}
+
#ifdef CONFIG_SWAP
+
int mem_cgroup_cache_charge_swapin(struct page *page,
struct mm_struct *mm, gfp_t mask, bool locked)
{
@@ -808,8 +878,28 @@ int mem_cgroup_cache_charge_swapin(struc
* we reach here.
*/
if (PageSwapCache(page)) {
+ struct mem_cgroup *mem = NULL;
+ swp_entry_t ent;
+
+ ent.val = page_private(page);
+ if (do_swap_account) {
+ mem = lookup_swap_cgroup(ent);
+ if (mem && mem->obsolete)
+ mem = NULL;
+ if (mem)
+ mm = NULL;
+ }
ret = mem_cgroup_charge_common(page, mm, mask,
- MEM_CGROUP_CHARGE_TYPE_SHMEM, NULL);
+ MEM_CGROUP_CHARGE_TYPE_SHMEM, mem);
+
+ if (!ret && do_swap_account) {
+ /* avoid double counting */
+ mem = swap_cgroup_record(ent, NULL);
+ if (mem) {
+ res_counter_uncharge(&mem->memsw, PAGE_SIZE);
+ mem_cgroup_put(mem);
+ }
+ }
}
if (!locked)
unlock_page(page);
@@ -828,6 +918,23 @@ void mem_cgroup_commit_charge_swapin(str
return;
pc = lookup_page_cgroup(page);
__mem_cgroup_commit_charge(ptr, pc, MEM_CGROUP_CHARGE_TYPE_MAPPED);
+ /*
+ * Now swap is on-memory. This means this page may be
+ * counted both as mem and swap....double count.
+ * Fix it by uncharging from memsw. This SwapCache is stable
+ * because we're still under lock_page().
+ */
+ if (do_swap_account) {
+ swp_entry_t ent = {.val = page_private(page)};
+ struct mem_cgroup *memcg;
+ memcg = swap_cgroup_record(ent, NULL);
+ if (memcg) {
+ /* If memcg is obsolete, memcg can be != ptr */
+ res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
+ mem_cgroup_put(memcg);
+ }
+
+ }
}
void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *mem)
@@ -837,6 +944,8 @@ void mem_cgroup_cancel_charge_swapin(str
if (!mem)
return;
res_counter_uncharge(&mem->res, PAGE_SIZE);
+ if (do_swap_account)
+ res_counter_uncharge(&mem->memsw, PAGE_SIZE);
css_put(&mem->css);
}
@@ -844,29 +953,31 @@ void mem_cgroup_cancel_charge_swapin(str
/*
* uncharge if !page_mapped(page)
*/
-static void
+static struct mem_cgroup *
__mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype)
{
struct page_cgroup *pc;
- struct mem_cgroup *mem;
+ struct mem_cgroup *mem = NULL;
struct mem_cgroup_per_zone *mz;
unsigned long flags;
if (mem_cgroup_subsys.disabled)
- return;
+ return NULL;
if (PageSwapCache(page))
- return;
+ return NULL;
/*
* Check if our page_cgroup is valid
*/
pc = lookup_page_cgroup(page);
if (unlikely(!pc || !PageCgroupUsed(pc)))
- return;
+ return NULL;
lock_page_cgroup(pc);
+ mem = pc->mem_cgroup;
+
if (!PageCgroupUsed(pc))
goto unlock_out;
@@ -886,8 +997,11 @@ __mem_cgroup_uncharge_common(struct page
break;
}
+ res_counter_uncharge(&mem->res, PAGE_SIZE);
+ if (do_swap_account && (ctype != MEM_CGROUP_CHARGE_TYPE_SWAPOUT))
+ res_counter_uncharge(&mem->memsw, PAGE_SIZE);
+
ClearPageCgroupUsed(pc);
- mem = pc->mem_cgroup;
mz = page_cgroup_zoneinfo(pc);
spin_lock_irqsave(&mz->lru_lock, flags);
@@ -895,14 +1009,13 @@ __mem_cgroup_uncharge_common(struct page
spin_unlock_irqrestore(&mz->lru_lock, flags);
unlock_page_cgroup(pc);
- res_counter_uncharge(&mem->res, PAGE_SIZE);
css_put(&mem->css);
- return;
+ return mem;
unlock_out:
unlock_page_cgroup(pc);
- return;
+ return NULL;
}
void mem_cgroup_uncharge_page(struct page *page)
@@ -922,11 +1035,43 @@ void mem_cgroup_uncharge_cache_page(stru
__mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_CACHE);
}
-void mem_cgroup_uncharge_swapcache(struct page *page)
+/*
+ * called from __delete_from_swap_cache() and drop "page" account.
+ * memcg information is recorded to swap_cgroup of "ent"
+ */
+void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
{
- __mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_SWAPOUT);
+ struct mem_cgroup *memcg;
+
+ memcg = __mem_cgroup_uncharge_common(page,
+ MEM_CGROUP_CHARGE_TYPE_SWAPOUT);
+ /* record memcg information */
+ if (do_swap_account && memcg) {
+ swap_cgroup_record(ent, memcg);
+ mem_cgroup_get(memcg);
+ }
}
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
+/*
+ * called from swap_entry_free(). remove record in swap_cgroup and
+ * uncharge "memsw" account.
+ */
+void mem_cgroup_uncharge_swap(swp_entry_t ent)
+{
+ struct mem_cgroup *memcg;
+
+ if (!do_swap_account)
+ return;
+
+ memcg = swap_cgroup_record(ent, NULL);
+ if (memcg) {
+ res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
+ mem_cgroup_put(memcg);
+ }
+}
+#endif
+
/*
* Before starting migration, account PAGE_SIZE to mem_cgroup that the old
* page belongs to.
@@ -1034,7 +1179,7 @@ int mem_cgroup_shrink_usage(struct mm_st
rcu_read_unlock();
do {
- progress = try_to_free_mem_cgroup_pages(mem, gfp_mask);
+ progress = try_to_free_mem_cgroup_pages(mem, gfp_mask, true);
progress += res_counter_check_under_limit(&mem->res);
} while (!progress && --retry);
@@ -1052,6 +1197,11 @@ static int mem_cgroup_resize_limit(struc
int progress;
int ret = 0;
+ if (do_swap_account) {
+ if (val > memcg->memsw.limit)
+ return -EINVAL;
+ }
+
while (res_counter_set_limit(&memcg->res, val)) {
if (signal_pending(current)) {
ret = -EINTR;
@@ -1061,13 +1211,55 @@ static int mem_cgroup_resize_limit(struc
ret = -EBUSY;
break;
}
- progress = try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL);
+ progress = try_to_free_mem_cgroup_pages(memcg,
+ GFP_KERNEL, false);
if (!progress)
retry_count--;
}
return ret;
}
+int mem_cgroup_resize_memsw_limit(struct mem_cgroup *memcg,
+ unsigned long long val)
+{
+ int retry_count = MEM_CGROUP_RECLAIM_RETRIES;
+ unsigned long flags;
+ u64 memlimit, oldusage, curusage;
+ int ret;
+
+ if (!do_swap_account)
+ return -EINVAL;
+
+ while (retry_count) {
+ if (signal_pending(current)) {
+ ret = -EINTR;
+ break;
+ }
+ /*
+ * Rather than hide all in some function, I do this in
+ * open coded manner. You see what this really does.
+ * We have to guarantee mem->res.limit < mem->memsw.limit.
+ */
+ spin_lock_irqsave(&memcg->res.lock, flags);
+ memlimit = memcg->res.limit;
+ if (memlimit > val) {
+ spin_unlock_irqrestore(&memcg->res.lock, flags);
+ ret = -EINVAL;
+ break;
+ }
+ ret = res_counter_set_limit(&memcg->memsw, val);
+ oldusage = memcg->memsw.usage;
+ spin_unlock_irqrestore(&memcg->res.lock, flags);
+
+ if (!ret)
+ break;
+ try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL, true);
+ curusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
+ if (curusage >= oldusage)
+ retry_count--;
+ }
+ return ret;
+}
/*
* This routine traverse page_cgroup in given list and drop them all.
@@ -1192,7 +1384,7 @@ try_to_free:
goto out;
}
progress = try_to_free_mem_cgroup_pages(mem,
- GFP_HIGHUSER_MOVABLE);
+ GFP_HIGHUSER_MOVABLE, false);
if (!progress) {
nr_retries--;
/* maybe some writeback is necessary */
@@ -1215,8 +1407,25 @@ int mem_cgroup_force_empty_write(struct
static u64 mem_cgroup_read(struct cgroup *cont, struct cftype *cft)
{
- return res_counter_read_u64(&mem_cgroup_from_cont(cont)->res,
- cft->private);
+ struct mem_cgroup *mem = mem_cgroup_from_cont(cont);
+ u64 val = 0;
+ int type, name;
+
+ type = MEMFILE_TYPE(cft->private);
+ name = MEMFILE_ATTR(cft->private);
+ switch (type) {
+ case _MEM:
+ val = res_counter_read_u64(&mem->res, name);
+ break;
+ case _MEMSWAP:
+ if (do_swap_account)
+ val = res_counter_read_u64(&mem->memsw, name);
+ break;
+ default:
+ BUG();
+ break;
+ }
+ return val;
}
/*
* The user of this function is...
@@ -1226,15 +1435,22 @@ static int mem_cgroup_write(struct cgrou
const char *buffer)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
+ int type, name;
unsigned long long val;
int ret;
- switch (cft->private) {
+ type = MEMFILE_TYPE(cft->private);
+ name = MEMFILE_ATTR(cft->private);
+ switch (name) {
case RES_LIMIT:
/* This function does all necessary parse...reuse it */
ret = res_counter_memparse_write_strategy(buffer, &val);
- if (!ret)
+ if (ret)
+ break;
+ if (type == _MEM)
ret = mem_cgroup_resize_limit(memcg, val);
+ else
+ ret = mem_cgroup_resize_memsw_limit(memcg, val);
break;
default:
ret = -EINVAL; /* should be BUG() ? */
@@ -1246,14 +1462,23 @@ static int mem_cgroup_write(struct cgrou
static int mem_cgroup_reset(struct cgroup *cont, unsigned int event)
{
struct mem_cgroup *mem;
+ int type, name;
mem = mem_cgroup_from_cont(cont);
- switch (event) {
+ type = MEMFILE_TYPE(event);
+ name = MEMFILE_ATTR(event);
+ switch (name) {
case RES_MAX_USAGE:
- res_counter_reset_max(&mem->res);
+ if (type == _MEM)
+ res_counter_reset_max(&mem->res);
+ else
+ res_counter_reset_max(&mem->memsw);
break;
case RES_FAILCNT:
- res_counter_reset_failcnt(&mem->res);
+ if (type == _MEM)
+ res_counter_reset_failcnt(&mem->res);
+ else
+ res_counter_reset_failcnt(&mem->memsw);
break;
}
return 0;
@@ -1314,24 +1539,24 @@ static int mem_control_stat_show(struct
static struct cftype mem_cgroup_files[] = {
{
.name = "usage_in_bytes",
- .private = RES_USAGE,
+ .private = MEMFILE_PRIVATE(_MEM, RES_USAGE),
.read_u64 = mem_cgroup_read,
},
{
.name = "max_usage_in_bytes",
- .private = RES_MAX_USAGE,
+ .private = MEMFILE_PRIVATE(_MEM, RES_MAX_USAGE),
.trigger = mem_cgroup_reset,
.read_u64 = mem_cgroup_read,
},
{
.name = "limit_in_bytes",
- .private = RES_LIMIT,
+ .private = MEMFILE_PRIVATE(_MEM, RES_LIMIT),
.write_string = mem_cgroup_write,
.read_u64 = mem_cgroup_read,
},
{
.name = "failcnt",
- .private = RES_FAILCNT,
+ .private = MEMFILE_PRIVATE(_MEM, RES_FAILCNT),
.trigger = mem_cgroup_reset,
.read_u64 = mem_cgroup_read,
},
@@ -1345,6 +1570,47 @@ static struct cftype mem_cgroup_files[]
},
};
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
+static struct cftype memsw_cgroup_files[] = {
+ {
+ .name = "memsw.usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
+ .read_u64 = mem_cgroup_read,
+ },
+ {
+ .name = "memsw.max_usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_MAX_USAGE),
+ .trigger = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read,
+ },
+ {
+ .name = "memsw.limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_LIMIT),
+ .write_string = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read,
+ },
+ {
+ .name = "memsw.failcnt",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_FAILCNT),
+ .trigger = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read,
+ },
+};
+
+static int register_memsw_files(struct cgroup *cont, struct cgroup_subsys *ss)
+{
+ if (!do_swap_account)
+ return 0;
+ return cgroup_add_files(cont, ss, memsw_cgroup_files,
+ ARRAY_SIZE(memsw_cgroup_files));
+};
+#else
+static int register_memsw_files(struct cgroup *cont, struct cgroup_subsys *ss)
+{
+ return 0;
+}
+#endif
+
static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *mem, int node)
{
struct mem_cgroup_per_node *pn;
@@ -1403,14 +1669,44 @@ static struct mem_cgroup *mem_cgroup_all
return mem;
}
+/*
+ * At destroying mem_cgroup, references from swap_cgroup can remain.
+ * (scanning all at force_empty is too costly...)
+ *
+ * Instead of clearing all references at force_empty, we remember
+ * the number of reference from swap_cgroup and free mem_cgroup when
+ * it goes down to 0.
+ *
+ * When mem_cgroup is destroyed, mem->obsolete will be set to 0 and
+ * entry which points to this memcg will be ignore at swapin.
+ *
+ * Removal of cgroup itself succeeds regardless of refs from swap.
+ */
+
static void mem_cgroup_free(struct mem_cgroup *mem)
{
+ if (atomic_read(&mem->refcnt) > 0)
+ return;
if (mem_cgroup_size() < PAGE_SIZE)
kfree(mem);
else
vfree(mem);
}
+static void mem_cgroup_get(struct mem_cgroup *mem)
+{
+ atomic_inc(&mem->refcnt);
+}
+
+static void mem_cgroup_put(struct mem_cgroup *mem)
+{
+ if (atomic_dec_and_test(&mem->refcnt)) {
+ if (!mem->obsolete)
+ return;
+ mem_cgroup_free(mem);
+ }
+}
+
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
static void __init enable_swap_cgroup(void)
@@ -1435,6 +1731,7 @@ mem_cgroup_create(struct cgroup_subsys *
return ERR_PTR(-ENOMEM);
res_counter_init(&mem->res);
+ res_counter_init(&mem->memsw);
for_each_node_state(node, N_POSSIBLE)
if (alloc_mem_cgroup_per_zone_info(mem, node))
@@ -1455,6 +1752,7 @@ static void mem_cgroup_pre_destroy(struc
struct cgroup *cont)
{
struct mem_cgroup *mem = mem_cgroup_from_cont(cont);
+ mem->obsolete = 1;
mem_cgroup_force_empty(mem, false);
}
@@ -1473,8 +1771,14 @@ static void mem_cgroup_destroy(struct cg
static int mem_cgroup_populate(struct cgroup_subsys *ss,
struct cgroup *cont)
{
- return cgroup_add_files(cont, ss, mem_cgroup_files,
- ARRAY_SIZE(mem_cgroup_files));
+ int ret;
+
+ ret = cgroup_add_files(cont, ss, mem_cgroup_files,
+ ARRAY_SIZE(mem_cgroup_files));
+
+ if (!ret)
+ ret = register_memsw_files(cont, ss);
+ return ret;
}
static void mem_cgroup_move_task(struct cgroup_subsys *ss,
Index: mmotm-2.6.28-Nov30/mm/memory.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/memory.c
+++ mmotm-2.6.28-Nov30/mm/memory.c
@@ -2344,7 +2344,8 @@ static int do_swap_page(struct mm_struct
lock_page(page);
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
- if (mem_cgroup_try_charge(mm, GFP_KERNEL, &ptr) == -ENOMEM) {
+ if (mem_cgroup_try_charge_swapin(mm, page,
+ GFP_KERNEL, &ptr) == -ENOMEM) {
ret = VM_FAULT_OOM;
unlock_page(page);
goto out;
Index: mmotm-2.6.28-Nov30/mm/swap_state.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/swap_state.c
+++ mmotm-2.6.28-Nov30/mm/swap_state.c
@@ -17,6 +17,7 @@
#include <linux/backing-dev.h>
#include <linux/pagevec.h>
#include <linux/migrate.h>
+#include <linux/page_cgroup.h>
#include <asm/pgtable.h>
@@ -108,6 +109,8 @@ int add_to_swap_cache(struct page *page,
*/
void __delete_from_swap_cache(struct page *page)
{
+ swp_entry_t ent = {.val = page_private(page)};
+
VM_BUG_ON(!PageLocked(page));
VM_BUG_ON(!PageSwapCache(page));
VM_BUG_ON(PageWriteback(page));
@@ -118,7 +121,7 @@ void __delete_from_swap_cache(struct pag
total_swapcache_pages--;
__dec_zone_page_state(page, NR_FILE_PAGES);
INC_CACHE_INFO(del_total);
- mem_cgroup_uncharge_swapcache(page);
+ mem_cgroup_uncharge_swapcache(page, ent);
}
/**
Index: mmotm-2.6.28-Nov30/mm/swapfile.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/swapfile.c
+++ mmotm-2.6.28-Nov30/mm/swapfile.c
@@ -471,8 +471,9 @@ out:
return NULL;
}
-static int swap_entry_free(struct swap_info_struct *p, unsigned long offset)
+static int swap_entry_free(struct swap_info_struct *p, swp_entry_t ent)
{
+ unsigned long offset = swp_offset(ent);
int count = p->swap_map[offset];
if (count < SWAP_MAP_MAX) {
@@ -487,6 +488,7 @@ static int swap_entry_free(struct swap_i
swap_list.next = p - swap_info;
nr_swap_pages++;
p->inuse_pages--;
+ mem_cgroup_uncharge_swap(ent);
}
}
return count;
@@ -502,7 +504,7 @@ void swap_free(swp_entry_t entry)
p = swap_info_get(entry);
if (p) {
- swap_entry_free(p, swp_offset(entry));
+ swap_entry_free(p, entry);
spin_unlock(&swap_lock);
}
}
@@ -582,7 +584,7 @@ void free_swap_and_cache(swp_entry_t ent
p = swap_info_get(entry);
if (p) {
- if (swap_entry_free(p, swp_offset(entry)) == 1) {
+ if (swap_entry_free(p, entry) == 1) {
page = find_get_page(&swapper_space, entry.val);
if (page && !trylock_page(page)) {
page_cache_release(page);
@@ -695,7 +697,8 @@ static int unuse_pte(struct vm_area_stru
pte_t *pte;
int ret = 1;
- if (mem_cgroup_try_charge(vma->vm_mm, GFP_KERNEL, &ptr))
+ if (mem_cgroup_try_charge_swapin(vma->vm_mm, page,
+ GFP_KERNEL, &ptr))
ret = -ENOMEM;
pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
Index: mmotm-2.6.28-Nov30/mm/vmscan.c
===================================================================
--- mmotm-2.6.28-Nov30.orig/mm/vmscan.c
+++ mmotm-2.6.28-Nov30/mm/vmscan.c
@@ -1710,7 +1710,8 @@ unsigned long try_to_free_pages(struct z
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
- gfp_t gfp_mask)
+ gfp_t gfp_mask,
+ bool noswap)
{
struct scan_control sc = {
.may_writepage = !laptop_mode,
@@ -1723,6 +1724,9 @@ unsigned long try_to_free_mem_cgroup_pag
};
struct zonelist *zonelist;
+ if (noswap)
+ sc.may_swap = 0;
+
sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
zonelist = NODE_DATA(numa_node_id())->node_zonelists;
* [mmotm][PATCH 4/4] replacement-for-memcg-memswap-controller-core-make-resize-limit-hold-mutex.patch
2008-12-02 4:17 [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
` (2 preceding siblings ...)
2008-12-02 4:20 ` [mmotm][PATCH 3/4] replacement-for-memcg-memswap-controller-core.patch KAMEZAWA Hiroyuki
@ 2008-12-02 4:21 ` KAMEZAWA Hiroyuki
2008-12-03 7:49 ` [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
4 siblings, 0 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:21 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, balbir, nishimura
From: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
mem_cgroup_resize_memsw_limit() tries to hold memsw.lock while holding
res.lock, so the message below is shown when writing to the
memory.memsw.limit_in_bytes file:
[ INFO: possible recursive locking detected ]
2.6.28-rc4-mm1-mmotm-2008-11-14-20-50-ef4e17ef #1
bash/4406 is trying to acquire lock:
(&counter->lock){....}, at: [<c0498408>] mem_cgroup_resize_memsw_limit+0x8d/0x113
but task is already holding lock:
(&counter->lock){....}, at: [<c04983d6>] mem_cgroup_resize_memsw_limit+0x5b/0x113
other info that might help us debug this:
1 lock held by bash/4406:
#0: (&counter->lock){....}, at: [<c04983d6>] mem_cgroup_resize_memsw_limit+0x5b/0x113
stack backtrace:
Pid: 4406, comm: bash Not tainted 2.6.28-rc4-mm1-mmotm-2008-11-14-20-50-ef4e17ef #1
Call Trace:
[<c066e60f>] ? printk+0xf/0x18
[<c044d0c0>] __lock_acquire+0xc67/0x1353
[<c044d793>] ? __lock_acquire+0x133a/0x1353
[<c044d81c>] lock_acquire+0x70/0x97
[<c0498408>] ? mem_cgroup_resize_memsw_limit+0x8d/0x113
[<c0671519>] _spin_lock_irqsave+0x3a/0x6d
[<c0498408>] ? mem_cgroup_resize_memsw_limit+0x8d/0x113
[<c0498408>] mem_cgroup_resize_memsw_limit+0x8d/0x113
[<c0518a6c>] ? memparse+0x14/0x66
[<c0498594>] mem_cgroup_write+0x4a/0x50
[<c045e063>] cgroup_file_write+0x181/0x1c6
[<c0449e43>] ? lock_release_holdtime+0x1a/0x168
[<c04ec725>] ? security_file_permission+0xf/0x11
[<c049b5f0>] ? rw_verify_area+0x76/0x97
[<c045dee2>] ? cgroup_file_write+0x0/0x1c6
[<c049bce6>] vfs_write+0x8a/0x12e
[<c049be23>] sys_write+0x3b/0x60
[<c0403867>] sysenter_do_call+0x12/0x3f
This patch defines a new mutex and makes both mem_cgroup_resize_limit() and
mem_cgroup_resize_memsw_limit() hold it, so the spin_lock_irqsave on the
res_counter lock can be removed.
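
Condensed, the resulting pattern in mem_cgroup_resize_limit() is roughly the
following (just a sketch of the hunk below; the memsw variant is symmetric):

	mutex_lock(&set_limit_mutex);
	/* keep the invariant: mem->res.limit never exceeds mem->memsw.limit */
	memswlimit = res_counter_read_u64(&memcg->memsw, RES_LIMIT);
	if (memswlimit < val) {
		mutex_unlock(&set_limit_mutex);
		ret = -EINVAL;
		break;
	}
	ret = res_counter_set_limit(&memcg->res, val);
	mutex_unlock(&set_limit_mutex);

Because both limits are read and updated only under set_limit_mutex, neither
path ever takes one res_counter spinlock while holding the other, which is
exactly the recursive acquisition lockdep complained about.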
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Index: mmotm-2.6.28-Dec01/mm/memcontrol.c
===================================================================
--- mmotm-2.6.28-Dec01.orig/mm/memcontrol.c
+++ mmotm-2.6.28-Dec01/mm/memcontrol.c
@@ -27,6 +27,7 @@
#include <linux/backing-dev.h>
#include <linux/bit_spinlock.h>
#include <linux/rcupdate.h>
+#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/spinlock.h>
@@ -1189,32 +1190,43 @@ int mem_cgroup_shrink_usage(struct mm_st
return 0;
}
+static DEFINE_MUTEX(set_limit_mutex);
+
static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
- unsigned long long val)
+ unsigned long long val)
{
int retry_count = MEM_CGROUP_RECLAIM_RETRIES;
int progress;
+ u64 memswlimit;
int ret = 0;
- if (do_swap_account) {
- if (val > memcg->memsw.limit)
- return -EINVAL;
- }
-
- while (res_counter_set_limit(&memcg->res, val)) {
+ while (retry_count) {
if (signal_pending(current)) {
ret = -EINTR;
break;
}
- if (!retry_count) {
- ret = -EBUSY;
+ /*
+ * Rather than hide all in some function, I do this in
+ * open coded manner. You see what this really does.
+ * We have to guarantee mem->res.limit < mem->memsw.limit.
+ */
+ mutex_lock(&set_limit_mutex);
+ memswlimit = res_counter_read_u64(&memcg->memsw, RES_LIMIT);
+ if (memswlimit < val) {
+ ret = -EINVAL;
+ mutex_unlock(&set_limit_mutex);
break;
}
+ ret = res_counter_set_limit(&memcg->res, val);
+ mutex_unlock(&set_limit_mutex);
+
+ if (!ret)
+ break;
+
progress = try_to_free_mem_cgroup_pages(memcg,
GFP_KERNEL, false);
- if (!progress)
- retry_count--;
+ if (!progress) retry_count--;
}
return ret;
}
@@ -1223,7 +1235,6 @@ int mem_cgroup_resize_memsw_limit(struct
unsigned long long val)
{
int retry_count = MEM_CGROUP_RECLAIM_RETRIES;
- unsigned long flags;
u64 memlimit, oldusage, curusage;
int ret;
@@ -1240,19 +1251,20 @@ int mem_cgroup_resize_memsw_limit(struct
* open coded manner. You see what this really does.
* We have to guarantee mem->res.limit < mem->memsw.limit.
*/
- spin_lock_irqsave(&memcg->res.lock, flags);
- memlimit = memcg->res.limit;
+ mutex_lock(&set_limit_mutex);
+ memlimit = res_counter_read_u64(&memcg->res, RES_LIMIT);
if (memlimit > val) {
- spin_unlock_irqrestore(&memcg->res.lock, flags);
ret = -EINVAL;
+ mutex_unlock(&set_limit_mutex);
break;
}
ret = res_counter_set_limit(&memcg->memsw, val);
- oldusage = memcg->memsw.usage;
- spin_unlock_irqrestore(&memcg->res.lock, flags);
+ mutex_unlock(&set_limit_mutex);
if (!ret)
break;
+
+ oldusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL, true);
curusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
if (curusage >= oldusage)
@@ -1261,6 +1273,7 @@ int mem_cgroup_resize_memsw_limit(struct
return ret;
}
+
/*
* This routine traverse page_cgroup in given list and drop them all.
* *And* this routine doesn't reclaim page itself, just removes page_cgroup.
* Re: [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch
2008-12-02 4:18 ` [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch KAMEZAWA Hiroyuki
@ 2008-12-02 4:35 ` Balbir Singh
2008-12-02 4:49 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 8+ messages in thread
From: Balbir Singh @ 2008-12-02 4:35 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, nishimura
* KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2008-12-02 13:18:40]:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
> Currently, management of "charge" under page migration is done in the
> following manner (assume page contents are migrated from oldpage to newpage):
>
> before
>  - "newpage" is charged before migration.
> at success
>  - "oldpage" is uncharged somewhere (unmap, radix-tree replacement).
> at failure
>  - "newpage" is uncharged.
>  - "oldpage" is charged again if necessary. (*1)
>
> But (*1) is not reliable, because the charge is done with GFP_ATOMIC and
> may fail.
>
Kamezawa,
You did share page migration test cases with me, but I would really
like to see a page migration test scenario or rather a set of test
scenarios for the memory controller. Sudhir has added some LTP test
cases, but for now I would be satisfied with Documentation updates for
testing the various memory controller features (sort of build a
regression set of cases in documented form and automate it later). I
can start with what I have; I would request you to update the
migration cases and add any other cases you have.
--
Balbir
* Re: [mmotm][PATCH 1/4] replacement-for-memcg-simple-migration-handling.patch
2008-12-02 4:35 ` Balbir Singh
@ 2008-12-02 4:49 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-02 4:49 UTC (permalink / raw)
To: balbir; +Cc: akpm, hugh, linux-mm, nishimura
On Tue, 2 Dec 2008 10:05:31 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2008-12-02 13:18:40]:
>
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> >
> > Currently, management of "charge" under page migration is done in the
> > following manner (assume page contents are migrated from oldpage to newpage):
> >
> > before
> >  - "newpage" is charged before migration.
> > at success
> >  - "oldpage" is uncharged somewhere (unmap, radix-tree replacement).
> > at failure
> >  - "newpage" is uncharged.
> >  - "oldpage" is charged again if necessary. (*1)
> >
> > But (*1) is not reliable, because the charge is done with GFP_ATOMIC and
> > may fail.
> >
>
> Kamezawa,
>
> You did share page migration test cases with me, but I would really
> like to see a page migration test scenario or rather a set of test
> scenarios for the memory controller. Sudhir has added some LTP test
> cases, but for now I would be satisfied with Documentation updates for
> testing the various memory controller features (sort of build a
> regression set of cases in documented form and automate it later). I
> can start with what I have; I would request you to update the
> migration cases and add any other cases you have.
>
Hmm, I will consider some.
But there are not so many "features" in memcg, just "handlers" for some
memory jobs. So please be careful about what is API we must keep and what is
merely current behavior.
-Kame
* Re: [mmotm][PATCH 0/4] request for patch replacement
2008-12-02 4:17 [mmotm][PATCH 0/4] request for patch replacement KAMEZAWA Hiroyuki
` (3 preceding siblings ...)
2008-12-02 4:21 ` [mmotm][PATCH 4/4] replacement-for-memcg-memswap-controller-core-make-resize-limit-hold-mutex.patch KAMEZAWA Hiroyuki
@ 2008-12-03 7:49 ` KAMEZAWA Hiroyuki
4 siblings, 0 replies; 8+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-03 7:49 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: akpm, hugh, linux-mm, balbir, nishimura
On Tue, 2 Dec 2008 13:17:23 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> Hi, I'm sorry for asking this.
>
> Please drop memcg-fix-gfp_mask-of-callers-of-charge.patch.
>
> It got a NACK: http://marc.info/?l=linux-kernel&m=122817796729117&w=2
>
Please ignore this. memcg-revert-gfp-mask-fix.patch does all necessary fixes.
Sorry,
-Kame