From: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm <linux-mm@kvack.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Balbir Singh <balbir@in.ibm.com>,
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Subject: [PATCH 1/2] memcg: fix shrink_usage
Date: Tue, 21 Apr 2009 14:29:18 +0900
Message-ID: <20090421142918.16026817.nishimura@mxp.nes.nec.co.jp>
In-Reply-To: <20090421142641.aa4efa2f.nishimura@mxp.nes.nec.co.jp>

Current mem_cgroup_shrink_usage() has two problems:

1. It doesn't call mem_cgroup_out_of_memory() and doesn't update
   last_oom_jiffies, so pagefault_out_of_memory() invokes the global OOM
   killer.

2. Considering hierarchy, shrinking has to be done from mem_over_limit,
   not from the memcg to which the page would be charged.

mem_cgroup_try_charge_swapin() does all of this work properly, so use it
and call mem_cgroup_cancel_charge_swapin() when it succeeds.

The name "shrink_usage" is no longer appropriate for this behavior, so
rename the function as well.
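
In short, the new fallback is a charge-then-cancel pattern. The following
is a simplified sketch of the mm/memcontrol.c hunk below (illustrative
only, not a standalone compilable unit):

	struct mem_cgroup *mem = NULL;
	int ret;

	/*
	 * Try to charge against the proper memcg; this reclaims from
	 * mem_over_limit and, on memcg OOM, updates last_oom_jiffies.
	 */
	ret = mem_cgroup_try_charge_swapin(mm, page, gfp_mask, &mem);
	if (!ret)
		/* Only the reclaim/OOM side effects were wanted, so drop
		 * the charge again (this also handles mem == NULL). */
		mem_cgroup_cancel_charge_swapin(mem);
	return ret;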
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 include/linux/memcontrol.h |    4 ++--
 mm/memcontrol.c            |   33 ++++++++++++---------------------
 mm/shmem.c                 |    8 ++++++--
 3 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 18146c9..928b714 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -56,7 +56,7 @@ extern void mem_cgroup_move_lists(struct page *page,
 				  enum lru_list from, enum lru_list to);
 extern void mem_cgroup_uncharge_page(struct page *page);
 extern void mem_cgroup_uncharge_cache_page(struct page *page);
-extern int mem_cgroup_shrink_usage(struct page *page,
+extern int mem_cgroup_shmem_charge_fallback(struct page *page,
 			struct mm_struct *mm, gfp_t gfp_mask);
 
 extern unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
@@ -155,7 +155,7 @@ static inline void mem_cgroup_uncharge_cache_page(struct page *page)
 {
 }
 
-static inline int mem_cgroup_shrink_usage(struct page *page,
+static inline int mem_cgroup_shmem_charge_fallback(struct page *page,
 			struct mm_struct *mm, gfp_t gfp_mask)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2fc6d6c..619b0c1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1618,37 +1618,28 @@ void mem_cgroup_end_migration(struct mem_cgroup *mem,
 }
 
 /*
- * A call to try to shrink memory usage under specified resource controller.
- * This is typically used for page reclaiming for shmem for reducing side
- * effect of page allocation from shmem, which is used by some mem_cgroup.
+ * A call to try to shrink memory usage on charge failure at shmem's swapin.
+ * Calling hierarchical_reclaim is not enough because we should update
+ * last_oom_jiffies to prevent pagefault_out_of_memory from invoking global OOM.
+ * Moreover, considering hierarchy, we should reclaim from mem_over_limit,
+ * not from the memcg to which this page would be charged.
+ * try_charge_swapin does all of this work properly.
  */
-int mem_cgroup_shrink_usage(struct page *page,
+int mem_cgroup_shmem_charge_fallback(struct page *page,
 			    struct mm_struct *mm,
 			    gfp_t gfp_mask)
 {
 	struct mem_cgroup *mem = NULL;
-	int progress = 0;
-	int retry = MEM_CGROUP_RECLAIM_RETRIES;
+	int ret;
 
 	if (mem_cgroup_disabled())
 		return 0;
-	if (page)
-		mem = try_get_mem_cgroup_from_swapcache(page);
-	if (!mem && mm)
-		mem = try_get_mem_cgroup_from_mm(mm);
-	if (unlikely(!mem))
-		return 0;
 
-	do {
-		progress = mem_cgroup_hierarchical_reclaim(mem,
-					gfp_mask, true, false);
-		progress += mem_cgroup_check_under_limit(mem);
-	} while (!progress && --retry);
+	ret = mem_cgroup_try_charge_swapin(mm, page, gfp_mask, &mem);
+	if (!ret)
+		mem_cgroup_cancel_charge_swapin(mem); /* it does !mem check */
 
-	css_put(&mem->css);
-	if (!retry)
-		return -ENOMEM;
-	return 0;
+	return ret;
 }
 
 static DEFINE_MUTEX(set_limit_mutex);
diff --git a/mm/shmem.c b/mm/shmem.c
index d94d2e9..2419562 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1325,8 +1325,12 @@ repeat:
 			shmem_swp_unmap(entry);
 			spin_unlock(&info->lock);
 			if (error == -ENOMEM) {
-				/* allow reclaim from this memory cgroup */
-				error = mem_cgroup_shrink_usage(swappage,
+				/*
+				 * reclaim from the proper memory cgroup and
+				 * invoke memcg OOM handling if needed.
+				 */
+				error = mem_cgroup_shmem_charge_fallback(
+								swappage,
 								current->mm,
 								gfp);
 				if (error) {
--