* [PATCH 01/09] inactive_anon_is_low() move to vmscan.c
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
@ 2008-11-30 10:55 ` KOSAKI Motohiro
2008-11-30 15:18 ` Rik van Riel
2008-11-30 10:56 ` [PATCH 02/09] memcg: make inactive_anon_is_low() KOSAKI Motohiro
` (8 subsequent siblings)
9 siblings, 1 reply; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 10:55 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
inactive_anon_is_low() is called only from vmscan,
so it can be moved to vmscan.c.
This patch makes no functional change.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/mm_inline.h | 19 -------------------
mm/vmscan.c | 20 ++++++++++++++++++++
2 files changed, 20 insertions(+), 19 deletions(-)
Index: b/include/linux/mm_inline.h
===================================================================
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -81,23 +81,4 @@ static inline enum lru_list page_lru(str
return lru;
}
-/**
- * inactive_anon_is_low - check if anonymous pages need to be deactivated
- * @zone: zone to check
- *
- * Returns true if the zone does not have enough inactive anon pages,
- * meaning some active anon pages need to be deactivated.
- */
-static inline int inactive_anon_is_low(struct zone *zone)
-{
- unsigned long active, inactive;
-
- active = zone_page_state(zone, NR_ACTIVE_ANON);
- inactive = zone_page_state(zone, NR_INACTIVE_ANON);
-
- if (inactive * zone->inactive_ratio < active)
- return 1;
-
- return 0;
-}
#endif
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1350,6 +1350,26 @@ static void shrink_active_list(unsigned
pagevec_release(&pvec);
}
+/**
+ * inactive_anon_is_low - check if anonymous pages need to be deactivated
+ * @zone: zone to check
+ *
+ * Returns true if the zone does not have enough inactive anon pages,
+ * meaning some active anon pages need to be deactivated.
+ */
+static int inactive_anon_is_low(struct zone *zone)
+{
+ unsigned long active, inactive;
+
+ active = zone_page_state(zone, NR_ACTIVE_ANON);
+ inactive = zone_page_state(zone, NR_INACTIVE_ANON);
+
+ if (inactive * zone->inactive_ratio < active)
+ return 1;
+
+ return 0;
+}
+
static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
struct zone *zone, struct scan_control *sc, int priority)
{
* Re: [PATCH 01/09] inactive_anon_is_low() move to vmscan.c
2008-11-30 10:55 ` [PATCH 01/09] inactive_anon_is_low() move to vmscan.c KOSAKI Motohiro
@ 2008-11-30 15:18 ` Rik van Riel
0 siblings, 0 replies; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 15:18 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> inactive_anon_is_low() is called only from vmscan,
> so it can be moved to vmscan.c.
>
> This patch makes no functional change.
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed.
* [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
2008-11-30 10:55 ` [PATCH 01/09] inactive_anon_is_low() move to vmscan.c KOSAKI Motohiro
@ 2008-11-30 10:56 ` KOSAKI Motohiro
2008-11-30 12:25 ` Cyrill Gorcunov
` (2 more replies)
2008-11-30 10:57 ` [PATCH 03/09] introduce zone_reclaim struct KOSAKI Motohiro
` (7 subsequent siblings)
9 siblings, 3 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 10:56 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
make inactive_anon_is_low() memcg aware.
it improves the active_anon vs inactive_anon ratio balancing.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/memcontrol.h | 10 ++++++++++
mm/memcontrol.c | 38 +++++++++++++++++++++++++++++++++++++-
mm/vmscan.c | 36 +++++++++++++++++++++++-------------
3 files changed, 70 insertions(+), 14 deletions(-)
Index: b/include/linux/memcontrol.h
===================================================================
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -90,6 +90,8 @@ extern void mem_cgroup_record_reclaim_pr
extern long mem_cgroup_calc_reclaim(struct mem_cgroup *mem, struct zone *zone,
int priority, enum lru_list lru);
+int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
+ struct zone *zone);
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
extern int do_swap_account;
@@ -241,6 +243,14 @@ static inline bool mem_cgroup_oom_called
{
return false;
}
+
+static inline int
+mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
+{
+ return 1;
+}
+
+
#endif /* CONFIG_CGROUP_MEM_CONT */
#endif /* _LINUX_MEMCONTROL_H */
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -156,6 +156,9 @@ struct mem_cgroup {
unsigned long last_oom_jiffies;
int obsolete;
atomic_t refcnt;
+
+ int inactive_ratio;
+
/*
* statistics. This must be placed at the end of memcg.
*/
@@ -428,6 +431,20 @@ long mem_cgroup_calc_reclaim(struct mem_
return (nr_pages >> priority);
}
+int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
+{
+ unsigned long active;
+ unsigned long inactive;
+
+ inactive = mem_cgroup_get_all_zonestat(memcg, LRU_INACTIVE_ANON);
+ active = mem_cgroup_get_all_zonestat(memcg, LRU_ACTIVE_ANON);
+
+ if (inactive * memcg->inactive_ratio < active)
+ return 1;
+
+ return 0;
+}
+
unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
struct list_head *dst,
unsigned long *scanned, int order,
@@ -1343,6 +1360,19 @@ int mem_cgroup_shrink_usage(struct mm_st
return 0;
}
+static void mem_cgroup_set_inactive_ratio(struct mem_cgroup *memcg)
+{
+ unsigned int gb, ratio;
+
+ gb = res_counter_read_u64(&memcg->res, RES_LIMIT) >> 30;
+ ratio = int_sqrt(10 * gb);
+ if (!ratio)
+ ratio = 1;
+
+ memcg->inactive_ratio = ratio;
+
+}
+
static DEFINE_MUTEX(set_limit_mutex);
static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
@@ -1381,6 +1411,11 @@ static int mem_cgroup_resize_limit(struc
GFP_HIGHUSER_MOVABLE, false);
if (!progress) retry_count--;
}
+
+ if (!ret)
+ mem_cgroup_set_inactive_ratio(memcg);
+
+
return ret;
}
@@ -1423,6 +1458,7 @@ int mem_cgroup_resize_memsw_limit(struct
if (curusage >= oldusage)
retry_count--;
}
+
return ret;
}
@@ -1965,7 +2001,7 @@ mem_cgroup_create(struct cgroup_subsys *
res_counter_init(&mem->res, NULL);
res_counter_init(&mem->memsw, NULL);
}
-
+ mem_cgroup_set_inactive_ratio(mem);
mem->last_scanned_child = NULL;
return &mem->css;
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1350,14 +1350,7 @@ static void shrink_active_list(unsigned
pagevec_release(&pvec);
}
-/**
- * inactive_anon_is_low - check if anonymous pages need to be deactivated
- * @zone: zone to check
- *
- * Returns true if the zone does not have enough inactive anon pages,
- * meaning some active anon pages need to be deactivated.
- */
-static int inactive_anon_is_low(struct zone *zone)
+static int inactive_anon_is_low_global(struct zone *zone)
{
unsigned long active, inactive;
@@ -1370,6 +1363,25 @@ static int inactive_anon_is_low(struct z
return 0;
}
+/**
+ * inactive_anon_is_low - check if anonymous pages need to be deactivated
+ * @zone: zone to check
+ * @sc: scan control of this context
+ *
+ * Returns true if the zone does not have enough inactive anon pages,
+ * meaning some active anon pages need to be deactivated.
+ */
+static int inactive_anon_is_low(struct zone *zone, struct scan_control *sc)
+{
+ int low;
+
+ if (scan_global_lru(sc))
+ low = inactive_anon_is_low_global(zone);
+ else
+ low = mem_cgroup_inactive_anon_is_low(sc->mem_cgroup, zone);
+ return low;
+}
+
static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
struct zone *zone, struct scan_control *sc, int priority)
{
@@ -1381,7 +1393,7 @@ static unsigned long shrink_list(enum lr
}
if (lru == LRU_ACTIVE_ANON &&
- (!scan_global_lru(sc) || inactive_anon_is_low(zone))) {
+ inactive_anon_is_low(zone, sc)) {
shrink_active_list(nr_to_scan, zone, sc, priority, file);
return 0;
}
@@ -1542,9 +1554,7 @@ static void shrink_zone(int priority, st
* Even if we did not try to evict anon pages at all, we want to
* rebalance the anon lru active/inactive ratio.
*/
- if (!scan_global_lru(sc) || inactive_anon_is_low(zone))
- shrink_active_list(SWAP_CLUSTER_MAX, zone, sc, priority, 0);
- else if (!scan_global_lru(sc))
+ if (inactive_anon_is_low(zone, sc))
shrink_active_list(SWAP_CLUSTER_MAX, zone, sc, priority, 0);
throttle_vm_writeout(sc->gfp_mask);
@@ -1840,7 +1850,7 @@ loop_again:
* Do some background aging of the anon list, to give
* pages a chance to be referenced before reclaiming.
*/
- if (inactive_anon_is_low(zone))
+ if (inactive_anon_is_low(zone, &sc))
shrink_active_list(SWAP_CLUSTER_MAX, zone,
&sc, priority, 0);
* Re: [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 10:56 ` [PATCH 02/09] memcg: make inactive_anon_is_low() KOSAKI Motohiro
@ 2008-11-30 12:25 ` Cyrill Gorcunov
2008-11-30 14:00 ` KOSAKI Motohiro
2008-11-30 12:50 ` Pekka Enberg
2008-11-30 15:24 ` Rik van Riel
2 siblings, 1 reply; 23+ messages in thread
From: Cyrill Gorcunov @ 2008-11-30 12:25 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
[KOSAKI Motohiro - Sun, Nov 30, 2008 at 07:56:37PM +0900]
| make inactive_anon_is_low() memcg aware.
| it improves the active_anon vs inactive_anon ratio balancing.
|
|
| Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
| ---
...
| +static void mem_cgroup_set_inactive_ratio(struct mem_cgroup *memcg)
| +{
| + unsigned int gb, ratio;
| +
| + gb = res_counter_read_u64(&memcg->res, RES_LIMIT) >> 30;
| + ratio = int_sqrt(10 * gb);
| + if (!ratio)
| + ratio = 1;
Hi Kosaki,
maybe this would be better:

gb = ...;
if (gb)
	ratio = int_sqrt(10 * gb);
else
	ratio = 1;
| +
| + memcg->inactive_ratio = ratio;
| +
| +}
| +
...
- Cyrill -
* Re: [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 12:25 ` Cyrill Gorcunov
@ 2008-11-30 14:00 ` KOSAKI Motohiro
0 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 14:00 UTC (permalink / raw)
To: Cyrill Gorcunov
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
> | +static void mem_cgroup_set_inactive_ratio(struct mem_cgroup *memcg)
> | +{
> | + unsigned int gb, ratio;
> | +
> | + gb = res_counter_read_u64(&memcg->res, RES_LIMIT) >> 30;
> | + ratio = int_sqrt(10 * gb);
> | + if (!ratio)
> | + ratio = 1;
>
> Hi Kosaki,
>
> maybe better would be
>
> gb = ...;
> if (gb) {
> ratio = int_sqrt(10 * gb);
> } else
> ratio = 1;
>
Will fix.
Thanks.

Actually, setup_per_zone_inactive_ratio() (the equivalent calculation
for global reclaim) has the same hard-to-review construct.
I'll fix that later, too.
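For reference, the global-side function in question looks roughly like
this (paraphrased from the mm/page_alloc.c of this era; details may
differ):

static void setup_per_zone_inactive_ratio(void)
{
	struct zone *zone;

	for_each_zone(zone) {
		unsigned int gb, ratio;

		/* zone size in gigabytes */
		gb = zone->present_pages >> (30 - PAGE_SHIFT);
		ratio = int_sqrt(10 * gb);
		if (!ratio)
			ratio = 1;

		zone->inactive_ratio = ratio;
	}
}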
* Re: [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 10:56 ` [PATCH 02/09] memcg: make inactive_anon_is_low() KOSAKI Motohiro
2008-11-30 12:25 ` Cyrill Gorcunov
@ 2008-11-30 12:50 ` Pekka Enberg
2008-11-30 14:04 ` KOSAKI Motohiro
2008-11-30 15:24 ` Rik van Riel
2 siblings, 1 reply; 23+ messages in thread
From: Pekka Enberg @ 2008-11-30 12:50 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
On Sun, Nov 30, 2008 at 12:56 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
> make inactive_anon_is_low() memcg aware.
> it improves the active_anon vs inactive_anon ratio balancing.
The subject line of this patch seems to be truncated and the changelog
seems a bit terse. While the change may be obvious to memcg developers,
it's not to the casual reader.
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> ---
> include/linux/memcontrol.h | 10 ++++++++++
> mm/memcontrol.c | 38 +++++++++++++++++++++++++++++++++++++-
> mm/vmscan.c | 36 +++++++++++++++++++++++-------------
> 3 files changed, 70 insertions(+), 14 deletions(-)
>
> Index: b/include/linux/memcontrol.h
> ===================================================================
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -90,6 +90,8 @@ extern void mem_cgroup_record_reclaim_pr
>
> extern long mem_cgroup_calc_reclaim(struct mem_cgroup *mem, struct zone *zone,
> int priority, enum lru_list lru);
> +int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
> + struct zone *zone);
>
> #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
> extern int do_swap_account;
> @@ -241,6 +243,14 @@ static inline bool mem_cgroup_oom_called
> {
> return false;
> }
> +
> +static inline int
> +mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
> +{
> + return 1;
> +}
> +
> +
An extra newline here.
> #endif /* CONFIG_CGROUP_MEM_CONT */
>
> #endif /* _LINUX_MEMCONTROL_H */
> Index: b/mm/memcontrol.c
> ===================================================================
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -156,6 +156,9 @@ struct mem_cgroup {
> unsigned long last_oom_jiffies;
> int obsolete;
> atomic_t refcnt;
> +
> + int inactive_ratio;
> +
Is there a reason why this is not unsigned long? A comment here
explaining what ->inactive_ratio is used for would be nice.
> +static void mem_cgroup_set_inactive_ratio(struct mem_cgroup *memcg)
> +{
> + unsigned int gb, ratio;
> +
> + gb = res_counter_read_u64(&memcg->res, RES_LIMIT) >> 30;
> + ratio = int_sqrt(10 * gb);
You might want to consider adding a comment explaining what the above
calculation is supposed to be doing.
> + if (!ratio)
> + ratio = 1;
> +
> + memcg->inactive_ratio = ratio;
> +
> +}
> +
> static DEFINE_MUTEX(set_limit_mutex);
>
> static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
> @@ -1381,6 +1411,11 @@ static int mem_cgroup_resize_limit(struc
> GFP_HIGHUSER_MOVABLE, false);
> if (!progress) retry_count--;
> }
> +
> + if (!ret)
> + mem_cgroup_set_inactive_ratio(memcg);
> +
> +
An extra newline here.
> return ret;
> }
>
> @@ -1423,6 +1458,7 @@ int mem_cgroup_resize_memsw_limit(struct
> if (curusage >= oldusage)
> retry_count--;
> }
> +
> return ret;
> }
There's some diff noise here.
* Re: [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 12:50 ` Pekka Enberg
@ 2008-11-30 14:04 ` KOSAKI Motohiro
0 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 14:04 UTC (permalink / raw)
To: Pekka Enberg
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
>> make inactive_anon_is_low() memcg aware.
>> it improves the active_anon vs inactive_anon ratio balancing.
>
> The subject line of this patch seems to be truncated and the changelog
> seems bit terse. While the change may be obvious to memcg developers,
> it's not for the casual reader.
Yes, my mistake.
Will fix.
>> +static inline int
>> +mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
>> +{
>> + return 1;
>> +}
>> +
>> +
>
> An extra newline here.
Will fix.
>> Index: b/mm/memcontrol.c
>> ===================================================================
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -156,6 +156,9 @@ struct mem_cgroup {
>> unsigned long last_oom_jiffies;
>> int obsolete;
>> atomic_t refcnt;
>> +
>> + int inactive_ratio;
>> +
>
> Is there a reason why this is not unsigned long? A comment here
> explaining what ->inactive_ratio is used for would be nice.
Ah, sorry.
The type of zone->inactive_ratio is unsigned int,
so I'd like to change this one to unsigned int as well,
because any difference from global reclaim easily causes silly mistakes and bugs.
>> +static void mem_cgroup_set_inactive_ratio(struct mem_cgroup *memcg)
>> +{
>> + unsigned int gb, ratio;
>> +
>> + gb = res_counter_read_u64(&memcg->res, RES_LIMIT) >> 30;
>> + ratio = int_sqrt(10 * gb);
>
> You might want to consider adding a comment explaining what the above
> calculation is supposed to be doing.
Yes, of course.
Thanks.
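As a sketch of what such a comment might say (an illustration only,
assuming the same square-root heuristic as the global reclaim code):

	/*
	 * Target ratio of active to inactive anon pages.  The bigger
	 * the memcg, the larger the tolerated share of active anon:
	 *
	 *    limit     ratio = int_sqrt(10 * gb)    target inactive anon
	 *    1GB       int_sqrt(10)   = 3           ~1GB/4    = 250MB
	 *    10GB      int_sqrt(100)  = 10          ~10GB/11  = 0.9GB
	 *    100GB     int_sqrt(1000) = 31          ~100GB/32 = 3GB
	 */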
>> static DEFINE_MUTEX(set_limit_mutex);
>>
>> static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>> @@ -1381,6 +1411,11 @@ static int mem_cgroup_resize_limit(struc
>> GFP_HIGHUSER_MOVABLE, false);
>> if (!progress) retry_count--;
>> }
>> +
>> + if (!ret)
>> + mem_cgroup_set_inactive_ratio(memcg);
>> +
>> +
>
> An extra newline here.
Will fix.
>> @@ -1423,6 +1458,7 @@ int mem_cgroup_resize_memsw_limit(struct
>> if (curusage >= oldusage)
>> retry_count--;
>> }
>> +
>> return ret;
>> }
>
> There's some diff noise here.
ditto.
thanks.
* Re: [PATCH 02/09] memcg: make inactive_anon_is_low()
2008-11-30 10:56 ` [PATCH 02/09] memcg: make inactive_anon_is_low() KOSAKI Motohiro
2008-11-30 12:25 ` Cyrill Gorcunov
2008-11-30 12:50 ` Pekka Enberg
@ 2008-11-30 15:24 ` Rik van Riel
2 siblings, 0 replies; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 15:24 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> make inactive_anon_is_low() memcg aware.
> it improves the active_anon vs inactive_anon ratio balancing.
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Looking forward to the cleanups, but since the code is correct
here's an early:
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed.
* [PATCH 03/09] introduce zone_reclaim struct
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
2008-11-30 10:55 ` [PATCH 01/09] inactive_anon_is_low() move to vmscan.c KOSAKI Motohiro
2008-11-30 10:56 ` [PATCH 02/09] memcg: make inactive_anon_is_low() KOSAKI Motohiro
@ 2008-11-30 10:57 ` KOSAKI Motohiro
2008-11-30 15:27 ` Rik van Riel
2008-11-30 10:59 ` [PATCH 04/09] memcg: make zone_reclaim_stat KOSAKI Motohiro
` (6 subsequent siblings)
9 siblings, 1 reply; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 10:57 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
make a zone_reclaim_stat struct for a later enhancement.
a later patch uses this.
this patch doesn't make any functional change.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/mmzone.h | 24 ++++++++++++++----------
mm/page_alloc.c | 8 ++++----
mm/swap.c | 12 ++++++++----
mm/vmscan.c | 47 ++++++++++++++++++++++++++++++-----------------
4 files changed, 56 insertions(+), 35 deletions(-)
Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -263,6 +263,19 @@ enum zone_type {
#error ZONES_SHIFT -- too many zones configured adjust calculation
#endif
+struct zone_reclaim_stat {
+ /*
+ * The pageout code in vmscan.c keeps track of how many of the
+ * mem/swap backed and file backed pages are refeferenced.
+ * The higher the rotated/scanned ratio, the more valuable
+ * that cache is.
+ *
+ * The anon LRU stats live in [0], file LRU stats in [1]
+ */
+ unsigned long recent_rotated[2];
+ unsigned long recent_scanned[2];
+};
+
struct zone {
/* Fields commonly accessed by the page allocator */
unsigned long pages_min, pages_low, pages_high;
@@ -315,16 +328,7 @@ struct zone {
unsigned long nr_scan;
} lru[NR_LRU_LISTS];
- /*
- * The pageout code in vmscan.c keeps track of how many of the
- * mem/swap backed and file backed pages are refeferenced.
- * The higher the rotated/scanned ratio, the more valuable
- * that cache is.
- *
- * The anon LRU stats live in [0], file LRU stats in [1]
- */
- unsigned long recent_rotated[2];
- unsigned long recent_scanned[2];
+ struct zone_reclaim_stat reclaim_stat;
unsigned long pages_scanned; /* since last reclaim */
unsigned long slab_defrag_counter; /* since last defrag */
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3522,10 +3522,10 @@ static void __paginginit free_area_init_
INIT_LIST_HEAD(&zone->lru[l].list);
zone->lru[l].nr_scan = 0;
}
- zone->recent_rotated[0] = 0;
- zone->recent_rotated[1] = 0;
- zone->recent_scanned[0] = 0;
- zone->recent_scanned[1] = 0;
+ zone->reclaim_stat.recent_rotated[0] = 0;
+ zone->reclaim_stat.recent_rotated[1] = 0;
+ zone->reclaim_stat.recent_scanned[0] = 0;
+ zone->reclaim_stat.recent_scanned[1] = 0;
zap_zone_vm_stats(zone);
zone->flags = 0;
if (!size)
Index: b/mm/swap.c
===================================================================
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -157,6 +157,7 @@ void rotate_reclaimable_page(struct pag
void activate_page(struct page *page)
{
struct zone *zone = page_zone(page);
+ struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
spin_lock_irq(&zone->lru_lock);
if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -169,8 +170,8 @@ void activate_page(struct page *page)
add_page_to_lru_list(zone, page, lru);
__count_vm_event(PGACTIVATE);
- zone->recent_rotated[!!file]++;
- zone->recent_scanned[!!file]++;
+ reclaim_stat->recent_rotated[!!file]++;
+ reclaim_stat->recent_scanned[!!file]++;
}
spin_unlock_irq(&zone->lru_lock);
}
@@ -398,6 +399,8 @@ void ____pagevec_lru_add(struct pagevec
{
int i;
struct zone *zone = NULL;
+ struct zone_reclaim_stat *reclaim_stat = NULL;
+
VM_BUG_ON(is_unevictable_lru(lru));
for (i = 0; i < pagevec_count(pvec); i++) {
@@ -409,6 +412,7 @@ void ____pagevec_lru_add(struct pagevec
if (zone)
spin_unlock_irq(&zone->lru_lock);
zone = pagezone;
+ reclaim_stat = &zone->reclaim_stat;
spin_lock_irq(&zone->lru_lock);
}
VM_BUG_ON(PageActive(page));
@@ -416,10 +420,10 @@ void ____pagevec_lru_add(struct pagevec
VM_BUG_ON(PageLRU(page));
SetPageLRU(page);
file = is_file_lru(lru);
- zone->recent_scanned[file]++;
+ reclaim_stat->recent_scanned[file]++;
if (is_active_lru(lru)) {
SetPageActive(page);
- zone->recent_rotated[file]++;
+ reclaim_stat->recent_rotated[file]++;
}
add_page_to_lru_list(zone, page, lru);
}
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -131,6 +131,12 @@ static DECLARE_RWSEM(shrinker_rwsem);
#define scan_global_lru(sc) (1)
#endif
+static struct zone_reclaim_stat *get_reclaim_stat(struct zone *zone,
+ struct scan_control *sc)
+{
+ return &zone->reclaim_stat;
+}
+
/*
* Add a shrinker callback to be called from the vm
*/
@@ -1083,6 +1089,7 @@ static unsigned long shrink_inactive_lis
struct pagevec pvec;
unsigned long nr_scanned = 0;
unsigned long nr_reclaimed = 0;
+ struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
pagevec_init(&pvec, 1);
@@ -1126,10 +1133,14 @@ static unsigned long shrink_inactive_lis
if (scan_global_lru(sc)) {
zone->pages_scanned += nr_scan;
- zone->recent_scanned[0] += count[LRU_INACTIVE_ANON];
- zone->recent_scanned[0] += count[LRU_ACTIVE_ANON];
- zone->recent_scanned[1] += count[LRU_INACTIVE_FILE];
- zone->recent_scanned[1] += count[LRU_ACTIVE_FILE];
+ reclaim_stat->recent_scanned[0] +=
+ count[LRU_INACTIVE_ANON];
+ reclaim_stat->recent_scanned[0] +=
+ count[LRU_ACTIVE_ANON];
+ reclaim_stat->recent_scanned[1] +=
+ count[LRU_INACTIVE_FILE];
+ reclaim_stat->recent_scanned[1] +=
+ count[LRU_ACTIVE_FILE];
}
spin_unlock_irq(&zone->lru_lock);
@@ -1190,7 +1201,7 @@ static unsigned long shrink_inactive_lis
add_page_to_lru_list(zone, page, lru);
if (PageActive(page) && scan_global_lru(sc)) {
int file = !!page_is_file_cache(page);
- zone->recent_rotated[file]++;
+ reclaim_stat->recent_rotated[file]++;
}
if (!pagevec_add(&pvec, page)) {
spin_unlock_irq(&zone->lru_lock);
@@ -1255,6 +1266,7 @@ static void shrink_active_list(unsigned
struct page *page;
struct pagevec pvec;
enum lru_list lru;
+ struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
lru_add_drain();
spin_lock_irq(&zone->lru_lock);
@@ -1267,7 +1279,7 @@ static void shrink_active_list(unsigned
*/
if (scan_global_lru(sc)) {
zone->pages_scanned += pgscanned;
- zone->recent_scanned[!!file] += pgmoved;
+ reclaim_stat->recent_scanned[!!file] += pgmoved;
}
if (file)
@@ -1302,7 +1314,7 @@ static void shrink_active_list(unsigned
* pages in get_scan_ratio.
*/
if (scan_global_lru(sc))
- zone->recent_rotated[!!file] += pgmoved;
+ reclaim_stat->recent_rotated[!!file] += pgmoved;
/*
* Move the pages to the [file or anon] inactive list.
@@ -1415,6 +1427,7 @@ static void get_scan_ratio(struct zone *
unsigned long anon, file, free;
unsigned long anon_prio, file_prio;
unsigned long ap, fp;
+ struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
/* If we have no swap space, do not bother scanning anon pages. */
if (nr_swap_pages <= 0) {
@@ -1447,17 +1460,17 @@ static void get_scan_ratio(struct zone *
*
* anon in [0], file in [1]
*/
- if (unlikely(zone->recent_scanned[0] > anon / 4)) {
+ if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
spin_lock_irq(&zone->lru_lock);
- zone->recent_scanned[0] /= 2;
- zone->recent_rotated[0] /= 2;
+ reclaim_stat->recent_scanned[0] /= 2;
+ reclaim_stat->recent_rotated[0] /= 2;
spin_unlock_irq(&zone->lru_lock);
}
- if (unlikely(zone->recent_scanned[1] > file / 4)) {
+ if (unlikely(reclaim_stat->recent_scanned[1] > file / 4)) {
spin_lock_irq(&zone->lru_lock);
- zone->recent_scanned[1] /= 2;
- zone->recent_rotated[1] /= 2;
+ reclaim_stat->recent_scanned[1] /= 2;
+ reclaim_stat->recent_rotated[1] /= 2;
spin_unlock_irq(&zone->lru_lock);
}
@@ -1473,11 +1486,11 @@ static void get_scan_ratio(struct zone *
* proportional to the fraction of recently scanned pages on
* each list that were recently referenced and in active use.
*/
- ap = (anon_prio + 1) * (zone->recent_scanned[0] + 1);
- ap /= zone->recent_rotated[0] + 1;
+ ap = (anon_prio + 1) * (reclaim_stat->recent_scanned[0] + 1);
+ ap /= reclaim_stat->recent_rotated[0] + 1;
- fp = (file_prio + 1) * (zone->recent_scanned[1] + 1);
- fp /= zone->recent_rotated[1] + 1;
+ fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
+ fp /= reclaim_stat->recent_rotated[1] + 1;
/* Normalize to percentages */
percent[0] = 100 * ap / (ap + fp + 1);
* Re: [PATCH 03/09] introduce zone_reclaim struct
2008-11-30 10:57 ` [PATCH 03/09] introduce zone_reclaim struct KOSAKI Motohiro
@ 2008-11-30 15:27 ` Rik van Riel
0 siblings, 0 replies; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 15:27 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> make a zone_reclaim_stat struct for a later enhancement.
> a later patch uses this.
>
> this patch doesn't make any functional change.
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed.
* [PATCH 04/09] memcg: make zone_reclaim_stat
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (2 preceding siblings ...)
2008-11-30 10:57 ` [PATCH 03/09] introduce zone_reclaim struct KOSAKI Motohiro
@ 2008-11-30 10:59 ` KOSAKI Motohiro
2008-11-30 16:06 ` Rik van Riel
2008-11-30 16:08 ` Rik van Riel
2008-11-30 10:59 ` [PATCH 05/09] make zone_nr_pages() helper function KOSAKI Motohiro
` (5 subsequent siblings)
9 siblings, 2 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 10:59 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
introduce a mem_cgroup_per_zone::reclaim_stat member and its statistics
collection functions.
a later patch uses it.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/memcontrol.h | 16 ++++++++++++++++
mm/memcontrol.c | 21 +++++++++++++++++++++
mm/swap.c | 10 ++++++++++
mm/vmscan.c | 27 +++++++++++++--------------
4 files changed, 60 insertions(+), 14 deletions(-)
Index: b/include/linux/memcontrol.h
===================================================================
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -92,6 +92,10 @@ extern long mem_cgroup_calc_reclaim(stru
int priority, enum lru_list lru);
int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
struct zone *zone);
+struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
+ struct zone *zone);
+struct zone_reclaim_stat*
+mem_cgroup_get_reclaim_stat_by_page(struct page *page);
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
extern int do_swap_account;
@@ -250,6 +254,18 @@ mem_cgroup_inactive_anon_is_low(struct m
return 1;
}
+static inline struct zone_reclaim_stat*
+mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg, struct zone *zone)
+{
+ return NULL;
+}
+
+static inline struct zone_reclaim_stat*
+mem_cgroup_get_reclaim_stat_by_page(struct page *page)
+{
+ return NULL;
+}
+
#endif /* CONFIG_CGROUP_MEM_CONT */
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -103,6 +103,8 @@ struct mem_cgroup_per_zone {
*/
struct list_head lists[NR_LRU_LISTS];
unsigned long count[NR_LRU_LISTS];
+
+ struct zone_reclaim_stat reclaim_stat;
};
/* Macro for accessing counter */
#define MEM_CGROUP_ZSTAT(mz, idx) ((mz)->count[(idx)])
@@ -445,6 +447,25 @@ int mem_cgroup_inactive_anon_is_low(stru
return 0;
}
+struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
+ struct zone *zone)
+{
+ int nid = zone->zone_pgdat->node_id;
+ int zid = zone_idx(zone);
+ struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+
+ return &mz->reclaim_stat;
+}
+
+struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat_by_page(struct page *page)
+{
+ struct page_cgroup *pc = lookup_page_cgroup(page);
+ struct mem_cgroup_per_zone *mz = page_cgroup_zoneinfo(pc);
+
+ return &mz->reclaim_stat;
+}
+
+
unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
struct list_head *dst,
unsigned long *scanned, int order,
Index: b/mm/swap.c
===================================================================
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -158,6 +158,7 @@ void activate_page(struct page *page)
{
struct zone *zone = page_zone(page);
struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
+ struct zone_reclaim_stat *memcg_reclaim_stat;
spin_lock_irq(&zone->lru_lock);
if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -172,6 +173,10 @@ void activate_page(struct page *page)
reclaim_stat->recent_rotated[!!file]++;
reclaim_stat->recent_scanned[!!file]++;
+
+ memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
+ memcg_reclaim_stat->recent_rotated[!!file]++;
+ memcg_reclaim_stat->recent_scanned[!!file]++;
}
spin_unlock_irq(&zone->lru_lock);
}
@@ -400,6 +405,7 @@ void ____pagevec_lru_add(struct pagevec
int i;
struct zone *zone = NULL;
struct zone_reclaim_stat *reclaim_stat = NULL;
+ struct zone_reclaim_stat *memcg_reclaim_stat = NULL;
VM_BUG_ON(is_unevictable_lru(lru));
@@ -413,6 +419,8 @@ void ____pagevec_lru_add(struct pagevec
spin_unlock_irq(&zone->lru_lock);
zone = pagezone;
reclaim_stat = &zone->reclaim_stat;
+ memcg_reclaim_stat =
+ mem_cgroup_get_reclaim_stat_by_page(page);
spin_lock_irq(&zone->lru_lock);
}
VM_BUG_ON(PageActive(page));
@@ -421,9 +429,11 @@ void ____pagevec_lru_add(struct pagevec
SetPageLRU(page);
file = is_file_lru(lru);
reclaim_stat->recent_scanned[file]++;
+ memcg_reclaim_stat->recent_scanned[file]++;
if (is_active_lru(lru)) {
SetPageActive(page);
reclaim_stat->recent_rotated[file]++;
+ memcg_reclaim_stat->recent_rotated[file]++;
}
add_page_to_lru_list(zone, page, lru);
}
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -134,6 +134,9 @@ static DECLARE_RWSEM(shrinker_rwsem);
static struct zone_reclaim_stat *get_reclaim_stat(struct zone *zone,
struct scan_control *sc)
{
+ if (!scan_global_lru(sc))
+ return mem_cgroup_get_reclaim_stat(sc->mem_cgroup, zone);
+
return &zone->reclaim_stat;
}
@@ -1131,17 +1134,14 @@ static unsigned long shrink_inactive_lis
__mod_zone_page_state(zone, NR_INACTIVE_ANON,
-count[LRU_INACTIVE_ANON]);
- if (scan_global_lru(sc)) {
+ if (scan_global_lru(sc))
zone->pages_scanned += nr_scan;
- reclaim_stat->recent_scanned[0] +=
- count[LRU_INACTIVE_ANON];
- reclaim_stat->recent_scanned[0] +=
- count[LRU_ACTIVE_ANON];
- reclaim_stat->recent_scanned[1] +=
- count[LRU_INACTIVE_FILE];
- reclaim_stat->recent_scanned[1] +=
- count[LRU_ACTIVE_FILE];
- }
+
+ reclaim_stat->recent_scanned[0] += count[LRU_INACTIVE_ANON];
+ reclaim_stat->recent_scanned[0] += count[LRU_ACTIVE_ANON];
+ reclaim_stat->recent_scanned[1] += count[LRU_INACTIVE_FILE];
+ reclaim_stat->recent_scanned[1] += count[LRU_ACTIVE_FILE];
+
spin_unlock_irq(&zone->lru_lock);
nr_scanned += nr_scan;
@@ -1199,7 +1199,7 @@ static unsigned long shrink_inactive_lis
SetPageLRU(page);
lru = page_lru(page);
add_page_to_lru_list(zone, page, lru);
- if (PageActive(page) && scan_global_lru(sc)) {
+ if (PageActive(page)) {
int file = !!page_is_file_cache(page);
reclaim_stat->recent_rotated[file]++;
}
@@ -1279,8 +1279,8 @@ static void shrink_active_list(unsigned
*/
if (scan_global_lru(sc)) {
zone->pages_scanned += pgscanned;
- reclaim_stat->recent_scanned[!!file] += pgmoved;
}
+ reclaim_stat->recent_scanned[!!file] += pgmoved;
if (file)
__mod_zone_page_state(zone, NR_ACTIVE_FILE, -pgmoved);
@@ -1313,8 +1313,7 @@ static void shrink_active_list(unsigned
* This helps balance scan pressure between file and anonymous
* pages in get_scan_ratio.
*/
- if (scan_global_lru(sc))
- reclaim_stat->recent_rotated[!!file] += pgmoved;
+ reclaim_stat->recent_rotated[!!file] += pgmoved;
/*
* Move the pages to the [file or anon] inactive list.
* Re: [PATCH 04/09] memcg: make zone_reclaim_stat
2008-11-30 10:59 ` [PATCH 04/09] memcg: make zone_reclaim_stat KOSAKI Motohiro
@ 2008-11-30 16:06 ` Rik van Riel
2008-12-01 0:48 ` KOSAKI Motohiro
2008-11-30 16:08 ` Rik van Riel
1 sibling, 1 reply; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 16:06 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> +struct zone_reclaim_stat*
> +mem_cgroup_get_reclaim_stat_by_page(struct page *page)
> +{
> + return NULL;
> +}
> + memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
> + memcg_reclaim_stat->recent_rotated[!!file]++;
> + memcg_reclaim_stat->recent_scanned[!!file]++;
Won't this cause a null pointer dereference when
not using memcg?
--
All rights reversed.
* Re: [PATCH 04/09] memcg: make zone_reclaim_stat
2008-11-30 16:06 ` Rik van Riel
@ 2008-12-01 0:48 ` KOSAKI Motohiro
0 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-12-01 0:48 UTC (permalink / raw)
To: Rik van Riel
Cc: kosaki.motohiro, LKML, linux-mm, Andrew Morton, Balbir Singh,
KAMEZAWA Hiroyuki
> > +struct zone_reclaim_stat*
> > +mem_cgroup_get_reclaim_stat_by_page(struct page *page)
> > +{
> > + return NULL;
> > +}
>
> > + memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
> > + memcg_reclaim_stat->recent_rotated[!!file]++;
> > + memcg_reclaim_stat->recent_scanned[!!file]++;
>
> Won't this cause a null pointer dereference when
> not using memcg?
Ahhh, thank you.
That is definitely a silly bug.
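A minimal sketch of one possible fix (the real follow-up may differ):
make the callers tolerate the NULL that the stub returns when memcg is
compiled out:

	memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
	if (memcg_reclaim_stat) {
		memcg_reclaim_stat->recent_rotated[!!file]++;
		memcg_reclaim_stat->recent_scanned[!!file]++;
	}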
* Re: [PATCH 04/09] memcg: make zone_reclaim_stat
2008-11-30 10:59 ` [PATCH 04/09] memcg: make zone_reclaim_stat KOSAKI Motohiro
2008-11-30 16:06 ` Rik van Riel
@ 2008-11-30 16:08 ` Rik van Riel
2008-12-01 0:50 ` KOSAKI Motohiro
1 sibling, 1 reply; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 16:08 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> @@ -172,6 +173,10 @@ void activate_page(struct page *page)
>
> reclaim_stat->recent_rotated[!!file]++;
> reclaim_stat->recent_scanned[!!file]++;
> +
> + memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
> + memcg_reclaim_stat->recent_rotated[!!file]++;
> + memcg_reclaim_stat->recent_scanned[!!file]++;
Also, manipulation of the zone based reclaim_stats happens
under the lru lock.
What protects the memcg reclaim stat?
--
All rights reversed.
* Re: [PATCH 04/09] memcg: make zone_reclaim_stat
2008-11-30 16:08 ` Rik van Riel
@ 2008-12-01 0:50 ` KOSAKI Motohiro
0 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-12-01 0:50 UTC (permalink / raw)
To: Rik van Riel
Cc: kosaki.motohiro, LKML, linux-mm, Andrew Morton, Balbir Singh,
KAMEZAWA Hiroyuki
> KOSAKI Motohiro wrote:
>
> > @@ -172,6 +173,10 @@ void activate_page(struct page *page)
> >
> > reclaim_stat->recent_rotated[!!file]++;
> > reclaim_stat->recent_scanned[!!file]++;
> > +
> > + memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_by_page(page);
> > + memcg_reclaim_stat->recent_rotated[!!file]++;
> > + memcg_reclaim_stat->recent_scanned[!!file]++;
>
> Also, manipulation of the zone based reclaim_stats happens
> under the lru lock.
>
> What protects the memcg reclaim stat?
The memcg per-zone structures and the memcg zone_reclaim_stat are also
protected by zone->lru_lock.
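For illustration, the activate_page() path then looks roughly like
this, with both counters bumped inside the same critical section
(abbreviated sketch):

	spin_lock_irq(&zone->lru_lock);
	...
	/*
	 * zone->lru_lock serializes the per-zone reclaim stats and,
	 * since the page's memcg per-zone list belongs to this same
	 * zone, the memcg reclaim stats as well.
	 */
	reclaim_stat->recent_rotated[!!file]++;
	memcg_reclaim_stat->recent_rotated[!!file]++;
	...
	spin_unlock_irq(&zone->lru_lock);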
* [PATCH 05/09] make zone_nr_pages() helper function
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (3 preceding siblings ...)
2008-11-30 10:59 ` [PATCH 04/09] memcg: make zone_reclaim_stat KOSAKI Motohiro
@ 2008-11-30 10:59 ` KOSAKI Motohiro
2008-11-30 16:10 ` Rik van Riel
2008-11-30 11:00 ` [PATCH 06/09] make get_scan_ratio() to memcg awareness KOSAKI Motohiro
` (4 subsequent siblings)
9 siblings, 1 reply; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 10:59 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
make a zone_nr_pages() helper function.
it is used by a later patch.
this patch doesn't make any functional change.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
mm/vmscan.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -140,6 +140,13 @@ static struct zone_reclaim_stat *get_rec
return &zone->reclaim_stat;
}
+static unsigned long zone_nr_pages(struct zone *zone, struct scan_control *sc,
+ enum lru_list lru)
+{
+ return zone_page_state(zone, NR_LRU_BASE + lru);
+}
+
+
/*
* Add a shrinker callback to be called from the vm
*/
@@ -1435,10 +1442,10 @@ static void get_scan_ratio(struct zone *
return;
}
- anon = zone_page_state(zone, NR_ACTIVE_ANON) +
- zone_page_state(zone, NR_INACTIVE_ANON);
- file = zone_page_state(zone, NR_ACTIVE_FILE) +
- zone_page_state(zone, NR_INACTIVE_FILE);
+ anon = zone_nr_pages(zone, sc, LRU_ACTIVE_ANON) +
+ zone_nr_pages(zone, sc, LRU_INACTIVE_ANON);
+ file = zone_nr_pages(zone, sc, LRU_ACTIVE_FILE) +
+ zone_nr_pages(zone, sc, LRU_INACTIVE_FILE);
free = zone_page_state(zone, NR_FREE_PAGES);
/* If we have very few page cache pages, force-scan anon pages. */
* Re: [PATCH 05/09] make zone_nr_pages() helper function
2008-11-30 10:59 ` [PATCH 05/09] make zone_nr_pages() helper function KOSAKI Motohiro
@ 2008-11-30 16:10 ` Rik van Riel
0 siblings, 0 replies; 23+ messages in thread
From: Rik van Riel @ 2008-11-30 16:10 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki
KOSAKI Motohiro wrote:
> make a zone_nr_pages() helper function.
> it is used by a later patch.
>
> this patch doesn't make any functional change.
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed.
* [PATCH 06/09] make get_scan_ratio() to memcg awareness
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (4 preceding siblings ...)
2008-11-30 10:59 ` [PATCH 05/09] make zone_nr_pages() helper function KOSAKI Motohiro
@ 2008-11-30 11:00 ` KOSAKI Motohiro
2008-11-30 11:01 ` [PATCH 07/09] memcg: remove mem_cgroup_calc_reclaim() KOSAKI Motohiro
` (3 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 11:00 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
introduce mem_cgroup_zone_nr_pages() and make get_scan_ratio() memcg aware.
this patch doesn't make any functional change.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/memcontrol.h | 7 +++++++
mm/memcontrol.c | 11 +++++++++++
mm/vmscan.c | 19 +++++++++++++------
3 files changed, 31 insertions(+), 6 deletions(-)
Index: b/include/linux/memcontrol.h
===================================================================
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -96,6 +96,9 @@ struct zone_reclaim_stat *mem_cgroup_get
struct zone *zone);
struct zone_reclaim_stat*
mem_cgroup_get_reclaim_stat_by_page(struct page *page);
+unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
+ struct zone *zone,
+ enum lru_list lru);
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
extern int do_swap_account;
@@ -266,6 +269,10 @@ mem_cgroup_get_reclaim_stat_by_page(stru
return NULL;
}
+static inline unsigned long
+mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg, struct zone *zone,
+			 enum lru_list lru)
+{
+	return 0;
+}
+
#endif /* CONFIG_CGROUP_MEM_CONT */
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -465,6 +465,17 @@ struct zone_reclaim_stat *mem_cgroup_get
return &mz->reclaim_stat;
}
+unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
+ struct zone *zone,
+ enum lru_list lru)
+{
+ int nid = zone->zone_pgdat->node_id;
+ int zid = zone_idx(zone);
+ struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+
+ return MEM_CGROUP_ZSTAT(mz, lru);
+}
+
unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
struct list_head *dst,
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -143,6 +143,9 @@ static struct zone_reclaim_stat *get_rec
static unsigned long zone_nr_pages(struct zone *zone, struct scan_control *sc,
enum lru_list lru)
{
+ if (!scan_global_lru(sc))
+ return mem_cgroup_zone_nr_pages(sc->mem_cgroup, zone, lru);
+
return zone_page_state(zone, NR_LRU_BASE + lru);
}
@@ -1446,13 +1449,16 @@ static void get_scan_ratio(struct zone *
zone_nr_pages(zone, sc, LRU_INACTIVE_ANON);
file = zone_nr_pages(zone, sc, LRU_ACTIVE_FILE) +
zone_nr_pages(zone, sc, LRU_INACTIVE_FILE);
- free = zone_page_state(zone, NR_FREE_PAGES);
- /* If we have very few page cache pages, force-scan anon pages. */
- if (unlikely(file + free <= zone->pages_high)) {
- percent[0] = 100;
- percent[1] = 0;
- return;
+ if (scan_global_lru(sc)) {
+ free = zone_page_state(zone, NR_FREE_PAGES);
+ /* If we have very few page cache pages,
+ force-scan anon pages. */
+ if (unlikely(file + free <= zone->pages_high)) {
+ percent[0] = 100;
+ percent[1] = 0;
+ return;
+ }
}
/*
@@ -1527,6 +1533,7 @@ static void shrink_zone(int priority, st
scan >>= priority;
scan = (scan * percent[file]) / 100;
}
+
zone->lru[l].nr_scan += scan;
nr[l] = zone->lru[l].nr_scan;
if (nr[l] >= sc->swap_cluster_max)
* [PATCH 07/09] memcg: remove mem_cgroup_calc_reclaim()
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (5 preceding siblings ...)
2008-11-30 11:00 ` [PATCH 06/09] make get_scan_ratio() to memcg awareness KOSAKI Motohiro
@ 2008-11-30 11:01 ` KOSAKI Motohiro
2008-11-30 11:02 ` [PATCH 08/09] memcg: show inactive_ratio KOSAKI Motohiro
` (2 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 11:01 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
Now we can remove mem_cgroup_calc_reclaim(); mem cgroup reclaim can
use the same routine as global reclaim.
it improves anon/file reclaim balancing in mem cgroup reclaim.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/memcontrol.h | 10 ----------
mm/memcontrol.c | 21 ---------------------
mm/vmscan.c | 27 ++++++++++-----------------
3 files changed, 10 insertions(+), 48 deletions(-)
Index: b/include/linux/memcontrol.h
===================================================================
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -87,9 +87,6 @@ extern void mem_cgroup_note_reclaim_prio
int priority);
extern void mem_cgroup_record_reclaim_priority(struct mem_cgroup *mem,
int priority);
-
-extern long mem_cgroup_calc_reclaim(struct mem_cgroup *mem, struct zone *zone,
- int priority, enum lru_list lru);
int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
struct zone *zone);
struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
@@ -234,13 +231,6 @@ static inline void mem_cgroup_record_rec
{
}
-static inline long mem_cgroup_calc_reclaim(struct mem_cgroup *mem,
- struct zone *zone, int priority,
- enum lru_list lru)
-{
- return 0;
-}
-
static inline bool mem_cgroup_disabled(void)
{
return true;
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -412,27 +412,6 @@ void mem_cgroup_record_reclaim_priority(
mem->prev_priority = priority;
}
-/*
- * Calculate # of pages to be scanned in this priority/zone.
- * See also vmscan.c
- *
- * priority starts from "DEF_PRIORITY" and decremented in each loop.
- * (see include/linux/mmzone.h)
- */
-
-long mem_cgroup_calc_reclaim(struct mem_cgroup *mem, struct zone *zone,
- int priority, enum lru_list lru)
-{
- long nr_pages;
- int nid = zone->zone_pgdat->node_id;
- int zid = zone_idx(zone);
- struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(mem, nid, zid);
-
- nr_pages = MEM_CGROUP_ZSTAT(mz, lru);
-
- return (nr_pages >> priority);
-}
-
int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
{
unsigned long active;
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1524,31 +1524,24 @@ static void shrink_zone(int priority, st
get_scan_ratio(zone, sc, percent);
for_each_evictable_lru(l) {
- if (scan_global_lru(sc)) {
- int file = is_file_lru(l);
- int scan;
+ int file = is_file_lru(l);
+ int scan;
- scan = zone_page_state(zone, NR_LRU_BASE + l);
- if (priority) {
- scan >>= priority;
- scan = (scan * percent[file]) / 100;
- }
+ scan = zone_page_state(zone, NR_LRU_BASE + l);
+ if (priority) {
+ scan >>= priority;
+ scan = (scan * percent[file]) / 100;
+ }
+ if (scan_global_lru(sc)) {
zone->lru[l].nr_scan += scan;
nr[l] = zone->lru[l].nr_scan;
if (nr[l] >= sc->swap_cluster_max)
zone->lru[l].nr_scan = 0;
else
nr[l] = 0;
- } else {
- /*
- * This reclaim occurs not because zone memory shortage
- * but because memory controller hits its limit.
- * Don't modify zone reclaim related data.
- */
- nr[l] = mem_cgroup_calc_reclaim(sc->mem_cgroup, zone,
- priority, l);
- }
+ } else
+ nr[l] = scan;
}
while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
* [PATCH 08/09] memcg: show inactive_ratio
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (6 preceding siblings ...)
2008-11-30 11:01 ` [PATCH 07/09] memcg: remove mem_cgroup_calc_reclaim() KOSAKI Motohiro
@ 2008-11-30 11:02 ` KOSAKI Motohiro
2008-11-30 11:03 ` [PATCH 09/09] memcg: show reclaim stat KOSAKI Motohiro
2008-12-01 2:00 ` [PATCH 00/09] memcg: split-lru feature for memcg KAMEZAWA Hiroyuki
9 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 11:02 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
add an inactive_ratio field to the memory.stat file.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
mm/memcontrol.c | 3 +++
1 file changed, 3 insertions(+)
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1784,6 +1784,9 @@ static int mem_control_stat_show(struct
cb->fill(cb, "unevictable", unevictable * PAGE_SIZE);
}
+
+ cb->fill(cb, "inactive_ratio", mem_cont->inactive_ratio);
+
return 0;
}
* [PATCH 09/09] memcg: show reclaim stat
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (7 preceding siblings ...)
2008-11-30 11:02 ` [PATCH 08/09] memcg: show inactive_ratio KOSAKI Motohiro
@ 2008-11-30 11:03 ` KOSAKI Motohiro
2008-12-01 2:00 ` [PATCH 00/09] memcg: split-lru feature for memcg KAMEZAWA Hiroyuki
9 siblings, 0 replies; 23+ messages in thread
From: KOSAKI Motohiro @ 2008-11-30 11:03 UTC (permalink / raw)
To: LKML, linux-mm, Andrew Morton, Balbir Singh, KAMEZAWA Hiroyuki,
Rik van Riel
Cc: kosaki.motohiro
add the following four fields to the memory.stat file (sample output below):
- recent_rotated_anon
- recent_rotated_file
- recent_scanned_anon
- recent_scanned_file
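With this applied, memory.stat would show lines like the following
(values invented for illustration):

	recent_rotated_anon 10350
	recent_rotated_file 2480
	recent_scanned_anon 41200
	recent_scanned_file 9920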
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
mm/memcontrol.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
Index: b/mm/memcontrol.c
===================================================================
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1787,6 +1787,31 @@ static int mem_control_stat_show(struct
cb->fill(cb, "inactive_ratio", mem_cont->inactive_ratio);
+ {
+ int nid, zid;
+ struct mem_cgroup_per_zone *mz;
+ unsigned long recent_rotated[2] = {0, 0};
+ unsigned long recent_scanned[2] = {0, 0};
+
+ for_each_online_node(nid)
+ for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+ mz = mem_cgroup_zoneinfo(mem_cont, nid, zid);
+
+ recent_rotated[0] +=
+ mz->reclaim_stat.recent_rotated[0];
+ recent_rotated[1] +=
+ mz->reclaim_stat.recent_rotated[1];
+ recent_scanned[0] +=
+ mz->reclaim_stat.recent_scanned[0];
+ recent_scanned[1] +=
+ mz->reclaim_stat.recent_scanned[1];
+ }
+ cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
+ cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
+ cb->fill(cb, "recent_scanned_anon", recent_scanned[0]);
+ cb->fill(cb, "recent_scanned_file", recent_scanned[1]);
+ }
+
return 0;
}
* Re: [PATCH 00/09] memcg: split-lru feature for memcg
2008-11-30 10:54 [PATCH 00/09] memcg: split-lru feature for memcg KOSAKI Motohiro
` (8 preceding siblings ...)
2008-11-30 11:03 ` [PATCH 09/09] memcg: show reclaim stat KOSAKI Motohiro
@ 2008-12-01 2:00 ` KAMEZAWA Hiroyuki
9 siblings, 0 replies; 23+ messages in thread
From: KAMEZAWA Hiroyuki @ 2008-12-01 2:00 UTC (permalink / raw)
To: KOSAKI Motohiro; +Cc: LKML, linux-mm, Andrew Morton, Balbir Singh, Rik van Riel
On Sun, 30 Nov 2008 19:54:08 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
> Recently, the SplitLRU patch series dramatically improved the VM reclaim
> logic.
>
> it has the following improvements:
> (1) split LRU per page type
> (2) introduce inactive/active anon balancing logic
> (3) introduce anon/file balancing logic
>
> Unfortunately, the improvement to memcgroup reclaim is incomplete.
> Currently, it only has (1); it doesn't have (2) and (3).
>
>
> This patch series introduces improvements (2) and (3) to memcgroup.
> the implementation is a straightforward port from global reclaim.
>
> Therefore:
> - the code is simple.
> - memcg reclaim becomes as efficient as global reclaim.
> - the logic is the same as the global LRU,
>   so memcg reclaim debugging becomes easy.
>
>
> this patch series has three part
>
> [part 1: inactive-anon vs active-anon balancing improvement]
> [01/09] inactive_anon_is_low() move to vmscan.c
> [02/09] memcg: make inactive_anon_is_low()
>
> [part 2: anon vs file balancing improvement]
> [03/09] introduce zone_reclaim struct
> [04/09] memcg: make zone_reclaim_stat
> [05/09] make zone_nr_pages() helper function
> [06/09] make get_scan_ratio() to memcg awareness
> [07/09] memcg: remove mem_cgroup_calc_reclaim()
>
> [part 3: add split-lru related statistics fields to /cgroup/memory.stat]
> [08/09] memcg: show inactive_ratio
> [09/09] memcg: show reclaim stat
>
> patch against: mmotm 29 Nov 2008
>
Hi, kosaki. Thank you for your work.

My request is:
- split the global-lru part and the memcg part explicitly.

Nishimura's patch and my patch are under development, so
I may have to prepare a weekly-update queue again.
Thanks,
-Kame