linux-mm.kvack.org archive mirror
* [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
@ 2012-06-24  2:16 Wanpeng Li
  2012-06-24  9:46 ` Johannes Weiner
  0 siblings, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2012-06-24  2:16 UTC (permalink / raw)
  To: linux-mm
  Cc: Johannes Weiner, Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki,
	cgroups, linux-kernel, Gavin Shan, Wanpeng Li

From: Wanpeng Li <liwp@linux.vnet.ibm.com>

Excess unused cached charges put pressure on mem_cgroup_do_charge:
extra cpu cycles are burned when mem_cgroup_do_charge triggers page
reclaim, or even OOM, just because of these excess unused cached
charges. Add MAX_CHARGE_BATCH to cap the number of cached charges.

Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
---
 mm/memcontrol.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0e092eb..1ff317a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
  * TODO: maybe necessary to use big numbers in big irons.
  */
 #define CHARGE_BATCH	32U
+
+/*
+ * Max size of the charge stock. Excess unused cached charges put
+ * pressure on mem_cgroup_do_charge, which can trigger page reclaim
+ * or even OOM.
+ */
+#define MAX_CHARGE_BATCH 1024U
+
 struct memcg_stock_pcp {
 	struct mem_cgroup *cached; /* this never be root cgroup */
 	unsigned int nr_pages;
@@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
 	unsigned int batch = max(CHARGE_BATCH, nr_pages);
 	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct mem_cgroup *memcg = NULL;
+	struct memcg_stock_pcp *stock;
 	int ret;
 
 	/*
@@ -2320,6 +2329,13 @@ again:
 		rcu_read_unlock();
 	}
 
+	stock = &get_cpu_var(memcg_stock);
+	if (memcg == stock->cached && stock->nr_pages) {
+		if (stock->nr_pages > MAX_CHARGE_BATCH)
+			batch = nr_pages;
+	}
+	put_cpu_var(memcg_stock);
+
 	do {
 		bool oom_check;
 
-- 
1.7.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24  2:16 [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead Wanpeng Li
@ 2012-06-24  9:46 ` Johannes Weiner
  2012-06-24 10:08   ` Wanpeng Li
  0 siblings, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2012-06-24  9:46 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-mm, Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan

On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
> 
> Since exceeded unused cached charges would add pressure to
> mem_cgroup_do_charge, more overhead would burn cpu cycles when
> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
> to limit max cached charges.
> 
> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
> ---
>  mm/memcontrol.c |   16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 0e092eb..1ff317a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
>   * TODO: maybe necessary to use big numbers in big irons.
>   */
>  #define CHARGE_BATCH	32U
> +
> +/*
> + * Max size of charge stock. Since exceeded unused cached charges would
> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
> + * even oom be triggered.
> + */
> +#define MAX_CHARGE_BATCH 1024U
> +
>  struct memcg_stock_pcp {
>  	struct mem_cgroup *cached; /* this never be root cgroup */
>  	unsigned int nr_pages;
> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>  	struct mem_cgroup *memcg = NULL;
> +	struct memcg_stock_pcp *stock;
>  	int ret;
>  
>  	/*
> @@ -2320,6 +2329,13 @@ again:
>  		rcu_read_unlock();
>  	}
>  
> +	stock = &get_cpu_var(memcg_stock);
> +	if (memcg == stock->cached && stock->nr_pages) {
> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
> +			batch = nr_pages;
> +	}
> +	put_cpu_var(memcg_stock);

The only way excessive stock can build up is if the charging task gets
rescheduled, after trying to consume stock a few lines above, to a cpu
it was running on when it built up stock in the past.

    consume_stock()
      memcg != stock->cached:
        return false
    do_charge()
    <reschedule>
    refill_stock()
      memcg == stock->cached:
        stock->nr_pages += nr_pages

It's very unlikely and a single call into target reclaim will drain
all stock of the memcg, so this will self-correct quickly.

And your patch won't change any of that.

What you /could/ do is stick that check into refill_stock() and invoke
res_counter_uncharge() if it gets excessive.  But I really don't see a
practical problem here...


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24  9:46 ` Johannes Weiner
@ 2012-06-24 10:08   ` Wanpeng Li
  2012-06-24 10:13     ` Wanpeng Li
  2012-06-24 10:19     ` Johannes Weiner
  0 siblings, 2 replies; 8+ messages in thread
From: Wanpeng Li @ 2012-06-24 10:08 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan, Wanpeng Li, linux-mm

On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
>On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
>> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
>> 
>> Since exceeded unused cached charges would add pressure to
>> mem_cgroup_do_charge, more overhead would burn cpu cycles when
>> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
>> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
>> to limit max cached charges.
>> 
>> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
>> ---
>>  mm/memcontrol.c |   16 ++++++++++++++++
>>  1 file changed, 16 insertions(+)
>> 
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 0e092eb..1ff317a 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
>>   * TODO: maybe necessary to use big numbers in big irons.
>>   */
>>  #define CHARGE_BATCH	32U
>> +
>> +/*
>> + * Max size of charge stock. Since exceeded unused cached charges would
>> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
>> + * even oom be triggered.
>> + */
>> +#define MAX_CHARGE_BATCH 1024U
>> +
>>  struct memcg_stock_pcp {
>>  	struct mem_cgroup *cached; /* this never be root cgroup */
>>  	unsigned int nr_pages;
>> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>>  	struct mem_cgroup *memcg = NULL;
>> +	struct memcg_stock_pcp *stock;
>>  	int ret;
>>  
>>  	/*
>> @@ -2320,6 +2329,13 @@ again:
>>  		rcu_read_unlock();
>>  	}
>>  
>> +	stock = &get_cpu_var(memcg_stock);
>> +	if (memcg == stock->cached && stock->nr_pages) {
>> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
>> +			batch = nr_pages;
>> +	}
>> +	put_cpu_var(memcg_stock);
>
>The only way excessive stock can build up is if the charging task gets
>rescheduled, after trying to consume stock a few lines above, to a cpu
>it was running on when it built up stock in the past.
>
>    consume_stock()
>      memcg != stock->cached:
>        return false
>    do_charge()
>    <reschedule>
>    refill_stock()
>      memcg == stock->cached:
>        stock->nr_pages += nr_pages

__mem_cgroup_try_charge() {
	unsigned int batch = max(CHARGE_BATCH, nr_pages);
	[...]
	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
	[...]
	if (batch > nr_pages)
		refill_stock(memcg, batch - nr_pages);
}

Consider this scenario: if a task wants to charge nr_pages = 1,
then batch = max(32, 1) = 32, so this time 31 excess charges
will be charged in mem_cgroup_do_charge and then added to the stock by
refill_stock. Typically there are many tasks in one memory cgroup, and
they may charge frequently. In that situation the limit is reached soon,
causing mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.

Regards,
Wanpeng Li
>
>It's very unlikely and a single call into target reclaim will drain
>all stock of the memcg, so this will self-correct quickly.
>
>And your patch won't change any of that.
>
>What you /could/ do is stick that check into refill_stock() and invoke
>res_counter_uncharge() if it gets excessive.  But I really don't see a
>practical problem here...


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24 10:08   ` Wanpeng Li
@ 2012-06-24 10:13     ` Wanpeng Li
  2012-06-24 10:19     ` Johannes Weiner
  1 sibling, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2012-06-24 10:13 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan, Wanpeng Li, linux-mm

On Sun, Jun 24, 2012 at 06:08:26PM +0800, Wanpeng Li wrote:
>On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
>>On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
>>> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
>>> 
>>> Since exceeded unused cached charges would add pressure to
>>> mem_cgroup_do_charge, more overhead would burn cpu cycles when
>>> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
>>> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
>>> to limit max cached charges.
>>> 
>>> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
>>> ---
>>>  mm/memcontrol.c |   16 ++++++++++++++++
>>>  1 file changed, 16 insertions(+)
>>> 
>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>> index 0e092eb..1ff317a 100644
>>> --- a/mm/memcontrol.c
>>> +++ b/mm/memcontrol.c
>>> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
>>>   * TODO: maybe necessary to use big numbers in big irons.
>>>   */
>>>  #define CHARGE_BATCH	32U
>>> +
>>> +/*
>>> + * Max size of charge stock. Since exceeded unused cached charges would
>>> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
>>> + * even oom be triggered.
>>> + */
>>> +#define MAX_CHARGE_BATCH 1024U
>>> +
>>>  struct memcg_stock_pcp {
>>>  	struct mem_cgroup *cached; /* this never be root cgroup */
>>>  	unsigned int nr_pages;
>>> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>>>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>>>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>>>  	struct mem_cgroup *memcg = NULL;
>>> +	struct memcg_stock_pcp *stock;
>>>  	int ret;
>>>  
>>>  	/*
>>> @@ -2320,6 +2329,13 @@ again:
>>>  		rcu_read_unlock();
>>>  	}
>>>  
>>> +	stock = &get_cpu_var(memcg_stock);
>>> +	if (memcg == stock->cached && stock->nr_pages) {
>>> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
>>> +			batch = nr_pages;
>>> +	}
>>> +	put_cpu_var(memcg_stock);
>>
>>The only way excessive stock can build up is if the charging task gets
>>rescheduled, after trying to consume stock a few lines above, to a cpu
>>it was running on when it built up stock in the past.
>>
>>    consume_stock()
>>      memcg != stock->cached:
>>        return false
>>    do_charge()
>>    <reschedule>
>>    refill_stock()
>>      memcg == stock->cached:
>>        stock->nr_pages += nr_pages
>
>__mem_cgroup_try_charge() {
>	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>	[...]
>	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
>	[...]
>	if(batch > nr_pages)
>		refill_stock(memcg, batch - nr_pages);
>}
>
>Consider this scenario, If one task wants to charge nr_pages = 1,
>then batch = max(32,1) = 32, this time 31 excess charges
Sorry, the scenario should be charging nr_pages = 2: batch = max(32, 2) = 32,
so this time 30 excess charges will be charged.
>will be charged in mem_cgroup_do_charge and then add to stock by
>refill_stock. Generally there are many tasks in one memory cgroup and 
>maybe charges frequency. In this situation, limit will reach soon, 
>and cause mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.
>
>Regards,
>Wanpeng Li
>>
>>It's very unlikely and a single call into target reclaim will drain
>>all stock of the memcg, so this will self-correct quickly.
>>
>>And your patch won't change any of that.
>>
>>What you /could/ do is stick that check into refill_stock() and invoke
>>res_counter_uncharge() if it gets excessive.  But I really don't see a
>>practical problem here...


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24 10:08   ` Wanpeng Li
  2012-06-24 10:13     ` Wanpeng Li
@ 2012-06-24 10:19     ` Johannes Weiner
  2012-06-24 10:32       ` Wanpeng Li
  1 sibling, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2012-06-24 10:19 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan, linux-mm

On Sun, Jun 24, 2012 at 06:08:26PM +0800, Wanpeng Li wrote:
> On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
> >On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
> >> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
> >> 
> >> Since exceeded unused cached charges would add pressure to
> >> mem_cgroup_do_charge, more overhead would burn cpu cycles when
> >> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
> >> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
> >> to limit max cached charges.
> >> 
> >> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
> >> ---
> >>  mm/memcontrol.c |   16 ++++++++++++++++
> >>  1 file changed, 16 insertions(+)
> >> 
> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> index 0e092eb..1ff317a 100644
> >> --- a/mm/memcontrol.c
> >> +++ b/mm/memcontrol.c
> >> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
> >>   * TODO: maybe necessary to use big numbers in big irons.
> >>   */
> >>  #define CHARGE_BATCH	32U
> >> +
> >> +/*
> >> + * Max size of charge stock. Since exceeded unused cached charges would
> >> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
> >> + * even oom be triggered.
> >> + */
> >> +#define MAX_CHARGE_BATCH 1024U
> >> +
> >>  struct memcg_stock_pcp {
> >>  	struct mem_cgroup *cached; /* this never be root cgroup */
> >>  	unsigned int nr_pages;
> >> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
> >>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
> >>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
> >>  	struct mem_cgroup *memcg = NULL;
> >> +	struct memcg_stock_pcp *stock;
> >>  	int ret;
> >>  
> >>  	/*
> >> @@ -2320,6 +2329,13 @@ again:
> >>  		rcu_read_unlock();
> >>  	}
> >>  
> >> +	stock = &get_cpu_var(memcg_stock);
> >> +	if (memcg == stock->cached && stock->nr_pages) {
> >> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
> >> +			batch = nr_pages;
> >> +	}
> >> +	put_cpu_var(memcg_stock);
> >
> >The only way excessive stock can build up is if the charging task gets
> >rescheduled, after trying to consume stock a few lines above, to a cpu
> >it was running on when it built up stock in the past.
> >
> >    consume_stock()
> >      memcg != stock->cached:
> >        return false
> >    do_charge()
> >    <reschedule>
> >    refill_stock()
> >      memcg == stock->cached:
> >        stock->nr_pages += nr_pages
> 
> __mem_cgroup_try_charge() {
> 	unsigned int batch = max(CHARGE_BATCH, nr_pages);
> 	[...]
> 	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
> 	[...]
> 	if(batch > nr_pages)
> 		refill_stock(memcg, batch - nr_pages);
> }
> 
> Consider this scenario, If one task wants to charge nr_pages = 1,
> then batch = max(32,1) = 32, this time 31 excess charges 
> will be charged in mem_cgroup_do_charge and then add to stock by
> refill_stock. Generally there are many tasks in one memory cgroup and 
> maybe charges frequency. In this situation, limit will reach soon, 
> and cause mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.

But the stock is not a black hole that gets built up for giggles!  The
next time the processes want to charge a page on this cpu, they will
consume it from the stock.  Not add more pages to it.  Look at where
consume_stock() is called.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24 10:19     ` Johannes Weiner
@ 2012-06-24 10:32       ` Wanpeng Li
  2012-06-24 18:33         ` Johannes Weiner
  2012-06-25  3:02         ` Kamezawa Hiroyuki
  0 siblings, 2 replies; 8+ messages in thread
From: Wanpeng Li @ 2012-06-24 10:32 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan, Wanpeng Li, linux-mm

On Sun, Jun 24, 2012 at 12:19:48PM +0200, Johannes Weiner wrote:
>On Sun, Jun 24, 2012 at 06:08:26PM +0800, Wanpeng Li wrote:
>> On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
>> >On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
>> >> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
>> >> 
>> >> Since exceeded unused cached charges would add pressure to
>> >> mem_cgroup_do_charge, more overhead would burn cpu cycles when
>> >> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
>> >> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
>> >> to limit max cached charges.
>> >> 
>> >> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
>> >> ---
>> >>  mm/memcontrol.c |   16 ++++++++++++++++
>> >>  1 file changed, 16 insertions(+)
>> >> 
>> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> >> index 0e092eb..1ff317a 100644
>> >> --- a/mm/memcontrol.c
>> >> +++ b/mm/memcontrol.c
>> >> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
>> >>   * TODO: maybe necessary to use big numbers in big irons.
>> >>   */
>> >>  #define CHARGE_BATCH	32U
>> >> +
>> >> +/*
>> >> + * Max size of charge stock. Since exceeded unused cached charges would
>> >> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
>> >> + * even oom be triggered.
>> >> + */
>> >> +#define MAX_CHARGE_BATCH 1024U
>> >> +
>> >>  struct memcg_stock_pcp {
>> >>  	struct mem_cgroup *cached; /* this never be root cgroup */
>> >>  	unsigned int nr_pages;
>> >> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>> >>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>> >>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>> >>  	struct mem_cgroup *memcg = NULL;
>> >> +	struct memcg_stock_pcp *stock;
>> >>  	int ret;
>> >>  
>> >>  	/*
>> >> @@ -2320,6 +2329,13 @@ again:
>> >>  		rcu_read_unlock();
>> >>  	}
>> >>  
>> >> +	stock = &get_cpu_var(memcg_stock);
>> >> +	if (memcg == stock->cached && stock->nr_pages) {
>> >> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
>> >> +			batch = nr_pages;
>> >> +	}
>> >> +	put_cpu_var(memcg_stock);
>> >
>> >The only way excessive stock can build up is if the charging task gets
>> >rescheduled, after trying to consume stock a few lines above, to a cpu
>> >it was running on when it built up stock in the past.
>> >
>> >    consume_stock()
>> >      memcg != stock->cached:
>> >        return false
>> >    do_charge()
>> >    <reschedule>
>> >    refill_stock()
>> >      memcg == stock->cached:
>> >        stock->nr_pages += nr_pages
>> 
>> __mem_cgroup_try_charge() {
>> 	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>> 	[...]
>> 	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
>> 	[...]
>> 	if(batch > nr_pages)
>> 		refill_stock(memcg, batch - nr_pages);
>> }
>> 
>> Consider this scenario, If one task wants to charge nr_pages = 1,
>> then batch = max(32,1) = 32, this time 31 excess charges 
>> will be charged in mem_cgroup_do_charge and then add to stock by
>> refill_stock. Generally there are many tasks in one memory cgroup and 
>> maybe charges frequency. In this situation, limit will reach soon, 
>> and cause mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.
>
>But the stock is not a black hole that gets built up for giggles!  The
>next time the processes want to charge a page on this cpu, they will
>consume it from the stock.  Not add more pages to it.  Look at where
>consume_stock() is called.

if (nr_pages == 1 && consume_stock(memcg))
	goto done;

consume_stock() is only called when charging a single page. See the code
in mem_cgroup_charge_common(), which also calls __mem_cgroup_try_charge():
for both transparent huge pages and hugetlbfs pages, nr_pages will be
larger than 1.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24 10:32       ` Wanpeng Li
@ 2012-06-24 18:33         ` Johannes Weiner
  2012-06-25  3:02         ` Kamezawa Hiroyuki
  1 sibling, 0 replies; 8+ messages in thread
From: Johannes Weiner @ 2012-06-24 18:33 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, cgroups,
	linux-kernel, Gavin Shan, linux-mm

On Sun, Jun 24, 2012 at 06:32:58PM +0800, Wanpeng Li wrote:
> On Sun, Jun 24, 2012 at 12:19:48PM +0200, Johannes Weiner wrote:
> >On Sun, Jun 24, 2012 at 06:08:26PM +0800, Wanpeng Li wrote:
> >> On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
> >> >On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
> >> >> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
> >> >> 
> >> >> Since exceeded unused cached charges would add pressure to
> >> >> mem_cgroup_do_charge, more overhead would burn cpu cycles when
> >> >> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
> >> >> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
> >> >> to limit max cached charges.
> >> >> 
> >> >> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
> >> >> ---
> >> >>  mm/memcontrol.c |   16 ++++++++++++++++
> >> >>  1 file changed, 16 insertions(+)
> >> >> 
> >> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> >> index 0e092eb..1ff317a 100644
> >> >> --- a/mm/memcontrol.c
> >> >> +++ b/mm/memcontrol.c
> >> >> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
> >> >>   * TODO: maybe necessary to use big numbers in big irons.
> >> >>   */
> >> >>  #define CHARGE_BATCH	32U
> >> >> +
> >> >> +/*
> >> >> + * Max size of charge stock. Since exceeded unused cached charges would
> >> >> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
> >> >> + * even oom be triggered.
> >> >> + */
> >> >> +#define MAX_CHARGE_BATCH 1024U
> >> >> +
> >> >>  struct memcg_stock_pcp {
> >> >>  	struct mem_cgroup *cached; /* this never be root cgroup */
> >> >>  	unsigned int nr_pages;
> >> >> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
> >> >>  	unsigned int batch = max(CHARGE_BATCH, nr_pages);
> >> >>  	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
> >> >>  	struct mem_cgroup *memcg = NULL;
> >> >> +	struct memcg_stock_pcp *stock;
> >> >>  	int ret;
> >> >>  
> >> >>  	/*
> >> >> @@ -2320,6 +2329,13 @@ again:
> >> >>  		rcu_read_unlock();
> >> >>  	}
> >> >>  
> >> >> +	stock = &get_cpu_var(memcg_stock);
> >> >> +	if (memcg == stock->cached && stock->nr_pages) {
> >> >> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
> >> >> +			batch = nr_pages;
> >> >> +	}
> >> >> +	put_cpu_var(memcg_stock);
> >> >
> >> >The only way excessive stock can build up is if the charging task gets
> >> >rescheduled, after trying to consume stock a few lines above, to a cpu
> >> >it was running on when it built up stock in the past.
> >> >
> >> >    consume_stock()
> >> >      memcg != stock->cached:
> >> >        return false
> >> >    do_charge()
> >> >    <reschedule>
> >> >    refill_stock()
> >> >      memcg == stock->cached:
> >> >        stock->nr_pages += nr_pages
> >> 
> >> __mem_cgroup_try_charge() {
> >> 	unsigned int batch = max(CHARGE_BATCH, nr_pages);
> >> 	[...]
> >> 	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
> >> 	[...]
> >> 	if(batch > nr_pages)
> >> 		refill_stock(memcg, batch - nr_pages);
> >> }
> >> 
> >> Consider this scenario, If one task wants to charge nr_pages = 1,
> >> then batch = max(32,1) = 32, this time 31 excess charges 
> >> will be charged in mem_cgroup_do_charge and then add to stock by
> >> refill_stock. Generally there are many tasks in one memory cgroup and 
> >> maybe charges frequency. In this situation, limit will reach soon, 
> >> and cause mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.
> >
> >But the stock is not a black hole that gets built up for giggles!  The
> >next time the processes want to charge a page on this cpu, they will
> >consume it from the stock.  Not add more pages to it.  Look at where
> >consume_stock() is called.
> 
> if(nr_pages == 1 && consume_stock(memcg))
> 	goto done;
> 
> Only when charge one page will call consume_stock. You can see the codes
> in mem_cgroup_charge_common() which also call __mem_cgroup_try_charge, 
> when both transparent huge and hugetlbfs pages, nr_pages will larger than 1.

In which case, nr_pages will be bigger than CHARGE_BATCH, in which
case batch equals nr_pages, in which case stock won't be refilled:

  unsigned int batch = max(CHARGE_BATCH, nr_pages);
  ...
  if (batch > nr_pages)
    refill_stock(memcg, batch - nr_pages);

We could maybe make this

  if (nr_pages == 1 && batch > nr_pages)
    ...

for clarity, but it won't make a behavioural difference.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead
  2012-06-24 10:32       ` Wanpeng Li
  2012-06-24 18:33         ` Johannes Weiner
@ 2012-06-25  3:02         ` Kamezawa Hiroyuki
  1 sibling, 0 replies; 8+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-25  3:02 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Johannes Weiner, Michal Hocko, Balbir Singh, cgroups,
	linux-kernel, Gavin Shan, linux-mm

(2012/06/24 19:32), Wanpeng Li wrote:
> On Sun, Jun 24, 2012 at 12:19:48PM +0200, Johannes Weiner wrote:
>> On Sun, Jun 24, 2012 at 06:08:26PM +0800, Wanpeng Li wrote:
>>> On Sun, Jun 24, 2012 at 11:46:14AM +0200, Johannes Weiner wrote:
>>>> On Sun, Jun 24, 2012 at 10:16:09AM +0800, Wanpeng Li wrote:
>>>>> From: Wanpeng Li <liwp@linux.vnet.ibm.com>
>>>>>
>>>>> Since exceeded unused cached charges would add pressure to
>>>>> mem_cgroup_do_charge, more overhead would burn cpu cycles when
>>>>> mem_cgroup_do_charge cause page reclaim or even OOM be triggered
>>>>> just for such exceeded unused cached charges. Add MAX_CHARGE_BATCH
>>>>> to limit max cached charges.
>>>>>
>>>>> Signed-off-by: Wanpeng Li <liwp.linux@gmail.com>
>>>>> ---
>>>>>   mm/memcontrol.c |   16 ++++++++++++++++
>>>>>   1 file changed, 16 insertions(+)
>>>>>
>>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>>> index 0e092eb..1ff317a 100644
>>>>> --- a/mm/memcontrol.c
>>>>> +++ b/mm/memcontrol.c
>>>>> @@ -1954,6 +1954,14 @@ void mem_cgroup_update_page_stat(struct page *page,
>>>>>    * TODO: maybe necessary to use big numbers in big irons.
>>>>>    */
>>>>>   #define CHARGE_BATCH	32U
>>>>> +
>>>>> +/*
>>>>> + * Max size of charge stock. Since exceeded unused cached charges would
>>>>> + * add pressure to mem_cgroup_do_charge which will cause page reclaim or
>>>>> + * even oom be triggered.
>>>>> + */
>>>>> +#define MAX_CHARGE_BATCH 1024U
>>>>> +
>>>>>   struct memcg_stock_pcp {
>>>>>   	struct mem_cgroup *cached; /* this never be root cgroup */
>>>>>   	unsigned int nr_pages;
>>>>> @@ -2250,6 +2258,7 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>>>>>   	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>>>>>   	int nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
>>>>>   	struct mem_cgroup *memcg = NULL;
>>>>> +	struct memcg_stock_pcp *stock;
>>>>>   	int ret;
>>>>>
>>>>>   	/*
>>>>> @@ -2320,6 +2329,13 @@ again:
>>>>>   		rcu_read_unlock();
>>>>>   	}
>>>>>
>>>>> +	stock = &get_cpu_var(memcg_stock);
>>>>> +	if (memcg == stock->cached && stock->nr_pages) {
>>>>> +		if (stock->nr_pages > MAX_CHARGE_BATCH)
>>>>> +			batch = nr_pages;
>>>>> +	}
>>>>> +	put_cpu_var(memcg_stock);
>>>>
>>>> The only way excessive stock can build up is if the charging task gets
>>>> rescheduled, after trying to consume stock a few lines above, to a cpu
>>>> it was running on when it built up stock in the past.
>>>>
>>>>     consume_stock()
>>>>       memcg != stock->cached:
>>>>         return false
>>>>     do_charge()
>>>>     <reschedule>
>>>>     refill_stock()
>>>>       memcg == stock->cached:
>>>>         stock->nr_pages += nr_pages
>>>
>>> __mem_cgroup_try_charge() {
>>> 	unsigned int batch = max(CHARGE_BATCH, nr_pages);
>>> 	[...]
>>> 	mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
>>> 	[...]
>>> 	if(batch > nr_pages)
>>> 		refill_stock(memcg, batch - nr_pages);
>>> }
>>>
>>> Consider this scenario, If one task wants to charge nr_pages = 1,
>>> then batch = max(32,1) = 32, this time 31 excess charges
>>> will be charged in mem_cgroup_do_charge and then add to stock by
>>> refill_stock. Generally there are many tasks in one memory cgroup and
>>> maybe charges frequency. In this situation, limit will reach soon,
>>> and cause mem_cgroup_reclaim to call try_to_free_mem_cgroup_pages.
>>
>> But the stock is not a black hole that gets built up for giggles!  The
>> next time the processes want to charge a page on this cpu, they will
>> consume it from the stock.  Not add more pages to it.  Look at where
>> consume_stock() is called.
>
> if(nr_pages == 1 && consume_stock(memcg))
> 	goto done;
>
> Only when charge one page will call consume_stock. You can see the codes
> in mem_cgroup_charge_common() which also call __mem_cgroup_try_charge,
> when both transparent huge and hugetlbfs pages, nr_pages will larger than 1.
>

Because THP charges 2MB at once, the optimization by 'stock' will have no
effect there. (It merges 512 page faults into a single page fault.)
I think you won't see any performance difference even if we handle THP
pages with the 'stock'.

And I think MAX_CHARGE_BATCH = 1024 is too big... if you have 256 cpus,
you'll have up to 1GB of cached charges, which means 1GB of inaccuracy
in the reported usage. If you want to enlarge it, please show a
performance benefit.

Thanks,
-Kame

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2012-06-25  3:04 UTC | newest]

Thread overview: 8+ messages
2012-06-24  2:16 [PATCH] mm/memcg: add MAX_CHARGE_BATCH to limit unnecessary charge overhead Wanpeng Li
2012-06-24  9:46 ` Johannes Weiner
2012-06-24 10:08   ` Wanpeng Li
2012-06-24 10:13     ` Wanpeng Li
2012-06-24 10:19     ` Johannes Weiner
2012-06-24 10:32       ` Wanpeng Li
2012-06-24 18:33         ` Johannes Weiner
2012-06-25  3:02         ` Kamezawa Hiroyuki
