Date: Tue, 26 Apr 2011 01:43:17 -0700
Subject: Re: [PATCH 0/7] memcg background reclaim, yet another one.
From: Ying Han <yinghan@google.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: linux-mm@kvack.org, kosaki.motohiro@jp.fujitsu.com,
 balbir@linux.vnet.ibm.com, nishimura@mxp.nes.nec.co.jp,
 akpm@linux-foundation.org, Johannes Weiner, minchan.kim@gmail.com,
 Michal Hocko, Greg Thelen, Hugh Dickins
Sender: owner-linux-mm@kvack.org

On Tue, Apr 26, 2011 at 12:43 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Tue, 26 Apr 2011 00:19:46 -0700
> Ying Han <yinghan@google.com> wrote:
>
> > On Mon, Apr 25, 2011 at 6:38 PM, KAMEZAWA Hiroyuki
> > <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > > On Mon, 25 Apr 2011 15:21:21 -0700
> > > Ying Han <yinghan@google.com> wrote:
> > >
> > >> Thank you for putting time into implementing the patch.
> > >> I think it is
> > >> definitely a good idea to have the two alternatives on the table,
> > >> since people have asked the questions. Before going down that track,
> > >> I thought about the two approaches and also discussed them with Greg
> > >> and Hugh (cc-ed); I would like to clarify some of the pros and cons
> > >> of both approaches. In general, I think the workqueue is not the
> > >> right answer for this purpose.
> > >>
> > >> The thread-pool model
> > >> Cons:
> > >> 1. There is no isolation between memcg background reclaim jobs,
> > >> since the memcg threads are shared. That isolation covers all the
> > >> resources that the per-memcg background reclaim will need to access,
> > >> like cpu time. One thing we are missing in the shared worker model
> > >> is the individual cpu scheduling ability. We need the ability to
> > >> isolate and account the resource consumption per memcg, including
> > >> how much cputime and where to run the per-memcg kswapd thread.
> > >>
> > >
> > > IIUC, new threads for workqueue will be created automatically if
> > > necessary.
> > >
> > I read your patches today, but I might have missed some details while
> > I was reading them. I will read them through tomorrow.
> >
>
> Thank you.
>
> > The question I was wondering about here is
> > 1. how to do a cpu cgroup limit per-memcg, including the kswapd time.
>
> I'd like to add some limitation based on elapsed time. For example,
> only allow it to run 10ms within 1sec. It's a background job and should
> be limited. Or, simply add a static delay per memcg at
> queue_delayed_work(). Then, the user can limit scan/sec. But what I
> wonder now is what a good interface would be... msec/sec? scan/sec,
> free/sec? etc...
>
> > 2. how to do numa-aware cpu scheduling if I want to set a cpumask on
> > the memcg-kswapd close to the numa node where all the pages of the
> > memcg are allocated.
> >
> > I guess the second one should have been covered. If not, it shouldn't
> > be a big effort to fix that.
> > And any suggestions on the first one?
> >
>
> Interesting. If we use WQ_CPU_INTENSIVE + queue_work_on() instead
> of WQ_UNBOUND, we can control which cpu does the jobs.
>
> "The default cpu" to run wmark-reclaim can be calculated by
> css_id(&mem->css) % num_online_cpus() or some round robin at
> memcg creation. Anyway, we'll need to use WQ_CPU_INTENSIVE.
> It may give a better result than WQ_UNBOUND...
>
> Adding an interface for limiting cpu is... hmm. Per memcg? Or
> as a generic memcg param? It will be a memcg parameter, not
> a thread's.

To clarify a bit, my question was meant to account for it, not
necessarily to limit it. We can use the existing cpu cgroup to do the cpu
limiting, and I am just wondering how to configure it for the memcg
kswapd thread. Let's say in the per-memcg-kswapd model, I can echo the
kswapd thread pid into the cpu cgroup (the same set of processes as the
memcg, but in a cpu limiting cgroup instead). If the kswapd is shared, we
might need extra work to account the cpu cycles correspondingly.

> >
> > >> 4. The kswapd threads are created and destroyed dynamically. Are we
> > >> talking about allocating 8k of stack for kswapd when we are under
> > >> memory pressure? In the other case, all the memory is preallocated.
> > >>
> > >
> > > I think workqueue is there for avoiding making kthreads dynamically.
> > > We can save much code.
> >
> > So right now, the workqueue is configured as unbound, which means in
> > the worst case we might create the same number of workers as the
> > number of memcgs (if each memcg takes a long time to do the reclaim).
> > So this might not be a problem, but I would like to confirm.
>
> From the documentation, max_active for an unbound workqueue (default) is:
> ==
> Currently, for a bound wq, the maximum limit for @max_active is 512
> and the default value used when 0 is specified is 256. For an unbound
> wq, the limit is higher of 512 and 4 * num_possible_cpus().
> These
> values are chosen sufficiently high such that they are not the
> limiting factor while providing protection in runaway cases.
> ==
> 512? If wmark-reclaim burns cpu (and gets rescheduled), a new kthread
> will be created.

Ok, so we have here up to max(512, 4 * num_possible_cpus()) execution
contexts, which should be enough as long as the number of memcgs on the
system stays at or below that (since we have one work item per memcg).

> >
> > >> 5. The workqueue is scary and might introduce issues sooner or
> > >> later. Also, why do we think the background reclaim fits into the
> > >> workqueue model? To be more specific, how does it share the same
> > >> logic as other parts of the system using workqueues?
> > >>
> > >
> > > Ok, with using workqueue:
> > >
> > >  1. The number of threads can be changed dynamically with regard to
> > >     system workload without adding any code. workqueue is for this
> > >     kind of background job. gcwq has hooks into the scheduler and it
> > >     works well. With the per-memcg thread model, we'll never be able
> > >     to do that.
> > >
> > >  2. We can avoid having unnecessary threads.
> > >     If it sleeps most of the time, why do we need to keep it? No,
> > >     it's unnecessary. It should be on-demand. freezer() etc. need to
> > >     stop all threads, and thousands of sleeping threads will be
> > >     harmful. You can see how 'ps -elf' gets slow when the number of
> > >     threads increases.
> >
> > In general, I am not strongly against the workqueue but am trying to
> > understand the pros and cons of the two approaches. The first one is
> > definitely simpler and more straightforward, and I was suggesting to
> > start with something simple and improve it later if we see problems.
> > But I will read your patch through tomorrow and am also willing to
> > see comments from others.
> >
> > Thank you for the efforts!
> >
>
> You, too.
>
> Anyway, get_scan_count() seems to be a big problem and I'll cut it out
> as an independent patch.

Sounds good to me.
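For concreteness, a rough sketch of the WQ_CPU_INTENSIVE + queue_work_on()
scheme discussed above. The workqueue APIs (alloc_workqueue, queue_work_on)
and css_id() are real kernel interfaces of this era, but the mem_cgroup
fields and helper names here are hypothetical illustrations, not code from
the posted patch series:

```c
/*
 * Hypothetical sketch: a WQ_CPU_INTENSIVE workqueue plus queue_work_on(),
 * with each memcg's "default cpu" chosen by css_id() % num_online_cpus().
 * Field and function names are illustrative only.
 */
#include <linux/workqueue.h>
#include <linux/cgroup.h>

static struct workqueue_struct *memcg_kswapd_wq;

struct mem_cgroup {
	struct cgroup_subsys_state css;
	struct work_struct wmark_work;
	int kswapd_cpu;		/* default cpu for this memcg's reclaim */
	/* ... */
};

static int __init memcg_kswapd_init(void)
{
	/*
	 * WQ_CPU_INTENSIVE keeps long-running reclaim work from blocking
	 * other work items on the same per-cpu gcwq; contrast WQ_UNBOUND,
	 * which gives up cpu locality entirely.
	 */
	memcg_kswapd_wq = alloc_workqueue("memcg_kswapd",
					  WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 0);
	return memcg_kswapd_wq ? 0 : -ENOMEM;
}

static void memcg_assign_kswapd_cpu(struct mem_cgroup *mem)
{
	/* simple placement: spread memcgs across online cpus by css id */
	mem->kswapd_cpu = css_id(&mem->css) % num_online_cpus();
}

static void memcg_wakeup_kswapd(struct mem_cgroup *mem)
{
	/* run the wmark-reclaim work on this memcg's default cpu */
	queue_work_on(mem->kswapd_cpu, memcg_kswapd_wq, &mem->wmark_work);
}
```

A numa-aware variant would presumably pick kswapd_cpu from the node holding
most of the memcg's pages instead of the round-robin shown here.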
--Ying

> Thanks,
> -Kame
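A similarly hedged sketch of the "static delay per memcg at
queue_delayed_work()" idea floated in the thread (e.g. run ~10ms within
1sec). queue_delayed_work(), msecs_to_jiffies() and to_delayed_work() are
real kernel APIs; the struct fields and the two shrink/watermark helpers
are illustrative assumptions, not the actual patch:

```c
/*
 * Illustrative only: re-arm per-memcg wmark reclaim as delayed work, so a
 * memcg needing continuous reclaim still sleeps between passes. With a
 * ~10ms pass and a 990ms delay, the job runs roughly 10ms per second.
 */
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct workqueue_struct *memcg_kswapd_wq;

struct mem_cgroup {
	struct delayed_work wmark_dwork;
	unsigned long reclaim_delay_ms;	/* user-tunable, e.g. 990 */
	/* ... */
};

/* hypothetical helpers: one bounded scan pass, and a watermark check */
static void mem_cgroup_shrink_one_pass(struct mem_cgroup *mem);
static bool mem_cgroup_above_high_wmark(struct mem_cgroup *mem);

static void memcg_wmark_reclaim(struct work_struct *work)
{
	struct mem_cgroup *mem = container_of(to_delayed_work(work),
					      struct mem_cgroup, wmark_dwork);

	/* one bounded reclaim pass (~10ms of scanning, say) */
	mem_cgroup_shrink_one_pass(mem);

	/* not done yet: re-queue with the per-memcg static delay */
	if (mem_cgroup_above_high_wmark(mem))
		queue_delayed_work(memcg_kswapd_wq, &mem->wmark_dwork,
				   msecs_to_jiffies(mem->reclaim_delay_ms));
}
```

Exposing reclaim_delay_ms (or a scan/sec target derived from it) would be
one possible answer to the interface question raised above.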