<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Thu, 21 Apr 2011 17:10:23 +0900
> Minchan Kim <minchan.kim@gmail.com> wrote:
>
>> Hi Kame,
>>
>> On Thu, Apr 21, 2011 at 12:43 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>> > Ying, please take this as just a hint; you don't need to implement this as is.
>> > ==
>> > Now, memcg-kswapd is created per cgroup. Considering there are users
>> > who create hundreds of cgroups on a system, this consumes too much
>> > memory and CPU time.
>> >
>> > This patch creates a thread pool for memcg-kswapd. All memcgs which
>> > need background reclaim are linked into a list, and memcg-kswapd
>> > picks a memcg from the list and runs reclaim on it. This reclaims
>> > SWAP_CLUSTER_MAX pages and puts the memcg back at the tail of the
>> > list. memcg-kswapd visits memcgs in a round-robin manner and
>> > reduces their usage.
>> >
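If I understand the description correctly, each visit does a small,
fixed amount of work and then moves on to the next memcg. Below is a
minimal userspace sketch of that loop, just to make sure I read it
right; the pool layout, the high_wmark field and all names here are my
own assumptions, not the actual patch:

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32     /* per-visit reclaim batch, as in vmscan */
#define NR_MEMCG 3

struct memcg {
        const char *name;
        long usage;             /* pages currently charged */
        long high_wmark;        /* stop background reclaim below this */
};

int main(void)
{
        /* memcgs that asked for background reclaim, visited round-robin */
        struct memcg pool[NR_MEMCG] = {
                { "memcg-1", 200, 150 },
                { "memcg-2", 300, 250 },
                { "memcg-3", 400, 300 },
        };
        int head = 0, queued = NR_MEMCG;

        while (queued) {
                struct memcg *m = &pool[head];
                head = (head + 1) % NR_MEMCG;   /* move on: round-robin */

                if (m->usage <= m->high_wmark)
                        continue;               /* already below its watermark */

                /* reclaim at most SWAP_CLUSTER_MAX pages per visit */
                long batch = m->usage - m->high_wmark;
                if (batch > SWAP_CLUSTER_MAX)
                        batch = SWAP_CLUSTER_MAX;
                m->usage -= batch;
                printf("%s: reclaimed %ld pages, usage now %ld\n",
                       m->name, batch, m->usage);

                if (m->usage <= m->high_wmark)
                        queued--;               /* goal reached for this memcg */
        }
        return 0;
}
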
>>
>> I haven't looked at the code yet, but just from reading the
>> description, I have a concern.
>> We have discussed LRU separation between global and memcg.
>
> Please discuss the global LRU in another thread. memcg-kswapd is not
> related to the global LRU _at all_.
>
> And this patch set is independent of the things we discussed at LSF.
>
>
>> The clear goal is how to keep _fairness_.
>>
>> For example,
>>
>> memcg-1 : # pages of LRU : 64
>> memcg-2 : # pages of LRU : 128
>> memcg-3 : # pages of LRU : 256
>>
>> If we have to reclaim 96 pages, memcg-1 would lose half of its pages.
>> That is a much larger share than the others lose, so memcg-1's LRU
>> rotation cycle would be very fast, and working set pages in memcg-1
>> would not get a chance to be promoted.
>> Is it fair?
>>
>> I think we should take each memcg's LRU size into account when doing
>> the round-robin.
>>
>
> This set doesn't implement a feature to handle your example case at all.
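
Just to put numbers on Minchan's example: with LRU sizes of 64, 128 and
256 and 96 pages to reclaim, an even split takes 32 pages from each
memcg, i.e. half of memcg-1's LRU. A size-weighted round-robin would
instead compute per-memcg targets roughly like the toy calculation
below (plain C, purely illustrative, not kernel code):

#include <stdio.h>

int main(void)
{
        long lru[] = { 64, 128, 256 }; /* per-memcg LRU sizes from the example */
        long nr_to_reclaim = 96;
        long total = 0;
        int i;

        for (i = 0; i < 3; i++)
                total += lru[i];

        for (i = 0; i < 3; i++) {
                /* scan target proportional to LRU size (integer rounding) */
                long target = nr_to_reclaim * lru[i] / total;
                printf("memcg-%d: LRU %3ld -> reclaim ~%2ld pages (~%.0f%%)\n",
                       i + 1, lru[i], target, 100.0 * target / lru[i]);
        }
        return 0;
}

That works out to roughly 13, 27 and 54 pages, so each memcg gives up
about 20% of its LRU instead of memcg-1 losing 50%.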