Date: Thu, 9 Jun 2011 17:00:26 +0200
From: Michal Hocko
Subject: Re: [patch 4/8] memcg: rework soft limit reclaim
Message-ID: <20110609150026.GD3994@tiehlicka.suse.cz>
References: <1306909519-7286-1-git-send-email-hannes@cmpxchg.org>
 <1306909519-7286-5-git-send-email-hannes@cmpxchg.org>
To: Ying Han, Johannes Weiner
Cc: KAMEZAWA Hiroyuki, Daisuke Nishimura, Balbir Singh, Andrew Morton,
 Rik van Riel, Minchan Kim, KOSAKI Motohiro, Mel Gorman, Greg Thelen,
 Michel Lespinasse, "linux-mm@kvack.org", linux-kernel

On Thu 02-06-11 22:25:29, Ying Han wrote:
> On Thu, Jun 2, 2011 at 2:55 PM, Ying Han wrote:
> > On Tue, May 31, 2011 at 11:25 PM, Johannes Weiner wrote:
> >> Currently, soft limit reclaim is entered from kswapd, where it selects
[...]
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index c7d4b44..0163840 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1988,9 +1988,13 @@ static void shrink_zone(int priority, struct zone *zone,
> >>  		unsigned long reclaimed = sc->nr_reclaimed;
> >>  		unsigned long scanned = sc->nr_scanned;
> >>  		unsigned long nr_reclaimed;
> >> +		int epriority = priority;
> >> +
> >> +		if (mem_cgroup_soft_limit_exceeded(root, mem))
> >> +			epriority -= 1;
> >
> > Here we grant the ability to shrink from all the memcgs, but only
> > raise the priority for those that exceed the soft_limit. That is a
> > design change for the "soft_limit", which gives a hint as to which
> > memcgs to reclaim from first under global memory pressure.
>
> Basically, we shouldn't reclaim from a memcg under its soft_limit
> unless we have trouble reclaiming pages from others.

Agreed.

> Something like the following makes better sense:
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bdc2fd3..b82ba8c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1989,6 +1989,8 @@ restart:
>  	throttle_vm_writeout(sc->gfp_mask);
>  }
>
> +#define MEMCG_SOFTLIMIT_RECLAIM_PRIORITY	2
> +
>  static void shrink_zone(int priority, struct zone *zone,
>  			struct scan_control *sc)
>  {
> @@ -2001,13 +2003,13 @@ static void shrink_zone(int priority, struct zone *zone,
>  		unsigned long reclaimed = sc->nr_reclaimed;
>  		unsigned long scanned = sc->nr_scanned;
>  		unsigned long nr_reclaimed;
> -		int epriority = priority;
>
> -		if (mem_cgroup_soft_limit_exceeded(root, mem))
> -			epriority -= 1;
> +		if (!mem_cgroup_soft_limit_exceeded(root, mem) &&
> +		    priority > MEMCG_SOFTLIMIT_RECLAIM_PRIORITY)
> +			continue;

Yes, this makes sense, but I am not sure about the right(tm) value of
MEMCG_SOFTLIMIT_RECLAIM_PRIORITY; 2 sounds too low. You would do quite a
lot of loops, (DEFAULT_PRIORITY - MEMCG_SOFTLIMIT_RECLAIM_PRIORITY) *
zones * memcg_count, without making any progress (assuming that all the
groups are under their soft limit, which doesn't sound like a totally
artificial configuration) until you allow reclaiming from groups that
are under the soft limit. Then, when you finally do get to reclaiming,
you scan rather aggressively.

Maybe something like 3/4 of DEFAULT_PRIORITY instead? You would go 3
times over all (unbalanced) zones and all cgroups that are above the
limit (scanning max{1/4096 + 1/2048 + 1/1024 of the LRU size,
3*SWAP_CLUSTER_MAX pages} for each cgroup), which could be enough to
collect the low hanging fruit.
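To make the arithmetic concrete, here is a minimal userspace sketch (not
kernel code and not part of either patch) of the proposed skip test with
the threshold at roughly 3/4 of DEFAULT_PRIORITY; the DEFAULT_PRIORITY
value of 12 matches mm/vmscan.c, everything else is purely illustrative:

#include <stdio.h>
#include <stdbool.h>

#define DEFAULT_PRIORITY	12	/* as in mm/vmscan.c */
/* ~3/4 of DEFAULT_PRIORITY instead of the hard-coded 2 */
#define MEMCG_SOFTLIMIT_RECLAIM_PRIORITY	(DEFAULT_PRIORITY * 3 / 4)

int main(void)
{
	/*
	 * Fraction of an over-limit group's LRU scanned while groups that
	 * are still under their soft limit get skipped.
	 */
	double over_limit_scanned = 0.0;
	int priority;

	for (priority = DEFAULT_PRIORITY; priority >= 0; priority--) {
		bool skip_under_limit =
			priority > MEMCG_SOFTLIMIT_RECLAIM_PRIORITY;

		if (skip_under_limit)
			over_limit_scanned += 1.0 / (1u << priority);

		printf("priority %2d: under-soft-limit groups are %s\n",
		       priority,
		       skip_under_limit ? "skipped" : "reclaimed from");
	}

	/* With the threshold at 9 this sums 1/4096 + 1/2048 + 1/1024. */
	printf("over-limit groups see ~%.5f of their LRUs "
	       "(or at least 3*SWAP_CLUSTER_MAX pages) first\n",
	       over_limit_scanned);
	return 0;
}

With the threshold at 9, only priorities 12, 11 and 10 skip the
under-limit groups, which is where the 1/4096 + 1/2048 + 1/1024 figure
above comes from.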
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic