From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail137.messagelabs.com (mail137.messagelabs.com [216.82.249.19])
	by kanga.kvack.org (Postfix) with ESMTP id 18ECB9000BD
	for ; Wed, 21 Sep 2011 09:10:51 -0400 (EDT)
Date: Wed, 21 Sep 2011 15:10:45 +0200
From: Michal Hocko
Subject: Re: [patch 08/11] mm: vmscan: convert global reclaim to per-memcg LRU lists
Message-ID: <20110921131045.GD8501@tiehlicka.suse.cz>
References: <1315825048-3437-1-git-send-email-jweiner@redhat.com>
 <1315825048-3437-9-git-send-email-jweiner@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1315825048-3437-9-git-send-email-jweiner@redhat.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Johannes Weiner
Cc: Andrew Morton, KAMEZAWA Hiroyuki, Daisuke Nishimura, Balbir Singh,
 Ying Han, Greg Thelen, Michel Lespinasse, Rik van Riel, Minchan Kim,
 Christoph Hellwig, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Mon 12-09-11 12:57:25, Johannes Weiner wrote:
> The global per-zone LRU lists are about to go away on memcg-enabled
> kernels, global reclaim must be able to find its pages on the
> per-memcg LRU lists.
>
> Since the LRU pages of a zone are distributed over all existing memory
> cgroups, a scan target for a zone is complete when all memory cgroups
> are scanned for their proportional share of a zone's memory.
>
> The forced scanning of small scan targets from kswapd is limited to
> zones marked unreclaimable, otherwise kswapd can quickly overreclaim
> by force-scanning the LRU lists of multiple memory cgroups.
>
> Signed-off-by: Johannes Weiner

Reviewed-by: Michal Hocko

Minor nit below:

> ---
>  mm/vmscan.c |   39 ++++++++++++++++++++++-----------------
>  1 files changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bb4d8b8..053609e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2451,13 +2445,24 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
>  static void age_active_anon(struct zone *zone, struct scan_control *sc,
>  			    int priority)
>  {
> -	struct mem_cgroup_zone mz = {
> -		.mem_cgroup = NULL,
> -		.zone = zone,
> -	};
> +	struct mem_cgroup *mem;
> +
> +	if (!total_swap_pages)
> +		return;
> +
> +	mem = mem_cgroup_iter(NULL, NULL, NULL);

Wouldn't for_each_mem_cgroup be more appropriate here? The macro is not
exported, but it is probably worth exporting. The same applies to
scan_zone_unevictable_pages from the previous patch.

> +	do {
> +		struct mem_cgroup_zone mz = {
> +			.mem_cgroup = mem,
> +			.zone = zone,
> +		};
>
> -	if (inactive_anon_is_low(&mz))
> -		shrink_active_list(SWAP_CLUSTER_MAX, &mz, sc, priority, 0);
> +		if (inactive_anon_is_low(&mz))
> +			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
> +					   sc, priority, 0);
> +
> +		mem = mem_cgroup_iter(NULL, mem, NULL);
> +	} while (mem);
>  }

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic