Subject: Re: [patch 7/8] vmscan: memcg-aware unevictable page rescue scanner
From: Ying Han
Date: Mon, 29 Aug 2011 00:28:22 -0700
To: Johannes Weiner
Cc: KAMEZAWA Hiroyuki, Daisuke Nishimura, Balbir Singh, Michal Hocko,
    Andrew Morton, Rik van Riel, Minchan Kim, KOSAKI Motohiro, Mel Gorman,
    Greg Thelen, Michel Lespinasse, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Hugh Dickins
In-Reply-To: <20110720003653.GA667@cmpxchg.org>
References: <1306909519-7286-1-git-send-email-hannes@cmpxchg.org>
 <1306909519-7286-8-git-send-email-hannes@cmpxchg.org>
 <20110720003653.GA667@cmpxchg.org>

On Tue, Jul 19, 2011 at 5:36 PM, Johannes Weiner wrote:
> On Tue, Jul 19, 2011 at 03:47:43PM -0700, Ying Han wrote:
>> On Tue, May 31, 2011 at 11:25 PM, Johannes Weiner wrote:
>>
>> > Once the per-memcg lru lists are exclusive, the unevictable page
>> > rescue scanner can no longer work on the global zone lru lists.
>> >
>> > This converts it to go through all memcgs and scan their respective
>> > unevictable lists instead.
>> >
>> > Signed-off-by: Johannes Weiner
>> > ---
>> >  include/linux/memcontrol.h |    2 +
>> >  mm/memcontrol.c            |   11 +++++++++
>> >  mm/vmscan.c                |   53 +++++++++++++++++++++++++++----------------
>> >  3 files changed, 46 insertions(+), 20 deletions(-)
>> >
>> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>> > index cb02c00..56c1def 100644
>> > --- a/include/linux/memcontrol.h
>> > +++ b/include/linux/memcontrol.h
>> > @@ -60,6 +60,8 @@ extern void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *ptr);
>> >
>> >  extern int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
>> >                                     gfp_t gfp_mask);
>> > +struct page *mem_cgroup_lru_to_page(struct zone *, struct mem_cgroup *,
>> > +                                    enum lru_list);
>> >
>>
>> Did we miss a #ifdef case for this function? I got a compile error
>> when building with memcg disabled.
>
> I assume it's because the call to it is not optimized away properly in
> the disabled case.  I'll have it fixed in the next round, thanks for
> letting me know.
>

Hi Johannes,

This is the change to hierarchy_walk() that I sent with the other patch,
plus an additional fix. Please consider folding it into your patch:

    Fix hierarchy_walk() in the unevictable page rescue scanner

The patch includes two changes:
1. Adjust hierarchy_walk() so that it holds a reference to the first
   mem_cgroup for the duration of the walk.
2. Add the missing stop_hierarchy_walk() call at the end of the walk,
   which was left out of the original patch.

Signed-off-by: Ying Han
Change-Id: I72fb5d351faf0f111c8c99edd90b6cfee6281d3f
---
 mm/memcontrol.c |    3 +++
 mm/vmscan.c     |    7 ++++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9bcd429..426092b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1514,6 +1514,9 @@ void mem_cgroup_stop_hierarchy_walk(struct mem_cgroup *target,
 					struct mem_cgroup *first,
 					struct mem_cgroup *mem)
 {
+	if (!target)
+		target = root_mem_cgroup;
+
 	if (mem && mem != target)
 		css_put(&mem->css);

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 290998e..fd9593b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4110,9 +4110,9 @@ static struct page *lru_tailpage(struct zone *zone, struct mem_cgroup *mem,
 #define SCAN_UNEVICTABLE_BATCH_SIZE 16UL /* arbitrary lock hold batch size */
 static void scan_zone_unevictable_pages(struct zone *zone)
 {
-	struct mem_cgroup *first, *mem = NULL;
+	struct mem_cgroup *first, *mem;

-	first = mem = mem_cgroup_hierarchy_walk(NULL, mem);
+	first = mem = mem_cgroup_hierarchy_walk(NULL, NULL, NULL);
 	do {
 		unsigned long nr_to_scan;

@@ -4139,8 +4139,9 @@ static void scan_zone_unevictable_pages(struct zone *zone)
 			spin_unlock_irq(&zone->lru_lock);
 			nr_to_scan -= batch_size;
 		}
-		mem = mem_cgroup_hierarchy_walk(NULL, mem);
+		mem = mem_cgroup_hierarchy_walk(NULL, first, mem);
 	} while (mem != first);
+	mem_cgroup_stop_hierarchy_walk(NULL, first, mem);
 }

--Ying
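As an aside, the pattern the two changes aim for can be illustrated with a
small, self-contained userspace sketch. This is not the kernel code, and
none of the names below (node, node_get, node_put, hierarchy_walk,
stop_hierarchy_walk) are the real memcg/css API; it only models the idea
that the walk must pin the first group for the whole loop and that a final
stop call releases whatever references are still held:

/*
 * Userspace sketch of a reference-counted hierarchy walk.  All names
 * here are hypothetical stand-ins, not the memcg/css API.
 */
#include <stdio.h>

struct node {
	const char *name;
	int refcount;		/* models css-style reference counting */
	struct node *next;	/* next group in a circular hierarchy */
};

static void node_get(struct node *n)
{
	n->refcount++;
}

static void node_put(struct node *n)
{
	n->refcount--;
}

/*
 * Start or continue the walk.  On the first call (prev == NULL) the
 * returned node takes two references: one as the iteration cursor and
 * one as a pin that keeps the first node alive for the whole loop.
 */
static struct node *hierarchy_walk(struct node *start, struct node *prev)
{
	struct node *next = prev ? prev->next : start;

	node_get(next);			/* cursor reference */
	if (prev)
		node_put(prev);		/* drop the previous cursor reference */
	else
		node_get(next);		/* extra pin on the first node */
	return next;
}

/*
 * Finish the walk: drop the cursor reference from the last step and
 * the pin taken on the first node when the walk started.
 */
static void stop_hierarchy_walk(struct node *first, struct node *mem)
{
	if (mem)
		node_put(mem);
	if (first)
		node_put(first);
}

int main(void)
{
	struct node c = { "C", 0, NULL };
	struct node b = { "B", 0, &c };
	struct node a = { "A", 0, &b };
	struct node *first, *mem;

	c.next = &a;			/* close the circle: A -> B -> C -> A */

	first = mem = hierarchy_walk(&a, NULL);
	do {
		printf("scanning %s (refcount %d)\n", mem->name, mem->refcount);
		mem = hierarchy_walk(&a, mem);
	} while (mem != first);
	stop_hierarchy_walk(first, mem);

	printf("final refcounts: A=%d B=%d C=%d\n",
	       a.refcount, b.refcount, c.refcount);	/* all back to zero */
	return 0;
}

With this arrangement every node's refcount returns to zero after
stop_hierarchy_walk(), which is the invariant the fix above is meant to
restore for the memcg walk.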