Date: Thu, 2 Feb 2012 17:40:10 -0800 (PST)
From: Hugh Dickins
Subject: Re: [PATCH] mm: vmscan: handle isolated pages with lru lock released
To: Andrew Morton
Cc: Hillf Danton, KAMEZAWA Hiroyuki, Rik van Riel, David Rientjes,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20120116092745.7721ff31.kamezawa.hiroyu@jp.fujitsu.com>
References: <20120116092745.7721ff31.kamezawa.hiroyu@jp.fujitsu.com>

From: Hillf Danton

When shrinking the inactive lru list, isolated pages are queued on a
locally private list, so the lock hold time can be reduced by counting
the pages without lock protection. To achieve that, first, updating the
reclaim stat is delayed until the putback stage, after the lru lock has
been reacquired. Second, since the vm and zone stat operations are
per-cpu, they are now protected by disabling preemption instead.

Signed-off-by: Hillf Danton
Acked-by: Hugh Dickins
Reviewed-by: KAMEZAWA Hiroyuki
---

KAMEZAWA-san and I both admired this patch from Hillf; Rik and David
liked its precursor: I think we'd all be glad to see it in linux-next.

 mm/vmscan.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

--- a/mm/vmscan.c	Sat Jan 14 14:02:20 2012
+++ b/mm/vmscan.c	Sat Jan 14 20:00:46 2012
@@ -1414,7 +1414,6 @@ update_isolated_counts(struct mem_cgroup
 				 unsigned long *nr_anon,
 				 unsigned long *nr_file)
 {
-	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	struct zone *zone = mz->zone;
 	unsigned int count[NR_LRU_LISTS] = { 0, };
 	unsigned long nr_active = 0;
@@ -1435,6 +1434,7 @@ update_isolated_counts(struct mem_cgroup
 		count[lru] += numpages;
 	}
 
+	preempt_disable();
 	__count_vm_events(PGDEACTIVATE, nr_active);
 
 	__mod_zone_page_state(zone, NR_ACTIVE_FILE,
@@ -1449,8 +1449,9 @@ update_isolated_counts(struct mem_cgroup
 	*nr_anon = count[LRU_ACTIVE_ANON] + count[LRU_INACTIVE_ANON];
 	*nr_file = count[LRU_ACTIVE_FILE] + count[LRU_INACTIVE_FILE];
 
-	reclaim_stat->recent_scanned[0] += *nr_anon;
-	reclaim_stat->recent_scanned[1] += *nr_file;
+	__mod_zone_page_state(zone, NR_ISOLATED_ANON, *nr_anon);
+	__mod_zone_page_state(zone, NR_ISOLATED_FILE, *nr_file);
+	preempt_enable();
 }
 
 /*
@@ -1512,6 +1513,7 @@ shrink_inactive_list(unsigned long nr_to
 	unsigned long nr_writeback = 0;
 	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
 	struct zone *zone = mz->zone;
+	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 
 	while (unlikely(too_many_isolated(zone, file, sc))) {
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -1546,19 +1548,13 @@ shrink_inactive_list(unsigned long nr_to
 			__count_zone_vm_events(PGSCAN_DIRECT, zone,
 					       nr_scanned);
 	}
+	spin_unlock_irq(&zone->lru_lock);
 
-	if (nr_taken == 0) {
-		spin_unlock_irq(&zone->lru_lock);
+	if (nr_taken == 0)
 		return 0;
-	}
 
 	update_isolated_counts(mz, &page_list, &nr_anon, &nr_file);
 
-	__mod_zone_page_state(zone, NR_ISOLATED_ANON, nr_anon);
-	__mod_zone_page_state(zone, NR_ISOLATED_FILE, nr_file);
-
-	spin_unlock_irq(&zone->lru_lock);
-
 	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
 						&nr_dirty, &nr_writeback);
 
@@ -1570,6 +1566,9 @@ shrink_inactive_list(unsigned long nr_to
 	}
 
 	spin_lock_irq(&zone->lru_lock);
+
+	reclaim_stat->recent_scanned[0] += nr_anon;
+	reclaim_stat->recent_scanned[1] += nr_file;
 
 	if (current_is_kswapd())
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
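
A note for readers following along outside the kernel tree: the reason
update_isolated_counts() may touch the stats with the lru lock dropped
is that spin_lock_irq() implicitly disables preemption, while the
double-underscore stat helpers (__count_vm_events(),
__mod_zone_page_state()) are non-atomic read-modify-writes of a per-cpu
slot. Shedding the lock therefore needs an explicit
preempt_disable()/preempt_enable() pair so the task cannot migrate to
another cpu mid-update. Below is a minimal userspace sketch of that
invariant -- illustrative only; pcpu_count[], cpu_id() and mod_counter()
are made-up stand-ins, not kernel APIs.

/*
 * Sketch: why a non-atomic per-cpu counter update needs preemption
 * disabled.  Each cpu owns one slot; the read-modify-write of that
 * slot is only safe while the task is pinned to its current cpu.
 */
#include <stdio.h>

#define NR_CPUS 4

static long pcpu_count[NR_CPUS];	/* one private slot per cpu */
static int preempt_depth;		/* stand-in for the preempt count */

static void preempt_disable(void) { preempt_depth++; }
static void preempt_enable(void)  { preempt_depth--; }

/* In the kernel this would be smp_processor_id(); fixed here. */
static int cpu_id(void) { return 0; }

/* Non-atomic per-cpu update: valid only with preemption disabled. */
static void mod_counter(long delta)
{
	pcpu_count[cpu_id()] += delta;
}

int main(void)
{
	/*
	 * Mirror of the patch: batch the counter updates under a single
	 * preempt_disable()/preempt_enable() pair instead of relying on
	 * zone->lru_lock to keep the task on one cpu.
	 */
	preempt_disable();
	mod_counter(32);	/* e.g. NR_ISOLATED_ANON += *nr_anon */
	mod_counter(16);	/* e.g. NR_ISOLATED_FILE += *nr_file */
	preempt_enable();

	long sum = 0;
	for (int i = 0; i < NR_CPUS; i++)
		sum += pcpu_count[i];
	printf("sum = %ld\n", sum);
	return 0;
}

Before the patch, the stat updates happened to be safe because they sat
inside the zone->lru_lock critical section, which pins the task anyway;
the patch keeps that guarantee with the cheaper explicit pair while the
lock itself is released earlier.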