Date: Tue, 9 Nov 2010 11:27:43 -0600 (CST)
From: Christoph Lameter
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, eric.dumazet@gmail.com
Subject: vmstat: Optimize zone counter modifications through the use of this cpu operations

this_cpu operations can be used to slightly optimize the zone counter
update functions. The changes avoid some address calculations and replace
them with the use of the percpu segment register.

If one had this_cpu_inc_return and this_cpu_dec_return then it would be
possible to optimize inc_zone_page_state and dec_zone_page_state even
further.

before:
linux-2.6$ size mm/vmstat.o
   text	   data	    bss	    dec	    hex	filename
   8914	    606	    264	   9784	   2638	mm/vmstat.o

after:
linux-2.6$ size mm/vmstat.o
   text	   data	    bss	    dec	    hex	filename
   8904	    606	    264	   9774	   262e	mm/vmstat.o

Signed-off-by: Christoph Lameter

---
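Note (not part of the patch): a hedged sketch of the further optimization
mentioned above. It assumes a hypothetical __this_cpu_inc_return() that
increments the per-cpu variable and returns the new value in a single
per-cpu operation; no such primitive exists in this tree.

/* Sketch only: relies on the assumed __this_cpu_inc_return() above. */
void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
{
	struct per_cpu_pageset * __percpu pcp = zone->pageset;
	s8 * __percpu p = pcp->vm_stat_diff + item;
	int v, t;

	/* One per-cpu read-modify-write instead of a separate inc + read */
	v = __this_cpu_inc_return(*p);
	t = __this_cpu_read(pcp->stat_threshold);
	if (unlikely(v > t)) {
		int overstep = t / 2;

		zone_page_state_add(v + overstep, zone, item);
		__this_cpu_write(*p, -overstep);
	}
}
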
 mm/vmstat.c |   56 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 24 deletions(-)

Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c	2010-11-09 11:18:22.000000000 -0600
+++ linux-2.6/mm/vmstat.c	2010-11-09 11:19:06.000000000 -0600
@@ -167,18 +167,20 @@ static void refresh_zone_stat_thresholds
 void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
				int delta)
 {
-	struct per_cpu_pageset *pcp = this_cpu_ptr(zone->pageset);
-
-	s8 *p = pcp->vm_stat_diff + item;
+	struct per_cpu_pageset * __percpu pcp = zone->pageset;
+	s8 * __percpu p = pcp->vm_stat_diff + item;
 	long x;
+	long t;
+
+	x = delta + __this_cpu_read(*p);
 
-	x = delta + *p;
+	t = __this_cpu_read(pcp->stat_threshold);
 
-	if (unlikely(x > pcp->stat_threshold || x < -pcp->stat_threshold)) {
+	if (unlikely(x > t || x < -t)) {
 		zone_page_state_add(x, zone, item);
 		x = 0;
 	}
-	*p = x;
+	__this_cpu_write(*p, x);
 }
 EXPORT_SYMBOL(__mod_zone_page_state);
 
@@ -221,16 +223,19 @@ EXPORT_SYMBOL(mod_zone_page_state);
  */
 void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 {
-	struct per_cpu_pageset *pcp = this_cpu_ptr(zone->pageset);
-	s8 *p = pcp->vm_stat_diff + item;
-
-	(*p)++;
+	struct per_cpu_pageset * __percpu pcp = zone->pageset;
+	s8 * __percpu p = pcp->vm_stat_diff + item;
+	int v, t;
+
+	__this_cpu_inc(*p);
+
+	v = __this_cpu_read(*p);
+	t = __this_cpu_read(pcp->stat_threshold);
+	if (unlikely(v > t)) {
+		int overstep = t / 2;
 
-	if (unlikely(*p > pcp->stat_threshold)) {
-		int overstep = pcp->stat_threshold / 2;
-
-		zone_page_state_add(*p + overstep, zone, item);
-		*p = -overstep;
+		zone_page_state_add(v + overstep, zone, item);
+		__this_cpu_write(*p, -overstep);
 	}
 }
 
@@ -242,16 +247,19 @@ EXPORT_SYMBOL(__inc_zone_page_state);
 
 void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 {
-	struct per_cpu_pageset *pcp = this_cpu_ptr(zone->pageset);
-	s8 *p = pcp->vm_stat_diff + item;
-
-	(*p)--;
-
-	if (unlikely(*p < - pcp->stat_threshold)) {
-		int overstep = pcp->stat_threshold / 2;
+	struct per_cpu_pageset * __percpu pcp = zone->pageset;
+	s8 * __percpu p = pcp->vm_stat_diff + item;
+	int v, t;
+
+	__this_cpu_dec(*p);
+
+	v = __this_cpu_read(*p);
+	t = __this_cpu_read(pcp->stat_threshold);
+	if (unlikely(v < - t)) {
+		int overstep = t / 2;
 
-		zone_page_state_add(*p - overstep, zone, item);
-		*p = overstep;
+		zone_page_state_add(v - overstep, zone, item);
+		__this_cpu_write(*p, overstep);
 	}
 }
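
Note (also not part of the patch): a stand-alone userspace sketch of the
threshold/overstep folding that these functions implement, for readers
unfamiliar with vm_stat_diff. All names below are local to the example,
not kernel symbols, and the threshold value is arbitrary.

#include <stdio.h>

static long global_count;		  /* stands in for the zone-wide counter  */
static signed char cpu_diff;		  /* stands in for one CPU's vm_stat_diff */
static const signed char threshold = 32; /* stands in for stat_threshold	  */

static void fake_inc(void)
{
	signed char v = ++cpu_diff;

	if (v > threshold) {
		signed char overstep = threshold / 2;

		/* Fold the per-cpu delta plus half a threshold into the
		   global counter ... */
		global_count += v + overstep;
		/* ... and restart the per-cpu delta below zero, so the next
		   fold is a full threshold plus overstep away.  This is why
		   the inc path writes -overstep and the dec path writes
		   +overstep. */
		cpu_diff = -overstep;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 1000; i++)
		fake_inc();

	/* Invariant: global_count + cpu_diff equals the number of increments. */
	printf("global=%ld diff=%d sum=%ld\n",
	       global_count, cpu_diff, global_count + (long)cpu_diff);
	return 0;
}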