From mboxrd@z Thu Jan 1 00:00:00 1970
From: KOSAKI Motohiro
Subject: Re: [PATCH 17/25] Do not call get_pageblock_migratetype() more than necessary
In-Reply-To: <1240266011-11140-18-git-send-email-mel@csn.ul.ie>
References: <1240266011-11140-1-git-send-email-mel@csn.ul.ie> <1240266011-11140-18-git-send-email-mel@csn.ul.ie>
Message-Id: <20090421200154.F174.A69D9226@jp.fujitsu.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Date: Tue, 21 Apr 2009 20:03:10 +0900 (JST)
Sender: owner-linux-mm@kvack.org
To: Mel Gorman
Cc: kosaki.motohiro@jp.fujitsu.com, Linux Memory Management List,
	Christoph Lameter, Nick Piggin, Linux Kernel Mailing List,
	Lin Ming, Zhang Yanmin, Peter Zijlstra, Pekka Enberg,
	Andrew Morton

> get_pageblock_migratetype() is potentially called twice for every page
> free. Once, when being freed to the pcp lists and once when being freed
> back to buddy. When freeing from the pcp lists, it is known what the
> pageblock type was at the time of free so use it rather than rechecking.
> In low memory situations under memory pressure, this might skew
> anti-fragmentation slightly but the interference is minimal and
> decisions that are fragmenting memory are being made anyway.
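
Just to check my understanding: the pcp-side hunk is not quoted in this
mail, so the sketch below is only my assumption of how the two halves fit
together, not the actual patch text.

	/*
	 * Assumed flow: look the pageblock type up once when the page is
	 * freed to the pcp list, remember it in page_private(), and reuse
	 * it when the pcp list is later drained back to buddy.
	 */

	/* pcp free path (presumably free_hot_cold_page()): look it up once */
	set_page_private(page, get_pageblock_migratetype(page));

	/* pcp drain path (free_pages_bulk(), quoted below): reuse the cached value */
	__free_one_page(page, zone, order, page_private(page));
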
> 
> Signed-off-by: Mel Gorman
> Reviewed-by: Christoph Lameter
> ---
>  mm/page_alloc.c |   16 ++++++++++------
>  1 files changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c57c602..a1ca038 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -456,16 +456,18 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
>   */
> 
>  static inline void __free_one_page(struct page *page,
> -		struct zone *zone, unsigned int order)
> +		struct zone *zone, unsigned int order,
> +		int migratetype)
>  {
>  	unsigned long page_idx;
>  	int order_size = 1 << order;
> -	int migratetype = get_pageblock_migratetype(page);
> 
>  	if (unlikely(PageCompound(page)))
>  		if (unlikely(destroy_compound_page(page, order)))
>  			return;
> 
> +	VM_BUG_ON(migratetype == -1);
> +
>  	page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
> 
>  	VM_BUG_ON(page_idx & (order_size - 1));
> @@ -534,17 +536,18 @@ static void free_pages_bulk(struct zone *zone, int count,
>  		page = list_entry(list->prev, struct page, lru);
>  		/* have to delete it as __free_one_page list manipulates */
>  		list_del(&page->lru);
> -		__free_one_page(page, zone, order);
> +		__free_one_page(page, zone, order, page_private(page));
>  	}
>  	spin_unlock(&zone->lock);

looks good.
	Reviewed-by: KOSAKI Motohiro

btw, I can't review the rest of the patches today.
I plan to do that tomorrow, sorry.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org