From: KOSAKI Motohiro
Subject: Re: [PATCH 18/25] Do not disable interrupts in free_page_mlock()
Date: Wed, 22 Apr 2009 09:13:10 +0900 (JST)
Message-Id: <20090422091144.626B.A69D9226@jp.fujitsu.com>
In-Reply-To: <1240266011-11140-19-git-send-email-mel@csn.ul.ie>
References: <1240266011-11140-1-git-send-email-mel@csn.ul.ie>
	<1240266011-11140-19-git-send-email-mel@csn.ul.ie>
To: Mel Gorman
Cc: kosaki.motohiro@jp.fujitsu.com, Linux Memory Management List,
	Christoph Lameter, Nick Piggin, Linux Kernel Mailing List,
	Lin Ming, Zhang Yanmin, Peter Zijlstra, Pekka Enberg,
	Andrew Morton, Lee Schermerhorn

(cc to Lee)

> free_page_mlock() tests and clears PG_mlocked using locked versions of the
> bit operations. If set, it disables interrupts to update counters and this
> happens on every page free even though interrupts are disabled very shortly
> afterwards a second time. This is wasteful.
>
> This patch splits what free_page_mlock() does. The bit check is still
> made. However, the update of counters is delayed until the interrupts are
> disabled and the non-lock version for clearing the bit is used. One potential
> weirdness with this split is that the counters do not get updated if the
> bad_page() check is triggered but a system showing bad pages is getting
> screwed already.

Looks good, thanks. This is a good improvement. I hope Lee will also
ack this.
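To spell out the saving for anyone skimming, a toy user-space sketch
of the before/after control flow (the stub names are mine, not the
kernel API): the old path pays an extra irq save/restore pair on
every mlocked page free, the new one folds the accounting into the
irq-off section the free path needs anyway.

/* toy model only: irq_save()/irq_restore() stand in for
 * local_irq_save()/local_irq_restore(), update_mlock_counters()
 * for the NR_MLOCK / UNEVICTABLE_MLOCKFREED updates. */
#include <stdbool.h>
#include <stdio.h>

static void irq_save(void)    { puts("  irq off"); }
static void irq_restore(void) { puts("  irq on"); }
static void update_mlock_counters(void) { puts("  update mlock counters"); }

/* old: free_page_mlock() disables interrupts by itself, right
 * before the free path disables them a second time */
static void free_page_old(bool mlocked)
{
	if (mlocked) {
		irq_save();		/* first save/restore pair */
		update_mlock_counters();
		irq_restore();
	}
	irq_save();			/* second pair, moments later */
	puts("  free the page");
	irq_restore();
}

/* new: test PageMlocked up front, do the accounting inside the
 * irq-off section the free path already has */
static void free_page_new(bool mlocked)
{
	irq_save();			/* the only save/restore pair */
	if (mlocked)
		update_mlock_counters();
	puts("  free the page");
	irq_restore();
}

int main(void)
{
	puts("old:");
	free_page_old(true);
	puts("new:");
	free_page_new(true);
	return 0;
}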
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>

-kosaki

> Signed-off-by: Mel Gorman
> Reviewed-by: Christoph Lameter
> ---
>  mm/internal.h   |   11 +++--------
>  mm/page_alloc.c |    8 +++++++-
>  2 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 987bb03..58ec1bc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -157,14 +157,9 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
>   */
>  static inline void free_page_mlock(struct page *page)
>  {
> -	if (unlikely(TestClearPageMlocked(page))) {
> -		unsigned long flags;
> -
> -		local_irq_save(flags);
> -		__dec_zone_page_state(page, NR_MLOCK);
> -		__count_vm_event(UNEVICTABLE_MLOCKFREED);
> -		local_irq_restore(flags);
> -	}
> +	__ClearPageMlocked(page);
> +	__dec_zone_page_state(page, NR_MLOCK);
> +	__count_vm_event(UNEVICTABLE_MLOCKFREED);
>  }
>
>  #else /* CONFIG_HAVE_MLOCKED_PAGE_BIT */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a1ca038..bf4b8d9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -499,7 +499,6 @@ static inline void __free_one_page(struct page *page,
>
>  static inline int free_pages_check(struct page *page)
>  {
> -	free_page_mlock(page);
>  	if (unlikely(page_mapcount(page) |
>  		(page->mapping != NULL) |
>  		(page_count(page) != 0) |
> @@ -556,6 +555,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>  	unsigned long flags;
>  	int i;
>  	int bad = 0;
> +	int clearMlocked = PageMlocked(page);
>
>  	for (i = 0 ; i < (1 << order) ; ++i)
>  		bad += free_pages_check(page + i);
> @@ -571,6 +571,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>  	kernel_map_pages(page, 1 << order, 0);
>
>  	local_irq_save(flags);
> +	if (unlikely(clearMlocked))
> +		free_page_mlock(page);
>  	__count_vm_events(PGFREE, 1 << order);
>  	free_one_page(page_zone(page), page, order,
>  			get_pageblock_migratetype(page));
> @@ -1018,6 +1020,7 @@ static void free_hot_cold_page(struct page *page, int cold)
>  	struct zone *zone = page_zone(page);
>  	struct per_cpu_pages *pcp;
>  	unsigned long flags;
> +	int clearMlocked = PageMlocked(page);
>
>  	if (PageAnon(page))
>  		page->mapping = NULL;
> @@ -1033,7 +1036,10 @@ static void free_hot_cold_page(struct page *page, int cold)
>
>  	pcp = &zone_pcp(zone, get_cpu())->pcp;
>  	local_irq_save(flags);
> +	if (unlikely(clearMlocked))
> +		free_page_mlock(page);
>  	__count_vm_event(PGFREE);
> +
>  	if (cold)
>  		list_add_tail(&page->lru, &pcp->list);
>  	else
> --
> 1.5.6.5
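One aside on the "non-lock version" point above, in case it helps
other reviewers: a toy C11 sketch (again my own names, not the
kernel's bitops) of the difference. TestClearPageMlocked() is an
atomic read-modify-write, a locked instruction; __ClearPageMlocked()
is a plain load and store, which is fine at free time because a page
being freed can have no other users racing on its flags.

/* toy model only: a single flags word standing in for page->flags */
#include <stdatomic.h>
#include <stdio.h>

#define PG_MLOCKED (1UL << 0)

static _Atomic unsigned long flags = PG_MLOCKED;

/* like TestClearPageMlocked(): atomic RMW, returns the old bit */
static int test_and_clear_locked(void)
{
	unsigned long old = atomic_fetch_and(&flags, ~PG_MLOCKED);
	return (old & PG_MLOCKED) != 0;
}

/* like __ClearPageMlocked(): plain load and store, no atomicity,
 * safe only when nobody else can touch the flags */
static void clear_nonatomic(void)
{
	unsigned long v = atomic_load_explicit(&flags, memory_order_relaxed);
	atomic_store_explicit(&flags, v & ~PG_MLOCKED, memory_order_relaxed);
}

int main(void)
{
	printf("was mlocked: %d\n", test_and_clear_locked());
	clear_nonatomic();
	printf("flags now: %#lx\n", (unsigned long)atomic_load(&flags));
	return 0;
}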