From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 15 Jun 2009 18:41:12 +0900
From: KAMEZAWA Hiroyuki
Subject: Re: [PATCH 10/22] HWPOISON: check and isolate corrupted free pages v2
Message-Id: <20090615184112.ed8e2f03.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090615031253.715406280@intel.com>
References: <20090615024520.786814520@intel.com>
	<20090615031253.715406280@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
To: Wu Fengguang
Cc: Andrew Morton, LKML, Andi Kleen, Ingo Molnar, Mel Gorman,
	Thomas Gleixner, "H. Peter Anvin", Peter Zijlstra, Nick Piggin,
	Hugh Dickins, "riel@redhat.com", "chris.mason@oracle.com",
	"linux-mm@kvack.org"
List-ID: 

On Mon, 15 Jun 2009 10:45:30 +0800
Wu Fengguang wrote:

> From: Wu Fengguang
>
> If memory corruption hits the free buddy pages, we can safely ignore them.
> No one will access them until page allocation time, then prep_new_page()
> will automatically check and isolate PG_hwpoison page for us (for 0-order
> allocation).
>
> This patch expands prep_new_page() to check every component page in a high
> order page allocation, in order to completely stop PG_hwpoison pages from
> being recirculated.
>
> Note that the common case -- only allocating a single page -- doesn't
> do any more work than before. Allocating > order 0 does a bit more work,
> but that's relatively uncommon.
>
> This simple implementation may drop some innocent neighbor pages; hopefully
> that is not a big problem because the event should be rare enough.
>
> This patch adds some runtime costs to high order page users.
>
> [AK: Improved description]
>
> v2: Andi Kleen:
> 	Port to -mm code
> 	Move check into separate function.
> 	Don't dump stack in bad_pages for hwpoisoned pages.
>
> Signed-off-by: Wu Fengguang
> Signed-off-by: Andi Kleen
>
> ---
>  mm/page_alloc.c |   20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
>
> --- sound-2.6.orig/mm/page_alloc.c
> +++ sound-2.6/mm/page_alloc.c
> @@ -233,6 +233,12 @@ static void bad_page(struct page *page)
>  	static unsigned long nr_shown;
>  	static unsigned long nr_unshown;
>
> +	/* Don't complain about poisoned pages */
> +	if (PageHWPoison(page)) {
> +		__ClearPageBuddy(page);
> +		return;
> +	}

Hmm? Why is __ClearPageBuddy() necessary here?

Thanks,
-Kame

> +
>  	/*
>  	 * Allow a burst of 60 reports, then keep quiet for that minute;
>  	 * or allow a steady drip of one report per second.
> @@ -646,7 +652,7 @@ static inline void expand(struct zone *z
>  /*
>   * This page is about to be returned from the page allocator
>   */
> -static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
> +static inline int check_new_page(struct page *page)
>  {
>  	if (unlikely(page_mapcount(page) |
>  		(page->mapping != NULL)  |
> @@ -655,6 +661,18 @@ static int prep_new_page(struct page *pa
>  		bad_page(page);
>  		return 1;
>  	}
> +	return 0;
> +}
> +
> +static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
> +{
> +	int i;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		struct page *p = page + i;
> +		if (unlikely(check_new_page(p)))
> +			return 1;
> +	}
>
>  	set_page_private(page, 0);
>  	set_page_refcounted(page);
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org