From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail191.messagelabs.com (mail191.messagelabs.com [216.82.242.19])
	by kanga.kvack.org (Postfix) with SMTP id AA5DA600803
	for ; Mon, 23 Aug 2010 19:17:56 -0400 (EDT)
Received: from m6.gw.fujitsu.co.jp ([10.0.50.76])
	by fgwmail7.fujitsu.co.jp (Fujitsu Gateway) with ESMTP id o7NNHrUv020803
	for (envelope-from kosaki.motohiro@jp.fujitsu.com);
	Tue, 24 Aug 2010 08:17:53 +0900
Received: from smail (m6 [127.0.0.1])
	by outgoing.m6.gw.fujitsu.co.jp (Postfix) with ESMTP id 2330B45DE4F
	for ; Tue, 24 Aug 2010 08:17:53 +0900 (JST)
Received: from s6.gw.fujitsu.co.jp (s6.gw.fujitsu.co.jp [10.0.50.96])
	by m6.gw.fujitsu.co.jp (Postfix) with ESMTP id 0073B45DE4E
	for ; Tue, 24 Aug 2010 08:17:53 +0900 (JST)
Received: from s6.gw.fujitsu.co.jp (localhost.localdomain [127.0.0.1])
	by s6.gw.fujitsu.co.jp (Postfix) with ESMTP id DEB111DB8013
	for ; Tue, 24 Aug 2010 08:17:52 +0900 (JST)
Received: from m107.s.css.fujitsu.com (m107.s.css.fujitsu.com [10.249.87.107])
	by s6.gw.fujitsu.co.jp (Postfix) with ESMTP id 99ABC1DB8012
	for ; Tue, 24 Aug 2010 08:17:52 +0900 (JST)
From: KOSAKI Motohiro
Subject: Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct
 reclaim allocation fails
In-Reply-To: <1282550442-15193-4-git-send-email-mel@csn.ul.ie>
References: <1282550442-15193-1-git-send-email-mel@csn.ul.ie>
	<1282550442-15193-4-git-send-email-mel@csn.ul.ie>
Message-Id: <20100824081531.6035.A69D9226@jp.fujitsu.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Date: Tue, 24 Aug 2010 08:17:51 +0900 (JST)
Sender: owner-linux-mm@kvack.org
To: Mel Gorman
Cc: kosaki.motohiro@jp.fujitsu.com, Andrew Morton,
	Linux Kernel List, linux-mm@kvack.org, Rik van Riel,
	Johannes Weiner, Minchan Kim, Christoph Lameter,
	KAMEZAWA Hiroyuki
List-ID:

> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page.
> If it fails and no further progress is made, it's possible the system
> will go OOM. However, on systems with large amounts of memory, it's
> possible that a significant number of pages are on per-cpu lists and
> inaccessible to the calling process. This leads to a process entering
> direct reclaim more often than it should, increasing the pressure on the
> system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing, the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman
> Reviewed-by: Minchan Kim
> Reviewed-by: KAMEZAWA Hiroyuki
> ---
>  mm/page_alloc.c |   20 ++++++++++++++++----
>  1 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bbaa959..750e1dc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	struct page *page = NULL;
>  	struct reclaim_state reclaim_state;
>  	struct task_struct *p = current;
> +	bool drained = false;
>
>  	cond_resched();
>
> @@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>
>  	cond_resched();
>
> -	if (order != 0)
> -		drain_all_pages();
> +	if (unlikely(!(*did_some_progress)))
> +		return NULL;
>
> -	if (likely(*did_some_progress))
> -		page = get_page_from_freelist(gfp_mask, nodemask, order,
> +retry:
> +	page = get_page_from_freelist(gfp_mask, nodemask, order,
>  					zonelist, high_zoneidx,
>  					alloc_flags, preferred_zone,
>  					migratetype);
> +
> +	/*
> +	 * If an allocation failed after direct reclaim, it could be because
> +	 * pages are pinned on the per-cpu lists. Drain them and try again
> +	 */
> +	if (!page && !drained) {
> +		drain_all_pages();
> +		drained = true;
> +		goto retry;
> +	}
> +
>  	return page;

I haven't read all of this patch series.
(iow, this mail is luckily on top of my mailbox now) but at least I think
this one is correct and good.
	Reviewed-by: KOSAKI Motohiro

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org