From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 23 Jul 2007 19:48:56 -0700
From: Randy Dunlap
Subject: Re: [patch] fix hugetlb page allocation leak
Message-Id: <20070723194856.40d35666.randy.dunlap@oracle.com>
In-Reply-To: <20070723172019.376ca936.akpm@linux-foundation.org>
References: <20070723172019.376ca936.akpm@linux-foundation.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
Return-Path: 
To: Andrew Morton
Cc: Ken Chen, linux-mm@kvack.org
List-ID: 

On Mon, 23 Jul 2007 17:20:19 -0700 Andrew Morton wrote:

> On Mon, 23 Jul 2007 17:11:49 -0700
> "Ken Chen" wrote:
>
> > dequeue_huge_page() has a serious memory leak in hugetlb page
> > allocation.  The for loop continues allocating hugetlb pages out of
> > all allowable zones, even though this function is supposed to dequeue
> > one and only one page.
> >
> > Fixed it by breaking out of the for loop once a hugetlb page is found.
> >
> >
> > Signed-off-by: Ken Chen

Acked-and-tested-by: Randy Dunlap

Thanks.

> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f127940..d7ca59d 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> >  			list_del(&page->lru);
> >  			free_huge_pages--;
> >  			free_huge_pages_node[nid]--;
> > +			break;
> >  		}
> >  	}
> >  	return page;
>
> that would be due to some idiot merging untested stuff.

well, I should have reported it earlier... :(

---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org