* [patch] fix hugetlb page allocation leak
@ 2007-07-24 0:11 Ken Chen
2007-07-24 0:20 ` Andrew Morton
0 siblings, 1 reply; 7+ messages in thread
From: Ken Chen @ 2007-07-24 0:11 UTC (permalink / raw)
To: Randy Dunlap, Andrew Morton; +Cc: linux-mm
dequeue_huge_page() has a serious memory leak during hugetlb page
allocation. The for loop keeps dequeuing hugetlb pages from every
allowable zone, while this function is supposed to dequeue one and
only one page.
Fix it by breaking out of the for loop once a hugetlb page is found.
Signed-off-by: Ken Chen <kenchen@google.com>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f127940..d7ca59d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
list_del(&page->lru);
free_huge_pages--;
free_huge_pages_node[nid]--;
+ break;
}
}
return page;
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 0:11 [patch] fix hugetlb page allocation leak Ken Chen
@ 2007-07-24 0:20 ` Andrew Morton
2007-07-24 2:48 ` Randy Dunlap
2007-07-24 15:44 ` Nish Aravamudan
0 siblings, 2 replies; 7+ messages in thread
From: Andrew Morton @ 2007-07-24 0:20 UTC (permalink / raw)
To: Ken Chen; +Cc: Randy Dunlap, linux-mm
On Mon, 23 Jul 2007 17:11:49 -0700
"Ken Chen" <kenchen@google.com> wrote:
> dequeue_huge_page() has a serious memory leak during hugetlb page
> allocation. The for loop keeps dequeuing hugetlb pages from every
> allowable zone, while this function is supposed to dequeue one and
> only one page.
>
> Fix it by breaking out of the for loop once a hugetlb page is found.
>
>
> Signed-off-by: Ken Chen <kenchen@google.com>
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f127940..d7ca59d 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> list_del(&page->lru);
> free_huge_pages--;
> free_huge_pages_node[nid]--;
> + break;
> }
> }
> return page;
that would be due to some idiot merging untested stuff.
Thanks.
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 0:20 ` Andrew Morton
@ 2007-07-24 2:48 ` Randy Dunlap
2007-07-24 17:13 ` Mel Gorman
2007-07-24 15:44 ` Nish Aravamudan
1 sibling, 1 reply; 7+ messages in thread
From: Randy Dunlap @ 2007-07-24 2:48 UTC (permalink / raw)
To: Andrew Morton; +Cc: Ken Chen, linux-mm
On Mon, 23 Jul 2007 17:20:19 -0700 Andrew Morton wrote:
> On Mon, 23 Jul 2007 17:11:49 -0700
> "Ken Chen" <kenchen@google.com> wrote:
>
> > dequeue_huge_page() has a serious memory leak during hugetlb page
> > allocation. The for loop keeps dequeuing hugetlb pages from every
> > allowable zone, while this function is supposed to dequeue one and
> > only one page.
> >
> > Fix it by breaking out of the for loop once a hugetlb page is found.
> >
> >
> > Signed-off-by: Ken Chen <kenchen@google.com>
Acked-and-tested-by: Randy Dunlap <randy.dunlap@oracle.com>
Thanks.
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f127940..d7ca59d 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> > list_del(&page->lru);
> > free_huge_pages--;
> > free_huge_pages_node[nid]--;
> > + break;
> > }
> > }
> > return page;
>
> that would be due to some idiot merging untested stuff.
well, I should have reported it earlier... :(
---
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 0:20 ` Andrew Morton
2007-07-24 2:48 ` Randy Dunlap
@ 2007-07-24 15:44 ` Nish Aravamudan
2007-07-24 16:51 ` Andrew Morton
1 sibling, 1 reply; 7+ messages in thread
From: Nish Aravamudan @ 2007-07-24 15:44 UTC (permalink / raw)
To: Andrew Morton; +Cc: Ken Chen, Randy Dunlap, linux-mm
On 7/23/07, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Mon, 23 Jul 2007 17:11:49 -0700
> "Ken Chen" <kenchen@google.com> wrote:
>
> > dequeue_huge_page() has a serious memory leak during hugetlb page
> > allocation. The for loop keeps dequeuing hugetlb pages from every
> > allowable zone, while this function is supposed to dequeue one and
> > only one page.
> >
> > Fix it by breaking out of the for loop once a hugetlb page is found.
> >
> >
> > Signed-off-by: Ken Chen <kenchen@google.com>
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f127940..d7ca59d 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> > list_del(&page->lru);
> > free_huge_pages--;
> > free_huge_pages_node[nid]--;
> > + break;
> > }
> > }
> > return page;
>
> that would be due to some idiot merging untested stuff.
This would be due to 3abf7afd406866a84276d3ed04f4edf6070c9cb5 right?
Now, I wrote 31a5c6e4f25704f51f9a1373f0784034306d4cf1 which I'm
assuming introduced this compile warning. But on my box, I see no such
warning. I would like to think I wouldn't have submitted a patch that
introduced the warning, even if it was trivial like that one. Which
compiler were you using, Andrew?
And if anything, I think it's a gcc bug, no? I don't see how nid could
be used if it wasn't initialized by the zone_to_nid() call. Shouldn't
this have got one of those uninitialized_var() things? I guess the
code reorder (if it had included the 'break') would be just as good,
but I'm not sure.
Thanks,
Nish
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 15:44 ` Nish Aravamudan
@ 2007-07-24 16:51 ` Andrew Morton
2007-07-24 16:57 ` Nish Aravamudan
0 siblings, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2007-07-24 16:51 UTC (permalink / raw)
To: Nish Aravamudan; +Cc: Ken Chen, Randy Dunlap, linux-mm
On Tue, 24 Jul 2007 08:44:01 -0700 "Nish Aravamudan" <nish.aravamudan@gmail.com> wrote:
> On 7/23/07, Andrew Morton <akpm@linux-foundation.org> wrote:
> > On Mon, 23 Jul 2007 17:11:49 -0700
> > "Ken Chen" <kenchen@google.com> wrote:
> >
> > > dequeue_huge_page() has a serious memory leak during hugetlb page
> > > allocation. The for loop keeps dequeuing hugetlb pages from every
> > > allowable zone, while this function is supposed to dequeue one and
> > > only one page.
> > >
> > > Fix it by breaking out of the for loop once a hugetlb page is found.
> > >
> > >
> > > Signed-off-by: Ken Chen <kenchen@google.com>
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index f127940..d7ca59d 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> > > list_del(&page->lru);
> > > free_huge_pages--;
> > > free_huge_pages_node[nid]--;
> > > + break;
> > > }
> > > }
> > > return page;
> >
> > that would be due to some idiot merging untested stuff.
>
> This would be due to 3abf7afd406866a84276d3ed04f4edf6070c9cb5 right?
yep.
> Now, I wrote 31a5c6e4f25704f51f9a1373f0784034306d4cf1 which I'm
> assuming introduced this compile warning. But on my box, I see no such
> warning. I would like to think I wouldn't have submitted a patch that
> introduced the warning, even if it was trivial like that one. Which
> compiler were you using, Andrew?
I expect it was gcc-4.1.0.
But most gcc's will get confused over that code sequence.
> And if anything, I think it's a gcc bug, no? I don't see how nid could
> be used if it wasn't initialized by the zone_to_nid() call. Shouldn't
> this have got one of those uninitialized_var() things? I guess the
> code reorder (if it had included the 'break') would be just as good,
> but I'm not sure.
Yes, gcc gets things wrong.
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 16:51 ` Andrew Morton
@ 2007-07-24 16:57 ` Nish Aravamudan
0 siblings, 0 replies; 7+ messages in thread
From: Nish Aravamudan @ 2007-07-24 16:57 UTC (permalink / raw)
To: Andrew Morton; +Cc: Ken Chen, Randy Dunlap, linux-mm
On 7/24/07, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Tue, 24 Jul 2007 08:44:01 -0700 "Nish Aravamudan" <nish.aravamudan@gmail.com> wrote:
>
> > On 7/23/07, Andrew Morton <akpm@linux-foundation.org> wrote:
> > > On Mon, 23 Jul 2007 17:11:49 -0700
> > > "Ken Chen" <kenchen@google.com> wrote:
> > >
> > > > dequeue_huge_page() has a serious memory leak during hugetlb page
> > > > allocation. The for loop keeps dequeuing hugetlb pages from every
> > > > allowable zone, while this function is supposed to dequeue one and
> > > > only one page.
> > > >
> > > > Fix it by breaking out of the for loop once a hugetlb page is found.
> > > >
> > > >
> > > > Signed-off-by: Ken Chen <kenchen@google.com>
> > > >
> > > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > > index f127940..d7ca59d 100644
> > > > --- a/mm/hugetlb.c
> > > > +++ b/mm/hugetlb.c
> > > > @@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(st
> > > > list_del(&page->lru);
> > > > free_huge_pages--;
> > > > free_huge_pages_node[nid]--;
> > > > + break;
> > > > }
> > > > }
> > > > return page;
> > >
> > > that would be due to some idiot merging untested stuff.
> >
> > This would be due to 3abf7afd406866a84276d3ed04f4edf6070c9cb5 right?
>
> yep.
>
> > Now, I wrote 31a5c6e4f25704f51f9a1373f0784034306d4cf1 which I'm
> > assuming introduced this compile warning. But on my box, I see no such
> > warning. I would like to think I wouldn't have submitted a patch that
> > introduced the warning, even if it was trivial like that one. Which
> > compiler were you using, Andrew?
>
> I expect it was gcc-4.1.0.
>
> But most gcc's will get confused over that code sequence.
Hrm, I'm using
gcc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)
I wonder why it didn't trigger here :( Oh well, I'll add some hackery
to my scripts to make it cross-test across all available gcc's.
Thanks,
Nish
* Re: [patch] fix hugetlb page allocation leak
2007-07-24 2:48 ` Randy Dunlap
@ 2007-07-24 17:13 ` Mel Gorman
0 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2007-07-24 17:13 UTC (permalink / raw)
To: Randy Dunlap; +Cc: Andrew Morton, Ken Chen, linux-mm
On (23/07/07 19:48), Randy Dunlap didst pronounce:
> On Mon, 23 Jul 2007 17:20:19 -0700 Andrew Morton wrote:
>
> > On Mon, 23 Jul 2007 17:11:49 -0700
> > "Ken Chen" <kenchen@google.com> wrote:
> >
> > > dequeue_huge_page() has a serious memory leak during hugetlb page
> > > allocation. The for loop keeps dequeuing hugetlb pages from every
> > > allowable zone, while this function is supposed to dequeue one and
> > > only one page.
> > >
> > > Fix it by breaking out of the for loop once a hugetlb page is found.
> > >
> > >
> > > Signed-off-by: Ken Chen <kenchen@google.com>
>
> Acked-and-tested-by: Randy Dunlap <randy.dunlap@oracle.com>
>
Confirmed. Before the patch, I'm seeing pages leak: the pool still
has pages after 0 is written to /proc/sys/vm/nr_hugepages. After the
patch, it seems fine.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
end of thread, other threads:[~2007-07-24 17:13 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-07-24 0:11 [patch] fix hugetlb page allocation leak Ken Chen
2007-07-24 0:20 ` Andrew Morton
2007-07-24 2:48 ` Randy Dunlap
2007-07-24 17:13 ` Mel Gorman
2007-07-24 15:44 ` Nish Aravamudan
2007-07-24 16:51 ` Andrew Morton
2007-07-24 16:57 ` Nish Aravamudan