[PATCH] hugetlb: Update -mm patches to fix pool resizing
From: Adam Litke @ 2007-10-04 14:38 UTC
To: Andrew Morton
Cc: linux-mm
Hi Andrew. Here is a port of my explicit resizing corner-case fix that will
apply on top of the dynamic pool resizing patches now in -mm. Thanks.
Signed-off-by: Adam Litke <agl@us.ibm.com>
From the original mainline patch notes...
> Changes in V2:
> - Removed now unnecessary check as suggested by Ken Chen
>
> When shrinking the size of the hugetlb pool via the nr_hugepages sysctl, we
> are careful to keep enough pages around to satisfy reservations. But the
> calculation is flawed for the following scenario:
>
> 	Action                         Pool Counters (Total, Free, Resv)
> 	======                         =================================
> 	Set pool to 1 page             1    1    0
> 	Map 1 page MAP_PRIVATE         1    1    0
> 	Touch the page to fault it in  1    0    0
> 	Set pool to 3 pages            3    2    0
> 	Map 2 pages MAP_SHARED         3    2    2
> 	Set pool to 2 pages            2    1    2  <-- Mistake, should be 3 2 2
> 	Touch the 2 shared pages       2    0    1  <-- Program crashes here
>
> The last touch above will terminate the process due to lack of huge pages.
>
> This patch corrects the calculation so that it factors in pages being used
> for private mappings. Andrew, this is a standalone fix suitable for
> mainline. It is also now corrected in my latest dynamic pool resizing
> patchset which I will send out soon.
>
> Signed-off-by: Adam Litke <agl@us.ibm.com>
> Acked-by: Ken Chen <kenchen@google.com>
---
hugetlb.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dabe3d6..9bec60d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -297,14 +297,14 @@ static void try_to_free_low(unsigned long count)
 	for (i = 0; i < MAX_NUMNODES; ++i) {
 		struct page *page, *next;
 		list_for_each_entry_safe(page, next, &hugepage_freelists[i], lru) {
+			if (count >= nr_huge_pages)
+				return;
 			if (PageHighMem(page))
 				continue;
 			list_del(&page->lru);
 			update_and_free_page(page);
 			free_huge_pages--;
 			free_huge_pages_node[page_to_nid(page)]--;
-			if (count >= nr_huge_pages)
-				return;
 		}
 	}
 }
@@ -344,8 +344,6 @@ static unsigned long set_max_huge_pages(unsigned long count)
 		goto out;
 	}
 
-	if (count >= persistent_huge_pages)
-		goto out;
 
 	/*
@@ -354,7 +352,8 @@ static unsigned long set_max_huge_pages(unsigned long count)
 	 * pages into surplus state as needed so the pool will shrink
 	 * to the desired size as pages become free.
 	 */
-	min_count = max(count, resv_huge_pages);
+	min_count = resv_huge_pages + nr_huge_pages - free_huge_pages;
+	min_count = max(count, min_count);
 	try_to_free_low(min_count);
 	while (min_count < persistent_huge_pages) {
 		struct page *page = dequeue_huge_page(NULL, 0);
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org