From: Mel Gorman <mel@csn.ul.ie>
To: linux-mm@kvack.org
Cc: Mel Gorman <mel@csn.ul.ie>,
linux-kernel@vger.kernel.org, clameter@sgi.com
Subject: [PATCH 6/11] Move free pages between lists on steal
Date: Tue, 21 Nov 2006 22:52:23 +0000 (GMT)
Message-ID: <20061121225223.11710.73233.sendpatchset@skynet.skynet.ie>
In-Reply-To: <20061121225022.11710.72178.sendpatchset@skynet.skynet.ie>

When a fallback occurs, free pages belonging to one allocation type end up
stored on the free list of another. When a large block is stolen, this patch
moves all the free pages within that block to the list of the stealing
allocation type.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
page_alloc.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++-----
1 files changed, 73 insertions(+), 7 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.19-rc5-mm2-006_drainpercpu/mm/page_alloc.c linux-2.6.19-rc5-mm2-007_movefree/mm/page_alloc.c
--- linux-2.6.19-rc5-mm2-006_drainpercpu/mm/page_alloc.c 2006-11-21 10:54:11.000000000 +0000
+++ linux-2.6.19-rc5-mm2-007_movefree/mm/page_alloc.c 2006-11-21 10:56:06.000000000 +0000
@@ -661,6 +661,62 @@ static int prep_new_page(struct page *pa
 }
 #ifdef CONFIG_PAGE_CLUSTERING
+/*
+ * Move the free pages in a range to the free lists of the requested type.
+ * Note that start_page and end_page are not aligned on a MAX_ORDER_NR_PAGES
+ * boundary. If alignment is required, use move_freepages_block()
+ */
+int move_freepages(struct zone *zone,
+			struct page *start_page, struct page *end_page,
+			int migratetype)
+{
+	struct page *page;
+	unsigned long order;
+	int blocks_moved = 0;
+
+	BUG_ON(page_zone(start_page) != page_zone(end_page));
+
+	for (page = start_page; page < end_page;) {
+#ifdef CONFIG_HOLES_IN_ZONE
+		/* Check for holes before dereferencing the struct page */
+		if (!pfn_valid(page_to_pfn(page))) {
+			page++;
+			continue;
+		}
+#endif
+		if (!PageBuddy(page)) {
+			page++;
+			continue;
+		}
+
+		order = page_order(page);
+		list_move(&page->lru,
+			  &zone->free_area[order].free_list[migratetype]);
+		page += 1 << order;
+		blocks_moved++;
+	}
+
+	return blocks_moved;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype)
+{
+	unsigned long start_pfn;
+	struct page *start_page, *end_page;
+
+	start_pfn = page_to_pfn(page);
+	start_pfn = start_pfn & ~(MAX_ORDER_NR_PAGES-1);
+	start_page = pfn_to_page(start_pfn);
+	end_page = start_page + MAX_ORDER_NR_PAGES;
+
+	/* Do not cross zone boundaries */
+	if (page_zone(page) != page_zone(start_page))
+		start_page = page;
+	if (page_zone(page) != page_zone(end_page))
+		return 0;
+
+	return move_freepages(zone, start_page, end_page, migratetype);
+}
+
/* Remove an element from the buddy allocator from the fallback list */
static struct page *__rmqueue_fallback(struct zone *zone, int order,
int start_migratetype)
@@ -681,24 +737,34 @@ static struct page *__rmqueue_fallback(s
 					struct page, lru);
 			area->nr_free--;
-			/*
-			 * If breaking a large block of pages, place the buddies
-			 * on the preferred allocation list
-			 */
-			if (unlikely(current_order >= MAX_ORDER / 2))
-				migratetype = !migratetype;
-
 			/* Remove the page from the freelists */
 			list_del(&page->lru);
 			rmv_page_order(page);
 			zone->free_pages -= 1UL << order;
 			expand(zone, page, order, current_order, area, migratetype);
+
+			/* Move free pages between lists if stealing a large block */
+			if (current_order > MAX_ORDER / 2)
+				move_freepages_block(zone, page, start_migratetype);
+
 			return page;
 		}
 	return NULL;
 }
 #else
+int move_freepages(struct zone *zone,
+			struct page *start_page, struct page *end_page,
+			int migratetype)
+{
+	return 0;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype)
+{
+	return 0;
+}
+
 static struct page *__rmqueue_fallback(struct zone *zone, unsigned int order,
 						int migratetype)
 {
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Thread overview: 34+ messages
2006-11-21 22:50 [PATCH 0/11] Avoiding fragmentation with page clustering v27 Mel Gorman
2006-11-21 22:50 ` [PATCH 1/11] Add __GFP_MOVABLE flag and update callers Mel Gorman
2006-11-21 23:30 ` Christoph Lameter
2006-11-21 23:43 ` Mel Gorman
2006-11-21 23:51 ` Dave Hansen
2006-11-22 0:44 ` Linus Torvalds
2006-11-23 16:36 ` Mel Gorman
2006-11-23 17:11 ` Linus Torvalds
2006-11-24 10:44 ` Mel Gorman
2006-11-24 19:57 ` Hugh Dickins
2006-11-24 20:13 ` Mel Gorman
2006-11-24 21:06 ` Hugh Dickins
2006-11-25 11:47 ` Mel Gorman
2006-11-25 19:01 ` Linus Torvalds
2006-11-26 0:44 ` Hugh Dickins
2006-11-27 16:32 ` Mel Gorman
2006-11-27 17:28 ` Christoph Lameter
2006-11-27 19:48 ` Add __GFP_MOVABLE for callers to flag allocations that may be migrated Mel Gorman
2006-11-24 17:59 ` [PATCH 1/11] Add __GFP_MOVABLE flag and update callers Christoph Lameter
2006-11-24 18:11 ` Linus Torvalds
2006-11-24 20:04 ` Mel Gorman
2006-11-22 2:25 ` Christoph Lameter
2006-11-23 15:00 ` Mel Gorman
2006-11-21 22:51 ` [PATCH 2/11] Split the free lists for movable and unmovable allocations Mel Gorman
2006-11-21 22:51 ` [PATCH 3/11] Choose pages from the per-cpu list based on migration type Mel Gorman
2006-11-21 22:51 ` [PATCH 4/11] Add a configure option for page clustering Mel Gorman
2006-11-21 22:52 ` [PATCH 5/11] Drain per-cpu lists when high-order allocations fail Mel Gorman
2006-11-21 22:52 ` Mel Gorman [this message]
2006-11-21 22:52 ` [PATCH 7/11] Mark short-lived and reclaimable kernel allocations Mel Gorman
2006-11-21 22:53 ` [PATCH 8/11] [DEBUG] Add statistics Mel Gorman
2006-11-21 22:53 ` [PATCH 9/11] Add a bitmap that is used to track flags affecting a block of pages Mel Gorman
2006-11-21 22:53 ` [PATCH 10/11] Remove dependency on page->flag bits Mel Gorman
2006-11-21 22:54 ` [PATCH 11/11] Use pageblock flags for page clustering Mel Gorman
-- strict thread matches above, loose matches on Subject: below --
2006-11-01 11:16 [PATCH 0/11] Avoiding fragmentation with subzone groupings v26 Mel Gorman
2006-11-01 11:18 ` [PATCH 6/11] Move free pages between lists on steal Mel Gorman