From: Mel Gorman <mel@csn.ul.ie>
To: linux-mm@kvack.org
Cc: Mel Gorman <mel@csn.ul.ie>, linux-kernel@vger.kernel.org
Subject: [PATCH 6/11] Move free pages between lists on steal
Date: Wed, 1 Nov 2006 11:18:20 +0000 (GMT)
Message-ID: <20061101111820.18798.35738.sendpatchset@skynet.skynet.ie>
In-Reply-To: <20061101111620.18798.34778.sendpatchset@skynet.skynet.ie>
When a fallback occurs, free pages belonging to one allocation type end up
stored on the free lists of another. When a large block is stolen from the
other type's free lists, this patch moves all the free pages within the
surrounding MAX_ORDER-sized block over to the free lists of the allocation
type doing the stealing.
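As an illustration of the arithmetic involved, move_freepages_block() simply
masks the pfn of the stolen page down to a MAX_ORDER_NR_PAGES boundary and
walks the pages from there. Below is a minimal standalone sketch of that
rounding, assuming MAX_ORDER == 11 so that MAX_ORDER_NR_PAGES == 1024; the
pfn value is arbitrary and the program is illustration only, not kernel code:

#include <stdio.h>

#define MAX_ORDER		11
#define MAX_ORDER_NR_PAGES	(1UL << (MAX_ORDER - 1))

int main(void)
{
	unsigned long pfn = 262500;	/* arbitrary example pfn */

	/* Same masking as move_freepages_block(): round down to block start */
	unsigned long start_pfn = pfn & ~(MAX_ORDER_NR_PAGES - 1);
	unsigned long end_pfn = start_pfn + MAX_ORDER_NR_PAGES;

	/* Prints: pfn 262500 -> block [262144, 263168) */
	printf("pfn %lu -> block [%lu, %lu)\n", pfn, start_pfn, end_pfn);
	return 0;
}

Every buddy found between start_pfn and end_pfn is then moved to the free
lists of the stealing allocation type.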
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
page_alloc.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++------
1 files changed, 75 insertions(+), 9 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.19-rc4-mm1-005_drainpercpu/mm/page_alloc.c linux-2.6.19-rc4-mm1-006_movefree/mm/page_alloc.c
--- linux-2.6.19-rc4-mm1-005_drainpercpu/mm/page_alloc.c 2006-10-31 13:44:09.000000000 +0000
+++ linux-2.6.19-rc4-mm1-006_movefree/mm/page_alloc.c 2006-10-31 13:50:10.000000000 +0000
@@ -654,6 +654,62 @@ static int prep_new_page(struct page *pa
}
#ifdef CONFIG_PAGEALLOC_ANTIFRAG
+/*
+ * Move the free pages in a range to the free lists of the requested type.
+ * Note that start_page and end_page are not aligned on a MAX_ORDER_NR_PAGES
+ * boundary. If alignment is required, use move_freepages_block().
+ */
+int move_freepages(struct zone *zone,
+			struct page *start_page, struct page *end_page,
+			int rclmtype)
+{
+	struct page *page;
+	unsigned long order;
+	int blocks_moved = 0;
+
+	BUG_ON(page_zone(start_page) != page_zone(end_page));
+
+	for (page = start_page; page < end_page;) {
+		if (!PageBuddy(page)) {
+			page++;
+			continue;
+		}
+#ifdef CONFIG_HOLES_IN_ZONE
+		if (!pfn_valid(page_to_pfn(page))) {
+			page++;
+			continue;
+		}
+#endif
+
+		order = page_order(page);
+		list_del(&page->lru);
+		list_add(&page->lru,
+			&zone->free_area[order].free_list[rclmtype]);
+		page += 1 << order;
+		blocks_moved++;
+	}
+
+	return blocks_moved;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page, int rclmtype)
+{
+	unsigned long start_pfn;
+	struct page *start_page, *end_page;
+
+	start_pfn = page_to_pfn(page);
+	start_pfn = start_pfn & ~(MAX_ORDER_NR_PAGES-1);
+	start_page = pfn_to_page(start_pfn);
+	end_page = start_page + MAX_ORDER_NR_PAGES;
+
+	if (page_zone(page) != page_zone(start_page))
+		start_page = page;
+	if (page_zone(page) != page_zone(end_page))
+		return 0;
+
+	return move_freepages(zone, start_page, end_page, rclmtype);
+}
+
/* Remove an element from the buddy allocator from the fallback list */
static struct page *__rmqueue_fallback(struct zone *zone, int order,
						gfp_t gfp_flags)
@@ -661,10 +717,10 @@ static struct page *__rmqueue_fallback(s
	struct free_area * area;
	int current_order;
	struct page *page;
-	int rclmtype = gfpflags_to_rclmtype(gfp_flags);
+	int start_rclmtype = gfpflags_to_rclmtype(gfp_flags);
+	int rclmtype = !start_rclmtype;
	/* Find the largest possible block of pages in the other list */
-	rclmtype = !rclmtype;
	for (current_order = MAX_ORDER-1; current_order >= order;
						--current_order) {
		area = &(zone->free_area[current_order]);
@@ -675,24 +731,34 @@ static struct page *__rmqueue_fallback(s
					struct page, lru);
		area->nr_free--;
-		/*
-		 * If breaking a large block of pages, place the buddies
-		 * on the preferred allocation list
-		 */
-		if (unlikely(current_order >= MAX_ORDER / 2))
-			rclmtype = !rclmtype;
-
		/* Remove the page from the freelists */
		list_del(&page->lru);
		rmv_page_order(page);
		zone->free_pages -= 1UL << order;
		expand(zone, page, order, current_order, area, rclmtype);
+
+		/* Move free pages between lists if stealing a large block */
+		if (current_order > MAX_ORDER / 2)
+			move_freepages_block(zone, page, start_rclmtype);
+
		return page;
	}
	return NULL;
}
#else
+int move_freepages(struct zone *zone,
+			struct page *start_page, struct page *end_page,
+			int rclmtype)
+{
+	return 0;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page, int rclmtype)
+{
+	return 0;
+}
+
static struct page *__rmqueue_fallback(struct zone *zone, unsigned int order,
						int rclmtype)
{
--