* [PATCH 1/4] Remove unnecessary check for MIGRATE_RESERVE during boot
2007-04-10 16:02 [PATCH 0/4] Updates to groupings pages by mobility patches Mel Gorman
@ 2007-04-10 16:03 ` Mel Gorman
2007-04-10 16:03 ` [PATCH 2/4] Do not group pages by mobility type on low memory systems Mel Gorman
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Mel Gorman @ 2007-04-10 16:03 UTC (permalink / raw)
To: akpm; +Cc: Mel Gorman, linux-mm
At boot time, a number of MAX_ORDER_NR_PAGES-sized blocks get marked
MIGRATE_RESERVE and the remainder get marked MIGRATE_MOVABLE. The blocks are
marked MOVABLE in memmap_init_zone() before any blocks are marked reserve.
memmap_init_zone() checks (get_pageblock_migratetype(page) != MIGRATE_RESERVE),
which is a waste of time because the reserve has not been set yet. The
oversight arose because an early version of the MIGRATE_RESERVE patch marked
blocks MIGRATE_RESERVE earlier in boot. This patch gets rid of the redundant check.
This should be considered a fix for
bias-the-location-of-pages-freed-for-min_free_kbytes-in-the-same-max_order_nr_pages-blocks.patch
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Andy Whitcroft <apw@shadowen.org>
---
page_alloc.c | 3 +--
1 files changed, 1 insertion(+), 2 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.21-rc6-mm1-clean/mm/page_alloc.c linux-2.6.21-rc6-mm1-001_remove_unnecessary_check/mm/page_alloc.c
--- linux-2.6.21-rc6-mm1-clean/mm/page_alloc.c 2007-04-09 23:26:16.000000000 +0100
+++ linux-2.6.21-rc6-mm1-001_remove_unnecessary_check/mm/page_alloc.c 2007-04-09 23:27:58.000000000 +0100
@@ -2468,8 +2468,7 @@ void __meminit memmap_init_zone(unsigned
* the start are marked MIGRATE_RESERVE by
* setup_zone_migrate_reserve()
*/
- if ((pfn & (MAX_ORDER_NR_PAGES-1)) == 0 &&
- get_pageblock_migratetype(page) != MIGRATE_RESERVE)
+ if ((pfn & (MAX_ORDER_NR_PAGES-1)))
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
INIT_LIST_HEAD(&page->lru);
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [PATCH 2/4] Do not group pages by mobility type on low memory systems
2007-04-10 16:02 [PATCH 0/4] Updates to groupings pages by mobility patches Mel Gorman
2007-04-10 16:03 ` [PATCH 1/4] Remove unnecessary check for MIGRATE_RESERVE during boot Mel Gorman
@ 2007-04-10 16:03 ` Mel Gorman
2007-04-10 16:03 ` [PATCH 3/4] Reduce the amount of time spent in the per-cpu allocator Mel Gorman
2007-04-10 16:04 ` [PATCH 4/4] Do not block align PFN when looking up the pageblock PFN Mel Gorman
3 siblings, 0 replies; 5+ messages in thread
From: Mel Gorman @ 2007-04-10 16:03 UTC (permalink / raw)
To: akpm; +Cc: Mel Gorman, linux-mm
Grouping pages by mobility can only operate successfully when there
are more MAX_ORDER_NR_PAGES areas than mobility types. When there are
insufficient areas, fallbacks cannot be avoided. This has a noticeable
performance impact on machines whose memory is small in comparison
to MAX_ORDER_NR_PAGES. For example, on IA64 with a configuration that
includes huge pages, MAX_ORDER_NR_PAGES spans 1GiB, so at least 4GiB
of RAM would be needed before grouping pages by mobility is useful. In
comparison, x86 would need 16MB.
This patch checks the size of vm_total_pages in build_all_zonelists(). If
there are not enough areas, mobility is effectively disabled by considering
all allocations to be of the same type (UNMOVABLE). This is achieved via a
__read_mostly flag.
With this patch, performance is comparable to disabling grouping pages
by mobility at compile-time on a test machine with insufficient memory.
With this patch, it is reasonable to get rid of grouping pages by mobility
as a compile-time option.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Andy Whitcroft <apw@shadowen.org>
---
page_alloc.c | 27 +++++++++++++++++++++++++--
1 files changed, 25 insertions(+), 2 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.21-rc6-mm1-001_remove_unnecessary_check/mm/page_alloc.c linux-2.6.21-rc6-mm1-002_disable_on_smallmem/mm/page_alloc.c
--- linux-2.6.21-rc6-mm1-001_remove_unnecessary_check/mm/page_alloc.c 2007-04-09 23:27:58.000000000 +0100
+++ linux-2.6.21-rc6-mm1-002_disable_on_smallmem/mm/page_alloc.c 2007-04-10 11:28:01.000000000 +0100
@@ -145,8 +145,13 @@ static unsigned long __meminitdata dma_r
#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
#ifdef CONFIG_PAGE_GROUP_BY_MOBILITY
+int page_group_by_mobility_disabled __read_mostly;
+
static inline int get_pageblock_migratetype(struct page *page)
{
+ if (unlikely(page_group_by_mobility_disabled))
+ return MIGRATE_UNMOVABLE;
+
return get_pageblock_flags_group(page, PB_migrate, PB_migrate_end);
}
@@ -160,6 +165,9 @@ static inline int allocflags_to_migratet
{
WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
+ if (unlikely(page_group_by_mobility_disabled))
+ return MIGRATE_UNMOVABLE;
+
/* Cluster high-order atomic allocations together */
if (unlikely(order > 0) &&
(!(gfp_flags & __GFP_WAIT) || in_interrupt()))
@@ -2298,8 +2306,23 @@ void __meminit build_all_zonelists(void)
/* cpuset refresh routine should be here */
}
vm_total_pages = nr_free_pagecache_pages();
- printk("Built %i zonelists. Total pages: %ld\n",
- num_online_nodes(), vm_total_pages);
+
+ /*
+ * Disable grouping by mobility if the number of pages in the
+ * system is too low to allow the mechanism to work. It would be
+ * more accurate, but expensive to check per-zone. This check is
+ * made on memory-hotadd so a system can start with mobility
+ * disabled and enable it later
+ */
+ if (vm_total_pages < (MAX_ORDER_NR_PAGES * MIGRATE_TYPES))
+ page_group_by_mobility_disabled = 1;
+ else
+ page_group_by_mobility_disabled = 0;
+
+ printk("Built %i zonelists, mobility grouping %s. Total pages: %ld\n",
+ num_online_nodes(),
+ page_group_by_mobility_disabled ? "off" : "on",
+ vm_total_pages);
}
/*
--
* [PATCH 3/4] Reduce the amount of time spent in the per-cpu allocator
2007-04-10 16:02 [PATCH 0/4] Updates to groupings pages by mobility patches Mel Gorman
2007-04-10 16:03 ` [PATCH 1/4] Remove unnecessary check for MIGRATE_RESERVE during boot Mel Gorman
2007-04-10 16:03 ` [PATCH 2/4] Do not group pages by mobility type on low memory systems Mel Gorman
@ 2007-04-10 16:03 ` Mel Gorman
2007-04-10 16:04 ` [PATCH 4/4] Do not block align PFN when looking up the pageblock PFN Mel Gorman
3 siblings, 0 replies; 5+ messages in thread
From: Mel Gorman @ 2007-04-10 16:03 UTC (permalink / raw)
To: akpm; +Cc: Mel Gorman, linux-mm
The per-cpu allocator is the most frequently entered path in the page
allocator as the majority of allocations are order-0 allocations that use it.
This patch is mainly a re-ordering to give the code a cleaner flow and
make it more human-readable.
Performance-wise, an unlikely() is added for a branch that is rarely executed,
which improves performance very slightly. A VM_BUG_ON() is removed because
when the situation does occur, it means we are simply very low on memory,
not that the VM is buggy.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Andy Whitcroft <apw@shadowen.org>
---
page_alloc.c | 22 +++++++---------------
1 files changed, 7 insertions(+), 15 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.21-rc6-mm1-002_disable_on_smallmem/mm/page_alloc.c linux-2.6.21-rc6-mm1-003_streamline_percpu/mm/page_alloc.c
--- linux-2.6.21-rc6-mm1-002_disable_on_smallmem/mm/page_alloc.c 2007-04-10 11:28:01.000000000 +0100
+++ linux-2.6.21-rc6-mm1-003_streamline_percpu/mm/page_alloc.c 2007-04-10 11:35:34.000000000 +0100
@@ -1204,33 +1204,25 @@ again:
if (unlikely(!pcp->count))
goto failed;
}
+
#ifdef CONFIG_PAGE_GROUP_BY_MOBILITY
/* Find a page of the appropriate migrate type */
- list_for_each_entry(page, &pcp->list, lru) {
- if (page_private(page) == migratetype) {
- list_del(&page->lru);
- pcp->count--;
+ list_for_each_entry(page, &pcp->list, lru)
+ if (page_private(page) == migratetype)
break;
- }
- }
- /*
- * Check if a page of the appropriate migrate type
- * was found. If not, allocate more to the pcp list
- */
- if (&page->lru == &pcp->list) {
+ /* Allocate more to the pcp list if necessary */
+ if (unlikely(&page->lru == &pcp->list)) {
pcp->count += rmqueue_bulk(zone, 0,
pcp->batch, &pcp->list, migratetype);
page = list_entry(pcp->list.next, struct page, lru);
- VM_BUG_ON(page_private(page) != migratetype);
- list_del(&page->lru);
- pcp->count--;
}
#else
page = list_entry(pcp->list.next, struct page, lru);
+#endif /* CONFIG_PAGE_GROUP_BY_MOBILITY */
+
list_del(&page->lru);
pcp->count--;
-#endif /* CONFIG_PAGE_GROUP_BY_MOBILITY */
} else {
spin_lock_irqsave(&zone->lock, flags);
page = __rmqueue(zone, order, migratetype);
--
* [PATCH 4/4] Do not block align PFN when looking up the pageblock PFN
2007-04-10 16:02 [PATCH 0/4] Updates to groupings pages by mobility patches Mel Gorman
` (2 preceding siblings ...)
2007-04-10 16:03 ` [PATCH 3/4] Reduce the amount of time spent in the per-cpu allocator Mel Gorman
@ 2007-04-10 16:04 ` Mel Gorman
3 siblings, 0 replies; 5+ messages in thread
From: Mel Gorman @ 2007-04-10 16:04 UTC (permalink / raw)
To: akpm; +Cc: Mel Gorman, linux-mm
The pageblock flags store bits representing a MAX_ORDER_NR_PAGES block of
pages. When calling get_pageblock_bitmap(), a non-aligned PFN is passed
which is then aligned to the MAX_ORDER_NR_PAGES block. This alignment
is unnecessary.
This patch should be considered a fix to the patch
add-a-bitmap-that-is-used-to-track-flags-affecting-a-block-of-pages.patch .
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Andy Whitcroft <apw@shadowen.org>
---
page_alloc.c | 4 +---
1 files changed, 1 insertion(+), 3 deletions(-)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.21-rc6-mm1-003_streamline_percpu/mm/page_alloc.c linux-2.6.21-rc6-mm1-004_noblockpfn_sparsemem/mm/page_alloc.c
--- linux-2.6.21-rc6-mm1-003_streamline_percpu/mm/page_alloc.c 2007-04-10 11:35:34.000000000 +0100
+++ linux-2.6.21-rc6-mm1-004_noblockpfn_sparsemem/mm/page_alloc.c 2007-04-10 11:37:28.000000000 +0100
@@ -4169,9 +4169,7 @@ static inline unsigned long *get_pageblo
unsigned long pfn)
{
#ifdef CONFIG_SPARSEMEM
- unsigned long blockpfn;
- blockpfn = pfn & ~(MAX_ORDER_NR_PAGES - 1);
- return __pfn_to_section(blockpfn)->pageblock_flags;
+ return __pfn_to_section(pfn)->pageblock_flags;
#else
return zone->pageblock_flags;
#endif /* CONFIG_SPARSEMEM */
--