From: Michal Nazarewicz <m.nazarewicz@samsung.com>
To: Michal Nazarewicz <mina86@mina86.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Ankita Garg <ankita@in.ibm.com>,
Daniel Walker <dwalker@codeaurora.org>,
Johan MOSSBERG <johan.xx.mossberg@stericsson.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Mel Gorman <mel@csn.ul.ie>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linux-mm@kvack.org
Subject: [PATCHv8 08/12] mm: MIGRATE_CMA migration type added
Date: Wed, 15 Dec 2010 21:34:28 +0100
Message-ID: <cf5d348eabe3feeda65a2ee93467e29a674c6690.1292443200.git.m.nazarewicz@samsung.com>
In-Reply-To: <cover.1292443200.git.m.nazarewicz@samsung.com>

The MIGRATE_CMA migration type has two main characteristics:
(i) only movable pages can be allocated from MIGRATE_CMA
pageblocks, and (ii) the page allocator will never change the
migration type of MIGRATE_CMA pageblocks.

This guarantees that a page in a MIGRATE_CMA pageblock can
always be migrated somewhere else (unless there is no memory
left in the system).
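
To illustrate the invariant, a minimal sketch (the helper below
is made up for this description; it is not part of the patch):

	/*
	 * Only movable allocations may ever be satisfied from a
	 * MIGRATE_CMA pageblock, so any allocated page found in one
	 * can later be migrated away when CMA needs the range.
	 */
	static inline bool may_use_cma_pageblock(int alloc_migratetype)
	{
		return alloc_migratetype == MIGRATE_MOVABLE;
	}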

MIGRATE_CMA is designed to be used with the Contiguous Memory
Allocator (CMA) for allocating big chunks (e.g. 10 MiB) of
physically contiguous memory.  Once a driver requests contiguous
memory, CMA migrates pages out of MIGRATE_CMA pageblocks.
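
From a driver's point of view the intended use looks roughly
like this (hypothetical function name; the actual allocator API
is added by the CMA patch earlier in this series):

	/* Ask for a 10 MiB physically contiguous buffer.  CMA
	 * migrates movable pages out of MIGRATE_CMA pageblocks
	 * until a big enough free range is available. */
	unsigned long chunk = example_cma_alloc(10 << 20);
	if (!chunk)
		return -ENOMEM;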

To minimise the number of migrations, the MIGRATE_CMA migration
type is the last type tried when the page allocator falls back
to migration types other than the one requested.
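
Concretely, with CONFIG_MIGRATE_CMA enabled the fallback list
for movable allocations becomes (see the fallbacks[] change in
mm/page_alloc.c below):

	[MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,
			      MIGRATE_CMA, MIGRATE_RESERVE },
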
Signed-off-by: Michal Nazarewicz <m.nazarewicz@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
include/linux/mmzone.h | 43 ++++++++++++++++++++----
mm/Kconfig | 14 ++++++++
mm/compaction.c | 10 +++++
mm/internal.h | 3 ++
mm/page_alloc.c | 87 +++++++++++++++++++++++++++++++++++++----------
5 files changed, 131 insertions(+), 26 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 39c24eb..cc798b1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -35,13 +35,37 @@
*/
#define PAGE_ALLOC_COSTLY_ORDER 3
-#define MIGRATE_UNMOVABLE 0
-#define MIGRATE_RECLAIMABLE 1
-#define MIGRATE_MOVABLE 2
-#define MIGRATE_PCPTYPES 3 /* the number of types on the pcp lists */
-#define MIGRATE_RESERVE 3
-#define MIGRATE_ISOLATE 4 /* can't allocate from here */
-#define MIGRATE_TYPES 5
+enum {
+ MIGRATE_UNMOVABLE,
+ MIGRATE_RECLAIMABLE,
+ MIGRATE_MOVABLE,
+ MIGRATE_PCPTYPES, /* the number of types on the pcp lists */
+ MIGRATE_RESERVE = MIGRATE_PCPTYPES,
+#ifdef CONFIG_MIGRATE_CMA
+ /*
+ * MIGRATE_CMA migration type is designed to mimic the way
+ * ZONE_MOVABLE works. Only movable pages can be allocated
+ * from MIGRATE_CMA pageblocks and the page allocator never
+ * implicitly changes the migration type of MIGRATE_CMA pageblocks.
+ *
+ * The way to use it is to change the migratetype of a range
+ * of pageblocks to MIGRATE_CMA, which can be done with the
+ * __free_pageblock_cma() function. What is important, though,
+ * is that the range of pageblocks must be aligned to
+ * MAX_ORDER_NR_PAGES should the biggest page be bigger than
+ * a single pageblock.
+ */
+ MIGRATE_CMA,
+#endif
+ MIGRATE_ISOLATE, /* can't allocate from here */
+ MIGRATE_TYPES
+};
+
+#ifdef CONFIG_MIGRATE_CMA
+# define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
+#else
+# define is_migrate_cma(migratetype) false
+#endif
#define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
@@ -54,6 +78,11 @@ static inline int get_pageblock_migratetype(struct page *page)
return get_pageblock_flags_group(page, PB_migrate, PB_migrate_end);
}
+static inline bool is_pageblock_cma(struct page *page)
+{
+ return is_migrate_cma(get_pageblock_migratetype(page));
+}
+
struct free_area {
struct list_head free_list[MIGRATE_TYPES];
unsigned long nr_free;
diff --git a/mm/Kconfig b/mm/Kconfig
index 2beab4d..32fb085 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -343,6 +343,20 @@ config CMA
For more information see <include/linux/cma.h>. If unsure, say "n".
+config MIGRATE_CMA
+ bool "Use MIGRATE_CMA migratetype"
+ depends on CMA
+ default y
+ help
+ This enables the use of the MIGRATE_CMA migrate type in CMA.
+ MIGRATE_CMA lets CMA work on an almost arbitrary memory range,
+ not only inside ZONE_MOVABLE.
+
+ This option can also be selected by code that uses MIGRATE_CMA
+ even if CMA is not present.
+
+ If unsure, say "y".
+
config CMA_DEBUG
bool "CMA debug messages (DEVELOPEMENT)"
depends on CMA
diff --git a/mm/compaction.c b/mm/compaction.c
index 4d709ee..c5e404b 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -113,6 +113,16 @@ static bool suitable_migration_target(struct page *page)
if (migratetype == MIGRATE_ISOLATE || migratetype == MIGRATE_RESERVE)
return false;
+ /* Keep MIGRATE_CMA alone as well. */
+ /*
+ * XXX Revisit. We currently cannot let compaction touch CMA
+ * pages since compaction insists on changing their migration
+ * type to MIGRATE_MOVABLE (see split_free_page() called from
+ * isolate_freepages_block() above).
+ */
+ if (is_migrate_cma(migratetype))
+ return false;
+
/* If the page is a large free page, then allow migration */
if (PageBuddy(page) && page_order(page) >= pageblock_order)
return true;
diff --git a/mm/internal.h b/mm/internal.h
index dedb0af..cc24e74 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,6 +49,9 @@ extern void putback_lru_page(struct page *page);
* in mm/page_alloc.c
*/
extern void __free_pages_bootmem(struct page *page, unsigned int order);
+#ifdef CONFIG_MIGRATE_CMA
+extern void __free_pageblock_cma(struct page *page);
+#endif
extern void prep_compound_page(struct page *page, unsigned long order);
#ifdef CONFIG_MEMORY_FAILURE
extern bool is_free_buddy_page(struct page *page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 008a6e8..e706282 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -717,6 +717,30 @@ void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
}
}
+#ifdef CONFIG_MIGRATE_CMA
+
+/*
+ * Free a whole pageblock and set its migration type to MIGRATE_CMA.
+ */
+void __init __free_pageblock_cma(struct page *page)
+{
+ struct page *p = page;
+ unsigned i = pageblock_nr_pages;
+
+ prefetchw(p);
+ do {
+ if (--i)
+ prefetchw(p + 1);
+ __ClearPageReserved(p);
+ set_page_count(p, 0);
+ } while (++p, i);
+
+ set_page_refcounted(page);
+ set_pageblock_migratetype(page, MIGRATE_CMA);
+ __free_pages(page, pageblock_order);
+}
+
+#endif
/*
* The order of subdivision here is critical for the IO subsystem.
@@ -824,11 +848,15 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
* This array describes the order lists are fallen back to when
* the free lists for the desirable migrate type are depleted
*/
-static int fallbacks[MIGRATE_TYPES][MIGRATE_TYPES-1] = {
+static int fallbacks[MIGRATE_TYPES][4] = {
[MIGRATE_UNMOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
+#ifdef CONFIG_MIGRATE_CMA
+ [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_CMA, MIGRATE_RESERVE },
+#else
[MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
- [MIGRATE_RESERVE] = { MIGRATE_RESERVE, MIGRATE_RESERVE, MIGRATE_RESERVE }, /* Never used */
+#endif
+ [MIGRATE_RESERVE] = { MIGRATE_RESERVE }, /* Never used */
};
/*
@@ -924,12 +952,12 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
/* Find the largest possible block of pages in the other list */
for (current_order = MAX_ORDER-1; current_order >= order;
--current_order) {
- for (i = 0; i < MIGRATE_TYPES - 1; i++) {
+ for (i = 0; i < ARRAY_SIZE(fallbacks[0]); i++) {
migratetype = fallbacks[start_migratetype][i];
/* MIGRATE_RESERVE handled later if necessary */
if (migratetype == MIGRATE_RESERVE)
- continue;
+ break;
area = &(zone->free_area[current_order]);
if (list_empty(&area->free_list[migratetype]))
@@ -944,19 +972,29 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
* pages to the preferred allocation list. If falling
* back for a reclaimable kernel allocation, be more
aggressive about taking ownership of free pages
+ *
+ * On the other hand, never change migration
+ * type of MIGRATE_CMA pageblocks nor move CMA
+ * pages onto different free lists. We don't
+ * want unmovable pages to be allocated from
+ * MIGRATE_CMA areas.
*/
- if (unlikely(current_order >= (pageblock_order >> 1)) ||
- start_migratetype == MIGRATE_RECLAIMABLE ||
- page_group_by_mobility_disabled) {
- unsigned long pages;
+ if (!is_pageblock_cma(page) &&
+ (unlikely(current_order >= pageblock_order / 2) ||
+ start_migratetype == MIGRATE_RECLAIMABLE ||
+ page_group_by_mobility_disabled)) {
+ int pages;
pages = move_freepages_block(zone, page,
- start_migratetype);
+ start_migratetype);
- /* Claim the whole block if over half of it is free */
+ /*
+ * Claim the whole block if over half
+ * of it is free
+ */
if (pages >= (1 << (pageblock_order-1)) ||
- page_group_by_mobility_disabled)
+ page_group_by_mobility_disabled)
set_pageblock_migratetype(page,
- start_migratetype);
+ start_migratetype);
migratetype = start_migratetype;
}
@@ -966,11 +1004,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
rmv_page_order(page);
/* Take ownership for orders >= pageblock_order */
- if (current_order >= pageblock_order)
+ if (current_order >= pageblock_order &&
+ !is_pageblock_cma(page))
change_pageblock_range(page, current_order,
start_migratetype);
- expand(zone, page, order, current_order, area, migratetype);
+ expand(zone, page, order, current_order, area,
+ is_migrate_cma(start_migratetype)
+ ? start_migratetype : migratetype);
trace_mm_page_alloc_extfrag(page, order, current_order,
start_migratetype, migratetype);
@@ -1042,7 +1083,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
list_add(&page->lru, list);
else
list_add_tail(&page->lru, list);
- set_page_private(page, migratetype);
+#ifdef CONFIG_MIGRATE_CMA
+ if (is_pageblock_cma(page))
+ set_page_private(page, MIGRATE_CMA);
+ else
+#endif
+ set_page_private(page, migratetype);
list = &page->lru;
}
__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
@@ -1178,8 +1224,8 @@ void free_hot_cold_page(struct page *page, int cold)
/*
* We only track unmovable, reclaimable and movable on pcp lists.
* Free ISOLATE pages back to the allocator because they are being
- * offlined but treat RESERVE as movable pages so we can get those
- * areas back if necessary. Otherwise, we may have to free
+ * offlined but treat RESERVE and CMA as movable pages so we can get
+ * those areas back if necessary. Otherwise, we may have to free
* excessively into the page allocator
*/
if (migratetype >= MIGRATE_PCPTYPES) {
@@ -1272,7 +1318,9 @@ int split_free_page(struct page *page)
if (order >= pageblock_order - 1) {
struct page *endpage = page + (1 << order) - 1;
for (; page < endpage; page += pageblock_nr_pages)
- set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ if (!is_pageblock_cma(page))
+ set_pageblock_migratetype(page,
+ MIGRATE_MOVABLE);
}
return 1 << order;
@@ -5309,7 +5357,8 @@ __count_immobile_pages(struct zone *zone, struct page *page, int count)
if (zone_idx(zone) == ZONE_MOVABLE)
return true;
- if (get_pageblock_migratetype(page) == MIGRATE_MOVABLE)
+ if (get_pageblock_migratetype(page) == MIGRATE_MOVABLE ||
+ is_pageblock_cma(page))
return true;
pfn = page_to_pfn(page);
--
1.7.2.3
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom policy in Canada: sign http://dissolvethecrtc.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
Thread overview: 41+ messages
2010-12-15 20:34 [PATCHv8 00/12] Contiguous Memory Allocator Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 01/12] mm: migrate.c: fix compilation error Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 02/12] lib: bitmap: Added alignment offset for bitmap_find_next_zero_area() Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 03/12] lib: genalloc: Generic allocator improvements Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 04/12] mm: move some functions from memory_hotplug.c to page_isolation.c Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 05/12] mm: alloc_contig_freed_pages() added Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 06/12] mm: alloc_contig_range() added Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 07/12] mm: cma: Contiguous Memory Allocator added Michal Nazarewicz
2011-02-02 12:43 ` Ankita Garg
2011-02-02 14:58 ` Michal Nazarewicz
2010-12-15 20:34 ` Michal Nazarewicz [this message]
2010-12-15 20:34 ` [PATCHv8 09/12] mm: MIGRATE_CMA isolation functions added Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 10/12] mm: MIGRATE_CMA support added to CMA Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 11/12] mm: cma: Test device and application added Michal Nazarewicz
2010-12-15 20:34 ` [PATCHv8 12/12] ARM: cma: Added CMA to Aquila, Goni and c210 universal boards Michal Nazarewicz
2010-12-23 9:30 ` [PATCHv8 00/12] Contiguous Memory Allocator Kyungmin Park
2010-12-23 10:06 ` Russell King - ARM Linux
2010-12-23 10:58 ` Marek Szyprowski
2010-12-23 12:19 ` Russell King - ARM Linux
2010-12-23 13:09 ` Marek Szyprowski
2010-12-23 13:44 ` Russell King - ARM Linux
2011-01-12 18:49 ` Marek Szyprowski
2011-01-12 19:04 ` Nicolas Pitre
2011-01-13 7:01 ` Marek Szyprowski
2010-12-23 13:35 ` Tomasz Fujak
2010-12-23 13:48 ` Russell King - ARM Linux
2010-12-23 14:04 ` Tomasz Fujak
2010-12-23 14:16 ` Russell King - ARM Linux
2010-12-23 14:42 ` Felipe Contreras
2010-12-23 15:02 ` Michal Nazarewicz
2010-12-23 18:04 ` David Brown
2010-12-23 13:41 ` Michal Nazarewicz
2010-12-23 13:51 ` Russell King - ARM Linux
2010-12-23 14:08 ` Tomasz Fujak
2010-12-23 14:20 ` Russell King - ARM Linux
2010-12-23 15:35 ` Tomasz Fujak
2011-01-04 23:12 ` Jamie Lokier
2011-01-04 16:23 ` Johan MOSSBERG
2011-01-04 16:59 ` Michał Nazarewicz
2011-01-04 17:19 ` Russell King - ARM Linux
2011-01-04 17:31 ` Santosh Shilimkar