* [PATCH v2 1/4] cma: fix counting of isolated pages
2012-09-03 15:33 [PATCH v2 0/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
@ 2012-09-03 15:33 ` Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 2/4] cma: count free CMA pages Bartlomiej Zolnierkiewicz
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2012-09-03 15:33 UTC
To: linux-mm
Cc: m.szyprowski, mina86, minchan, mgorman, kyungmin.park,
Bartlomiej Zolnierkiewicz
Isolated free pages shouldn't be accounted in the NR_FREE_PAGES counter.
Fix this by properly decreasing/increasing the NR_FREE_PAGES counter in
set_migratetype_isolate()/unset_migratetype_isolate() and by removing
the counter adjustment for isolated pages from free_one_page() and
split_free_page().
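[ A minimal user-space model of the invariant this patch establishes --
all names below are hypothetical stand-ins, not kernel code: after the
change NR_FREE_PAGES counts only non-isolated free pages, so isolating
a pageblock subtracts exactly the number of pages moved and
un-isolating adds it back. ]

#include <assert.h>

static long nr_free_pages;	/* models the zone's NR_FREE_PAGES counter */

/* models move_freepages_block(): returns the number of pages moved */
static long isolate_block(long free_in_block)
{
	nr_free_pages -= free_in_block;	/* isolated pages leave the counter */
	return free_in_block;
}

static void unisolate_block(long nr_moved)
{
	nr_free_pages += nr_moved;	/* ...and re-enter it on un-isolation */
}

int main(void)
{
	long moved;

	nr_free_pages = 1024;
	moved = isolate_block(128);
	assert(nr_free_pages == 896);	/* counter excludes isolated pages */
	unisolate_block(moved);
	assert(nr_free_pages == 1024);	/* balanced again */
	return 0;
}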
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
mm/page_alloc.c | 7 +++++--
mm/page_isolation.c | 13 ++++++++++---
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b94429e..64ccf72 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -688,7 +688,8 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
zone->pages_scanned = 0;
__free_one_page(page, zone, order, migratetype);
- __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+ if (migratetype != MIGRATE_ISOLATE)
+ __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
spin_unlock(&zone->lock);
}
@@ -1412,7 +1413,9 @@ int split_free_page(struct page *page, bool check_wmark)
list_del(&page->lru);
zone->free_area[order].nr_free--;
rmv_page_order(page);
- __mod_zone_page_state(zone, NR_FREE_PAGES, -(1UL << order));
+
+ if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
+ __mod_zone_page_state(zone, NR_FREE_PAGES, -(1UL << order));
/* Split into individual pages */
set_page_refcounted(page);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 247d1f1..d210cc8 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -76,8 +76,12 @@ int set_migratetype_isolate(struct page *page)
out:
if (!ret) {
+ unsigned long nr_pages;
+
set_pageblock_isolate(page);
- move_freepages_block(zone, page, MIGRATE_ISOLATE);
+ nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+
+ __mod_zone_page_state(zone, NR_FREE_PAGES, -nr_pages);
}
spin_unlock_irqrestore(&zone->lock, flags);
@@ -89,12 +93,15 @@ out:
void unset_migratetype_isolate(struct page *page, unsigned migratetype)
{
struct zone *zone;
- unsigned long flags;
+ unsigned long flags, nr_pages;
+
zone = page_zone(page);
+
spin_lock_irqsave(&zone->lock, flags);
if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
goto out;
- move_freepages_block(zone, page, migratetype);
+ nr_pages = move_freepages_block(zone, page, migratetype);
+ __mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
restore_pageblock_isolate(page, migratetype);
out:
spin_unlock_irqrestore(&zone->lock, flags);
--
1.7.11.3
* [PATCH v2 2/4] cma: count free CMA pages
2012-09-03 15:33 [PATCH v2 0/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 1/4] cma: fix counting of isolated pages Bartlomiej Zolnierkiewicz
@ 2012-09-03 15:33 ` Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 3/4] mm: add accounting for CMA pages and use them for watermark calculation Bartlomiej Zolnierkiewicz
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2012-09-03 15:33 UTC
To: linux-mm
Cc: m.szyprowski, mina86, minchan, mgorman, kyungmin.park,
Bartlomiej Zolnierkiewicz
Add an NR_FREE_CMA_PAGES counter to be used later for the watermark
check in __zone_watermark_ok(). For simplicity, and to avoid #ifdef
hell, make this counter always available (not only when CONFIG_CMA=y).
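[ Sketch of the pairing rule the patch applies throughout the
allocator -- hypothetical user-space stand-ins, not the kernel API:
every NR_FREE_PAGES adjustment is mirrored on NR_FREE_CMA_PAGES
whenever the pages involved have a CMA migratetype. ]

#include <assert.h>
#include <stdbool.h>

enum { NR_FREE_PAGES, NR_FREE_CMA_PAGES, NR_ITEMS };	/* illustrative */

static long vm_stat[NR_ITEMS];

/* models the paired __mod_zone_page_state() calls added by the patch */
static void mod_free_pages(long delta, bool is_cma)
{
	vm_stat[NR_FREE_PAGES] += delta;
	if (is_cma)
		vm_stat[NR_FREE_CMA_PAGES] += delta;	/* keep in sync */
}

int main(void)
{
	mod_free_pages(512, false);	/* free 512 regular pages */
	mod_free_pages(256, true);	/* free 256 CMA pages */
	mod_free_pages(-64, true);	/* allocate 64 CMA pages */
	assert(vm_stat[NR_FREE_PAGES] == 704);
	assert(vm_stat[NR_FREE_CMA_PAGES] == 192);
	return 0;
}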
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
mm/page_isolation.c | 7 +++++++
mm/vmstat.c | 1 +
4 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ca034a1..904889d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -140,6 +140,7 @@ enum zone_stat_item {
NUMA_OTHER, /* allocation from other node */
#endif
NR_ANON_TRANSPARENT_HUGEPAGES,
+ NR_FREE_CMA_PAGES,
NR_VM_ZONE_STAT_ITEMS };
/*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 64ccf72..8afae42 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -559,6 +559,9 @@ static inline void __free_one_page(struct page *page,
clear_page_guard_flag(buddy);
set_page_private(page, 0);
__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+ if (is_migrate_cma(migratetype))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ 1 << order);
} else {
list_del(&buddy->lru);
zone->free_area[order].nr_free--;
@@ -674,6 +677,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
__free_one_page(page, zone, 0, page_private(page));
trace_mm_page_pcpu_drain(page, 0, page_private(page));
+ if (is_migrate_cma(page_private(page)))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
} while (--to_free && --batch_free && !list_empty(list));
}
__mod_zone_page_state(zone, NR_FREE_PAGES, count);
@@ -688,8 +693,12 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
zone->pages_scanned = 0;
__free_one_page(page, zone, order, migratetype);
- if (migratetype != MIGRATE_ISOLATE)
+ if (migratetype != MIGRATE_ISOLATE) {
__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+ if (is_migrate_cma(migratetype))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ 1 << order);
+ }
spin_unlock(&zone->lock);
}
@@ -813,6 +822,9 @@ static inline void expand(struct zone *zone, struct page *page,
set_page_private(&page[size], high);
/* Guard pages are not available for any usage */
__mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << high));
+ if (is_migrate_cma(migratetype))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ -(1 << high));
continue;
}
#endif
@@ -1138,6 +1150,9 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
}
set_page_private(page, mt);
list = &page->lru;
+ if (is_migrate_cma(mt))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ -(1 << order));
}
__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
spin_unlock(&zone->lock);
@@ -1396,6 +1411,7 @@ int split_free_page(struct page *page, bool check_wmark)
unsigned int order;
unsigned long watermark;
struct zone *zone;
+ int mt;
BUG_ON(!PageBuddy(page));
@@ -1414,8 +1430,13 @@ int split_free_page(struct page *page, bool check_wmark)
zone->free_area[order].nr_free--;
rmv_page_order(page);
- if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
+ mt = get_pageblock_migratetype(page);
+ if (mt != MIGRATE_ISOLATE) {
__mod_zone_page_state(zone, NR_FREE_PAGES, -(1UL << order));
+ if (is_migrate_cma(mt))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ -(1UL << order));
+ }
/* Split into individual pages */
set_page_refcounted(page);
@@ -1490,6 +1511,9 @@ again:
spin_unlock(&zone->lock);
if (!page)
goto failed;
+ if (is_migrate_cma(get_pageblock_migratetype(page)))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ -(1 << order));
__mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << order));
}
@@ -2852,7 +2876,8 @@ void show_free_areas(unsigned int filter)
" unevictable:%lu"
" dirty:%lu writeback:%lu unstable:%lu\n"
" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
- " mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n",
+ " mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n"
+ " free_cma:%lu\n",
global_page_state(NR_ACTIVE_ANON),
global_page_state(NR_INACTIVE_ANON),
global_page_state(NR_ISOLATED_ANON),
@@ -2869,7 +2894,8 @@ void show_free_areas(unsigned int filter)
global_page_state(NR_FILE_MAPPED),
global_page_state(NR_SHMEM),
global_page_state(NR_PAGETABLE),
- global_page_state(NR_BOUNCE));
+ global_page_state(NR_BOUNCE),
+ global_page_state(NR_FREE_CMA_PAGES));
for_each_populated_zone(zone) {
int i;
@@ -2901,6 +2927,7 @@ void show_free_areas(unsigned int filter)
" pagetables:%lukB"
" unstable:%lukB"
" bounce:%lukB"
+ " free_cma:%lukB"
" writeback_tmp:%lukB"
" pages_scanned:%lu"
" all_unreclaimable? %s"
@@ -2930,6 +2957,7 @@ void show_free_areas(unsigned int filter)
K(zone_page_state(zone, NR_PAGETABLE)),
K(zone_page_state(zone, NR_UNSTABLE_NFS)),
K(zone_page_state(zone, NR_BOUNCE)),
+ K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
zone->pages_scanned,
(zone->all_unreclaimable ? "yes" : "no")
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index d210cc8..6ead34d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -77,11 +77,15 @@ int set_migratetype_isolate(struct page *page)
out:
if (!ret) {
unsigned long nr_pages;
+ int mt = get_pageblock_migratetype(page);
set_pageblock_isolate(page);
nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
__mod_zone_page_state(zone, NR_FREE_PAGES, -nr_pages);
+ if (is_migrate_cma(mt))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ -nr_pages);
}
spin_unlock_irqrestore(&zone->lock, flags);
@@ -102,6 +106,9 @@ void unset_migratetype_isolate(struct page *page, unsigned migratetype)
goto out;
nr_pages = move_freepages_block(zone, page, migratetype);
__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+ if (is_migrate_cma(migratetype))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
+ nr_pages);
restore_pageblock_isolate(page, migratetype);
out:
spin_unlock_irqrestore(&zone->lock, flags);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index df7a674..7c102e6 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -722,6 +722,7 @@ const char * const vmstat_text[] = {
"numa_other",
#endif
"nr_anon_transparent_hugepages",
+ "nr_free_cma",
"nr_dirty_threshold",
"nr_dirty_background_threshold",
--
1.7.11.3
* [PATCH v2 3/4] mm: add accounting for CMA pages and use them for watermark calculation
2012-09-03 15:33 [PATCH v2 0/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 1/4] cma: fix counting of isolated pages Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 2/4] cma: count free CMA pages Bartlomiej Zolnierkiewicz
@ 2012-09-03 15:33 ` Bartlomiej Zolnierkiewicz
2012-09-03 15:33 ` [PATCH v2 4/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
2012-09-03 15:54 ` [PATCH v2 0/4] " Michal Nazarewicz
4 siblings, 0 replies; 6+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2012-09-03 15:33 UTC
To: linux-mm
Cc: m.szyprowski, mina86, minchan, mgorman, kyungmin.park,
Bartlomiej Zolnierkiewicz
From: Marek Szyprowski <m.szyprowski@samsung.com>
During the watermark check we need to decrease the number of available
free pages by the number of free CMA pages, because unmovable
allocations cannot use pages from CMA areas.
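[ A worked example with made-up numbers showing why the subtraction
matters: with 10000 free pages of which 8000 sit in CMA areas and a
watermark of 3000, the old check passes even though an unmovable
allocation can really draw on only 2000 pages. ]

#include <stdbool.h>
#include <stdio.h>

/* models the new check in __zone_watermark_ok(); numbers are made up */
static bool watermark_ok(long free_pages, long free_cma_pages,
			 long min, long lowmem_reserve)
{
	/* unmovable allocations cannot use free CMA pages */
	return free_pages - free_cma_pages > min + lowmem_reserve;
}

int main(void)
{
	long free = 10000, free_cma = 8000, min = 3000, reserve = 0;

	printf("old check: %s\n",
	       free > min + reserve ? "pass (wrong for unmovable)" : "fail");
	printf("new check: %s\n",
	       watermark_ok(free, free_cma, min, reserve) ? "pass"
							  : "fail (correct)");
	return 0;
}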
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
mm/page_alloc.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afae42..115edf6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1626,7 +1626,7 @@ static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
* of the allocation.
*/
static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
- int classzone_idx, int alloc_flags, long free_pages)
+ int classzone_idx, int alloc_flags, long free_pages, long free_cma_pages)
{
/* free_pages my go negative - that's OK */
long min = mark;
@@ -1639,7 +1639,7 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
if (alloc_flags & ALLOC_HARDER)
min -= min / 4;
- if (free_pages <= min + lowmem_reserve)
+ if (free_pages - free_cma_pages <= min + lowmem_reserve)
return false;
for (o = 0; o < order; o++) {
/* At the next order, this order's pages become unavailable */
@@ -1672,13 +1672,15 @@ bool zone_watermark_ok(struct zone *z, int order, unsigned long mark,
int classzone_idx, int alloc_flags)
{
return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
- zone_page_state(z, NR_FREE_PAGES));
+ zone_page_state(z, NR_FREE_PAGES),
+ zone_page_state(z, NR_FREE_CMA_PAGES));
}
bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
int classzone_idx, int alloc_flags)
{
long free_pages = zone_page_state(z, NR_FREE_PAGES);
+ long free_cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
if (z->percpu_drift_mark && free_pages < z->percpu_drift_mark)
free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
@@ -1692,7 +1694,7 @@ bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
*/
free_pages -= nr_zone_isolate_freepages(z);
return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
- free_pages);
+ free_pages, free_cma_pages);
}
#ifdef CONFIG_NUMA
--
1.7.11.3
* [PATCH v2 4/4] cma: fix watermark checking
2012-09-03 15:33 [PATCH v2 0/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
` (2 preceding siblings ...)
2012-09-03 15:33 ` [PATCH v2 3/4] mm: add accounting for CMA pages and use them for watermark calculation Bartlomiej Zolnierkiewicz
@ 2012-09-03 15:33 ` Bartlomiej Zolnierkiewicz
2012-09-03 15:54 ` [PATCH v2 0/4] " Michal Nazarewicz
4 siblings, 0 replies; 6+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2012-09-03 15:33 UTC
To: linux-mm
Cc: m.szyprowski, mina86, minchan, mgorman, kyungmin.park,
Bartlomiej Zolnierkiewicz
Pass GFP flags to [__]zone_watermark_ok() and use them to account for
free CMA pages only when necessary (there is no need to check the
watermark against only non-CMA free pages for movable allocations).
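[ Sketch of the resulting branch, with hypothetical stand-ins for the
migratetype constants; the GFP-flags-to-migratetype mapping follows the
allocflags_to_migratetype() call added in the patch. Movable
allocations may use CMA pages, so only non-movable ones subtract the
CMA share. ]

#include <stdbool.h>

enum { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE };	/* illustrative values */

static bool enough_free(long free_pages, long free_cma_pages,
			long min_plus_reserve, int migratetype)
{
	if (migratetype == MIGRATE_MOVABLE)
		return free_pages > min_plus_reserve;	/* may use CMA pages */
	/* everyone else must ignore the CMA share */
	return free_pages - free_cma_pages > min_plus_reserve;
}

int main(void)
{
	/* same zone state: movable passes, unmovable fails */
	return (enough_free(10000, 8000, 3000, MIGRATE_MOVABLE) &&
		!enough_free(10000, 8000, 3000, MIGRATE_UNMOVABLE)) ? 0 : 1;
}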
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
include/linux/mmzone.h | 2 +-
mm/compaction.c | 11 ++++++-----
mm/page_alloc.c | 29 +++++++++++++++++++----------
mm/vmscan.c | 4 ++--
4 files changed, 28 insertions(+), 18 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 904889d..308bb91 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -725,7 +725,7 @@ extern struct mutex zonelists_mutex;
void build_all_zonelists(pg_data_t *pgdat, struct zone *zone);
void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx);
bool zone_watermark_ok(struct zone *z, int order, unsigned long mark,
- int classzone_idx, int alloc_flags);
+ int classzone_idx, int alloc_flags, gfp_t gfp_flags);
bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
int classzone_idx, int alloc_flags);
enum memmap_context {
diff --git a/mm/compaction.c b/mm/compaction.c
index 8afa6dc..48efdc3 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -626,7 +626,7 @@ static int compact_finished(struct zone *zone,
watermark = low_wmark_pages(zone);
watermark += (1 << cc->order);
- if (!zone_watermark_ok(zone, cc->order, watermark, 0, 0))
+ if (!zone_watermark_ok(zone, cc->order, watermark, 0, 0, 0))
return COMPACT_CONTINUE;
/* Direct compactor: Is a suitable page free? */
@@ -668,7 +668,7 @@ unsigned long compaction_suitable(struct zone *zone, int order)
* allocated and for a short time, the footprint is higher
*/
watermark = low_wmark_pages(zone) + (2UL << order);
- if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
+ if (!zone_watermark_ok(zone, 0, watermark, 0, 0, 0))
return COMPACT_SKIPPED;
/*
@@ -687,7 +687,7 @@ unsigned long compaction_suitable(struct zone *zone, int order)
return COMPACT_SKIPPED;
if (fragindex == -1000 && zone_watermark_ok(zone, order, watermark,
- 0, 0))
+ 0, 0, 0))
return COMPACT_PARTIAL;
return COMPACT_CONTINUE;
@@ -829,7 +829,8 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
rc = max(status, rc);
/* If a normal allocation would succeed, stop compacting */
- if (zone_watermark_ok(zone, order, low_wmark_pages(zone), 0, 0))
+ if (zone_watermark_ok(zone, order, low_wmark_pages(zone), 0, 0,
+ gfp_mask))
break;
}
@@ -860,7 +861,7 @@ static int __compact_pgdat(pg_data_t *pgdat, struct compact_control *cc)
if (cc->order > 0) {
int ok = zone_watermark_ok(zone, cc->order,
- low_wmark_pages(zone), 0, 0);
+ low_wmark_pages(zone), 0, 0, 0);
if (ok && cc->order > zone->compact_order_failed)
zone->compact_order_failed = cc->order + 1;
/* Currently async compaction is never deferred. */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 115edf6..389188a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1421,7 +1421,7 @@ int split_free_page(struct page *page, bool check_wmark)
if (check_wmark) {
/* Obey watermarks as if the page was being allocated */
watermark = low_wmark_pages(zone) + (1 << order);
- if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
+ if (!zone_watermark_ok(zone, 0, watermark, 0, 0, 0))
return 0;
}
@@ -1626,12 +1626,13 @@ static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
* of the allocation.
*/
static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
- int classzone_idx, int alloc_flags, long free_pages, long free_cma_pages)
+ int classzone_idx, int alloc_flags, long free_pages,
+ long free_cma_pages, gfp_t gfp_flags)
{
/* free_pages my go negative - that's OK */
long min = mark;
long lowmem_reserve = z->lowmem_reserve[classzone_idx];
- int o;
+ int mt, o;
free_pages -= (1 << order) - 1;
if (alloc_flags & ALLOC_HIGH)
@@ -1639,8 +1640,14 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
if (alloc_flags & ALLOC_HARDER)
min -= min / 4;
- if (free_pages - free_cma_pages <= min + lowmem_reserve)
- return false;
+ mt = allocflags_to_migratetype(gfp_flags);
+ if (mt == MIGRATE_MOVABLE) {
+ if (free_pages <= min + lowmem_reserve)
+ return false;
+ } else {
+ if (free_pages - free_cma_pages <= min + lowmem_reserve)
+ return false;
+ }
for (o = 0; o < order; o++) {
/* At the next order, this order's pages become unavailable */
free_pages -= z->free_area[o].nr_free << o;
@@ -1669,11 +1676,12 @@ static inline unsigned long nr_zone_isolate_freepages(struct zone *zone)
#endif
bool zone_watermark_ok(struct zone *z, int order, unsigned long mark,
- int classzone_idx, int alloc_flags)
+ int classzone_idx, int alloc_flags, gfp_t gfp_flags)
{
return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
zone_page_state(z, NR_FREE_PAGES),
- zone_page_state(z, NR_FREE_CMA_PAGES));
+ zone_page_state(z, NR_FREE_CMA_PAGES),
+ gfp_flags);
}
bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
@@ -1694,7 +1702,7 @@ bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
*/
free_pages -= nr_zone_isolate_freepages(z);
return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
- free_pages, free_cma_pages);
+ free_pages, free_cma_pages, 0);
}
#ifdef CONFIG_NUMA
@@ -1904,7 +1912,7 @@ zonelist_scan:
mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
if (zone_watermark_ok(zone, order, mark,
- classzone_idx, alloc_flags))
+ classzone_idx, alloc_flags, gfp_mask))
goto try_this_zone;
if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
@@ -1940,7 +1948,8 @@ zonelist_scan:
default:
/* did we reclaim enough */
if (!zone_watermark_ok(zone, order, mark,
- classzone_idx, alloc_flags))
+ classzone_idx, alloc_flags,
+ gfp_mask))
goto this_zone_full;
}
}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d01243..4a10038b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2777,14 +2777,14 @@ out:
/* Confirm the zone is balanced for order-0 */
if (!zone_watermark_ok(zone, 0,
- high_wmark_pages(zone), 0, 0)) {
+ high_wmark_pages(zone), 0, 0, 0)) {
order = sc.order = 0;
goto loop_again;
}
/* Check if the memory needs to be defragmented. */
if (zone_watermark_ok(zone, order,
- low_wmark_pages(zone), *classzone_idx, 0))
+ low_wmark_pages(zone), *classzone_idx, 0, 0))
zones_need_compaction = 0;
/* If balanced, clear the congested flag */
--
1.7.11.3
* Re: [PATCH v2 0/4] cma: fix watermark checking
2012-09-03 15:33 [PATCH v2 0/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
` (3 preceding siblings ...)
2012-09-03 15:33 ` [PATCH v2 4/4] cma: fix watermark checking Bartlomiej Zolnierkiewicz
@ 2012-09-03 15:54 ` Michal Nazarewicz
4 siblings, 0 replies; 6+ messages in thread
From: Michal Nazarewicz @ 2012-09-03 15:54 UTC
To: Bartlomiej Zolnierkiewicz, linux-mm
Cc: m.szyprowski, minchan, mgorman, kyungmin.park
Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> writes:
> Free pages belonging to Contiguous Memory Allocator (CMA) areas cannot be
> used by unmovable allocations and this fact should be accounted for while
> doing zone watermark checking. Additionally, while CMA pages are isolated
> they shouldn't be included in the total number of free pages (as they
> cannot be allocated while they are isolated). The following patch series
> should fix both issues. It is based on top of recent Minchan's CMA series
> (https://lkml.org/lkml/2012/8/14/81 "[RFC 0/2] Reduce alloc_contig_range
> latency").
>
> v2:
> - no need to call get_pageblock_migratetype() in free_one_page() in patch #1
> (thanks to review from Michal Nazarewicz)
> - fix issues pointed in http://www.spinics.net/lists/linux-mm/msg41017.html
> in patch #2 (ditto)
> - remove no longer needed is_cma_pageblock() from patch #2
I'm not an expert on watermarks, but the code of the whole patchset
looks good to me.
--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +----<email/xmpp: mpn@google.com>--------------ooO--(_)--Ooo--