From: Bartlomiej Zolnierkiewicz
Subject: [PATCH v2 3/4] mm: add accounting for CMA pages and use them for watermark calculation
Date: Mon, 03 Sep 2012 17:33:03 +0200
Message-id: <1346686384-1866-4-git-send-email-b.zolnierkie@samsung.com>
In-reply-to: <1346686384-1866-1-git-send-email-b.zolnierkie@samsung.com>
References: <1346686384-1866-1-git-send-email-b.zolnierkie@samsung.com>
To: linux-mm@kvack.org
Cc: m.szyprowski@samsung.com, mina86@mina86.com, minchan@kernel.org, mgorman@suse.de, kyungmin.park@samsung.com, Bartlomiej Zolnierkiewicz

From: Marek Szyprowski

During the watermark check we need to subtract the number of free CMA pages from the number of free pages, because unmovable allocations cannot use pages from CMA areas.

Signed-off-by: Marek Szyprowski
Cc: Michal Nazarewicz
Cc: Minchan Kim
Cc: Mel Gorman
Signed-off-by: Bartlomiej Zolnierkiewicz
Signed-off-by: Kyungmin Park
---
 mm/page_alloc.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afae42..115edf6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1626,7 +1626,7 @@ static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
  * of the allocation.
  */
 static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
-		      int classzone_idx, int alloc_flags, long free_pages)
+		      int classzone_idx, int alloc_flags, long free_pages, long free_cma_pages)
 {
 	/* free_pages my go negative - that's OK */
 	long min = mark;
@@ -1639,7 +1639,7 @@ static bool __zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 	if (alloc_flags & ALLOC_HARDER)
 		min -= min / 4;
 
-	if (free_pages <= min + lowmem_reserve)
+	if (free_pages - free_cma_pages <= min + lowmem_reserve)
 		return false;
 	for (o = 0; o < order; o++) {
 		/* At the next order, this order's pages become unavailable */
@@ -1672,13 +1672,15 @@ bool zone_watermark_ok(struct zone *z, int order, unsigned long mark,
 		      int classzone_idx, int alloc_flags)
 {
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					zone_page_state(z, NR_FREE_PAGES));
+					zone_page_state(z, NR_FREE_PAGES),
+					zone_page_state(z, NR_FREE_CMA_PAGES));
 }
 
 bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
 		      int classzone_idx, int alloc_flags)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
+	long free_cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
 
 	if (z->percpu_drift_mark && free_pages < z->percpu_drift_mark)
 		free_pages = zone_page_state_snapshot(z, NR_FREE_PAGES);
@@ -1692,7 +1694,7 @@ bool zone_watermark_ok_safe(struct zone *z, int order, unsigned long mark,
 	 */
 	free_pages -= nr_zone_isolate_freepages(z);
 	return __zone_watermark_ok(z, order, mark, classzone_idx, alloc_flags,
-					free_pages);
+					free_pages, free_cma_pages);
 }
 
 #ifdef CONFIG_NUMA
-- 
1.7.11.3
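
For readers not familiar with the mm code, here is a minimal userspace sketch of the check this patch changes. The struct, field names, and numbers are hypothetical stand-ins for the zone counters (NR_FREE_PAGES, NR_FREE_CMA_PAGES) and the watermark/lowmem_reserve values; it only illustrates why free CMA pages must be excluded for an unmovable allocation and is not kernel code.

/*
 * Userspace sketch of the watermark check above, with free CMA pages
 * excluded as the patch does.  All types and values are hypothetical
 * stand-ins for the kernel's zone state.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_sketch {
	long free_pages;	/* stands in for NR_FREE_PAGES */
	long free_cma_pages;	/* stands in for NR_FREE_CMA_PAGES */
	long lowmem_reserve;	/* reserve for the given classzone_idx */
};

/* Order-0 part of the check: true if the zone is above 'mark'. */
static bool watermark_ok_sketch(const struct zone_sketch *z, long mark)
{
	/*
	 * CMA pages are free but only usable by movable allocations, so an
	 * unmovable request must not count them toward the watermark.
	 */
	return (z->free_pages - z->free_cma_pages) > mark + z->lowmem_reserve;
}

int main(void)
{
	/* 1000 free pages, but 900 of them sit in a CMA area. */
	struct zone_sketch z = { .free_pages = 1000, .free_cma_pages = 900,
				 .lowmem_reserve = 0 };

	/*
	 * 1000 raw free pages would pass a mark of 500, but only 100 are
	 * usable by an unmovable allocation, so the check now fails.
	 */
	printf("mark=500: %s\n",
	       watermark_ok_sketch(&z, 500) ? "ok" : "below watermark");
	return 0;
}

The per-order loop of __zone_watermark_ok() and the ALLOC_HIGH/ALLOC_HARDER adjustments to the mark are omitted to keep the sketch short.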