Message-ID: <43385605.7050406@austin.ibm.com>
Date: Mon, 26 Sep 2005 15:11:49 -0500
From: Joel Schopp
Subject: [PATCH 5/9] propagate defrag alloc types
References: <4338537E.8070603@austin.ibm.com>
In-Reply-To: <4338537E.8070603@austin.ibm.com>
To: Andrew Morton
Cc: Joel Schopp, lhms, Linux Memory Management List,
    linux-kernel@vger.kernel.org, Mel Gorman, Mike Kravetz

Now that the allocation type is available, this patch propagates it to
the functions where it will be useful.  The type is derived once in
__alloc_pages() from the __GFP_RCLM_BITS portion of the gfp mask and
then threaded down through buffered_rmqueue() and rmqueue_bulk() to
__rmqueue(), so that later patches in this series can consult it when
deciding where to satisfy an allocation.

Signed-off-by: Mel Gorman
Signed-off-by: Joel Schopp
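For illustration, the whole change is a parameter-threading pattern:
derive the type once from the gfp flags at the top of the call chain
and pass it down untouched.  Below is a minimal, self-contained
userspace sketch of that pattern; the GFP_RCLM_* constants and the
*_sketch() functions are hypothetical stand-ins for the __GFP_RCLM_*
bits introduced earlier in this series, not the kernel's definitions:

#include <stdio.h>

/* Hypothetical stand-ins for the __GFP_RCLM_* bits introduced
 * earlier in this series; the values are illustrative only. */
#define GFP_RCLM_USER	0x1
#define GFP_RCLM_KERN	0x2
#define GFP_RCLM_MASK	(GFP_RCLM_USER | GFP_RCLM_KERN)

/* Innermost level: the eventual consumer of the allocation type,
 * standing in for __rmqueue(). */
static int rmqueue_sketch(unsigned int order, int alloctype)
{
	printf("order=%u alloctype=%d\n", order, alloctype);
	return 0;
}

/* Middle level: takes alloctype only to hand it down, which is the
 * role buffered_rmqueue() and rmqueue_bulk() play in this patch. */
static int buffered_rmqueue_sketch(unsigned int order, int alloctype)
{
	return rmqueue_sketch(order, alloctype);
}

/* Top level: derive the type once from the flags, as __alloc_pages()
 * does with (gfp_mask & __GFP_RCLM_BITS), then thread it down. */
static int alloc_pages_sketch(unsigned int gfp_mask, unsigned int order)
{
	int alloctype = (gfp_mask & GFP_RCLM_MASK);

	return buffered_rmqueue_sketch(order, alloctype);
}

int main(void)
{
	return alloc_pages_sketch(GFP_RCLM_USER, 0);
}

Deriving alloctype once at the top keeps the mask extraction off the
inner levels of the hot path; the callees only ever see the
already-decoded value.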
Index: 2.6.13-joel2/mm/page_alloc.c
===================================================================
--- 2.6.13-joel2.orig/mm/page_alloc.c	2005-09-20 14:16:35.%N -0500
+++ 2.6.13-joel2/mm/page_alloc.c	2005-09-20 15:08:05.%N -0500
@@ -559,7 +559,8 @@ static inline struct page* steal_largepa
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
  */
-static struct page *__rmqueue(struct zone *zone, unsigned int order)
+static struct page *__rmqueue(struct zone *zone, unsigned int order,
+						int alloctype)
 {
 	struct free_area * area;
 	unsigned int current_order;
@@ -587,7 +588,8 @@ static struct page *__rmqueue(struct zon
  * Returns the number of new pages which were placed at *list.
  */
 static int rmqueue_bulk(struct zone *zone, unsigned int order,
-			unsigned long count, struct list_head *list)
+			unsigned long count, struct list_head *list,
+			int alloctype)
 {
 	unsigned long flags;
 	int i;
@@ -596,7 +598,7 @@ static int rmqueue_bulk(struct zon
 
 	spin_lock_irqsave(&zone->lock, flags);
 	for (i = 0; i < count; ++i) {
-		page = __rmqueue(zone, order);
+		page = __rmqueue(zone, order, alloctype);
 		if (page == NULL)
 			break;
 		allocated++;
@@ -775,7 +777,8 @@ static inline void prep_zero_page(struct
  * or two.
  */
 static struct page *
-buffered_rmqueue(struct zone *zone, int order, unsigned int __nocast gfp_flags)
+buffered_rmqueue(struct zone *zone, int order, unsigned int __nocast gfp_flags,
+						int alloctype)
 {
 	unsigned long flags;
 	struct page *page = NULL;
@@ -787,8 +790,8 @@ buffered_rmqueue(struct zone *zone, int
 	pcp = &zone_pcp(zone, get_cpu())->pcp[cold];
 	local_irq_save(flags);
 	if (pcp->count <= pcp->low)
-		pcp->count += rmqueue_bulk(zone, 0,
-					pcp->batch, &pcp->list);
+		pcp->count += rmqueue_bulk(zone, 0, pcp->batch,
+					&pcp->list, alloctype);
 	if (pcp->count) {
 		page = list_entry(pcp->list.next, struct page, lru);
 		list_del(&page->lru);
@@ -800,7 +803,7 @@ buffered_rmqueue(struct zone *zone, int
 
 	if (page == NULL) {
 		spin_lock_irqsave(&zone->lock, flags);
-		page = __rmqueue(zone, order);
+		page = __rmqueue(zone, order, alloctype);
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 
@@ -876,7 +879,9 @@ __alloc_pages(unsigned int __nocast gfp_
 	int do_retry;
 	int can_try_harder;
 	int did_some_progress;
-
+	int alloctype;
+
+	alloctype = (gfp_mask & __GFP_RCLM_BITS);
 	might_sleep_if(wait);
 
 	/*
@@ -921,7 +926,7 @@ zone_reclaim_retry:
 			}
 		}
 
-		page = buffered_rmqueue(z, order, gfp_mask);
+		page = buffered_rmqueue(z, order, gfp_mask, alloctype);
 		if (page)
 			goto got_pg;
 	}
@@ -945,7 +950,7 @@ zone_reclaim_retry:
 		if (wait && !cpuset_zone_allowed(z))
 			continue;
 
-		page = buffered_rmqueue(z, order, gfp_mask);
+		page = buffered_rmqueue(z, order, gfp_mask, alloctype);
 		if (page)
 			goto got_pg;
 	}
@@ -959,7 +964,8 @@ zone_reclaim_retry:
 		for (i = 0; (z = zones[i]) != NULL; i++) {
 			if (!cpuset_zone_allowed(z))
 				continue;
-			page = buffered_rmqueue(z, order, gfp_mask);
+			page = buffered_rmqueue(z, order, gfp_mask,
+							alloctype);
 			if (page)
 				goto got_pg;
 		}
@@ -996,7 +1002,7 @@ rebalance:
 		if (!cpuset_zone_allowed(z))
 			continue;
 
-		page = buffered_rmqueue(z, order, gfp_mask);
+		page = buffered_rmqueue(z, order, gfp_mask, alloctype);
 		if (page)
 			goto got_pg;
 	}
@@ -1015,7 +1021,7 @@ rebalance:
 		if (!cpuset_zone_allowed(z))
 			continue;
 
-		page = buffered_rmqueue(z, order, gfp_mask);
+		page = buffered_rmqueue(z, order, gfp_mask, alloctype);
 		if (page)
 			goto got_pg;
 	}