From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 8 Apr 2010 18:18:14 +0200
From: Johannes Weiner
Subject: Re: [PATCH 56 of 67] Memory compaction core
Message-ID: <20100408161814.GC28964@cmpxchg.org>
To: Andrea Arcangeli
Cc: linux-mm@kvack.org, Andrew Morton, Marcelo Tosatti, Adam Litke,
	Avi Kivity, Izik Eidus, Hugh Dickins, Nick Piggin, Rik van Riel,
	Mel Gorman, Dave Hansen, Benjamin Herrenschmidt, Ingo Molnar,
	Mike Travis, KAMEZAWA Hiroyuki, Christoph Lameter, Chris Wright,
	bpicco@redhat.com, KOSAKI Motohiro, Balbir Singh, Arnd Bergmann,
	"Michael S. Tsirkin", Peter Zijlstra, Daisuke Nishimura, Chris Mason

Andrea,

On Thu, Apr 08, 2010 at 03:51:39AM +0200, Andrea Arcangeli wrote:
> +static unsigned long isolate_migratepages(struct zone *zone,
> +					struct compact_control *cc)
> +{
> +	unsigned long low_pfn, end_pfn;
> +	struct list_head *migratelist = &cc->migratepages;
> +
> +	/* Do not scan outside zone boundaries */
> +	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
> +
> +	/* Only scan within a pageblock boundary */
> +	end_pfn = ALIGN(low_pfn + pageblock_nr_pages, pageblock_nr_pages);
> +
> +	/* Do not cross the free scanner or scan within a memory hole */
> +	if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
> +		cc->migrate_pfn = end_pfn;
> +		return 0;
> +	}
> +
> +	/*
> +	 * Ensure that there are not too many pages isolated from the LRU
> +	 * list by either parallel reclaimers or compaction. If there are,
> +	 * delay for some time until fewer pages are isolated
> +	 */
> +	while (unlikely(too_many_isolated(zone))) {
> +		congestion_wait(BLK_RW_ASYNC, HZ/10);
> +
> +		if (fatal_signal_pending(current))
> +			return 0;
> +	}
> +
> +	/* Time to isolate some pages for migration */
> +	spin_lock_irq(&zone->lru_lock);
> +	for (; low_pfn < end_pfn; low_pfn++) {
> +		struct page *page;
> +		if (!pfn_valid_within(low_pfn))
> +			continue;
> +
> +		/* Get the page and skip if free */
> +		page = pfn_to_page(low_pfn);
> +		if (PageBuddy(page)) {

Should this be

		if (PageBuddy(page) || PageTransHuge(page)) {

> +			low_pfn += (1 << page_order(page)) - 1;
> +			continue;
> +		}

instead?

> +
> +		/* Try isolate the page */
> +		if (__isolate_lru_page(page, ISOLATE_BOTH, 0) != 0)
> +			continue;
> +
> +		/* Successfully isolated */
> +		del_page_from_lru_list(zone, page, page_lru(page));
> +		list_add(&page->lru, migratelist);
> +		mem_cgroup_del_lru(page);
> +		cc->nr_migratepages++;
> +
> +		/* Avoid isolating too much */
> +		if (cc->nr_migratepages == COMPACT_CLUSTER_MAX)
> +			break;
> +	}
> +
> +	acct_isolated(zone, cc);
> +
> +	spin_unlock_irq(&zone->lru_lock);
> +	cc->migrate_pfn = low_pfn;
> +
> +	return cc->nr_migratepages;