linux-mm.kvack.org archive mirror
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Greg Thelen <gthelen@google.com>, Hugh Dickins <hughd@google.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch v4 4/6] mm, compaction: embed migration mode in compact_control
Date: Wed, 7 May 2014 03:36:46 -0700 (PDT)
Message-ID: <alpine.DEB.2.02.1405070336200.16568@chino.kir.corp.google.com>
In-Reply-To: <536A030D.4070407@suse.cz>

We're going to want to manipulate the migration mode for compaction in the page
allocator, but compact_control's sync field is currently only a bool.

Today we only do MIGRATE_ASYNC or MIGRATE_SYNC_LIGHT compaction, depending on
the value of that bool.  Convert the bool to enum migrate_mode and pass the
migration mode in directly.  Later in this series, the pagefault patch will
want to avoid MIGRATE_SYNC_LIGHT for thp allocations to avoid unnecessary
latency.
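
For reference (not part of this diff), these are the three values that the new
compact_control mode field can carry, from enum migrate_mode in
include/linux/migrate_mode.h; the comments paraphrase that header's
documentation:

	enum migrate_mode {
		MIGRATE_ASYNC,		/* never block */
		MIGRATE_SYNC_LIGHT,	/* may block, but not on ->writepage */
		MIGRATE_SYNC,		/* may block when migrating pages */
	};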

This also alters manually triggered compaction, whether for the entire system
(the compact_memory sysctl) or for a single node (its sysfs compact file), to
force MIGRATE_SYNC.
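
As a side note, here is a minimal userspace sketch for exercising both knobs
(the paths are the usual /proc/sys/vm/compact_memory sysctl and the per-node
sysfs compact file; the writes require root and, with this patch, run
compaction in MIGRATE_SYNC mode):

	#include <stdio.h>

	/* Write "1" to a compaction trigger file. */
	static int poke(const char *path)
	{
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);
			return -1;
		}
		fputs("1\n", f);
		fclose(f);
		return 0;
	}

	int main(void)
	{
		/* Compact every node via the procfs sysctl. */
		poke("/proc/sys/vm/compact_memory");

		/* Compact a single node via sysfs (node 0 as an example). */
		poke("/sys/devices/system/node/node0/compact");

		return 0;
	}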

Suggested-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Rientjes <rientjes@google.com>
---
 v4 of this patch only: converted the name of the formal parameter from "sync"
                        to "mode" for try_to_compact_pages(), per Vlastimil

 include/linux/compaction.h |  4 ++--
 mm/compaction.c            | 38 ++++++++++++++++++++------------------
 mm/internal.h              |  2 +-
 mm/page_alloc.c            | 37 ++++++++++++++++---------------------
 4 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -22,7 +22,7 @@ extern int sysctl_extfrag_handler(struct ctl_table *table, int write,
 extern int fragmentation_index(struct zone *zone, unsigned int order);
 extern unsigned long try_to_compact_pages(struct zonelist *zonelist,
 			int order, gfp_t gfp_mask, nodemask_t *mask,
-			bool sync, bool *contended);
+			enum migrate_mode mode, bool *contended);
 extern void compact_pgdat(pg_data_t *pgdat, int order);
 extern void reset_isolation_suitable(pg_data_t *pgdat);
 extern unsigned long compaction_suitable(struct zone *zone, int order);
@@ -91,7 +91,7 @@ static inline bool compaction_restarting(struct zone *zone, int order)
 #else
 static inline unsigned long try_to_compact_pages(struct zonelist *zonelist,
 			int order, gfp_t gfp_mask, nodemask_t *nodemask,
-			bool sync, bool *contended)
+			enum migrate_mode mode, bool *contended)
 {
 	return COMPACT_CONTINUE;
 }
diff --git a/mm/compaction.c b/mm/compaction.c
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -161,7 +161,8 @@ static void update_pageblock_skip(struct compact_control *cc,
 			return;
 		if (pfn > zone->compact_cached_migrate_pfn[0])
 			zone->compact_cached_migrate_pfn[0] = pfn;
-		if (cc->sync && pfn > zone->compact_cached_migrate_pfn[1])
+		if (cc->mode != MIGRATE_ASYNC &&
+		    pfn > zone->compact_cached_migrate_pfn[1])
 			zone->compact_cached_migrate_pfn[1] = pfn;
 	} else {
 		if (cc->finished_update_free)
@@ -208,7 +209,7 @@ static bool compact_checklock_irqsave(spinlock_t *lock, unsigned long *flags,
 		}
 
 		/* async aborts if taking too long or contended */
-		if (!cc->sync) {
+		if (cc->mode == MIGRATE_ASYNC) {
 			cc->contended = true;
 			return false;
 		}
@@ -479,7 +480,8 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 	bool locked = false;
 	struct page *page = NULL, *valid_page = NULL;
 	bool set_unsuitable = true;
-	const isolate_mode_t mode = (!cc->sync ? ISOLATE_ASYNC_MIGRATE : 0) |
+	const isolate_mode_t mode = (cc->mode == MIGRATE_ASYNC ?
+					ISOLATE_ASYNC_MIGRATE : 0) |
 				    (unevictable ? ISOLATE_UNEVICTABLE : 0);
 
 	/*
@@ -489,7 +491,7 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 	 */
 	while (unlikely(too_many_isolated(zone))) {
 		/* async migration should just abort */
-		if (!cc->sync)
+		if (cc->mode == MIGRATE_ASYNC)
 			return 0;
 
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -554,7 +556,8 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 			 * the minimum amount of work satisfies the allocation
 			 */
 			mt = get_pageblock_migratetype(page);
-			if (!cc->sync && !migrate_async_suitable(mt)) {
+			if (cc->mode == MIGRATE_ASYNC &&
+			    !migrate_async_suitable(mt)) {
 				set_unsuitable = false;
 				goto next_pageblock;
 			}
@@ -990,6 +993,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 	int ret;
 	unsigned long start_pfn = zone->zone_start_pfn;
 	unsigned long end_pfn = zone_end_pfn(zone);
+	const bool sync = cc->mode != MIGRATE_ASYNC;
 
 	ret = compaction_suitable(zone, cc->order);
 	switch (ret) {
@@ -1015,7 +1019,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 	 * information on where the scanners should start but check that it
 	 * is initialised by ensuring the values are within zone boundaries.
 	 */
-	cc->migrate_pfn = zone->compact_cached_migrate_pfn[cc->sync];
+	cc->migrate_pfn = zone->compact_cached_migrate_pfn[sync];
 	cc->free_pfn = zone->compact_cached_free_pfn;
 	if (cc->free_pfn < start_pfn || cc->free_pfn > end_pfn) {
 		cc->free_pfn = end_pfn & ~(pageblock_nr_pages-1);
@@ -1049,8 +1053,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 
 		nr_migrate = cc->nr_migratepages;
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
-				compaction_free, (unsigned long)cc,
-				cc->sync ? MIGRATE_SYNC_LIGHT : MIGRATE_ASYNC,
+				compaction_free, (unsigned long)cc, cc->mode,
 				MR_COMPACTION);
 		update_nr_listpages(cc);
 		nr_remaining = cc->nr_migratepages;
@@ -1083,9 +1086,8 @@ out:
 	return ret;
 }
 
-static unsigned long compact_zone_order(struct zone *zone,
-				 int order, gfp_t gfp_mask,
-				 bool sync, bool *contended)
+static unsigned long compact_zone_order(struct zone *zone, int order,
+		gfp_t gfp_mask, enum migrate_mode mode, bool *contended)
 {
 	unsigned long ret;
 	struct compact_control cc = {
@@ -1094,7 +1096,7 @@ static unsigned long compact_zone_order(struct zone *zone,
 		.order = order,
 		.migratetype = allocflags_to_migratetype(gfp_mask),
 		.zone = zone,
-		.sync = sync,
+		.mode = mode,
 	};
 	INIT_LIST_HEAD(&cc.freepages);
 	INIT_LIST_HEAD(&cc.migratepages);
@@ -1116,7 +1118,7 @@ int sysctl_extfrag_threshold = 500;
  * @order: The order of the current allocation
  * @gfp_mask: The GFP mask of the current allocation
  * @nodemask: The allowed nodes to allocate from
- * @sync: Whether migration is synchronous or not
+ * @mode: The migration mode for async, sync light, or sync migration
  * @contended: Return value that is true if compaction was aborted due to lock contention
  * @page: Optionally capture a free page of the requested order during compaction
  *
@@ -1124,7 +1126,7 @@ int sysctl_extfrag_threshold = 500;
  */
 unsigned long try_to_compact_pages(struct zonelist *zonelist,
 			int order, gfp_t gfp_mask, nodemask_t *nodemask,
-			bool sync, bool *contended)
+			enum migrate_mode mode, bool *contended)
 {
 	enum zone_type high_zoneidx = gfp_zone(gfp_mask);
 	int may_enter_fs = gfp_mask & __GFP_FS;
@@ -1149,7 +1151,7 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
 								nodemask) {
 		int status;
 
-		status = compact_zone_order(zone, order, gfp_mask, sync,
+		status = compact_zone_order(zone, order, gfp_mask, mode,
 						contended);
 		rc = max(status, rc);
 
@@ -1189,7 +1191,7 @@ static void __compact_pgdat(pg_data_t *pgdat, struct compact_control *cc)
 						low_wmark_pages(zone), 0, 0))
 				compaction_defer_reset(zone, cc->order, false);
 			/* Currently async compaction is never deferred. */
-			else if (cc->sync)
+			else if (cc->mode != MIGRATE_ASYNC)
 				defer_compaction(zone, cc->order);
 		}
 
@@ -1202,7 +1204,7 @@ void compact_pgdat(pg_data_t *pgdat, int order)
 {
 	struct compact_control cc = {
 		.order = order,
-		.sync = false,
+		.mode = MIGRATE_ASYNC,
 	};
 
 	if (!order)
@@ -1215,7 +1217,7 @@ static void compact_node(int nid)
 {
 	struct compact_control cc = {
 		.order = -1,
-		.sync = true,
+		.mode = MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 	};
 
diff --git a/mm/internal.h b/mm/internal.h
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -134,7 +134,7 @@ struct compact_control {
 	unsigned long nr_migratepages;	/* Number of pages to migrate */
 	unsigned long free_pfn;		/* isolate_freepages search base */
 	unsigned long migrate_pfn;	/* isolate_migratepages search base */
-	bool sync;			/* Synchronous migration */
+	enum migrate_mode mode;		/* Async or sync migration mode */
 	bool ignore_skip_hint;		/* Scan blocks even if marked skip */
 	bool finished_update_free;	/* True when the zone cached pfns are
 					 * no longer being updated
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2226,7 +2226,7 @@ static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, bool sync_migration,
+	int migratetype, enum migrate_mode mode,
 	bool *contended_compaction, bool *deferred_compaction,
 	unsigned long *did_some_progress)
 {
@@ -2240,7 +2240,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
 	current->flags |= PF_MEMALLOC;
 	*did_some_progress = try_to_compact_pages(zonelist, order, gfp_mask,
-						nodemask, sync_migration,
+						nodemask, mode,
 						contended_compaction);
 	current->flags &= ~PF_MEMALLOC;
 
@@ -2273,7 +2273,7 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 		 * As async compaction considers a subset of pageblocks, only
 		 * defer if the failure was a sync compaction failure.
 		 */
-		if (sync_migration)
+		if (mode != MIGRATE_ASYNC)
 			defer_compaction(preferred_zone, order);
 
 		cond_resched();
@@ -2286,9 +2286,8 @@ static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, bool sync_migration,
-	bool *contended_compaction, bool *deferred_compaction,
-	unsigned long *did_some_progress)
+	int migratetype, enum migrate_mode mode, bool *contended_compaction,
+	bool *deferred_compaction, unsigned long *did_some_progress)
 {
 	return NULL;
 }
@@ -2483,7 +2482,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	int alloc_flags;
 	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
-	bool sync_migration = false;
+	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	bool contended_compaction = false;
 
@@ -2577,17 +2576,15 @@ rebalance:
 	 * Try direct compaction. The first pass is asynchronous. Subsequent
 	 * attempts after direct reclaim are synchronous
 	 */
-	page = __alloc_pages_direct_compact(gfp_mask, order,
-					zonelist, high_zoneidx,
-					nodemask,
-					alloc_flags, preferred_zone,
-					migratetype, sync_migration,
-					&contended_compaction,
+	page = __alloc_pages_direct_compact(gfp_mask, order, zonelist,
+					high_zoneidx, nodemask, alloc_flags,
+					preferred_zone, migratetype,
+					migration_mode, &contended_compaction,
 					&deferred_compaction,
 					&did_some_progress);
 	if (page)
 		goto got_pg;
-	sync_migration = true;
+	migration_mode = MIGRATE_SYNC_LIGHT;
 
 	/*
 	 * If compaction is deferred for high-order allocations, it is because
@@ -2662,12 +2659,10 @@ rebalance:
 		 * direct reclaim and reclaim/compaction depends on compaction
 		 * being called after reclaim so call directly if necessary
 		 */
-		page = __alloc_pages_direct_compact(gfp_mask, order,
-					zonelist, high_zoneidx,
-					nodemask,
-					alloc_flags, preferred_zone,
-					migratetype, sync_migration,
-					&contended_compaction,
+		page = __alloc_pages_direct_compact(gfp_mask, order, zonelist,
+					high_zoneidx, nodemask, alloc_flags,
+					preferred_zone, migratetype,
+					migration_mode, &contended_compaction,
 					&deferred_compaction,
 					&did_some_progress);
 		if (page)
@@ -6254,7 +6249,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.sync = true,
+		.mode = MIGRATE_SYNC_LIGHT,
 		.ignore_skip_hint = true,
 	};
 	INIT_LIST_HEAD(&cc.migratepages);
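
To summarize the interface change, a condensed sketch of what a caller looks
like after this patch; compact_for_alloc() and can_sleep are made-up names for
illustration, the real callers being __alloc_pages_direct_compact() and friends
above:

	static unsigned long compact_for_alloc(struct zonelist *zonelist,
			int order, gfp_t gfp_mask, nodemask_t *nodemask,
			bool can_sleep)
	{
		bool contended = false;
		/* The migration mode is now chosen explicitly by the caller. */
		enum migrate_mode mode = can_sleep ? MIGRATE_SYNC_LIGHT
						   : MIGRATE_ASYNC;

		return try_to_compact_pages(zonelist, order, gfp_mask,
					    nodemask, mode, &contended);
	}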


Thread overview: 135+ messages
2014-05-01  0:45 [patch 1/2] mm, migration: add destination page freeing callback David Rientjes
2014-05-01  0:45 ` [patch 2/2] mm, compaction: return failed migration target pages back to freelist David Rientjes
2014-05-01  5:10   ` Naoya Horiguchi
2014-05-01 21:02     ` David Rientjes
2014-05-01  5:08 ` [patch 1/2] mm, migration: add destination page freeing callback Naoya Horiguchi
     [not found] ` <5361d71e.236ec20a.1b3d.ffffc8aeSMTPIN_ADDED_BROKEN@mx.google.com>
2014-05-01 21:02   ` David Rientjes
2014-05-01 21:35 ` [patch v2 1/4] " David Rientjes
2014-05-01 21:35   ` [patch v2 2/4] mm, compaction: return failed migration target pages back to freelist David Rientjes
2014-05-02 10:11     ` Mel Gorman
2014-05-02 15:23     ` Vlastimil Babka
2014-05-02 15:26       ` [PATCH] mm/compaction: do not count migratepages when unnecessary Vlastimil Babka
2014-05-06 21:18         ` Naoya Horiguchi
     [not found]         ` <1399411134-k43fsr0p@n-horiguchi@ah.jp.nec.com>
2014-05-07  9:33           ` Vlastimil Babka
2014-05-02 15:27       ` [PATCH 2/2] mm/compaction: avoid rescanning pageblocks in isolate_freepages Vlastimil Babka
2014-05-06 22:19         ` Naoya Horiguchi
     [not found]         ` <1399414778-xakujfb3@n-horiguchi@ah.jp.nec.com>
2014-05-07  9:22           ` Vlastimil Babka
2014-05-02 15:29       ` [PATCH 1/2] mm/compaction: do not count migratepages when unnecessary Vlastimil Babka
2014-05-01 21:35   ` [patch v2 3/4] mm, compaction: add per-zone migration pfn cache for async compaction David Rientjes
2014-05-05  9:34     ` Vlastimil Babka
2014-05-05  9:51       ` David Rientjes
2014-05-05 14:24         ` Vlastimil Babka
2014-05-06  0:29           ` David Rientjes
2014-05-06 11:52             ` Vlastimil Babka
2014-05-01 21:35   ` [patch v2 4/4] mm, thp: do not perform sync compaction on pagefault David Rientjes
2014-05-02 10:22     ` Mel Gorman
2014-05-02 11:22       ` David Rientjes
2014-05-02 11:58         ` Mel Gorman
2014-05-02 20:29           ` David Rientjes
2014-05-05 14:48             ` Vlastimil Babka
2014-05-06  8:55             ` Mel Gorman
2014-05-06 15:05               ` Vlastimil Babka
2014-05-02 10:10   ` [patch v2 1/4] mm, migration: add destination page freeing callback Mel Gorman
2014-05-07  2:22   ` [patch v3 1/6] " David Rientjes
2014-05-07  2:22     ` [patch v3 2/6] mm, compaction: return failed migration target pages back to freelist David Rientjes
2014-05-07 14:14       ` Naoya Horiguchi
2014-05-07 21:15       ` Andrew Morton
2014-05-07 21:21         ` David Rientjes
2014-05-12  8:35           ` Vlastimil Babka
2014-05-07 21:39         ` Greg Thelen
2014-05-12  8:37           ` Vlastimil Babka
2014-05-07  2:22     ` [patch v3 3/6] mm, compaction: add per-zone migration pfn cache for async compaction David Rientjes
2014-05-07  9:34       ` Vlastimil Babka
2014-05-07 20:56       ` Naoya Horiguchi
2014-05-07  2:22     ` [patch v3 4/6] mm, compaction: embed migration mode in compact_control David Rientjes
2014-05-07  9:55       ` Vlastimil Babka
2014-05-07 10:36         ` David Rientjes [this message]
2014-05-09 22:03           ` [patch v4 " Andrew Morton
2014-05-07  2:22     ` [patch v3 5/6] mm, thp: avoid excessive compaction latency during fault David Rientjes
2014-05-07  9:39       ` Mel Gorman
2014-05-08  5:30       ` [patch -mm] mm, thp: avoid excessive compaction latency during fault fix David Rientjes
2014-05-13 10:00         ` Vlastimil Babka
2014-05-22  2:49           ` David Rientjes
2014-05-22  8:43             ` Vlastimil Babka
2014-05-07  2:22     ` [patch v3 6/6] mm, compaction: terminate async compaction when rescheduling David Rientjes
2014-05-07  9:41       ` Mel Gorman
2014-05-07 12:09       ` [PATCH v2 1/2] mm/compaction: do not count migratepages when unnecessary Vlastimil Babka
2014-05-07 12:09         ` [PATCH v2 2/2] mm/compaction: avoid rescanning pageblocks in isolate_freepages Vlastimil Babka
2014-05-07 21:47           ` David Rientjes
2014-05-07 22:06           ` Naoya Horiguchi
2014-05-08  5:28           ` Joonsoo Kim
2014-05-12  9:09             ` Vlastimil Babka
2014-05-13  1:15               ` Joonsoo Kim
2014-05-09 15:49           ` Michal Nazarewicz
2014-05-19 10:14           ` Vlastimil Babka
2014-05-22  2:51             ` David Rientjes
2014-05-07 21:44         ` [PATCH v2 1/2] mm/compaction: do not count migratepages when unnecessary David Rientjes
2014-05-09 15:48         ` Michal Nazarewicz
2014-05-12  9:51           ` Vlastimil Babka
2014-05-07 12:10       ` [patch v3 6/6] mm, compaction: terminate async compaction when rescheduling Vlastimil Babka
2014-05-07 21:20       ` Andrew Morton
2014-05-07 21:28         ` David Rientjes
2014-05-08  5:17       ` Joonsoo Kim
2014-05-12 14:15         ` [PATCH] mm, compaction: properly signal and act upon lock and need_sched() contention Vlastimil Babka
2014-05-12 15:34           ` Naoya Horiguchi
     [not found]           ` <1399908847-ouuxeneo@n-horiguchi@ah.jp.nec.com>
2014-05-12 15:45             ` Vlastimil Babka
2014-05-12 15:53               ` Naoya Horiguchi
2014-05-12 20:28           ` David Rientjes
2014-05-13  8:50             ` Vlastimil Babka
2014-05-13  0:44           ` Joonsoo Kim
2014-05-13  8:54             ` Vlastimil Babka
2014-05-15  2:21               ` Joonsoo Kim
2014-05-16  9:47           ` [PATCH v2] " Vlastimil Babka
2014-05-16 17:33             ` Michal Nazarewicz
2014-05-19 23:37             ` Andrew Morton
2014-05-21 14:13               ` Vlastimil Babka
2014-05-21 20:11                 ` Andrew Morton
2014-05-22  3:20             ` compaction is still too expensive for thp (was: [PATCH v2] mm, compaction: properly signal and act upon lock and need_sched() contention) David Rientjes
2014-05-22  8:10               ` compaction is still too expensive for thp Vlastimil Babka
2014-05-22  8:55                 ` David Rientjes
2014-05-22 12:03                   ` Vlastimil Babka
2014-06-04  0:29                     ` [patch -mm 1/3] mm: rename allocflags_to_migratetype for clarity David Rientjes
2014-06-04  0:29                       ` [patch -mm 2/3] mm, compaction: pass gfp mask to compact_control David Rientjes
2014-06-04  0:30                       ` [patch -mm 3/3] mm, compaction: avoid compacting memory for thp if pageblock cannot become free David Rientjes
2014-06-04 11:04                         ` Mel Gorman
2014-06-04 22:02                           ` David Rientjes
2014-06-04 16:07                         ` Vlastimil Babka
2014-06-04 16:11               ` [RFC PATCH 1/6] mm, compaction: periodically drop lock and restore IRQs in scanners Vlastimil Babka
2014-06-04 16:11                 ` [RFC PATCH 2/6] mm, compaction: skip rechecks when lock was already held Vlastimil Babka
2014-06-04 23:46                   ` David Rientjes
2014-06-04 16:11                 ` [RFC PATCH 3/6] mm, compaction: remember position within pageblock in free pages scanner Vlastimil Babka
2014-06-04 16:11                 ` [RFC PATCH 4/6] mm, compaction: skip buddy pages by their order in the migrate scanner Vlastimil Babka
2014-06-05  0:02                   ` David Rientjes
2014-06-05  9:24                     ` Vlastimil Babka
2014-06-05 21:30                       ` David Rientjes
2014-06-06  7:20                         ` Vlastimil Babka
2014-06-09  9:09                           ` David Rientjes
2014-06-09 11:35                             ` Vlastimil Babka
2014-06-09 22:25                               ` David Rientjes
2014-06-10  7:26                                 ` Vlastimil Babka
2014-06-10 23:54                                   ` David Rientjes
2014-06-11 12:18                                     ` Vlastimil Babka
2014-06-12  0:21                                       ` David Rientjes
2014-06-12 11:56                                         ` Vlastimil Babka
2014-06-12 21:48                                           ` David Rientjes
2014-06-04 16:11                 ` [RFC PATCH 5/6] mm, compaction: try to capture the just-created high-order freepage Vlastimil Babka
2014-06-04 16:11                 ` [RFC PATCH 6/6] mm, compaction: don't migrate in blocks that cannot be fully compacted in async direct compaction Vlastimil Babka
2014-06-05  0:08                   ` David Rientjes
2014-06-05 15:38                     ` Vlastimil Babka
2014-06-05 21:38                       ` David Rientjes
2014-06-06  7:33                         ` Vlastimil Babka
2014-06-09  9:06                           ` David Rientjes
2014-06-12 12:18                             ` Vlastimil Babka
2014-06-04 23:39                 ` [RFC PATCH 1/6] mm, compaction: periodically drop lock and restore IRQs in scanners David Rientjes
2014-06-05  9:05                   ` Vlastimil Babka
2014-05-22 23:49             ` [PATCH v2] mm, compaction: properly signal and act upon lock and need_sched() contention Kevin Hilman
2014-05-23  2:48               ` Shawn Guo
2014-05-23  8:34                 ` Vlastimil Babka
2014-05-23 10:49                   ` Shawn Guo
2014-05-23 15:07                   ` Kevin Hilman
2014-05-30 16:59                   ` Stephen Warren
2014-06-02 13:35                   ` Fabio Estevam
2014-06-02 14:33                     ` [PATCH -mm] mm, compaction: properly signal and act upon lock and need_sched() contention - fix Vlastimil Babka
2014-06-02 15:18                       ` Fabio Estevam
2014-06-02 20:09                       ` David Rientjes
2014-05-02 13:16 ` [patch 1/2] mm, migration: add destination page freeing callback Vlastimil Babka
