linux-mm.kvack.org archive mirror
* [PATCHv2 1/5] vmscan: separate sc.swap_cluster_max and sc.nr_max_reclaim
@ 2009-11-01 15:08 KOSAKI Motohiro
  2009-11-01 15:09 ` [PATCHv2 2/5] vmscan: Kill hibernation specific reclaim logic and unify it KOSAKI Motohiro
                   ` (4 more replies)
  0 siblings, 5 replies; 32+ messages in thread
From: KOSAKI Motohiro @ 2009-11-01 15:08 UTC (permalink / raw)
  To: Rafael J. Wysocki, Rik van Riel, LKML, linux-mm, Andrew Morton
  Cc: kosaki.motohiro

Currently, sc.swap_cluster_max has two meanings:

 1) reclaim batch size, as isolate_lru_pages()'s argument
 2) reclaim bail-out threshold

The two meanings are pretty unrelated, so let's separate them.
This patch doesn't change any behavior.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Rik van Riel <riel@redhat.com>
---
 mm/vmscan.c |   21 +++++++++++++++------
 1 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f805958..6a3eb9f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -55,6 +55,9 @@ struct scan_control {
 	/* Number of pages freed so far during a call to shrink_zones() */
 	unsigned long nr_reclaimed;
 
+	/* How many pages shrink_list() should reclaim */
+	unsigned long nr_to_reclaim;
+
 	/* This context's GFP mask */
 	gfp_t gfp_mask;
 
@@ -1585,6 +1588,7 @@ static void shrink_zone(int priority, struct zone *zone,
 	enum lru_list l;
 	unsigned long nr_reclaimed = sc->nr_reclaimed;
 	unsigned long swap_cluster_max = sc->swap_cluster_max;
+	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
 	int noswap = 0;
 
@@ -1634,8 +1638,7 @@ static void shrink_zone(int priority, struct zone *zone,
 		 * with multiple processes reclaiming pages, the total
 		 * freeing target can get unreasonably large.
 		 */
-		if (nr_reclaimed > swap_cluster_max &&
-			priority < DEF_PRIORITY && !current_is_kswapd())
+		if (nr_reclaimed > nr_to_reclaim && priority < DEF_PRIORITY)
 			break;
 	}
 
@@ -1733,6 +1736,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	struct zoneref *z;
 	struct zone *zone;
 	enum zone_type high_zoneidx = gfp_zone(sc->gfp_mask);
+	unsigned long writeback_threshold;
 
 	delayacct_freepages_start();
 
@@ -1768,7 +1772,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 			}
 		}
 		total_scanned += sc->nr_scanned;
-		if (sc->nr_reclaimed >= sc->swap_cluster_max) {
+		if (sc->nr_reclaimed >= sc->nr_to_reclaim) {
 			ret = sc->nr_reclaimed;
 			goto out;
 		}
@@ -1780,8 +1784,8 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		 * that's undesirable in laptop mode, where we *want* lumpy
 		 * writeout.  So in laptop mode, write out the whole world.
 		 */
-		if (total_scanned > sc->swap_cluster_max +
-					sc->swap_cluster_max / 2) {
+		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
+		if (total_scanned > writeback_threshold) {
 			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
 			sc->may_writepage = 1;
 		}
@@ -1827,6 +1831,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 		.gfp_mask = gfp_mask,
 		.may_writepage = !laptop_mode,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
+		.nr_to_reclaim = SWAP_CLUSTER_MAX,
 		.may_unmap = 1,
 		.may_swap = 1,
 		.swappiness = vm_swappiness,
@@ -1885,6 +1890,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
 		.may_unmap = 1,
 		.may_swap = !noswap,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
+		.nr_to_reclaim = SWAP_CLUSTER_MAX,
 		.swappiness = swappiness,
 		.order = 0,
 		.mem_cgroup = mem_cont,
@@ -1932,6 +1938,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order)
 		.may_unmap = 1,
 		.may_swap = 1,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
+		.nr_to_reclaim = ULONG_MAX,
 		.swappiness = vm_swappiness,
 		.order = order,
 		.mem_cgroup = NULL,
@@ -2549,7 +2556,9 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 		.may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
 		.may_swap = 1,
 		.swap_cluster_max = max_t(unsigned long, nr_pages,
-					SWAP_CLUSTER_MAX),
+				       SWAP_CLUSTER_MAX),
+		.nr_to_reclaim = max_t(unsigned long, nr_pages,
+				       SWAP_CLUSTER_MAX),
 		.gfp_mask = gfp_mask,
 		.swappiness = vm_swappiness,
 		.order = order,
-- 
1.6.2.5



