From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: "linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: [RFC] patch for multiple lru in a zone [1/2] cleanup setup_per_zone_pages_min()
Date: Fri, 31 Aug 2007 16:46:11 +0900
Message-ID: <20070831164611.2c29de69.kamezawa.hiroyu@jp.fujitsu.com>

setup_per_zone_pages_min() takes zone->lru_lock while it modifies the
zone's pages_min, pages_low and pages_high values, but the readers of
these values do not appear to take the lock at all.

Instead of taking the lock, updating the three values in a carefully
chosen order looks better.
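
Roughly, the ordering argument can be sketched as follows (a minimal
illustration with invented names, not the kernel code itself; the real
patch below uses xchg() so that each store also acts as a memory
barrier):

struct watermarks {
	unsigned long min;	/* stands in for zone->pages_min  */
	unsigned long low;	/* stands in for zone->pages_low  */
	unsigned long high;	/* stands in for zone->pages_high */
};

/*
 * Update all three values so that a lockless reader sampling them
 * between any two stores still sees min <= low <= high.
 */
static void update_watermarks(struct watermarks *wm,
			      unsigned long new_min, unsigned long tmp)
{
	if (new_min < wm->min) {
		/* shrinking: lower min first, then low, then high */
		wm->min  = new_min;
		wm->low  = new_min + (tmp >> 2);
		wm->high = new_min + (tmp >> 1);
	} else {
		/* growing: raise high first, then low, then min */
		wm->high = new_min + (tmp >> 1);
		wm->low  = new_min + (tmp >> 2);
		wm->min  = new_min;
	}
}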

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

 mm/page_alloc.c |   21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

Index: linux-2.6.23-rc4/mm/page_alloc.c
===================================================================
--- linux-2.6.23-rc4.orig/mm/page_alloc.c
+++ linux-2.6.23-rc4/mm/page_alloc.c
@@ -3629,7 +3629,6 @@ void setup_per_zone_pages_min(void)
 	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
 	unsigned long lowmem_pages = 0;
 	struct zone *zone;
-	unsigned long flags;
 
 	/* Calculate total number of !ZONE_HIGHMEM pages */
 	for_each_zone(zone) {
@@ -3639,8 +3638,8 @@ void setup_per_zone_pages_min(void)
 
 	for_each_zone(zone) {
 		u64 tmp;
+		unsigned long zone_pages_min;
 
-		spin_lock_irqsave(&zone->lru_lock, flags);
 		tmp = (u64)pages_min * zone->present_pages;
 		do_div(tmp, lowmem_pages);
 		if (is_highmem(zone)) {
@@ -3660,18 +3659,24 @@ void setup_per_zone_pages_min(void)
 				min_pages = SWAP_CLUSTER_MAX;
 			if (min_pages > 128)
 				min_pages = 128;
-			zone->pages_min = min_pages;
+			zone_pages_min = min_pages;
 		} else {
 			/*
 			 * If it's a lowmem zone, reserve a number of pages
 			 * proportionate to the zone's size.
 			 */
-			zone->pages_min = tmp;
+			zone_pages_min = tmp;
+		}
+		/* keep min < low < high during this change */
+		if (zone_pages_min < zone->pages_min) {
+			xchg(&zone->pages_min, zone_pages_min);
+			xchg(&zone->pages_low, zone_pages_min + (tmp >> 2));
+			xchg(&zone->pages_high, zone_pages_min + (tmp >> 1));
+		} else {
+			xchg(&zone->pages_high, zone_pages_min + (tmp >> 1));
+			xchg(&zone->pages_low, zone_pages_min + (tmp >> 2));
+			xchg(&zone->pages_min, zone_pages_min);
 		}
-
-		zone->pages_low   = zone->pages_min + (tmp >> 2);
-		zone->pages_high  = zone->pages_min + (tmp >> 1);
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
 	}
 
 	/* update totalreserve_pages */
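
For example (numbers invented for illustration): if tmp works out to 400
pages for a lowmem zone, the new targets are pages_min = 400, pages_low =
500 and pages_high = 600.  With old values of 200/250/300, the new min is
not below the old min, so the stores go high-to-low: pages_high becomes
600 (still above the old pages_low of 250), then pages_low becomes 500
(still above the old pages_min of 200), then pages_min becomes 400.  A
reader that samples the three fields between any two stores therefore
always sees min <= low <= high.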


Thread overview: 2+ messages
2007-08-31  7:46 KAMEZAWA Hiroyuki [this message]
2007-08-31  7:52 ` [RFC] patch for multiple lru in a zone [2/2] separate lru from zone (just for hearing advice/opinion) KAMEZAWA Hiroyuki
