* vm patch for highmem
@ 2001-08-28 15:17 Arjan van de Ven
From: Arjan van de Ven @ 2001-08-28 15:17 UTC
  To: riel

Hi

The patch below changes the highmem bounce buffers to increase performance.
Initial reports are that it matters A LOT.

What it does: 1) it increases the emergency pool, and 2) it tries to grab a
page from the pool for EVERY bounce first, until the pool is half empty, and
only THEN does it try to get a page from the VM.
While this penalizes the low zone by leaving it with fewer pages, it also
leaves the VM completely alone under normal loads; only under more extreme
loads does the VM get involved.
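
To illustrate, the policy boils down to roughly the sketch below. This is a
simplified userspace simulation, not the kernel code itself: vm_alloc(),
pool_alloc() and the demo loop are made-up stand-ins; only POOL_SIZE and the
half-full threshold are taken from the patch.

#include <stdio.h>
#include <stdbool.h>

#define POOL_SIZE 64

static int pool_left = POOL_SIZE;	/* pages still in the emergency pool */

/* Stand-in for alloc_page(GFP_NOIO); pretend the VM always delivers. */
static bool vm_alloc(void)
{
	return true;
}

/* Stand-in for taking a page off the emergency list. */
static bool pool_alloc(void)
{
	if (pool_left > 0) {
		pool_left--;
		return true;
	}
	return false;
}

/*
 * One bounce allocation: use the emergency pool while it is more than
 * half full, and only fall back to the VM once the pool is half depleted.
 */
static const char *bounce_alloc(void)
{
	if (pool_left > POOL_SIZE / 2 && pool_alloc())
		return "pool";
	if (vm_alloc())
		return "VM";
	return "blocked";	/* the real code sleeps and retries here */
}

int main(void)
{
	int i;

	for (i = 0; i < 3 * POOL_SIZE / 2; i++) {
		const char *src = bounce_alloc();
		printf("bounce %2d -> %s (pool_left=%d)\n", i, src, pool_left);
	}
	return 0;
}

With POOL_SIZE at 64, the first 32 bounces come straight out of the pool;
after that the pool sits at half full as a reserve and every further bounce
goes to the VM, which is where the throttling for heavier loads kicks in.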

Comments?

Greetings,
   Arjan van de Ven

--- linux/mm/highmem.c.org	Thu Aug 23 09:23:11 2001
+++ linux/mm/highmem.c	Thu Aug 23 10:21:33 2001
@@ -159,7 +159,11 @@
 	spin_unlock(&kmap_lock);
 }
 
-#define POOL_SIZE 32
+#ifdef CONFIG_HIGHMEM64G
+#define POOL_SIZE 256
+#else
+#define POOL_SIZE 64
+#endif
 
 /*
  * This lock gets no contention at all, normally.
@@ -306,10 +310,24 @@
 struct page *alloc_bounce_page (void)
 {
 	struct list_head *tmp;
-	struct page *page;
+	struct page *page = NULL;
+	int estimated_left;
+	int iteration=0;
 
 repeat_alloc:
-	page = alloc_page(GFP_NOIO);
+
+	spin_lock_irq(&emergency_lock);
+	estimated_left = nr_emergency_pages;
+	spin_unlock_irq(&emergency_lock);
+
+	/* If there are plenty of spare pages, use some of them first. If the
+	   pool is at least half depleted, use the VM to allocate memory.
+	   This allows moderate loads to continue without blocking here,
+	   while higher loads get throttled by the VM.
+        */
+	if ((estimated_left<=POOL_SIZE/2)&&(!iteration))
+		page = alloc_page(GFP_NOIO);
+	
 	if (page)
 		return page;
 	/*
@@ -338,16 +356,30 @@
 	current->policy |= SCHED_YIELD;
 	__set_current_state(TASK_RUNNING);
 	schedule();
+	iteration++;
 	goto repeat_alloc;
 }
 
 struct buffer_head *alloc_bounce_bh (void)
 {
 	struct list_head *tmp;
-	struct buffer_head *bh;
+	struct buffer_head *bh = NULL;
+	int estimated_left;
+	int iteration=0;
 
 repeat_alloc:
-	bh = kmem_cache_alloc(bh_cachep, SLAB_NOIO);
+
+	spin_lock_irq(&emergency_lock);
+	estimated_left = nr_emergency_bhs;
+	spin_unlock_irq(&emergency_lock);
+
+	/* If there are plenty of spare bh's, use some of them first. If the
+	   pool is at least half depleted, use the VM to allocate memory.
+	   This allows moderate loads to continue without blocking here,
+	   while higher loads get throttled by the VM.
+        */
+	if ((estimated_left<=POOL_SIZE/2)&&(!iteration))
+		bh = kmem_cache_alloc(bh_cachep, SLAB_NOIO);
 	if (bh)
 		return bh;
 	/*
@@ -376,6 +408,7 @@
 	current->policy |= SCHED_YIELD;
 	__set_current_state(TASK_RUNNING);
 	schedule();
+	iteration++;
 	goto repeat_alloc;
 }
 
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/
