* [PATCH] "drop behind" for buffers
@ 2001-08-14 6:41 Rik van Riel
From: Rik van Riel @ 2001-08-14 6:41 UTC
To: Alan Cox; +Cc: linux-mm
Hi Alan,
the patch below bypasses page aging and drops buffer pages directly
onto the inactive_dirty list when we have an excessive number
of buffercache pages.
This should provide some of the benefits of drop-behind for
buffercache pages, while still giving them a good chance to stay
resident in memory: pages that are referenced while on the
inactive_dirty list are moved back onto the active list.
regards,
Rik
--
IA64: a worthy successor to i860.
--- linux/mm/vmscan.c.buffer Thu Aug 9 17:54:24 2001
+++ linux/mm/vmscan.c Thu Aug 9 17:55:09 2001
@@ -708,6 +708,8 @@
* This function will scan a portion of the active list to find
* unused pages, those pages will then be moved to the inactive list.
*/
+#define too_many_buffers (atomic_read(&buffermem_pages) > \
+ (num_physpages * buffer_mem.borrow_percent / 100))
int refill_inactive_scan(zone_t *zone, unsigned int priority, int target)
{
struct list_head * page_lru;
@@ -770,6 +772,18 @@
page_active = 1;
}
}
+
+ /*
+ * If the amount of buffer cache pages is too
+ * high we just move every buffer cache page we
+ * find to the inactive list. Eventually they'll
+ * be reclaimed there...
+ */
+ if (page->buffers && !page->mapping && too_many_buffers) {
+ deactivate_page_nolock(page);
+ page_active = 0;
+ }
+
/*
* If the page is still on the active list, move it
* to the other end of the list. Otherwise we exit if
--