Subject: [PATCH] ksm: drain pagevecs to lru
From: Hugh Dickins
Date: 2011-01-11 7:30 UTC
To: Andrew Morton
Cc: Andrea Arcangeli, CAI Qian, linux-mm
It was hard to explain the page counts which were causing new LTP tests
of KSM to fail: we need to drain the per-cpu pagevecs to LRU occasionally.
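For context on why a pagevec reference blocks merging: a page sitting on a
per-cpu pagevec holds an extra reference, and write_protect_page() refuses
to merge any page whose reference count exceeds what its mappings account
for.  A paraphrased sketch of that check, close to but not verbatim the
2.6.37 code in mm/ksm.c:

	/*
	 * Bail out if anything beyond the known mappings (and swap
	 * cache) still holds a reference: the page cannot safely be
	 * write-protected and merged while that reference persists.
	 */
	if (page_mapcount(page) + 1 + swapped != page_count(page)) {
		set_pte_at(mm, addr, ptep, entry);	/* restore the pte */
		goto out_unlock;
	}

Until the pagevec is drained to the LRU, such a page fails this check on
every scan and stays counted in pages_volatile.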
Reported-by: CAI Qian <caiqian@redhat.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
---
mm/ksm.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
--- 2.6.37/mm/ksm.c	2010-12-24 19:31:45.000000000 -0800
+++ linux/mm/ksm.c	2011-01-02 15:06:52.000000000 -0800
@@ -1247,6 +1247,18 @@ static struct rmap_item *scan_get_next_r
 
 	slot = ksm_scan.mm_slot;
 	if (slot == &ksm_mm_head) {
+		/*
+		 * A number of pages can hang around indefinitely on per-cpu
+		 * pagevecs, raised page count preventing write_protect_page
+		 * from merging them.  Though it doesn't really matter much,
+		 * it is puzzling to see some stuck in pages_volatile until
+		 * other activity jostles them out, and they also prevented
+		 * LTP's KSM test from succeeding deterministically; so drain
+		 * them here (here rather than on entry to ksm_do_scan(),
+		 * so we don't IPI too often when pages_to_scan is set low).
+		 */
+		lru_add_drain_all();
+
 		root_unstable_tree = RB_ROOT;
 
 		spin_lock(&ksm_mmlist_lock);
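
For reference, the comment's concern about IPIs: in 2.6.37,
lru_add_drain_all() (mm/swap.c) queues drain work on every online CPU and
waits for it all to complete, roughly:

	/* Paraphrased from 2.6.37's mm/swap.c: drain every CPU's pagevecs */
	int lru_add_drain_all(void)
	{
		return schedule_on_each_cpu(lru_add_drain_per_cpu);
	}

Draining once per pass over the mm list, rather than on every entry to
ksm_do_scan(), keeps that cross-CPU work bounded when pages_to_scan is
set low.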