From: Gilad Ben-Yossef
Subject: [PATCH v1 4/6] mm: make lru_drain selective where it schedules work
Date: Thu, 3 May 2012 17:56:00 +0300
Message-Id: <1336056962-10465-5-git-send-email-gilad@benyossef.com>
In-Reply-To: <1336056962-10465-1-git-send-email-gilad@benyossef.com>
References: <1336056962-10465-1-git-send-email-gilad@benyossef.com>
To: linux-kernel@vger.kernel.org
Cc: Gilad Ben-Yossef, Thomas Gleixner, Tejun Heo, John Stultz,
	Andrew Morton, KOSAKI Motohiro, Mel Gorman, Mike Frysinger,
	David Rientjes, Hugh Dickins, Minchan Kim, Konstantin Khlebnikov,
	Christoph Lameter, Chris Metcalf, Hakan Akkan, Max Krasnyansky,
	Frederic Weisbecker, linux-mm@kvack.org

LRU drain work is currently done by scheduling a work item on every
CPU, whether or not that CPU has any LRU pages to drain, which creates
needless interference on isolated CPUs.

This patch uses schedule_on_each_cpu_cond() to schedule the work only
on CPUs that appear to have LRU pagevecs to drain.

Signed-off-by: Gilad Ben-Yossef
CC: Thomas Gleixner
CC: Tejun Heo
CC: John Stultz
CC: Andrew Morton
CC: KOSAKI Motohiro
CC: Mel Gorman
CC: Mike Frysinger
CC: David Rientjes
CC: Hugh Dickins
CC: Minchan Kim
CC: Konstantin Khlebnikov
CC: Christoph Lameter
CC: Chris Metcalf
CC: Hakan Akkan
CC: Max Krasnyansky
CC: Frederic Weisbecker
CC: linux-kernel@vger.kernel.org
CC: linux-mm@kvack.org
---
 mm/swap.c |   25 ++++++++++++++++++++++++-
 1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 5c13f13..ab07b62 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,12 +562,35 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
 	lru_add_drain();
 }
 
+static bool lru_drain_cpu(int cpu)
+{
+	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
+	struct pagevec *pvec;
+	int lru;
+
+	for_each_lru(lru) {
+		pvec = &pvecs[lru - LRU_BASE];
+		if (pagevec_count(pvec))
+			return true;
+	}
+
+	pvec = &per_cpu(lru_rotate_pvecs, cpu);
+	if (pagevec_count(pvec))
+		return true;
+
+	pvec = &per_cpu(lru_deactivate_pvecs, cpu);
+	if (pagevec_count(pvec))
+		return true;
+
+	return false;
+}
+
 /*
  * Returns 0 for success
  */
 int lru_add_drain_all(void)
 {
-	return schedule_on_each_cpu(lru_add_drain_per_cpu);
+	return schedule_on_each_cpu_cond(lru_add_drain_per_cpu, lru_drain_cpu);
 }
 
 /*
-- 
1.7.0.4
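
[Note: schedule_on_each_cpu_cond() is introduced earlier in this series
and is not shown in this patch. For context, here is a minimal sketch of
what such a conditional variant of schedule_on_each_cpu() could look
like, assuming the same alloc_percpu()/flush_work() pattern that
schedule_on_each_cpu() itself uses; the signature is inferred from the
call site above, and the details are illustrative rather than the
series' actual implementation.]

#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/*
 * Sketch of a conditional schedule_on_each_cpu(): queue @func on each
 * online CPU for which @cond_func returns true, then wait for all the
 * queued work items to finish.
 */
int schedule_on_each_cpu_cond(work_func_t func, bool (*cond_func)(int cpu))
{
	struct work_struct __percpu *works;
	int cpu;

	works = alloc_percpu(struct work_struct);
	if (!works)
		return -ENOMEM;

	get_online_cpus();	/* hold off CPU hotplug while queueing */

	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		if (cond_func(cpu))
			schedule_work_on(cpu, work);
	}

	/* flush_work() returns immediately for items that were never queued */
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(works, cpu));

	put_online_cpus();
	free_percpu(works);
	return 0;
}

Initializing the work item for every CPU but queueing it only when
cond_func(cpu) is true keeps the flush loop simple: flushing an
initialized-but-unqueued item is harmless, so no extra bookkeeping of
which CPUs were selected is needed.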