From: riel@redhat.com
Subject: [PATCH 7/9] numa,sched: do statistics calculation using local variables only
Date: Mon, 27 Jan 2014 17:03:46 -0500
Message-Id: <1390860228-21539-8-git-send-email-riel@redhat.com>
In-Reply-To: <1390860228-21539-1-git-send-email-riel@redhat.com>
References: <1390860228-21539-1-git-send-email-riel@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, peterz@infradead.org, mgorman@suse.de, mingo@redhat.com, chegu_vinod@hp.com

From: Rik van Riel

The current code in task_numa_placement calculates the difference between
the old and the new value, but also temporarily stores half of the old
value in the per-process variables.

The NUMA balancing code looks at those per-process variables, and having
other tasks temporarily see halved statistics could lead to unwanted numa
migrations. This can be avoided by doing all the math in local variables.

This change also simplifies the code a little.

Cc: Peter Zijlstra
Cc: Mel Gorman
Cc: Ingo Molnar
Cc: Chegu Vinod
Acked-by: Mel Gorman
Signed-off-by: Rik van Riel
---
 kernel/sched/fair.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8fb81c5..eb32b3e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1513,12 +1513,9 @@ static void task_numa_placement(struct task_struct *p)
 			long diff, f_diff, f_weight;
 
 			i = task_faults_idx(nid, priv);
-			diff = -p->numa_faults_memory[i];
-			f_diff = -p->numa_faults_cpu[i];
 
 			/* Decay existing window, copy faults since last scan */
-			p->numa_faults_memory[i] >>= 1;
-			p->numa_faults_memory[i] += p->numa_faults_buffer_memory[i];
+			diff = p->numa_faults_buffer_memory[i] - p->numa_faults_memory[i] / 2;
 			fault_types[priv] += p->numa_faults_buffer_memory[i];
 			p->numa_faults_buffer_memory[i] = 0;
 
@@ -1532,13 +1529,12 @@ static void task_numa_placement(struct task_struct *p)
 			f_weight = (65536 * runtime) / (period + 1);
 			f_weight = (f_weight * p->numa_faults_buffer_cpu[i]) /
 				   (total_faults + 1);
-			p->numa_faults_cpu[i] >>= 1;
-			p->numa_faults_cpu[i] += f_weight;
+			f_diff = f_weight - p->numa_faults_cpu[i] / 2;
 			p->numa_faults_buffer_cpu[i] = 0;
 
+			p->numa_faults_memory[i] += diff;
+			p->numa_faults_cpu[i] += f_diff;
 			faults += p->numa_faults_memory[i];
-			diff += p->numa_faults_memory[i];
-			f_diff += p->numa_faults_cpu[i];
 			p->total_numa_faults += diff;
 			if (p->numa_group) {
 				/* safe because we can only change our own group */
-- 
1.8.4.2
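
For readers following the arithmetic, the standalone sketch below (not part of
the patch, and using hypothetical names like shared_old and buffer rather than
the kernel's arrays) illustrates why the two forms end up with the same final
value and the same diff, while the new form never stores a half-decayed
intermediate in the shared location:

	#include <assert.h>
	#include <stdio.h>

	/*
	 * Illustration only: compare the old in-place decay with the
	 * patch's local-variable form, for one statistics slot.
	 */
	int main(void)
	{
		unsigned long shared_old = 1000;  /* stands in for p->numa_faults_memory[i] */
		unsigned long buffer = 300;       /* stands in for p->numa_faults_buffer_memory[i] */

		/*
		 * Old sequence: the shared value briefly holds the halved
		 * statistic (500 here) before the buffered faults are added,
		 * so a concurrent reader could observe it.
		 */
		unsigned long shared_a = shared_old;
		long diff_a = -(long)shared_a;
		shared_a >>= 1;                   /* transient halved value */
		shared_a += buffer;
		diff_a += shared_a;

		/*
		 * New sequence: the delta is computed in a local variable and
		 * applied with a single increment, so only the fully updated
		 * value is ever written back.
		 */
		unsigned long shared_b = shared_old;
		long diff_b = (long)buffer - (long)(shared_b / 2);
		shared_b += diff_b;

		/* Same final value and same diff in both forms. */
		assert(shared_a == shared_b);
		assert(diff_a == diff_b);
		printf("final=%lu diff=%ld\n", shared_b, diff_b);
		return 0;
	}

Since the per-node fault counters are unsigned, the >>= 1 of the old code and
the / 2 of the new code produce identical results, so the equivalence is exact.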