From: Peter Zijlstra <peterz@infradead.org>
To: riel@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	chegu_vinod@hp.com, mgorman@suse.de, mingo@redhat.com
Subject: Re: [PATCH 6/7] numa,sched: normalize faults_from stats and weigh by CPU use
Date: Mon, 20 Jan 2014 17:57:47 +0100	[thread overview]
Message-ID: <20140120165747.GL31570@twins.programming.kicks-ass.net> (raw)
In-Reply-To: <1389993129-28180-7-git-send-email-riel@redhat.com>

On Fri, Jan 17, 2014 at 04:12:08PM -0500, riel@redhat.com wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 0af6c1a..52de567 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1471,6 +1471,8 @@ struct task_struct {
>  	int numa_preferred_nid;
>  	unsigned long numa_migrate_retry;
>  	u64 node_stamp;			/* migration stamp  */
> +	u64 last_task_numa_placement;
> +	u64 last_sum_exec_runtime;
>  	struct callback_head numa_work;
>  
>  	struct list_head numa_entry;

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8e0a53a..0d395a0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1422,11 +1422,41 @@ static void update_task_scan_period(struct task_struct *p,
>  	memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality));
>  }
>  
> +/*
> + * Get the fraction of time the task has been running since the last
> + * NUMA placement cycle. The scheduler keeps similar statistics, but
> + * decays those on a 32ms period, which is orders of magnitude off
> + * from the dozens-of-seconds NUMA balancing period. Use the scheduler
> + * stats only if the task is so new there are no NUMA statistics yet.
> + */
> +static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
> +{
> +	u64 runtime, delta, now;
> +	/* Use the start of this time slice to avoid calculations. */
> +	now = p->se.exec_start;
> +	runtime = p->se.sum_exec_runtime;
> +
> +	if (p->last_task_numa_placement) {
> +		delta = runtime - p->last_sum_exec_runtime;
> +		*period = now - p->last_task_numa_placement;
> +	} else {
> +		delta = p->se.avg.runnable_avg_sum;
> +		*period = p->se.avg.runnable_avg_period;
> +	}
> +
> +	p->last_sum_exec_runtime = runtime;
> +	p->last_task_numa_placement = now;
> +
> +	return delta;
> +}

Have you tried what happens if you use p->se.avg.runnable_avg_sum /
p->se.avg.runnable_avg_period instead? If that also works, it avoids
growing the data structures and keeping yet another set of runtime
stats.
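
For concreteness, a rough sketch of what that alternative could look like
(untested illustration of the suggestion, reusing the avg fields quoted
above; not a proposed patch):

static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
{
	/*
	 * Reuse the per-entity load-tracking averages directly instead
	 * of keeping a separate last_sum_exec_runtime /
	 * last_task_numa_placement pair on the task.
	 */
	*period = p->se.avg.runnable_avg_period;

	return p->se.avg.runnable_avg_sum;
}

The trade-off is the one the quoted comment already points out:
runnable_avg_sum decays on a ~32ms period, so it only reflects very
recent behaviour, while the NUMA placement interval is measured in
seconds.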

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


Thread overview: 19+ messages
2014-01-17 21:12 [PATCH v2 0/7] pseudo-interleaving for automatic NUMA balancing riel
2014-01-17 21:12 ` [PATCH 1/7] numa,sched,mm: remove p->numa_migrate_deferred riel
2014-01-17 21:12 ` [PATCH 2/7] numa,sched: track from which nodes NUMA faults are triggered riel
2014-01-17 21:12 ` [PATCH 3/7] numa,sched: build per numa_group active node mask from faults_from statistics riel
2014-01-20 16:31   ` Peter Zijlstra
2014-01-20 18:55     ` Rik van Riel
2014-01-20 16:55   ` Peter Zijlstra
2014-01-17 21:12 ` [PATCH 4/7] numa,sched: tracepoints for NUMA balancing active nodemask changes riel
2014-01-20 16:52   ` Peter Zijlstra
2014-01-20 18:51     ` Rik van Riel
2014-01-20 19:05     ` Steven Rostedt
2014-01-17 21:12 ` [PATCH 5/7] numa,sched,mm: use active_nodes nodemask to limit numa migrations riel
2014-01-17 21:12 ` [PATCH 6/7] numa,sched: normalize faults_from stats and weigh by CPU use riel
2014-01-20 16:57   ` Peter Zijlstra [this message]
2014-01-20 19:02     ` Rik van Riel
2014-01-20 19:10       ` Peter Zijlstra
2014-01-17 21:12 ` [PATCH 7/7] numa,sched: do statistics calculation using local variables only riel
2014-01-18  3:31   ` Rik van Riel
2014-01-18 22:05 ` [PATCH v2 0/7] pseudo-interleaving for automatic NUMA balancing Chegu Vinod
