From: Mel Gorman <mgorman@suse.de>
To: Rik van Riel <riel@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
peterz@infradead.org, mingo@redhat.com, chegu_vinod@hp.com
Subject: Re: [PATCH 2/6] numa,sched: track from which nodes NUMA faults are triggered
Date: Fri, 24 Jan 2014 14:14:34 +0000
Message-ID: <20140124141434.GX4963@suse.de>
In-Reply-To: <52DEF41F.1040105@redhat.com>
On Tue, Jan 21, 2014 at 05:26:39PM -0500, Rik van Riel wrote:
> On 01/21/2014 07:21 AM, Mel Gorman wrote:
> > On Mon, Jan 20, 2014 at 02:21:03PM -0500, riel@redhat.com wrote:
>
> >> +++ b/include/linux/sched.h
> >> @@ -1492,6 +1492,14 @@ struct task_struct {
> >> unsigned long *numa_faults_buffer;
> >>
> >> /*
> >> + * Track the nodes where faults are incurred. This is not very
> >> + * interesting on a per-task basis, but it helps with smarter
> >> + * numa memory placement for groups of processes.
> >> + */
> >> + unsigned long *numa_faults_from;
> >> + unsigned long *numa_faults_from_buffer;
> >> +
> >
> > As an aside I wonder if we can derive any useful metric from this
>
> It may provide for a better way to tune the numa scan interval
> than the current code, since the "local vs remote" ratio is not
> going to provide us much useful info when dealing with a workload
> that is spread across multiple numa nodes.
>
Agreed. Local vs remote handles the easier cases, particularly where the
workload has been configured so that parts of it fit within individual NUMA
nodes (e.g. multiple JVMs, multiple virtual machines etc.), but it's nowhere
near as useful for large single-image workloads spanning the full machine.

I think in this New World Order we would instead, for single-instance
workloads, take the balance between all the remote nodes into account: if
the active remote nodes are roughly evenly used and we have decided to
interleave, then slow the scan rate.

It's not something for this series just yet, but I have observed higher
system CPU usage as a result of this series. It's still far lower than
the overhead we had in the past, but this is one potential idea that would
allow us to reduce the system overhead again in the future.
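
As a very rough sketch of the sort of check I have in mind -- the helper
name, the 25% threshold and the indexing are all made up here, not part of
this series:

/*
 * Sketch only: treat the remote nodes as "evenly used" if the
 * faults_from counts of every remote node fall within an arbitrary
 * 25% band of the busiest remote node.
 */
static bool remote_faults_balanced(struct task_struct *p, int cpu_node)
{
	unsigned long lo = ULONG_MAX, hi = 0;
	int nid;

	for_each_online_node(nid) {
		unsigned long f;

		if (nid == cpu_node)
			continue;

		/* Sum the private and shared faults incurred from nid. */
		f = p->numa_faults_from[task_faults_idx(nid, 0)] +
		    p->numa_faults_from[task_faults_idx(nid, 1)];
		lo = min(lo, f);
		hi = max(hi, f);
	}

	return hi && hi - lo <= hi / 4;
}

If something like that returned true for a task whose group had decided to
interleave, the scan period could be relaxed back towards task_scan_max()
instead of staying at the aggressive end.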
> >> grp->total_faults = p->total_numa_faults;
> >> @@ -1526,7 +1536,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
> >>
> >> double_lock(&my_grp->lock, &grp->lock);
> >>
> >> - for (i = 0; i < 2*nr_node_ids; i++) {
> >> + for (i = 0; i < 4*nr_node_ids; i++) {
> >> my_grp->faults[i] -= p->numa_faults[i];
> >> grp->faults[i] += p->numa_faults[i];
> >> }
> >
> > The same obscure trick is used throughout and I'm not sure how
> > maintainable that will be. Would it be better to be explicit about this?
>
> I have made a cleanup patch for this, using the defines you
> suggested.
>
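For the benefit of the archive, something along these lines would make the
magic constant self-documenting -- the names here are invented for
illustration and are not necessarily what the cleanup patch uses:

/* Private vs shared hinting faults. */
#define NR_NUMA_HINT_FAULT_TYPES	2
/* Faults on a node (memory placement) and faults from a node (CPU). */
#define NR_NUMA_HINT_FAULT_STATS	2
/* Counters per node: types x stats. */
#define NR_NUMA_HINT_FAULT_BUCKETS	\
	(NR_NUMA_HINT_FAULT_TYPES * NR_NUMA_HINT_FAULT_STATS)

	for (i = 0; i < NR_NUMA_HINT_FAULT_BUCKETS * nr_node_ids; i++) {
		my_grp->faults[i] -= p->numa_faults[i];
		grp->faults[i] += p->numa_faults[i];
	}
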
> >> @@ -1634,6 +1649,7 @@ void task_numa_fault(int last_cpupid, int node, int pages, int flags)
> >> p->numa_pages_migrated += pages;
> >>
> >> p->numa_faults_buffer[task_faults_idx(node, priv)] += pages;
> >> + p->numa_faults_from_buffer[task_faults_idx(this_node, priv)] += pages;
> >> p->numa_faults_locality[!!(flags & TNF_FAULT_LOCAL)] += pages;
> >
> > this_node and node are similarly ambiguous names. Renaming them to
> > data_node and cpu_node would have been clearer.
>
> I added a patch in the next version of the series.
>
> Don't want to make the series too large, though :)
>
Understood, it's a bit of a mouthful already.
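
For anyone following along, the rename discussed above would look roughly
like the following -- illustrative only, not the exact patch:

/*
 * "data_node" is the node backing the faulting page, "cpu_node" is the
 * node of the CPU that took the fault.
 */
void task_numa_fault(int last_cpupid, int data_node, int pages, int flags)
{
	int cpu_node = numa_node_id();

	/* ... existing fault accounting ... */

	p->numa_faults_buffer[task_faults_idx(data_node, priv)] += pages;
	p->numa_faults_from_buffer[task_faults_idx(cpu_node, priv)] += pages;
}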
--
Mel Gorman
SUSE Labs