linux-mm.kvack.org archive mirror
* [PATCH v4 0/9] pseudo-interleaving for automatic NUMA balancing
@ 2014-01-21 22:20 riel
  2014-01-21 22:20 ` [PATCH 1/9] numa,sched,mm: remove p->numa_migrate_deferred riel
                   ` (8 more replies)
  0 siblings, 9 replies; 17+ messages in thread
From: riel @ 2014-01-21 22:20 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, peterz, mgorman, mingo, chegu_vinod

The current automatic NUMA balancing code base has issues with
workloads that do not fit in one NUMA node. Page migration is
slowed down, but memory distribution between the nodes where
the workload runs is essentially random, often resulting in a
suboptimal amount of memory bandwidth being available to the
workload.

In order to maximize performance of workloads that do not fit in one NUMA
node, we want to satisfy the following criteria:
1) keep private memory local to each thread
2) avoid excessive NUMA migration of pages
3) distribute shared memory across the active nodes, to
   maximize memory bandwidth available to the workload
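
To make criteria 1 and 3 concrete, the placement decision can be
pictured roughly as in the toy userspace model below. This is an
illustration only, not the kernel code; the names (toy_page,
pick_target_node, active_nodes) are made up:

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8

struct toy_page {
        int last_pid;   /* pid that last faulted on this page */
        int last_node;  /* node that fault came from */
};

/* nodes on which the workload's threads are actively running */
static bool active_nodes[MAX_NODES] = { [0] = true, [2] = true };

static int pick_target_node(const struct toy_page *page,
                            int faulting_pid, int faulting_node)
{
        /* criterion 1: private memory (the same task keeps faulting
         * on the page) stays local to the faulting thread */
        if (page->last_pid == faulting_pid)
                return faulting_node;

        /* criteria 2 and 3: shared memory is left where it is as
         * long as that is an active node, spreading shared pages
         * across the active nodes instead of bouncing them around */
        if (active_nodes[page->last_node])
                return page->last_node;

        /* otherwise pull the page into the active set */
        return faulting_node;
}

int main(void)
{
        struct toy_page shared = { .last_pid = 100, .last_node = 2 };

        /* task 200 on node 0 faults on a page last touched by task
         * 100: the page looks shared and already sits on an active
         * node, so it stays on node 2 */
        printf("target node: %d\n", pick_target_node(&shared, 200, 0));
        return 0;
}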

This patch series identifies the NUMA nodes on which the workload
is actively running, and balances (somewhat lazily) the memory
between those nodes, satisfying the criteria above.
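
The set of actively used nodes can be derived from per-node fault
statistics. A rough model of the idea is below; the 3/16 threshold
and all names are assumptions for illustration, not necessarily what
the patches implement:

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 4

static void build_active_nodes(const unsigned long faults_cpu[MAX_NODES],
                               bool active[MAX_NODES])
{
        unsigned long max_faults = 0;
        int nid;

        for (nid = 0; nid < MAX_NODES; nid++)
                if (faults_cpu[nid] > max_faults)
                        max_faults = faults_cpu[nid];

        /* a node counts as active when it sees a meaningful share of
         * the group's NUMA faults relative to the busiest node */
        for (nid = 0; nid < MAX_NODES; nid++)
                active[nid] = faults_cpu[nid] * 16 > max_faults * 3;
}

int main(void)
{
        unsigned long faults_cpu[MAX_NODES] = { 900, 850, 40, 0 };
        bool active[MAX_NODES];
        int nid;

        build_active_nodes(faults_cpu, active);
        for (nid = 0; nid < MAX_NODES; nid++)
                printf("node %d: %s\n", nid,
                       active[nid] ? "active" : "idle");
        return 0;
}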

As usual, the series has had some performance testing, but it
could always benefit from more testing on other systems.

Changes since v3:
 - various code cleanups suggested by Mel Gorman (some in their own patches)
 - after some testing, switch back to the NUMA-specific CPU use stats,
   since that results in a 1% performance increase for two 8-warehouse
   SPECjbb instances on a 4-node system, and reduced page migration across
   the board
Changes since v2:
 - dropped tracepoint (for now?)
 - implement obvious improvements suggested by Peter
 - use the scheduler-maintained CPU use statistics and drop
   the NUMA-specific ones for now; we can add those later
   if they turn out to be beneficial
Changes since v1:
 - fix divide by zero found by Chegu Vinod
 - improve comment, as suggested by Peter Zijlstra
 - do stats calculations in task_numa_placement in local variables


Some performance numbers, with two 40-warehouse SPECjbb instances
on an 8-node system with 10 CPU cores per node, using a pre-cleanup
version of these patches, courtesy of Chegu Vinod:

numactl manual pinning
spec1.txt:           throughput =     755900.20 SPECjbb2005 bops
spec2.txt:           throughput =     754914.40 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, with patches)
spec1.txt:           throughput =     706439.84 SPECjbb2005 bops
spec2.txt:           throughput =     729347.75 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, without patches)
spec1.txt:           throughput =     667988.47 SPECjbb2005 bops
spec2.txt:           throughput =     638220.45 SPECjbb2005 bops

No Automatic NUMA and NO-pinning results
spec1.txt:           throughput =     544120.97 SPECjbb2005 bops
spec2.txt:           throughput =     453553.41 SPECjbb2005 bops
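
Summing both instances: the patched kernel delivers 1435787.59 bops
versus 1306208.92 bops for unpatched automatic NUMA balancing (about
a 10% improvement), reaches roughly 95% of the 1510814.60 bops that
manual pinning achieves, and beats the 997674.38 bops of the
no-balancing baseline by about 44%.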


My own performance numbers are less relevant, since I have
deliberately been running a more hostile workload, and I ran
into a scheduler issue that caused the workload to run on only
two of the four NUMA nodes on my test system...

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

* [PATCH v5 0/9] numa,sched,mm: pseudo-interleaving for automatic NUMA balancing
@ 2014-01-27 22:03 riel
  2014-01-27 22:03 ` [PATCH 1/9] numa,sched,mm: remove p->numa_migrate_deferred riel
  0 siblings, 1 reply; 17+ messages in thread
From: riel @ 2014-01-27 22:03 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, peterz, mgorman, mingo, chegu_vinod

The current automatic NUMA balancing code base has issues with
workloads that do not fit in one NUMA node. Page migration is
slowed down, but memory distribution between the nodes where
the workload runs is essentially random, often resulting in a
suboptimal amount of memory bandwidth being available to the
workload.

In order to maximize performance of workloads that do not fit in one NUMA
node, we want to satisfy the following criteria:
1) keep private memory local to each thread
2) avoid excessive NUMA migration of pages
3) distribute shared memory across the active nodes, to
   maximize memory bandwidth available to the workload
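
Criterion 2 then amounts to filtering page migrations against the
set of actively used nodes, so pages stop ping-ponging between busy
and idle nodes. A toy check in that spirit (the names are made up,
not the kernel's):

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8

static bool node_is_active(const bool active[MAX_NODES], int nid)
{
        return nid >= 0 && nid < MAX_NODES && active[nid];
}

static bool allow_migration(const bool active[MAX_NODES],
                            int src_nid, int dst_nid)
{
        /* never move a page from an active node to an idle one */
        if (node_is_active(active, src_nid) &&
            !node_is_active(active, dst_nid))
                return false;
        return true;
}

int main(void)
{
        bool active[MAX_NODES] = { [1] = true, [3] = true };

        printf("1 -> 2: %s\n",
               allow_migration(active, 1, 2) ? "allowed" : "denied");
        printf("2 -> 3: %s\n",
               allow_migration(active, 2, 3) ? "allowed" : "denied");
        return 0;
}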

This patch series identifies the NUMA nodes on which the workload
is actively running, and balances (somewhat lazily) the memory
between those nodes, satisfying the criteria above.
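
One way to picture the "somewhat lazily" part: per-node fault counts
can be weighed by how much CPU time the group actually spends on each
node before nodes are compared, so placement only shifts toward nodes
where threads really run. A toy scoring sketch; the formula is an
assumption for illustration:

#include <stdio.h>

#define MAX_NODES 4

/* toy score: fault count scaled by the node's share of the group's
 * recent CPU time (in permille), so a node where threads actually
 * run outranks one that merely holds stale pages */
static unsigned long node_score(unsigned long faults,
                                unsigned long cpu_permille)
{
        return faults * cpu_permille / 1000;
}

int main(void)
{
        unsigned long faults[MAX_NODES]       = { 500, 800, 300, 0 };
        unsigned long cpu_permille[MAX_NODES] = { 600, 100, 300, 0 };
        int nid;

        for (nid = 0; nid < MAX_NODES; nid++)
                printf("node %d score: %lu\n", nid,
                       node_score(faults[nid], cpu_permille[nid]));
        return 0;
}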

As usual, the series has had some performance testing, but it
could always benefit from more testing on other systems.

Changes since v4:
 - remove some code that did not help performance
 - implement all the cleanups suggested by Mel Gorman
 - lots more testing, by Chegu Vinod and myself
 - rebase against -tip instead of -next, to make merging easier
Changes since v3:
 - various code cleanups suggested by Mel Gorman (some in their own patches)
 - after some testing, switch back to the NUMA-specific CPU use stats,
   since that results in a 1% performance increase for two 8-warehouse
   SPECjbb instances on a 4-node system, and reduced page migration across
   the board
Changes since v2:
 - dropped tracepoint (for now?)
 - implement obvious improvements suggested by Peter
 - use the scheduler-maintained CPU use statistics and drop
   the NUMA-specific ones for now; we can add those later
   if they turn out to be beneficial
Changes since v1:
 - fix divide by zero found by Chegu Vinod
 - improve comment, as suggested by Peter Zijlstra
 - do stats calculations in task_numa_placement in local variables


Some performance numbers, with two 40-warehouse SPECjbb instances
on an 8-node system with 10 CPU cores per node, using a pre-cleanup
version of these patches, courtesy of Chegu Vinod:

numactl manual pinning
spec1.txt:           throughput =     755900.20 SPECjbb2005 bops
spec2.txt:           throughput =     754914.40 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, with patches)
spec1.txt:           throughput =     706439.84 SPECjbb2005 bops
spec2.txt:           throughput =     729347.75 SPECjbb2005 bops

NO-pinning results (Automatic NUMA balancing, without patches)
spec1.txt:           throughput =     667988.47 SPECjbb2005 bops
spec2.txt:           throughput =     638220.45 SPECjbb2005 bops

No Automatic NUMA and NO-pinning results
spec1.txt:           throughput =     544120.97 SPECjbb2005 bops
spec2.txt:           throughput =     453553.41 SPECjbb2005 bops

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>



Thread overview: 17+ messages
2014-01-21 22:20 [PATCH v4 0/9] pseudo-interleaving for automatic NUMA balancing riel
2014-01-21 22:20 ` [PATCH 1/9] numa,sched,mm: remove p->numa_migrate_deferred riel
2014-01-21 22:20 ` [PATCH 2/9] rename p->numa_faults to numa_faults_memory riel
2014-01-24 15:25   ` Mel Gorman
2014-01-21 22:20 ` [PATCH 3/9] numa,sched: track from which nodes NUMA faults are triggered riel
2014-01-24 15:31   ` Mel Gorman
2014-01-21 22:20 ` [PATCH 4/9] numa,sched: build per numa_group active node mask from numa_faults_cpu statistics riel
2014-01-24 15:38   ` Mel Gorman
2014-01-21 22:20 ` [PATCH 5/9] numa,sched,mm: use active_nodes nodemask to limit numa migrations riel
2014-01-21 22:20 ` [PATCH 6/9] numa,sched: normalize faults_cpu stats and weigh by CPU use riel
2014-01-21 22:20 ` [PATCH 7/9] numa,sched: do statistics calculation using local variables only riel
2014-01-21 22:20 ` [PATCH 8/9] numa,sched: rename variables in task_numa_fault riel
2014-01-24 15:42   ` Mel Gorman
2014-01-21 22:20 ` [PATCH 9/9] numa,sched: define some magic numbers riel
2014-01-21 23:58   ` Rik van Riel
2014-01-24 15:44     ` Mel Gorman
2014-01-27 22:03 [PATCH v5 0/9] numa,sched,mm: pseudo-interleaving for automatic NUMA balancing riel
2014-01-27 22:03 ` [PATCH 1/9] numa,sched,mm: remove p->numa_migrate_deferred riel
