From: Paul Jackson <pj@sgi.com>
To: David Rientjes <rientjes@cs.washington.edu>
Cc: linux-mm@kvack.org, akpm@osdl.org, nickpiggin@yahoo.com.au,
ak@suse.de, mbligh@google.com, rohitseth@google.com,
menage@google.com, clameter@sgi.com
Subject: Re: [RFC] another way to speed up fake numa node page_alloc
Date: Wed, 4 Oct 2006 19:53:13 -0700
Message-ID: <20061004195313.892838e4.pj@sgi.com>
In-Reply-To: <Pine.LNX.4.64N.0610041931170.32103@attu2.cs.washington.edu>
David wrote:
> The only change that would be required is
> to abstract a macro to test against if NUMA emulation was configured
> correctly at boot-time instead of just NUMA_BUILD.
Why add any logic to avoid this zonelist caching on systems not using
numa emulation?
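(For reference, my reading of the proposed gate is something like the
following sketch -- the flag name is hypothetical and this is not
actual kernel code:

static int numa_emulation_active;	/* would be set while parsing numa=fake= */

static inline int zlc_enabled(void)
{
#ifdef CONFIG_NUMA
	return numa_emulation_active;
#else
	return 0;
#endif
}

Every allocation path would then have to test zlc_enabled() before
touching the cache.)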
Leaving this zonelist caching enabled all the time:
1) improves test coverage of it, and
2) benefits those real numa systems that might have
long zonelist scans in the future.
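To make concrete what is being cached: the idea, simplified, is to
remember which zones a scan recently found full, so later passes over
a long zonelist can skip them without redoing the watermark checks.
A rough standalone sketch of that technique (hypothetical names, not
the actual page_alloc code):

#include <stdbool.h>

#define MAX_ZONES 1024

struct zonelist_cache {
	bool full[MAX_ZONES];		/* zone recently found full? */
};

/* Stand-in for the real per-zone free-page watermark checks. */
static bool zone_has_room(int zone)
{
	(void)zone;
	return false;			/* placeholder result */
}

/* Scan the zonelist, cheaply skipping zones cached as full. */
static int find_zone(const int *zonelist, int nr,
		     struct zonelist_cache *zlc)
{
	for (int i = 0; i < nr; i++) {
		int z = zonelist[i];

		if (zlc->full[z])
			continue;	/* cached miss -- skip */
		if (zone_has_room(z))
			return z;	/* usable zone found */
		zlc->full[z] = true;	/* remember it was full */
	}
	return -1;			/* none usable */
}

The cache has to be cleared now and then, else zones that empty out
and later refill would never be retried.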
My experience with my current customer base using cpusets is almost
entirely with HPC (High Performance Computing) apps, which usually
manage their memory layout very closely. These workloads tend to
have very short zonelist scans, and so would benefit little from
this speedup.
As cpusets gets wider use on more varied workloads, I would expect
some of those workloads to stress the zonelist scanning more.
And there's still a pretty good chance, though I can't document it,
that we've already seen performance problems, even on existing HPC
workloads, with this zonelist scan.
So ... I ask again ... why avoid this speedup on systems not
emulating nodes?
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 28+ messages
2006-09-25 9:14 Paul Jackson
2006-09-26 6:08 ` David Rientjes
2006-09-26 7:06 ` Paul Jackson
2006-09-26 18:17 ` David Rientjes
2006-09-26 19:24 ` Paul Jackson
2006-09-26 19:58 ` David Rientjes
2006-09-26 21:48 ` Paul Jackson
2006-10-02 6:18 ` Paul Jackson
2006-10-02 6:31 ` David Rientjes
2006-10-02 6:48 ` Paul Jackson
2006-10-02 7:05 ` David Rientjes
2006-10-02 8:41 ` Paul Jackson
2006-10-03 18:15 ` Paul Jackson
2006-10-03 19:37 ` David Rientjes
2006-10-04 15:45 ` Paul Jackson
2006-10-04 16:11 ` Christoph Lameter
2006-10-04 22:10 ` David Rientjes
2006-10-05 2:27 ` Paul Jackson
2006-10-05 2:37 ` David Rientjes
2006-10-05 2:53 ` Paul Jackson [this message]
2006-10-05 3:00 ` David Rientjes
2006-10-05 3:26 ` Paul Jackson
2006-10-05 3:49 ` David Rientjes
2006-10-05 4:07 ` Andrew Morton
2006-10-05 4:14 ` Paul Jackson
2006-10-05 4:50 ` David Rientjes
2006-10-05 4:53 ` Paul Jackson
2006-10-11 3:42 ` Paul Jackson