linux-mm.kvack.org archive mirror
From: Christoph Lameter <clameter@sgi.com>
To: "Jörn Engel" <joern@logfs.org>
Cc: Ray Lee <ray-lk@madrabbit.org>,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Mel Gorman <mel@csn.ul.ie>,
	Andi Kleen <ak@suse.de>, Andy Whitcroft <apw@shadowen.org>
Subject: Re: x86_64: Make sparsemem/vmemmap the default memory model
Date: Tue, 13 Nov 2007 13:52:17 -0800 (PST)	[thread overview]
Message-ID: <Pine.LNX.4.64.0711131349300.3714@schroedinger.engr.sgi.com> (raw)
In-Reply-To: <20071113204100.GB20167@lazybastard.org>


On Tue, 13 Nov 2007, Jörn Engel wrote:

> On Mon, 12 November 2007 20:41:10 -0800, Christoph Lameter wrote:
> > On Mon, 12 Nov 2007, Ray Lee wrote:
> > 
> > > Discontig obviously needs to die. However, FlatMem is consistently
> > > faster, averaging about 2.1% better overall for your numbers above. Is
> > > the page allocator not, erm, a fast path, where that matters?
> > > 
> > > Order	Flat	Sparse	% diff
> > > 0	639	641	0.3
> > 
> > IMHO Order 0 currently matters most and the difference is negligible 
> > there.
> 
> Is it?  I am a bit concerned about the non-monotonic distribution.
> Difference starts at near-0, grows to 4.4, drops to near-0, grows to 4.9,
> drops to near-0.

The problem also is that the comparison here is between an SMP config for 
flatmem vs. a NUMA config for sparsemem. There is additional overhead in 
the NUMA config.

The effect may also be due to the system being able to place 
some pages in the same 2MB section as the memmap with flatmem. However, 
that is only feasible immediately after bootup; in regular operation this 
effect should vanish.
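
For reference, the pfn_to_page() arithmetic that makes vmemmap competitive 
with flatmem looks roughly like this (a simplified paraphrase of the 
include/asm-generic/memory_model.h macros, not a verbatim copy; the 
flat_/vmemmap_/sparse_ prefixes are only here for side-by-side comparison):

    /* FLATMEM: one global mem_map array, a single add. */
    #define flat_pfn_to_page(pfn)    (mem_map + ((pfn) - ARCH_PFN_OFFSET))

    /* SPARSEMEM_VMEMMAP: virtually contiguous memmap, also a single add. */
    #define vmemmap_pfn_to_page(pfn) (vmemmap + (pfn))

    /* Classic SPARSEMEM: a section lookup first, noticeably more work. */
    #define sparse_pfn_to_page(pfn) ({                                    \
            unsigned long __pfn = (pfn);                                  \
            __section_mem_map_addr(__pfn_to_section(__pfn)) + __pfn;     \
    })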

Could you run your own test to verify?

> Is there an explanation for this behaviour?  More to the point, could
> repeated runs also return 4% difference for order-0?

I hope I have given some above. The numbers for the page allocator suggest 
that we have far too much fat in the allocation paths. IMHO a reasonable 
number for an order-0 allocation would be ~100 cycles.
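
A minimal sketch of such a test, as a throwaway module (untested; the 
module name, iteration count and use of get_cycles() are my own choices 
here, not an existing benchmark):

    /* alloc_bench.c -- time order-0 page allocations in cycles */
    #include <linux/module.h>
    #include <linux/gfp.h>
    #include <linux/timex.h>

    static int __init alloc_bench_init(void)
    {
            struct page *page;
            cycles_t start, total = 0;
            int i, iterations = 100000;

            for (i = 0; i < iterations; i++) {
                    start = get_cycles();
                    page = alloc_pages(GFP_KERNEL, 0);
                    total += get_cycles() - start;
                    if (page)
                            __free_pages(page, 0);
            }
            printk(KERN_INFO "avg order-0 alloc: %llu cycles\n",
                   (unsigned long long)(total / iterations));
            return 0;
    }

    static void __exit alloc_bench_exit(void)
    {
    }

    module_init(alloc_bench_init);
    module_exit(alloc_bench_exit);
    MODULE_LICENSE("GPL");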


Thread overview: 14+ messages
2007-11-12 23:52 Christoph Lameter
2007-11-12 23:59 ` Andi Kleen
2007-11-13  0:42   ` Christoph Lameter
2007-11-13  0:49     ` Andi Kleen
2007-11-13  3:42       ` Christoph Lameter
2007-11-13  4:27         ` Ray Lee
2007-11-13  4:41           ` Christoph Lameter
2007-11-13 20:41             ` Jörn Engel
2007-11-13 21:52               ` Christoph Lameter [this message]
2007-11-13 22:30                 ` Jörn Engel
2007-11-15 22:12         ` Andrew Morton
2007-11-16  2:24           ` Christoph Lameter
2007-11-16  2:52             ` Andrew Morton
2007-11-16  3:55               ` x86_64: Make sparsemem/vmemmap the default memory model V2 Christoph Lameter
