From: Paul Jackson <pj@sgi.com>
To: David Rientjes <rientjes@cs.washington.edu>
Cc: akpm@osdl.org, linux-mm@kvack.org, nickpiggin@yahoo.com.au,
	ak@suse.de, mbligh@google.com, rohitseth@google.com,
	menage@google.com, clameter@sgi.com
Subject: Re: [RFC] memory page_alloc zonelist caching speedup
Date: Tue, 10 Oct 2006 00:03:31 -0700
Message-ID: <20061010000331.bcc10007.pj@sgi.com>
In-Reply-To: <Pine.LNX.4.64N.0610092331120.17087@attu3.cs.washington.edu>

> When a free occurs for a given zone, increment its counter.  If that 
> reaches some threshold, zap that node in the nodemask so it's checked on 
> the next alloc.  All the infrastructure is already there for this support 
> in your patch.

It's not an issue of infrastructure.  As you say, that's likely already
there.
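
(For concreteness, the counter-and-zap scheme quoted above would be
something like the sketch below, dropped into the free path of
mm/page_alloc.c.  The names free_count, ZLC_FREE_THRESHOLD and
zlc_clear_zone() are made up here for illustration; none of them is
in my patch.)

	/*
	 * Sketch only: count frees per zone, and once enough pages
	 * have come back, ask whatever is caching "this zone is full"
	 * to re-examine the zone on the next allocation.
	 */
	static void note_zone_free(struct zone *zone)
	{
		if (atomic_inc_return(&zone->free_count) < ZLC_FREE_THRESHOLD)
			return;

		atomic_set(&zone->free_count, 0);
		zlc_clear_zone(zone);		/* the "zap" step */
	}

The interesting part is what zlc_clear_zone() would have to do; that
is where the real cost hides.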

It's the inherent problem of scaling an N-by-N information flow:
tasks running on each of N nodes want to know the latest free
counters on each of the N nodes.  That cannot be done with a cache
footprint that is small and constant (or linear, but small enough to
be nearly constant) for both the freers and the allocators, while
also avoiding hot cache lines.

In your phrasing, this shows up in the "zap that node in the nodemask"
step.

We don't have -a- nodemask.

My latest patch has a bitmask (typically longer than a nodemask)
in each zonelist.  No way do we want to walk down each zonelist,
one per node per ZONE type, examining each zone to see whether it
is on our node of interest, just so we can clear the corresponding
bit in that zonelist's bitmask.  Not on every page free.  Way too
expensive.
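
Spelled out, clearing node 'nid' in every zonelist's bitmask on the
free side would have to look roughly like this, against roughly the
current zonelist layout.  The field name fullzones and the helper
name are illustrative only, not necessarily what the patch ends up
calling things; the shape of the walk is the point.

	/*
	 * Rough shape of the walk: every node has one zonelist per
	 * ZONE type, and each zonelist's bitmask is indexed by
	 * position within that zonelist, not by node number, so the
	 * node has to be recovered zone by zone.
	 */
	static void zlc_clear_zones_on_node(int nid)
	{
		int node, ztype, i;

		for_each_online_node(node)
			for (ztype = 0; ztype < MAX_NR_ZONES; ztype++) {
				struct zonelist *zl =
					&NODE_DATA(node)->node_zonelists[ztype];

				for (i = 0; zl->zones[i] != NULL; i++)
					if (zl->zones[i]->zone_pgdat->node_id == nid)
						clear_bit(i, zl->fullzones);
			}
	}

That's a three-deep walk touching every zonelist in the system, and
the quoted scheme would run it, or something very like it, every
time some zone's free counter crossed its threshold.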

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.925.600.0401
