From: David Rientjes <rientjes@google.com>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Mel Gorman <mel@csn.ul.ie>, Nick Piggin <npiggin@suse.de>,
Pekka Enberg <penberg@cs.helsinki.fi>,
heiko.carstens@de.ibm.com, sachinp@in.ibm.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Tejun Heo <tj@kernel.org>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Subject: Re: [RFC PATCH 0/3] Fix SLQB on memoryless configurations V2
Date: Tue, 22 Sep 2009 00:59:18 -0700 (PDT)
Message-ID: <alpine.DEB.1.00.0909220023070.9061@chino.kir.corp.google.com>
In-Reply-To: <alpine.DEB.1.10.0909220227050.3719@V090114053VZO-1>
On Tue, 22 Sep 2009, Christoph Lameter wrote:
> How would you deal with a memoryless node that has, let's say, 4 processors
> and some I/O devices? Now the memory policy is round robin and there are 4
> nodes at the same distance with 4G memory each. Does one of the nodes now
> become privileged under your plan? How do you equally use memory from all
> these nodes?
>
If the distance between the memoryless node with the cpus/devices and all
4G nodes is the same, then this is UMA and no abstraction is necessary:
there's no reason to support interleaving of memory allocations amongst
four different regions of memory if there's no difference in latencies to
those regions.
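
For illustration only (the distances are made up), the configuration in
question would look something like:

                   mem node 1   mem node 2   mem node 3   mem node 4
    cpu/IO node:       20           20           20           20

In terms of latency, no placement choice does better than any other,
which is what makes this effectively the UMA case from the allocator's
point of view.
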
A system can, however, be configured in such a way that representing
all devices, including memory, at a single level of abstraction isn't
possible. An example is a four-cpu system where cpus 0-1 have local
distance to all memory and cpus 2-3 have remote distance.
A solution would be to abstract everything into "system localities" like
the ACPI specification does. These localities in my plan are slightly
different, though: they are limited to only a single class of device.
A locality is simply an aggregate of a particular type of device; a
device is bound to a locality if it has the same proximity to every
other locality as all of the other devices in that locality do. In
other words, the previous example would have two cpu localities: one
with cpus 0-1 and one with cpus 2-3. If cpu 0 had a different
proximity than cpu 1 to a pci bus, however, there would be three cpu
localities.
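
As a very rough sketch of what that could look like (the names and
fields here are hypothetical, not an existing kernel interface), a
locality might be represented as a single-class aggregate:

    /* Hypothetical sketch: a locality aggregates devices of one class only. */
    enum locality_class {
            LOC_CPU,
            LOC_MEMORY,
            LOC_IO,
    };

    struct locality {
            int                 id;     /* index into the distance table */
            enum locality_class class;  /* single device class per locality */
            unsigned long       cpus;   /* member cpu bitmask, LOC_CPU only;
                                           a real version would use a cpumask */
    };

The four-cpu example above would then be two LOC_CPU localities (cpus
0-1 and cpus 2-3) plus one LOC_MEMORY locality; a differing pci
proximity for cpu 0 would simply split the first of those in two.
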
The equivalent of proximity domains then describes the distance between
all localities; these distances need not be symmetric: the distance in
one direction may differ from the distance in the opposite direction,
just as ACPI pxm's allow.
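
Concretely, the distance information could just be a small matrix
indexed by locality id, with nothing requiring it to be symmetric; the
values below are made up purely for illustration:

    #define NR_LOCALITIES   3   /* cpus 0-1, cpus 2-3, memory (example only) */

    /* locality_distance[from][to], SLIT-style: 10 == local, larger == further. */
    static const int locality_distance[NR_LOCALITIES][NR_LOCALITIES] = {
            /* to:  cpu0-1  cpu2-3  memory */
            {         10,     20,     10 },     /* from cpu locality 0-1 */
            {         20,     10,     20 },     /* from cpu locality 2-3 */
            {         10,     40,     10 },     /* from memory locality  */
    };

Note that the memory -> cpus 2-3 entry differs from the cpus 2-3 ->
memory entry, which is exactly the kind of asymmetry the format would
permit.
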
A "node" in this plan is simply a system locality consisting of memory.
For subsystems such as the slab allocators, all we require are
cpu_to_node() tables that map cpu localities to nodes and describe them
in terms of local or remote distance (or whatever the SLIT says, if
provided). All present-day information can still be represented in
this model; we have just added additional layers of abstraction
internally.
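
To make that mapping concrete, a sketch building on the hypothetical
structures above (again, not actual kernel code) could derive the table
by picking the nearest memory locality for each cpu locality:

    #include <limits.h>

    /* Populated from firmware tables (SRAT/SLIT, device tree, ...) elsewhere. */
    static struct locality localities[NR_LOCALITIES];
    static int locality_to_node[NR_LOCALITIES];

    static void build_locality_to_node(void)
    {
            int c, m;

            for (c = 0; c < NR_LOCALITIES; c++) {
                    int best = -1, best_dist = INT_MAX;

                    if (localities[c].class != LOC_CPU)
                            continue;

                    /* The "node" is the nearest locality containing memory. */
                    for (m = 0; m < NR_LOCALITIES; m++) {
                            if (localities[m].class != LOC_MEMORY)
                                    continue;
                            if (locality_distance[c][m] < best_dist) {
                                    best_dist = locality_distance[c][m];
                                    best = m;
                            }
                    }
                    locality_to_node[c] = best;
            }
    }

cpu_to_node() for a given cpu would then just look up that cpu's
locality in locality_to_node[], and the node distances the slab
allocators care about fall straight out of locality_distance[].
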