From: Andrea Arcangeli <andrea@suse.de>
To: Mark_H_Johnson.RTS@raytheon.com
Cc: linux-mm@kvack.org, riel@nl.linux.org,
	Linus Torvalds <torvalds@transmeta.com>
Subject: Re: 2.3.x mem balancing
Date: Wed, 26 Apr 2000 19:06:57 +0200 (CEST)
Message-ID: <Pine.LNX.4.21.0004261823290.1687-100000@alpha.random>
In-Reply-To: <852568CD.0057D4FC.00@raylex-gh01.eo.ray.com>

On Wed, 26 Apr 2000 Mark_H_Johnson.RTS@raytheon.com wrote:

>In the context of "memory balancing" - all processors and all memory is NOT
>equal in a NUMA system. To get the best performance from the hardware, you
>prefer to put "all" of the memory for each process into a single memory unit -
>then run that process from a processor "near" that memory unit. This seemingly

The classzone approach (aka the overlapped zones approach) is irrelevant
to the NUMA issues as far as I can tell.

NUMA is an issue that belongs outside the pg_data_t. It doesn't
matter how we restructure the internals of the zone_t.

I only changed the internal structure of one node, not how allocations
are policed and balanced between different nodes (those decisions have
to live in the linux/arch/ tree and not in __alloc_pages).

On NUMA hardware you have only one zone per node, since nobody uses
ISA-DMA on such machines and you either have PCI64 or you can use the
PCI-DMA scatter-gather for PCI32. So on NUMA hardware you are going to
have only one zone per node (at least this was the setup of the NUMA
machine I was playing with), and you don't care at all about the
classzone/zone distinction. Classzone and zone are the same thing in
such a setup: they are both the plain ZONE_DMA zone_t. That said, you
don't care anymore about the changes in how the overlapped zones are
handled, since you don't have overlapped zones in the first place.

Now on NUMA, when you want to allocate memory, you have to use
alloc_pages_node so that you can also tell which node to allocate from.

Here Linus was proposing to implement alloc_pages_node this way:

	alloc_pages_node(nid, gfpmask, order)
	{
		zonelist_t * zonelist = nid2zonelist(nid, gfpmask);

		return __alloc_pages(zonelist, order);
	}

and then have the automatic fallback between nodes and the NUMA memory
balancing handled by __alloc_pages and by the current 2.3.99-pre6-5
zonelist fallback trick.
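
For reference, that fallback is roughly the following (a sketch in the
spirit of the 2.3.99-pre6 code, not the literal source; the field and
helper names free_pages, pages_low and rmqueue are quoted from memory):
__alloc_pages walks the NULL-delimited zones array and takes the first
zone that is still above its watermark.

	/* Sketch of the zonelist fallback: walk the NULL-delimited zones
	 * array and take the first zone that still has room. */
	struct page * __alloc_pages(zonelist_t *zonelist, unsigned long order)
	{
		zone_t **zone = zonelist->zones;
		zone_t *z;

		while ((z = *zone++) != NULL) {
			if (z->free_pages > z->pages_low)
				return rmqueue(z, order);
		}
		/* every zone is below its watermark: the real code tries
		 * harder here (wake kswapd, shrink caches, ...). */
		return NULL;
	}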

I'd like to explain why I think that's not the right approach for
handling NUMA allocations and balancing decisions.

First, it's clear that the NUMA approach described above abuses
zonelist_t, as you can see from the size of the zonelist_t structure:

	typedef struct zonelist_struct {
		zone_t * zones [MAX_NR_ZONES+1]; // NULL delimited
		int gfp_mask;
	} zonelist_t;

If zonelist had been designed for NUMA it would look something like:

	typedef struct zonelist_struct {
		zone_t * zones [MAX_NR_ZONES*MAX_NR_NODES+1]; // NULL delimited
		int gfp_mask;
	} zonelist_t;

However, we could fix that easily by enlarging the zones array in the
zonelist.

Second, the zonelist-NUMA solution isn't flexible enough: if there's a
lot of cache allocated in one node, we may prefer to move or shrink the
cache rather than allocate mapped areas of the same task in different
nodes (as __alloc_pages would do).

With zonelist_t abused to do NUMA we _don't_ have that flexibility.

If instead you move the NUMA balancing and node selection into the
higher layer, as I was proposing, you can do clever things.

And as soon as you move the decisions to the higher layer you don't
care anymore about the node internals. Or rather, you only care about
being able to query the current state of a node, and of course you can
do that. Then, once you know the state of the node you are interested
in, you can still make the decision about what to do at the higher
layer.

At the higher layer you can see that the node is 90% filled with cache,
and then you can say: ok, allocate from this node anyway and let it
shrink some cache if necessary.

Then the lower layer (__alloc_pages) will automatically do the
balancing and will try to allocate memory from that node. You can also
grab the per-node spinlock in the higher layer before checking the
state of the node, so that you'll know the state of the node won't
change under you while doing the allocation.
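
Purely as an illustration of that split (all the helpers used here,
numa_local_node, NODE_DATA, node_free_pages, node_cache_pages,
node_total_pages and numa_nearest_node, are placeholders and not
necessarily existing 2.3.x symbols; the per-node locking mentioned
above is left out of the sketch), the higher layer in linux/arch/
could look something like this, leaving only the mechanics to
__alloc_pages:

	/* Hypothetical higher-layer NUMA policy: every node_*() and
	 * numa_*() helper below is a made-up name, for illustration only. */
	struct page * numa_alloc_pages(int gfp_mask, unsigned long order)
	{
		int nid = numa_local_node();	/* prefer the local node */
		pg_data_t *pgdat = NODE_DATA(nid);

		/*
		 * Policy decision made up here: if the local node is short
		 * on free pages but mostly filled with cache, allocate from
		 * it anyway and let the allocation shrink the cache, instead
		 * of spilling this task's memory onto a remote node.
		 */
		if (node_free_pages(pgdat) < (1UL << order) &&
		    node_cache_pages(pgdat) * 10 < node_total_pages(pgdat) * 9)
			nid = numa_nearest_node(nid);	/* node really is full */

		return alloc_pages_node(nid, gfp_mask, order);
	}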

>These are issues that need to be addressed if you expect to use this

I have always tried to keep these issues in mind (even before writing
the classzone approach).

Andrea
