From: Christoph Lameter <cl@linux-foundation.org>
To: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: linux-mm <linux-mm@kvack.org>, Nick Piggin <npiggin@suse.de>,
Pekka Enberg <penberg@cs.helsinki.fi>,
Andrew Morton <akpm@linux-foundation.org>,
Eric Whitney <eric.whitney@hp.com>
Subject: Re: [PATCH/RFC] slab: handle memoryless nodes efficiently
Date: Thu, 29 Oct 2009 19:33:20 -0400 (EDT)
Message-ID: <alpine.DEB.1.10.0910291924100.32581@V090114053VZO-1>
In-Reply-To: <1256843939.16599.71.camel@useless.americas.hpqcorp.net>
On Thu, 29 Oct 2009, Lee Schermerhorn wrote:
> > We can then use that in various subsystems and could use it consistently
> > also in slab.c
>
> Where should we put it? In page_alloc.c, which manages the zonelists?
That's the obvious place.
> > One problem with such a scheme (and also this patch) is that multiple
> > memory nodes may be at the same distance to a processor on a memoryless
> > node. Should the allocation not take memory from any of these nodes?
>
> Well, this is the case for normal page allocations as well, but we
> choose one, in build_zonelists(), that we'll use whenever a page
> allocation overflows the target node selected by the mempolicy. So,
> that seemed a reasonable node to use for slab allocations.
>
> Thoughts?
Not exactly. For a memoryless node the per-cpu array is always empty,
and there are no pages that can be allocated from the node itself, so we
always call fallback_alloc. fallback_alloc is expensive; the speedup you
see comes from no longer calling fallback_alloc.
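For reference, the fast path looks roughly like this (a simplified sketch
of slab.c's ____cache_alloc, not the exact code in the tree):

static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
{
	struct array_cache *ac = cpu_cache_get(cachep);

	if (likely(ac->avail)) {	/* never true on a memoryless node */
		ac->touched = 1;
		return ac->entry[--ac->avail];
	}

	/*
	 * The refill tries to grow the cache from the local node
	 * (numa_node_id()). A memoryless node has no pages to grow
	 * from, so this fails and the caller ends up in
	 * fallback_alloc() every time.
	 */
	return cache_alloc_refill(cachep, flags);
}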
Look at slab.c:fallback_alloc. First we determine the node indicated by
the memory policy; that node is memoryless in our case. Then we walk down
that node's zonelist (obeying cpuset restrictions), checking whether we
have slab objects on any of the nodes of the zones listed.
If that fails, then we call the page allocator and specify the
("memoryless") node. The page allocator will fall back according to
policy, and we get a page from whatever node the page allocator
determines.
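Condensed, the whole function is something like this (details elided; see
slab.c for the real thing):

static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
{
	/* 1) zonelist of the policy node -- memoryless in our case */
	struct zonelist *zonelist =
		node_zonelist(slab_node(current->mempolicy), flags);
	struct zoneref *z;
	struct zone *zone;

	/* 2) walk the zonelist, obeying cpusets, looking for existing
	 *    slab objects on the nodes of the zones listed */
	for_each_zone_zonelist(zone, z, zonelist, gfp_zone(flags)) {
		int nid = zone_to_nid(zone);

		if (cpuset_zone_allowed_hardwall(zone, flags) &&
		    cache->nodelists[nid] &&
		    cache->nodelists[nid]->free_objects) {
			void *obj = ____cache_alloc_node(cache,
					flags | GFP_THISNODE, nid);
			if (obj)
				return obj;
		}
	}

	/*
	 * 3) No objects anywhere: allocate a page without GFP_THISNODE
	 *    so the page allocator falls back by policy, then grow the
	 *    cache on whatever node the page landed on (elided).
	 */
	return NULL;
}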
So the concept of a numa_mem_node_id() does not currently exist. If you
add it, then memory policies and/or cpusets will only be partially obeyed.
With fallback_alloc the page allocator may fall back to other nodes if
nearer ones are overallocated. With the regular alloc function of slab.c,
which sets GFP_THISNODE, that is not allowed to occur. So introducing a
numa_mem_node_id() will cause an imbalance: fallback to other nodes will
no longer occur.
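To make that concrete: I assume the helper you have in mind would look
something like the sketch below -- a "nearest node with memory" table
computed once from the zonelists in page_alloc.c. The names here are made
up for illustration; none of this exists in any tree:

/* hypothetical sketch only */
int numa_mem_fallback[MAX_NUMNODES];	/* nearest node with memory */

static void __init set_numa_mem_fallback(int node)
{
	struct zonelist *zonelist = &NODE_DATA(node)->node_zonelists[0];
	struct zoneref *z;
	struct zone *zone;

	/* pick the first node in the fallback list that has pages */
	for_each_zone_zonelist(zone, z, zonelist, MAX_NR_ZONES - 1) {
		if (zone->present_pages) {
			numa_mem_fallback[node] = zone_to_nid(zone);
			return;
		}
	}
	numa_mem_fallback[node] = node;	/* node has memory itself */
}

static inline int numa_mem_node_id(void)
{
	return numa_mem_fallback[numa_node_id()];
}

The problem is visible right there: the answer is a single node computed
once, so slab allocations from a memoryless node would pin to that one
node via GFP_THISNODE instead of spreading out through fallback_alloc.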