From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>,
Pekka Enberg <penberg@cs.helsinki.fi>,
"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
Lin Ming <ming.m.lin@intel.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
Date: Wed, 4 Feb 2009 15:07:32 +1100
Message-ID: <200902041507.33464.nickpiggin@yahoo.com.au>
In-Reply-To: <alpine.DEB.1.10.0902031217390.17910@qirst.com>
On Wednesday 04 February 2009 04:33:14 Christoph Lameter wrote:
> On Tue, 3 Feb 2009, Nick Piggin wrote:
> > Quite obviously it should. Behaviour of a slab allocation on behalf of
> > some task constrained within a given node should not depend on the task
> > which has previously run on this CPU and made some allocations. Surely
> > you can see this behaviour is not nice.
>
> If you want cache-hot objects then it's better to use what a prior task
> has used. This opportunistic use is only done if the task is not asking
> for memory from a specific node. There is another tradeoff here.
>
> SLAB's method is to ignore all caching advantages even if the task
> did not ask for memory from a specific node. So it gets cache-cold
> objects, and if the node to allocate from is remote then it always
> must use the slow path.
Yeah, but I don't think you actually demonstrated any real advantages
to it, and there are obvious failure modes where constraints aren't
obeyed, so I'm going to leave it as-is in SLQB.
Objects where cache hotness tends to be most important are the shorter-lived
ones, and objects where constraints matter are the longer-lived ones, so I
think this is pretty reasonable.
Also, you've just been spending lots of time arguing that cache hotness
is not so important (because SLUB doesn't do LIFO like SLAB and SLQB).
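To make the policy I'm defending concrete, here is a rough sketch. None of
these names are real SLQB internals -- queue_head() is a made-up helper --
though page_to_nid()/virt_to_page() are the usual kernel primitives:

	/*
	 * Illustrative only: hand out a queued object on the fast path
	 * only when no node was requested, or the queued object really
	 * does live on the requested node. Otherwise return NULL so the
	 * caller falls back to the slow path and the constraint is
	 * always obeyed.
	 */
	static void *alloc_from_queue(struct kmem_cache *s, int node)
	{
		void *object = queue_head(s);	/* hypothetical helper */

		if (!object)
			return NULL;	/* queue empty: go to page allocator */

		if (node != NUMA_NO_NODE &&
		    page_to_nid(virt_to_page(object)) != node)
			return NULL;	/* constrained: take the slow path */

		return object;		/* unconstrained: cache-hot object */
	}

The opportunistic scheme you describe skips that node check when it thinks
the caller doesn't care, which is exactly where the failure modes come from.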
> > > Which have similar issues since memory policy application is depending
> > > on a task policy and on memory migration that has been applied to an
> > > address range.
> >
> > What similar issues? If a task asks to have slab allocations constrained
> > to node 0 and SLUB hands out objects from other nodes, then that's bad.
>
> Of course. A task can ask to have allocations from node 0 and it will get
> the object from node 0. But if the task does not care to ask for memory
> from a specific node then it can be satisfied from the cpu slab, which
> contains cache-hot objects.
But if it is using constrained allocations, then it is also asking for
allocations from node 0.
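For clarity, "asking" here means the caller (or the mempolicy applied to it)
names the node explicitly via the standard API, e.g.:

	/* Constrained: the caller demands an object backed by node 0. */
	obj = kmem_cache_alloc_node(cachep, GFP_KERNEL, 0);

	/* Unconstrained: any node is acceptable. */
	obj = kmem_cache_alloc(cachep, GFP_KERNEL);

The disagreement is only about the first form: once a node has been named,
the allocator has no business returning memory from anywhere else.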
> > > > But that is wrong. The lists obviously have high watermarks that
> > > > get trimmed down. Periodic trimming, as I keep saying, is already
> > > > so infrequent that it is irrelevant (millions of objects per cpu
> > > > can be allocated within the existing trimming interval anyway).
> > >
> > > Trimming via watermarks and allocating memory from the page
> > > allocator is going to be very frequent if you continually allocate on
> > > one processor and free on another.
> >
> > Um yes, that's the point. But you previously claimed that it would just
> > grow unconstrained, which is obviously wrong. So I don't understand what
> > your point is.
>
> It will grow unconstrained if you elect to defer queue processing. That
> was what we discussed.
And I just keep pointing out that you are wrong (this must be the 4th time).
We were talking about deferring the periodic queue reaping. SLQB will still
constrain the queue sizes to the high watermarks.
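To spell it out one more time: the check happens at free time, not only when
the periodic reap fires, so the queue length is bounded no matter how the
timer is configured. Roughly (all names and the per-list watermark fields
are hypothetical, not SLQB's actual structures):

	/*
	 * Illustrative only: freeing trims immediately once the queue
	 * goes past the high watermark, independent of the periodic reap.
	 */
	static void free_to_queue(struct kmem_cache_list *l, void *object)
	{
		enqueue(l, object);	/* hypothetical helper */
		l->nr_free++;

		if (l->nr_free > l->high_watermark)
			/* Return the excess to the page allocator,
			 * down to the low watermark. */
			flush_free_list(l, l->nr_free - l->low_watermark);
	}

Deferring the reap timer only affects how long an idle queue holds on to
its memory; it does not remove the bound.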