linux-mm.kvack.org archive mirror
From: Christoph Lameter <clameter@engr.sgi.com>
To: Con Kolivas <kernel@kolivas.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, alokk@calsoftinc.com
Subject: Re: [RFC, PATCH] Slab counter troubles with swap prefetch?
Date: Thu, 10 Nov 2005 15:13:29 -0800 (PST)	[thread overview]
Message-ID: <Pine.LNX.4.62.0511101510240.16588@schroedinger.engr.sgi.com> (raw)
In-Reply-To: <200511111007.12872.kernel@kolivas.org>

On Fri, 11 Nov 2005, Con Kolivas wrote:

> > This patch splits the counter into the nr_local_slab which reflects
> > slab pages allocated from the local zones (and this number is useful
> > at least as a guidance for the VM) and the remotely allocated pages.
> 
> How large a contribution is the remote slab size likely to be? Would this 
> information be useful to anyone potentially in future code besides swap 
> prefetch? The nature of prefetch is that this is only a fairly coarse measure 
> of how full the vm is with data we don't want to displace. Thus it is also 
> not important that it is very accurate. 

The size of the remote slab cache depends on many factors. The application 
can influence it by setting memory policies. 

> Unless the remote slab size can be a very large contribution, or having local 

Yes, it can be quite large. In some of my tests with applications the 
remote share is 100%. This is typical if the application sets its memory 
policy so that all allocations are off node, or if the kernel has to 
allocate memory on a certain node for a device.

> and remote slab sizes is useful potentially to some other code I'm inclined 
> to say this is unnecessary. A simple comment saying something like "the 
> nr_slab estimation is artificially elevated by remote slab pages on numa, 
> however this contribution is not important to the accuracy of this 
> algorithm". Of course it is nice to be more accurate and if you think 
> worthwhile then we can do this - I'll be happy to be guided by your 
> judgement.

> As a side note I doubt any serious size numa hardware will ever be idle enough 
> by swap prefetch standards to even start prefetching swap pages. If you think 
> hardware of this sort is likely to benefit from swap prefetch then perhaps we 
> should look at relaxing the conditions under which prefetching occurs.

Small scale NUMA machines may benefit from swap prefetch but on larger 
machines people usually try to avoid swap altogether.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


Thread overview: 6+ messages
2005-11-10 21:55 Christoph Lameter
2005-11-10 23:07 ` Con Kolivas
2005-11-10 23:13   ` Christoph Lameter [this message]
2005-11-10 23:17     ` Con Kolivas
2005-11-11  3:50     ` Con Kolivas
2005-11-11 17:43       ` Christoph Lameter
