linux-mm.kvack.org archive mirror
From: Andy Whitcroft <apw@shadowen.org>
To: Christoph Lameter <clameter@sgi.com>
Cc: linux-mm@kvack.org, ak@suse.de
Subject: Re: virtual mmap basics
Date: Tue, 26 Sep 2006 13:06:28 +0100	[thread overview]
Message-ID: <451917C4.1050603@shadowen.org> (raw)
In-Reply-To: <Pine.LNX.4.64.0609251631060.25028@schroedinger.engr.sgi.com>

Christoph Lameter wrote:
> On Mon, 25 Sep 2006, Andy Whitcroft wrote:
> 
>>> Using a virtual memmap there would allow relocation of the memmap array 
>>> into high memory and would double the available low memory. So may be 
>>> worth even on this 32 bit platform to sacrifice 1/8th of the virtual 
>>> address space for memmap.
>> How does moving to a virtual memmap help here.  The virtual mem_map also
>> has to be allocated in KVA, any KVA used for it is not available to and
>> thereby shrinks the size of zone NORMAL?  The size of NORMAL in x86 is
>> defined by the addressable space in kernel mode (by KVA size), 1GB less
>> other things we have mapped.  Virtual map would be one of those.
> 
> Hmmm... Strange architecture and I may be a bit ignorant on this one. You 
> could reserve the 1G for kernel 1-1 mapped. 2nd G for VMALLOC / virtual 
> memmap and the remaining 2G for user space? Probably won't work since you 
> would have to decrease user space from 3G to 2G?

Not strange, just limited.  Any 32-bit architecture has this limitation:
it can only map a limited amount of memory into the kernel at the same
time.

Yes, some users already change their U/K split on 32-bit to do this, but
as you say it's not a general solution here.

> Having the virtual memmap in high memory also allows you to place the 
> sections of the memmap that map files of that node into the memory of the 
> node itself. This alone would get you a nice performance boost.

We already do this.  We don't allocate mem_map out of zone normal, we
pull it from the end of each node.  This is then mapped after the end of
zone normal and before vmap space.  mem_map is physically node local,
but mapped into KVA.

>>> So far I am not seeing any convincing case for the current sparsemem table 
>>> lookups. But there must have been some reason that such an implementation 
>>> was chosen. What was it?
>> As I said the problem is not memory but KVA space.  Zone normal is all
>> the pages we can map into the kernel address space, it's 1GB less the
>> kernel itself, less vmap space.  In the current NUMA scheme it's then
>> less the mem_map allocated out of HIGHMEM but mapped into KVA.  In
>> vmem_map it's allocated out of HIGHMEM but mapped into KVA.  The loss is
>> the same.
> 
> Yup the only way around would be to decrease user space sizes.
> 
> But then we are talking about a rare breed of NUMA machine, right?

We are talking about any 32-bit NUMA machine.  Getting rarer, yes.

-apw

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


Thread overview: 15+ messages
2006-09-24 16:59 Christoph Lameter
2006-09-25 12:28 ` Andy Whitcroft
2006-09-25 16:27   ` Christoph Lameter
2006-09-25 17:11     ` Christoph Lameter
2006-09-25 21:05       ` Christoph Lameter
2006-09-25 22:22         ` Andy Whitcroft
2006-09-25 23:37           ` Christoph Lameter
2006-09-26 12:06             ` Andy Whitcroft [this message]
2006-09-25 18:09     ` Andy Whitcroft
2006-09-25 21:00       ` Christoph Lameter
2006-09-25 23:54         ` virtual memmap sparsity: Dealing with fragmented MAX_ORDER blocks Christoph Lameter
2006-09-26  0:31           ` Christoph Lameter
2006-09-26 12:11             ` Andy Whitcroft
2006-09-26 15:23               ` Christoph Lameter
2006-09-26  8:16           ` Mel Gorman
