From: Andrea Arcangeli <andrea@suse.de>
To: William Lee Irwin III <wli@holomorphy.com>,
Rik van Riel <riel@redhat.com>,
"Martin J. Bligh" <mbligh@aracnet.com>,
Mel Gorman <mel@csn.ul.ie>,
Linux Memory Management List <linux-mm@kvack.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: What to expect with the 2.6 VM
Date: Thu, 3 Jul 2003 21:27:50 +0200
Message-ID: <20030703192750.GM23578@dualathlon.random>
In-Reply-To: <20030703185341.GJ20413@holomorphy.com>
On Thu, Jul 03, 2003 at 11:53:41AM -0700, William Lee Irwin III wrote:
> On Thu, 3 Jul 2003, Andrea Arcangeli wrote:
> >> even if you don't use largepages as you should, the ram cost of the pte
> >> is nothing on 64bit archs, all you care about is to use all the mhz and
> >> tlb entries of the cpu.
>
> On Thu, Jul 03, 2003 at 09:06:32AM -0400, Rik van Riel wrote:
> > That depends on the number of Oracle processes you have.
> > Say that page tables need 0.1% of the space of the virtual
> > space they map. With 1000 Oracle users you'd end up needing
> > as much memory in page tables as your shm segment is large.
> > Of course, in this situation either the application should
> > use large pages or the kernel should simply reclaim the
> > page tables (possible while holding the mmap_sem for write).
>
> No, it is not true that pagetable space can be wantonly wasted
> on 64-bit.
>
> Try mmap()'ing something sufficiently huge and accessing on average
> every PAGE_SIZE'th virtual page, in a single-threaded single process.
> e.g. various indexing schemes might do this. This is 1 pagetable page
> per page of data (worse if shared), which blows major goats.
that's the very old exploit that touches 1 page per pmd.
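For concreteness, a minimal sketch of that access pattern, assuming x86-64 with 4K pages, where one pagetable page (512 ptes) covers 2M of virtual space; the file name is hypothetical:

/* illustrative sketch only: touch one 4K page per 2M pmd region of a
 * huge file mapping; each touched 4K of data then costs a full 4K
 * pagetable page, so 1T of virtual space ~= 512K pagetable pages = 2G
 * of pagetables for only 2G of resident data. */
#include <stdlib.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 40;                  /* 1T of virtual space */
	int fd = open("bigfile", O_RDONLY);      /* hypothetical huge file */
	char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (fd < 0 || p == MAP_FAILED)
		return 1;
	for (size_t off = 0; off < len; off += 2UL << 20)  /* one per pmd */
		(void)*(volatile char *)(p + off);   /* fault in a single page */
	pause();                                 /* hold the mapping */
	return 0;
}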
>
> There's a reason why those things use inverted pagetables... at any
> rate, compacting virtualspace with remap_file_pages() solves it too.
>
> Large pages won't help, since the data isn't contiguous.
if you can't use a sane design, it's not a kernel issue. this is bad
userspace code that will seek like crazy on disk too; working around it
with a kernel feature sounds worthless. If algorithms have no locality
at all and they spread 1 page per pmd, that's their problem.
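For completeness, a rough sketch of the remap_file_pages() approach mentioned in the quote above, assuming a 2.5/2.6 kernel with nonlinear-mapping support; the file name and offsets are hypothetical:

/* illustrative sketch only: instead of mapping the whole sparse file,
 * map a small dense window and rebind its pages to scattered file
 * offsets with remap_file_pages(), so only a few pagetable pages are
 * ever allocated for the same scattered accesses. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

#define PAGE 4096UL

int main(void)
{
	int fd = open("bigfile", O_RDONLY);      /* hypothetical data file */
	size_t nwin = 1024;                      /* 4M dense window */
	char *win = mmap(NULL, nwin * PAGE, PROT_READ, MAP_SHARED, fd, 0);
	if (fd < 0 || win == MAP_FAILED)
		return 1;
	/* rebind window slot 0 to an arbitrary file page (offset in pages);
	 * prot must be 0 for remap_file_pages(), flags is 0 here */
	size_t pgoff = 123456;                   /* hypothetical file page */
	if (remap_file_pages(win, PAGE, 0, pgoff, 0))
		return 1;
	(void)*(volatile char *)win;             /* fault in that file page */
	return 0;
}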
the easiest way to waste ram with bad code is to add this in the first
lines of a program's main():

p = malloc(1UL << 30);	/* 1G */
bzero(p, 1UL << 30);

you don't need 1 page per pmd to waste ram. Should we also write a
kernel feature that checks if the page is zero and drops it, so the
above won't swap, etc.?
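For illustration, a self-contained version of that toy program which also prints its own VmRSS from /proc/self/status (Linux procfs assumed), making the ~1G of committed anonymous ram visible:

/* illustrative sketch only: commit 1G of zeroed anonymous memory,
 * then report how much of the process is resident */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

int main(void)
{
	size_t sz = 1UL << 30;                   /* 1G */
	char *p = malloc(sz);
	if (!p)
		return 1;
	bzero(p, sz);                            /* touches every page */

	FILE *f = fopen("/proc/self/status", "r");
	char line[128];
	while (f && fgets(line, sizeof(line), f))
		if (!strncmp(line, "VmRSS:", 6))
			fputs(line, stdout);     /* roughly 1G resident */
	if (f)
		fclose(f);
	return 0;
}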
If you can come up with a real-life example where 1 page per pmd
scattered over 1T of address space (we're talking about the file here of
course, the on-disk representation of the data) is the very best design
possible (without any concept of locality at all), and where having no
locality at all speeds things up by orders of magnitude, especially
given the huge seeking it will generate no matter what the pagetable
size is, then you will change my mind about it.
Andrea
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <aart@kvack.org>
Thread overview: 74+ messages
2003-07-01 1:39 Mel Gorman
2003-06-30 17:43 ` Daniel Phillips
2003-07-01 20:10 ` Martin J. Bligh
2003-07-01 21:41 ` Mel Gorman
2003-07-01 21:51 ` Davide Libenzi
2003-07-01 21:58 ` Martin J. Bligh
2003-07-02 9:01 ` Mel Gorman
2003-07-01 2:25 ` Andrea Arcangeli
2003-07-01 3:02 ` Andrew Morton
2003-07-01 3:22 ` Andrea Arcangeli
2003-07-01 3:25 ` Andrea Arcangeli
2003-07-01 3:29 ` Rik van Riel
2003-07-01 4:04 ` Andrea Arcangeli
2003-07-01 11:01 ` Hugh Dickins
2003-07-01 3:25 ` William Lee Irwin III
2003-07-01 4:39 ` Andrea Arcangeli
2003-07-01 6:33 ` William Lee Irwin III
2003-07-01 7:49 ` Andrea Arcangeli
2003-07-01 8:59 ` William Lee Irwin III
2003-07-01 9:27 ` Andrea Arcangeli
2003-07-01 14:24 ` Martin J. Bligh
2003-07-01 16:22 ` William Lee Irwin III
2003-07-01 17:54 ` Martin J. Bligh
2003-07-02 3:04 ` Andrea Arcangeli
2003-07-01 14:42 ` Martin J. Bligh
2003-07-01 21:45 ` Mel Gorman
2003-07-01 22:06 ` Martin J. Bligh
2003-07-01 21:46 ` Mel Gorman
2003-07-02 3:08 ` Andrea Arcangeli
2003-07-02 15:57 ` Mel Gorman
2003-07-02 17:11 ` Andrea Arcangeli
2003-07-02 17:10 ` Martin J. Bligh
2003-07-02 17:47 ` Andrea Arcangeli
2003-07-02 17:52 ` Martin J. Bligh
2003-07-02 18:13 ` Andrea Arcangeli
2003-07-02 18:05 ` Rik van Riel
2003-07-02 20:05 ` Martin J. Bligh
2003-07-02 21:40 ` William Lee Irwin III
2003-07-02 21:48 ` Martin J. Bligh
2003-07-02 22:14 ` William Lee Irwin III
2003-07-02 22:02 ` Andrea Arcangeli
2003-07-02 22:15 ` William Lee Irwin III
2003-07-02 22:26 ` Andrea Arcangeli
2003-07-02 23:11 ` William Lee Irwin III
2003-07-02 23:30 ` Andrea Arcangeli
2003-07-02 23:55 ` William Lee Irwin III
2003-07-03 11:31 ` Andrea Arcangeli
2003-07-03 11:46 ` William Lee Irwin III
2003-07-03 12:58 ` Andrea Arcangeli
2003-07-03 13:06 ` Rik van Riel
2003-07-03 13:48 ` Andrea Arcangeli
2003-07-03 18:53 ` William Lee Irwin III
2003-07-03 19:27 ` Andrea Arcangeli [this message]
2003-07-03 19:32 ` Rik van Riel
2003-07-03 20:16 ` William Lee Irwin III
2003-07-04 0:40 ` Andrea Arcangeli
2003-07-04 1:46 ` William Lee Irwin III
2003-07-04 2:34 ` Andrea Arcangeli
2003-07-04 4:10 ` William Lee Irwin III
2003-07-04 5:54 ` Andrea Arcangeli
2003-07-04 8:15 ` William Lee Irwin III
2003-07-04 23:44 ` Andrea Arcangeli
2003-07-05 0:05 ` William Lee Irwin III
2003-07-05 0:08 ` Andrea Arcangeli
2003-07-03 18:48 ` Jamie Lokier
2003-07-03 18:54 ` William Lee Irwin III
2003-07-03 19:33 ` Andrea Arcangeli
2003-07-03 22:21 ` William Lee Irwin III
2003-07-04 0:46 ` Andrea Arcangeli
2003-07-04 1:33 ` Jamie Lokier
2003-07-04 1:36 ` William Lee Irwin III
2003-07-03 19:06 ` Andrew Morton
2003-07-03 19:34 ` Andrea Arcangeli
2003-07-02 18:07 ` Rik van Riel