linux-mm.kvack.org archive mirror
From: Ben LaHaise <bcrl@redhat.com>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: Hugh Dickins <hugh@veritas.com>
Subject: Re: Large PAGE_SIZE
Date: Thu, 5 Jul 2001 16:41:58 -0400 (EDT)
Message-ID: <Pine.LNX.4.33.0107051613540.1702-100000@toomuch.toronto.redhat.com>
In-Reply-To: <Pine.LNX.4.33.0107051148430.22414-100000@penguin.transmeta.com>

On Thu, 5 Jul 2001, Linus Torvalds wrote:

> > It may come down to Ben having 2**N more struct pages than I do:
> > greater flexibility, but significant waste of kernel virtual.
>
> The waste of kernel virtual memory space is actually a good point. Already
> on big x86 machines the "struct page[]" array is a big memory-user. That
> may indeed be the biggest argument for increasing PAGE_SIZE.

I think the two patches will be complementary, since they have different
effects.  Basically, we want to limit how far PAGE_SIZE increases, as
increasing it too much results in higher memory usage and more overhead
for COW.  PAGE_CACHE_SIZE probably wants to be increased further, simply
to improve I/O efficiency.

On the topic of struct page size, yes, it is too large.  There are a few
things we can do here to make things more efficient, like separating the
notion of struct page from the page cache, but we have to be careful not
to split things up too much: 64 bytes is ideal for processors like the
Athlon, whereas the P4 really wants 128 bytes to avoid false cache line
sharing on SMP.  I've got a few ideas on the page cache front to explore
in the next month or two that could save another 12 bytes per page, and
we can look into other things like reducing the overhead of the wait
queue and the other contents of struct page.

		-ben

ps, would you mind if I forward the messages in this thread to linux-mm so
that other people can see the discussion?


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/


Thread overview: 20+ messages
2001-07-05  5:06 [wip-PATCH] rfi: PAGE_CACHE_SIZE suppoort Ben LaHaise
2001-07-05  5:55 ` Linus Torvalds
2001-07-05 16:45   ` Large PAGE_SIZE Hugh Dickins
2001-07-05 17:13     ` Linus Torvalds
2001-07-05 18:38       ` Hugh Dickins
2001-07-05 18:53         ` Linus Torvalds
2001-07-05 20:41           ` Ben LaHaise [this message]
2001-07-05 20:59             ` Hugh Dickins
2001-07-06  5:11             ` Linus Torvalds
2001-07-09  3:04           ` [wip-PATCH] " Ben LaHaise
2001-07-09 11:18             ` Hugh Dickins
2001-07-09 13:13               ` Jeff Garzik
2001-07-09 14:18                 ` Hugh Dickins
2001-07-09 14:33                   ` Jeff Garzik
2001-07-09 17:21             ` Hugh Dickins
2001-07-10  5:53               ` Ben LaHaise
2001-07-10 16:42                 ` Hugh Dickins
2001-07-18  0:02     ` Hugh Dickins
2001-07-18 18:48       ` Hugh Dickins
2001-07-22 23:08         ` Hugh Dickins
