linux-mm.kvack.org archive mirror
From: Rik van Riel <riel@conectiva.com.br>
To: Matthew Dillon <dillon@apollo.backplane.com>
Cc: linux-mm@kvack.org
Subject: Re: [RFC] 2.3/4 VM queues idea
Date: Wed, 24 May 2000 19:44:06 -0300 (BRST)	[thread overview]
Message-ID: <Pine.LNX.4.21.0005241937350.24993-100000@duckman.distro.conectiva> (raw)
In-Reply-To: <200005242057.NAA77059@apollo.backplane.com>

On Wed, 24 May 2000, Matthew Dillon wrote:

> :>        Two things can be done:  First, you collect a bunch of pages to be
> :>        laundered before issuing the I/O, allowing you to sort the I/O
> :>        (this is what you suggest in your design ideas email).  (p.p.s.
> :>        don't launder more than 64 or so pages at a time; doing so will just
> :>        stall other processes trying to do normal I/O).
> :> 
> :>        Second, you can locate other pages nearby the ones you've decided to
> :>        launder and launder them as well, getting the most out of the disk
> :>        seeking you have to do anyway.
> :
> :Virtual page scanning should provide us with some of these
> :benefits. Also, we'll allocate the swap entry at unmapping
> :time and can make sure to unmap virtually close pages at
> :the same time so they'll end up close to each other in the
> :inactive queue.
> :
> :This isn't going to be as good as it could be, but it's
> :probably as good as it can get without getting more invasive
> :with our changes to the source tree...
> 
>     Virtual page scanning will help with clustering, but unless you
>     already have a good page candidate to base your virtual scan on
>     you will not be able to *find* a good page candidate to base the
>     clustering around.  Or at least not find one easily.  Virtual
>     page scanning has severe scalability problems over physical page
>     scanning.  For example, what happens when you have an Oracle database
>     running with a hundred independent (non-threaded) processes mapping
>     300MB+ of shared memory?

Ohhh, definitely. It's just that coding up the administrative changes
required to support this would be too big a change for Linux 2.4...
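The batching-and-sorting idea quoted above can be sketched in a few lines. This is a minimal userspace model, not actual kernel code; `launder_batch`, `struct launder_page`, and the `LAUNDER_BATCH` constant are all hypothetical names, with the 64-page cap taken from Matthew's suggestion:

```c
#include <assert.h>
#include <stdlib.h>

#define LAUNDER_BATCH 64   /* hypothetical cap, mirroring the "64 or so" limit above */

/* Minimal model of a page queued for laundering: only the
 * on-disk block number matters for sorting the writeback I/O. */
struct launder_page {
    unsigned long block;   /* target disk block for the write */
};

static int block_cmp(const void *a, const void *b)
{
    const struct launder_page *pa = a, *pb = b;
    if (pa->block < pb->block) return -1;
    if (pa->block > pb->block) return 1;
    return 0;
}

/* Collect at most LAUNDER_BATCH pages, then sort them by disk
 * block so the writes go out in one ascending elevator sweep
 * instead of scattered seeks. Returns the number queued. */
size_t launder_batch(struct launder_page *pages, size_t npages)
{
    size_t n = npages < LAUNDER_BATCH ? npages : LAUNDER_BATCH;
    qsort(pages, n, sizeof(pages[0]), block_cmp);
    return n;
}
```

The cap matters as much as the sort: an unbounded batch would monopolize the request queue and stall processes doing normal I/O, which is exactly the stall warned about in the quote.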

>     So it can be a toss-up.  I don't think *anyone* (linux, freebsd, solaris,
>     or anyone else) has yet written the definitive swap allocation algorithm!

We still have some time. There's little chance of implementing it in
Linux before kernel version 2.5, so we should have some time left to
design the "definitive" algorithm.

For now I'll be focusing on having something decent in kernel 2.4;
we really need it to be better than 2.2. Keeping the virtual
scanning but combining it with a multi-queue system for the unmapped
pages (with all mapped pages residing in the active queue) should
at least provide us with a predictable, robust and moderately good
VM subsystem for the next stable kernel series.
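The queue scheme in the paragraph above can be sketched as a classification rule. This is a toy illustration of the stated invariant (mapped pages never leave the active queue; only unmapped pages sit on inactive queues), not the actual 2.4 implementation; the queue names and struct fields are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical queue labels for the scheme described above:
 * mapped pages always stay on the active queue, and only
 * unmapped pages move through the inactive queues. */
enum vm_queue { Q_ACTIVE, Q_INACTIVE_DIRTY, Q_INACTIVE_CLEAN };

struct vm_page {
    int map_count;   /* number of page tables mapping this page */
    int dirty;       /* needs writeback before it can be reused */
};

/* Decide which queue a page belongs on after a scan pass. */
enum vm_queue classify_page(const struct vm_page *p)
{
    if (p->map_count > 0)
        return Q_ACTIVE;            /* mapped: never on an inactive queue */
    return p->dirty ? Q_INACTIVE_DIRTY : Q_INACTIVE_CLEAN;
}
```

Splitting the unmapped pages into dirty and clean queues is what makes the behavior predictable: the clean queue can satisfy allocations immediately, while the dirty queue feeds batched laundering.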

regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/		http://www.surriel.com/

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

Thread overview: 35+ messages
2000-05-24 16:16 Matthew Dillon
2000-05-24 18:51 ` Rik van Riel
2000-05-24 20:57   ` Matthew Dillon
2000-05-24 22:44     ` Rik van Riel [this message]
2000-05-25  9:52     ` Jamie Lokier
2000-05-25 16:18       ` Matthew Dillon
2000-05-25 16:50         ` Jamie Lokier
2000-05-25 17:17           ` Rik van Riel
2000-05-25 17:53             ` Matthew Dillon
2000-05-26 11:38               ` Jamie Lokier
2000-05-26 11:08           ` Stephen C. Tweedie
2000-05-26 11:22             ` Jamie Lokier
2000-05-26 13:15               ` Stephen C. Tweedie
2000-05-26 14:31                 ` Jamie Lokier
2000-05-26 14:38                   ` Stephen C. Tweedie
2000-05-26 15:59                     ` Matthew Dillon
2000-05-26 16:36                     ` Jamie Lokier
2000-05-26 16:40                       ` Stephen C. Tweedie
2000-05-26 16:55                         ` Matthew Dillon
2000-05-26 17:05                           ` Jamie Lokier
2000-05-26 17:35                             ` Matthew Dillon
2000-05-26 17:46                               ` Stephen C. Tweedie
2000-05-26 17:02                         ` Jamie Lokier
2000-05-26 17:15                           ` Stephen C. Tweedie
2000-05-26 20:41                             ` Jamie Lokier
2000-05-28 22:42                               ` Stephen Tweedie
2000-05-26 15:45                   ` Matthew Dillon
2000-05-26 12:04             ` Rik van Riel
  -- strict thread matches above, loose matches on Subject: below --
2000-05-24 19:37 Mark_H_Johnson
2000-05-24 20:35 ` Matthew Dillon
2000-05-24 15:11 Rik van Riel
2000-05-24 22:44 ` Juan J. Quintela
2000-05-24 23:32   ` Rik van Riel
2000-05-26 11:11 ` Stephen C. Tweedie
2000-05-26 11:49   ` Rik van Riel
