From: Rik van Riel <H.H.vanRiel@phys.uu.nl>
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: Linux MM <linux-mm@kvack.org>, Alan Cox <alan@lxorguk.ukuu.org.uk>
Subject: Re: readahead/behind algorithm
Date: Tue, 8 Dec 1998 00:12:01 +0100 (CET)
Message-ID: <Pine.LNX.3.96.981208000524.3961D-100000@mirkwood.dummy.home>
In-Reply-To: <199812072256.WAA04256@dax.scot.redhat.com>

On Mon, 7 Dec 1998, Stephen C. Tweedie wrote:
> On Mon, 7 Dec 1998 21:17:56 +0100 (CET), Rik van Riel
> <H.H.vanRiel@phys.uu.nl> said:
> 
> > I've thought a bit about what the 'ideal' readahead/behind
> > algorithm would be and reached the following conclusion.
> 
> > 1. we test the presence of pages in the proximity of the
> >    faulting page (31 behind, 32 ahead) building a map of
> >    64 pages.
> 
> It will only be useful to start getting complex here if we take more
> care about maintaining the logical contiguity of pages when we swap
> them.

We will need to test two proximities: the one in the process's
address space (which is what I was talking about here) and the
one in swap space. We only read those virtual addresses that can
be scooped off swap in one sweep and that can be properly
clustered.
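
To make that concrete, here is a rough user-space toy model of the
clustering test I have in mind. All of the names and data structures
(toy_pte, build_readaround_map, CLUSTER_SLOTS, ...) are made up for
illustration; the real thing would live next to the page table and
swap map code and look rather different:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW_BEHIND 31
#define WINDOW_AHEAD  32
#define CLUSTER_SLOTS  8	/* max distance in swap slots for one sweep */

struct toy_pte {
	int  present;		/* page already resident?                */
	long swap_slot;		/* slot on swap if not resident, else -1 */
};

/*
 * Build a 64-bit map of the window pages worth reading together with
 * the faulting page: swapped out, and close enough on swap to be
 * picked up in the same sweep.
 */
uint64_t build_readaround_map(const struct toy_pte *pt, long npages, long fault)
{
	uint64_t map = 0;
	long fault_slot = pt[fault].swap_slot;
	long lo = fault - WINDOW_BEHIND;
	long hi = fault + WINDOW_AHEAD;
	long p;

	for (p = lo; p <= hi; p++) {
		if (p < 0 || p >= npages || p == fault)
			continue;
		if (pt[p].present || pt[p].swap_slot < 0)
			continue;	/* nothing to read here */
		if (labs(pt[p].swap_slot - fault_slot) <= CLUSTER_SLOTS)
			map |= 1ULL << (p - lo);
		/* else: candidate for the deferred bitmaps below */
	}
	return map;
}

int main(void)
{
	struct toy_pte pt[128];
	int i;

	for (i = 0; i < 128; i++) {
		pt[i].present = (i % 3 == 0);
		pt[i].swap_slot = pt[i].present ? -1 : i;
	}
	printf("map around page 64: %016llx\n",
	       (unsigned long long)build_readaround_map(pt, 128, 64));
	return 0;
}

The pages that fall outside the cluster distance are exactly the ones
that would go into the deferred bitmaps below.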

The rest we put in a new swap bitmap (actually two bitmaps, which
get cleaned and written alternately so we can forget old requests
without forgetting new ones), and we read those pages only when
they become clusterable because other I/O requests happen near
them, or when the process faults on them.
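
Something along these lines, again only a toy sketch with invented
names, is what I mean by the alternating bitmaps. A request survives
at least one full aging interval, but stale ones eventually disappear,
and a slot counts as wanted if its bit is set in either bitmap:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NR_SWAP_PAGES 4096
#define BITS_PER_WORD 64
#define NR_WORDS (NR_SWAP_PAGES / BITS_PER_WORD)

static uint64_t deferred[2][NR_WORDS];
static int current_map;		/* which bitmap takes new requests */

static void remember_request(unsigned long slot)
{
	deferred[current_map][slot / BITS_PER_WORD] |=
		1ULL << (slot % BITS_PER_WORD);
}

static int still_wanted(unsigned long slot)
{
	uint64_t bit = 1ULL << (slot % BITS_PER_WORD);
	unsigned long w = slot / BITS_PER_WORD;

	return ((deferred[0][w] | deferred[1][w]) & bit) != 0;
}

/* Called once per aging interval: forget the old requests, keep the new. */
static void age_requests(void)
{
	current_map ^= 1;
	memset(deferred[current_map], 0, sizeof(deferred[current_map]));
}

int main(void)
{
	remember_request(100);
	age_requests();		/* slot 100 is now in the "old" map    */
	remember_request(200);
	printf("100: %d, 200: %d\n", still_wanted(100), still_wanted(200));
	age_requests();		/* slot 100 forgotten, 200 survives    */
	printf("100: %d, 200: %d\n", still_wanted(100), still_wanted(200));
	return 0;
}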

> If swap gets fragmented, then doing this sort of readahead will just
> use up bandwidth without giving any measurable performance gains. 
> It would be better thinking along those lines right now, I suspect. 

At the moment, yes. To do the stuff I wrote about we'd need to
clone about half the code from vmscan.c and pass the faulting
address to swap_in() as an extra parameter.
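
Purely as a hypothetical illustration of the shape of that change (the
real swap_in() prototype is in the kernel tree and may well look
different), the point is just that today the swapin path only sees the
swap entry, while with the faulting virtual address passed down as well
it could compute the 31-behind/32-ahead window itself and drive the
clustering sketched above:

#include <stdio.h>

#define PAGE_SHIFT    12
#define WINDOW_BEHIND 31
#define WINDOW_AHEAD  32

/* proposed extra parameter: the virtual address that faulted */
static void toy_swap_in(unsigned long entry, unsigned long address)
{
	unsigned long page  = address >> PAGE_SHIFT;
	unsigned long start = page - WINDOW_BEHIND;	/* clamp to the vma in real code */
	unsigned long end   = page + WINDOW_AHEAD;

	printf("fault at %#lx: read entry %lu, scan pages %lu..%lu for clustering\n",
	       address, entry, start, end);
}

int main(void)
{
	toy_swap_in(1234, 0x40021000UL);
	return 0;
}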

We probably only want this (expensive!) swapin code on machines
that run multiple large simulations and have loads of memory
but even more swap. It will be a lot of fun to write, though :)

cheers,

Rik -- the flu hits, the flu hits, the flu hits -- MORE
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+

