From: Jonathan Morton <chromi@cyberspace.org>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: linux-mm@kvack.org
Subject: Re: [RFC][DATA] re "ongoing vm suckage"
Date: Sat, 4 Aug 2001 21:54:54 +0100
Message-ID: <a05100301b791ed376e26@[192.168.239.101]>
In-Reply-To: <Pine.LNX.4.33.0108040952460.1203-100000@penguin.transmeta.com>

>  > I'm testing 2.4.8-pre4 -- MUCH better interactivity behavior now.
>
>Good.. However..
>
>>  I've been testing ext3/raid5 for several weeks now and this is usable now.
>>  My system is Dual 1Ghz/2GRam/4GSwap fibrechannel.
>>  But...the single thread i/o performance is down.
>
>Bad. And before we get too happy about the interactive thing, let's
>remember that sometimes interactivity comes at the expense of throughput,
>and maybe if we fix the throughput we'll be back where we started.

<snip>

>Rule of thumb: even on fast disks, the average seek time (and between
>requests you almost always have to seek) is on the order of a few
>milliseconds. With a large write-queue (256 total requests means 128 write
>requests) you can basically get single-request latencies of up to a
>second. Which is really bad.
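
To put numbers on that rule of thumb: at ~8ms per seek, draining 
128 queued writes takes 128 x 8ms ~= 1 second, which is how long 
a request added at the tail of a full queue can end up waiting.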

Hard disks, however new or old, tend to have 1/3-stroke seek 
times in the 5-20ms range.  Virtually every other type of drive 
(mainly optical or removable-magnetic) seeks much more slowly 
than that - a typical new CD-ROM is 80ms+, MO around 100ms, old 
CD-ROM 300ms+; I don't know the figures for Zip or Jaz - but in 
general fewer processes will be accessing removable media at any 
one time.

A usable metric might be the amount of sequential I/O possible 
per seek time - this would give a better idea of how much 
batching to do.  An interesting hardware application of this is 
my PowerBook's DVD drive, which can play audio from one section 
of a CD-ROM while reading data from another (most drives simply 
cancel audio playback on a data request).  It reads about 3 
seconds of audio at a high spin rate, then switches to the data 
track until the audio buffer is almost exhausted, then switches 
right back to the audio track.

I/O-per-seek values are very high for RAID arrays and for writing 
FLASH, high for new hard disks (about 150KB for one of mine), medium 
for new CD-ROM and removable drives, and low for certain classes of 
optical drive, old hard disks, reading FLASH, and instant-access 
media.  Clearing a 128-request queue by writing to a non-LIMDOW MO 
drive can take a very long time!  :)
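
To make the metric concrete: it's just the sustained transfer 
rate multiplied by the average seek time.  A quick userspace 
sketch - every device figure below is an illustrative guess, not 
a measurement:

/* I/O-per-seek: bytes of sequential transfer achievable in one
 * average seek time.  All device figures are rough guesses. */
#include <stdio.h>

struct device {
	const char *name;
	double xfer_rate;	/* sustained transfer, bytes/sec */
	double seek_time;	/* average seek, seconds */
};

int main(void)
{
	struct device devs[] = {
		{ "new hard disk",     20e6, 0.008 },	/* ~160KB/seek */
		{ "old hard disk",      4e6, 0.015 },	/* ~60KB/seek */
		{ "MO drive (write)", 0.5e6, 0.100 },	/* ~50KB/seek */
	};
	int i;

	for (i = 0; i < 3; i++)
		printf("%-16s %8.0f bytes/seek\n", devs[i].name,
		       devs[i].xfer_rate * devs[i].seek_time);
	return 0;
}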

>One partial solution may be the just make the read queue deeper than the
>write queue. That's a bit more complicated than just changing a single
>value, though - you'd need to make the batching threshold be dependent on
>read-write too etc. But it would probably not be a bad idea to change the
>"split requests evenly" to do even "split requests 2:1 to read:write".

I don't think this will make much of a difference.  The real 
problem could be that devices are plugged when the queue is 
empty, and not unplugged again until absolutely necessary - 
i.e. when the queue is full or there is memory pressure.  Since 
changing the queue size made a difference, the queue is clearly 
filling up, processes are blocking on the full queue, and we are 
right back to good old scheduling latency.

I think devices should be unplugged periodically whenever there 
is *anything* on the queue.  By my book, once a few requests are 
waiting, it starts being profitable to service them quickly and 
keep the queue moving at all times.  If requests are coming in 
quickly from several directions, they will be merged as and when 
needed - there is no point in waiting around forever for 
mergeable requests that never arrive.
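
As a minimal sketch of what I mean (assuming the 2.4 block 
layer; the timer wiring and the interval are my guesses, and 
generic_unplug_device takes io_request_lock itself, so calling 
it from timer context should be safe):

/* Unplug any queue with pending requests on a short period,
 * rather than waiting for it to fill or for memory pressure. */
#include <linux/blkdev.h>
#include <linux/timer.h>
#include <linux/list.h>

#define UNPLUG_PERIOD	(HZ / 50)	/* ~20ms - a guess, tunable */

static struct timer_list unplug_timer;

static void periodic_unplug(unsigned long data)
{
	request_queue_t *q = (request_queue_t *) data;

	/* Anything waiting?  Then kick the device now. */
	if (!list_empty(&q->queue_head))
		generic_unplug_device(q);

	mod_timer(&unplug_timer, jiffies + UNPLUG_PERIOD);
}

static void start_periodic_unplug(request_queue_t *q)
{
	init_timer(&unplug_timer);
	unplug_timer.function = periodic_unplug;
	unplug_timer.data = (unsigned long) q;
	mod_timer(&unplug_timer, jiffies + UNPLUG_PERIOD);
}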

>  > I"m seeing a lot more CPU Usage for the 1st thread than previous tests --
>>  perhaps we've shortened the queue too much and it's throttling the read?
>  > Why would CPU usage go up and I/O go down?
>
>I'd guess it's calling the scheduler more. With fast disks and a queue
>that runs out, you'd probably go into a series of extremely short
>stop-start behaviour. Or something similar.

Maybe we need a per-process queue as well as a per-disk queue.  
It doesn't need to be large - even 4 or 16 requests might help - 
but it would allow a process to submit requests and get its foot 
in the door even when the per-disk queue is full.  Combining 
this with the shorter per-disk queue might keep the 
interactivity boost while restoring most of the throughput, 
especially in the multiple-process case.  I don't think merging 
or elevatoring will be needed for the per-process queues, which 
should simplify implementation.
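
Something like this minimal sketch is all I have in mind - every 
name here is invented, and the stash size is a guess:

/* Hypothetical per-process request stash: a few slots where a
 * process can park requests when the per-disk queue is full.
 * Plain FIFO - no merging, no elevatoring. */
#define PER_PROC_RQ	8

struct request;			/* from the block layer */

struct proc_rq_stash {
	struct request *rq[PER_PROC_RQ];	/* oldest first */
	int head, count;
};

/* On submit, when the disk queue is full: stash the request
 * instead of blocking.  Returns 1 on success, 0 if the stash is
 * full too (then block, as now). */
static int stash_request(struct proc_rq_stash *s, struct request *rq)
{
	if (s->count == PER_PROC_RQ)
		return 0;
	s->rq[(s->head + s->count++) % PER_PROC_RQ] = rq;
	return 1;
}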

BUT that would mean each per-process queue would have to be scanned 
every time the per-disk queues became non-full.  This might be 
expensive.

An alternative strategy might be to reserve a proportion of each 
per-disk queue for processes that don't already have a request 
in the queue.  This would have a similar effect, but it means 
extra storage per request (for the PID), and the whole queue 
must be scanned on each request-add to check whether the add is 
allowed.  By the look of it, the request structure is already 
reasonably large, and the elevator already does a fair amount of 
queue scanning, so this might be more acceptable.
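
Sketched out, again hypothetically - this assumes a pid field 
added to struct request, and the reservation size is a guess:

/* Reserve a fraction of the per-disk queue for processes that
 * have nothing queued yet. */
#include <linux/blkdev.h>
#include <linux/list.h>

#define RESERVED_SLOTS	16	/* guess: slots kept for newcomers */

static int may_add_request(request_queue_t *q, int free_slots, pid_t pid)
{
	struct list_head *entry;
	struct request *rq;

	if (free_slots > RESERVED_SLOTS)
		return 1;		/* plenty of room for anyone */

	/* Nearly full: scan for an existing request from this
	 * process - repeat customers wait, newcomers may use the
	 * reserved slots. */
	list_for_each(entry, &q->queue_head) {
		rq = list_entry(entry, struct request, queue);
		if (rq->pid == pid)	/* hypothetical field */
			return 0;
	}
	return free_slots > 0;
}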
-- 
--------------------------------------------------------------
from:     Jonathan "Chromatix" Morton
mail:     chromi@cyberspace.org  (not for attachments)
website:  http://www.chromatix.uklinux.net/vnc/
geekcode: GCS$/E dpu(!) s:- a20 C+++ UL++ P L+++ E W+ N- o? K? w--- O-- M++$
           V? PS PE- Y+ PGP++ t- 5- X- R !tv b++ DI+++ D G e+ h+ r++ y+(*)
tagline:  The key to knowledge is not to rely on people to teach you it.
