From: Matthew Dillon <dillon@apollo.backplane.com>
To: Daniel Phillips <phillips@innominate.de>
Cc: Rik van Riel <riel@conectiva.com.br>, linux-mm@kvack.org
Subject: Re: Interesting item came up while working on FreeBSD's pageout daemon
Date: Thu, 28 Dec 2000 22:24:03 -0800 (PST)
Message-ID: <200012290624.eBT6O3s14135@apollo.backplane.com>
In-Reply-To: <00122900094502.00966@gimli>
:Thanks for clearing that up, but it doesn't change the observation -
:it still looks like he's keeping dirty pages 'on probation' twice as
:long as before. Having each page take an extra lap around the
:inactive_dirty list isn't exactly equivalent to just scanning the list
:more slowly, but it's darn close. Is there a fundamental difference?
:
:--
:Daniel
Well, scanning the list more slowly would still give dirty and clean
pages the same effective priority relative to each other before being
cleaned. Giving the dirty pages an extra lap around the inactive
queue gives clean pages a significantly higher priority over dirty
pages when choosing which page to launder next.
So there is a big difference there.
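The extra-lap policy can be sketched roughly as below. This is not the
actual FreeBSD pageout code, just a minimal illustration; the struct
fields and names are hypothetical:

```c
#include <stdbool.h>

/* What the scanner decides to do with a page it encounters. */
enum action { FREE_PAGE, REQUEUE, LAUNDER };

/* Hypothetical page state -- not the real FreeBSD vm_page structure. */
struct page {
    bool dirty;
    bool second_lap;   /* set once a dirty page has been requeued */
};

/* One scan step over the inactive queue: clean pages are reclaimed
 * immediately, while a dirty page is sent around the queue one more
 * time before we are willing to pay for the write I/O. */
enum action scan_inactive(struct page *p)
{
    if (!p->dirty)
        return FREE_PAGE;        /* cheap: just throw it away */
    if (!p->second_lap) {
        p->second_lap = true;    /* give it another trip around */
        return REQUEUE;
    }
    return LAUNDER;              /* still dirty after the extra lap */
}
```

A clean page is freed on first sight; a dirty page is requeued once and
only laundered if it comes around still dirty, which is what gives clean
pages the higher effective priority.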
The effect of this (and, more importantly, limiting the number of dirty
pages one is willing to launder in the first pageout pass) is quite
significant due to the big difference in cost between dealing with clean
pages and dirty pages.
'Cleaning' a clean page means simply throwing it away, which costs maybe
a microsecond of cpu time and no I/O. 'Cleaning' a dirty page requires
flushing it to its backing store before throwing it away, which costs
a significant bit of cpu and at least one write I/O. One write I/O
may not seem like a lot, but if the disk is already loaded down and the
write I/O has to seek we are talking at least 5 milliseconds of disk
time eaten by the operation. Multiply this by the number of dirty pages
being flushed and it can cost a huge and very noticeable portion of
your disk bandwidth, versus zip for throwing away a clean page.
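As a back-of-the-envelope check on those numbers (the 5 ms seek and the
~1 microsecond free come from the text above; the page count of 1000 is
a hypothetical burst, not a measured figure):

```c
/* Disk time (ms) eaten by flushing n dirty pages, assuming one
 * seeking write per page on an already-loaded disk. */
double dirty_cost_ms(int n_pages)
{
    const double seek_ms = 5.0;     /* per-write seek, from the text */
    return n_pages * seek_ms;
}

/* CPU time (ms) to throw away n clean pages at ~1 us apiece. */
double clean_cost_ms(int n_pages)
{
    const double free_us = 1.0;     /* per-page free, from the text */
    return n_pages * free_us / 1000.0;
}
```

Flushing 1000 dirty pages eats roughly 5 full seconds of disk time,
while freeing 1000 clean pages costs on the order of a millisecond of
cpu: a factor of several thousand.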
Due to the (relatively speaking) huge cost involved in laundering a dirty
page, the extra cpu time we eat giving the dirty pages a longer life on
the inactive queue in the hopes of avoiding the flush, or skipping them
entirely with a per-pass dirty page flushing limit, is well worth it.
This is a classic algorithmic tradeoff... spend a little extra cpu to
choose the best pages to launder in order to save a whole lot of cpu
(and disk I/O) later on.
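The per-pass flushing limit mentioned above can be sketched like this.
Again, this is only an illustration; the cap of 32 and all names are
hypothetical, not the real FreeBSD pageout daemon's values:

```c
/* Hypothetical cap on dirty pages laundered in a single pageout
 * pass; the real daemon's limit and bookkeeping differ. */
#define MAX_LAUNDER_PER_PASS 32

struct pass_state {
    int laundered;   /* dirty pages flushed so far this pass */
};

/* Returns 1 if this dirty page may be flushed now, 0 if it should
 * be skipped and left on the queue for a later pass, bounding the
 * write I/O any one pass can queue up. */
int may_launder(struct pass_state *s)
{
    if (s->laundered >= MAX_LAUNDER_PER_PASS)
        return 0;                /* defer: enough I/O queued already */
    s->laundered++;
    return 1;
}
```

Pages skipped this pass get another chance to be cleaned by ordinary
writeback, or to be flushed on a later pass, so the worst-case disk load
per pass stays bounded.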
-Matt
Matthew Dillon
<dillon@backplane.com>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
Thread overview: 10+ messages
2000-12-16 20:16 Matthew Dillon
2000-12-21 16:47 ` Daniel Phillips
2000-12-21 19:42 ` Rik van Riel
2000-12-22 3:20 ` Matthew Dillon
2000-12-28 23:04 ` Daniel Phillips
2000-12-29 6:24 ` Matthew Dillon [this message]
2000-12-29 14:19 ` Daniel Phillips
2000-12-29 19:58 ` James Antill
2000-12-29 23:12 ` Daniel Phillips
2000-12-29 23:00 ` Daniel Phillips