From: Rik van Riel <riel@conectiva.com.br>
To: "Kurtis D. Rader" <kdrader@us.ibm.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH] IO wait accounting
Date: Tue, 14 May 2002 18:38:54 -0300 (BRT)
Message-ID: <Pine.LNX.4.44L.0205141835490.32261-100000@imladris.surriel.com>
In-Reply-To: <20020514124915.F21303@us.ibm.com>
On Tue, 14 May 2002, Kurtis D. Rader wrote:
> On the topic of how this is defined by other UNIXes ...
> On each todclock() interrupt (100 times per second) the sum of the
>
> 1) number of processes currently waiting on the swap-in queue,
> 2) number of processes waiting for a page to be brought into memory,
> 3) number of processes waiting on filesystem I/O, and
> 4) number of processes waiting on physical/raw I/O
>
> is calculated. The smaller of that value and the number of CPUs
> currently idle is added to the procstat.ps_cpuwait counter (sar's %wio).
> This means that wait time is a subset of idle time.
This is basically what my patch does, except that it doesn't take
the minimum of the number of threads waiting on IO and the number
of idle CPUs. I'm still thinking about a cheap way to make this
work ...
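The per-tick accounting scheme quoted above could be sketched roughly as follows. This is an illustrative sketch only, not the actual todclock() code or Rik's patch; all names (account_iowait_tick, ps_cpuwait, ps_idle, and the per-queue counts passed in) are hypothetical:

```c
#include <stddef.h>

/* Hypothetical accumulators, in the spirit of procstat.ps_cpuwait.
 * In a real kernel these would live in per-CPU or global statistics. */
static unsigned long ps_cpuwait;  /* feeds sar's %wio */
static unsigned long ps_idle;     /* plain idle time  */

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Called on each clock tick (100 times per second in the scheme above).
 * The caller supplies the four I/O-wait queue lengths and the number
 * of currently idle CPUs. */
void account_iowait_tick(unsigned long nr_swapin_wait,
			 unsigned long nr_pagein_wait,
			 unsigned long nr_fs_io_wait,
			 unsigned long nr_raw_io_wait,
			 unsigned long nr_idle_cpus)
{
	unsigned long nr_io_wait = nr_swapin_wait + nr_pagein_wait +
				   nr_fs_io_wait + nr_raw_io_wait;
	/* Take the smaller of the two so that wait time stays a
	 * subset of idle time: we never charge more waiting CPUs
	 * than there are idle CPUs. */
	unsigned long wio = min_ul(nr_io_wait, nr_idle_cpus);

	ps_cpuwait += wio;
	ps_idle    += nr_idle_cpus - wio;
}
```

The min() step is exactly the part Rik notes his patch omits: without it, a single CPU with many threads blocked on I/O could accumulate more than one tick of wait time per tick of wall clock.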
> The rationale for separating out I/O wait time is that since an I/O
> operation may complete at any instant, and the process will be marked
> runable and begin consuming CPU cycles, the CPUs should not really be
> considered idle. The %wio metric most definitely does not tell you
> anything about how busy the disk subsystem is or whether the disks are
> overloaded. It can indicate whether or not the workload is I/O bound. Or,
> to look at it another way, %wio is good for tracking how much busier the
> CPUs would be if you could make the disk subsystem infinitely fast.
Indeed, this would be a good paragraph to copy into the procps
manual ;)
kind regards,
Rik
--
Bravely reimplemented by the knights who say "NIH".
http://www.surriel.com/ http://distro.conectiva.com/
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/
Thread overview: 13+ messages
2002-05-09 0:55 Rik van Riel
2002-05-09 14:30 ` Bill Davidsen
2002-05-09 19:08 ` Rik van Riel
2002-05-12 19:05 ` Zlatko Calusic
2002-05-12 21:14 ` Rik van Riel
2002-05-13 11:40 ` BALBIR SINGH
2002-05-13 13:58 ` Zlatko Calusic
2002-05-13 14:32 ` Rik van Riel
2002-05-13 11:45 ` Zlatko Calusic
2002-05-13 13:34 ` Rik van Riel
2002-05-13 16:08 ` Bill Davidsen
2002-05-14 19:49 ` Kurtis D. Rader
2002-05-14 21:38 ` Rik van Riel [this message]