From: Jonathan Morton <chromi@cyberspace.org>
To: "James A. Sutherland" <jas88@cam.ac.uk>
Cc: "Stephen C. Tweedie" <sct@redhat.com>,
	Rik van Riel <riel@conectiva.com.br>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Slats Grobnik <kannzas@excite.com>,
	linux-mm@kvack.org, Andrew Morton <andrewm@uow.edu.au>
Subject: Re: [PATCH] a simple OOM killer to save me from Netscape
Date: Tue, 17 Apr 2001 15:26:12 +0100
Message-ID: <l03130301b701fc801a61@[192.168.239.105]>
In-Reply-To: <3ormdto78qla1qir8c62i2tuope82bt1u0@4ax.com>

>It's a very black art, this; "clever" page replacement algorithms will
>probably go some way towards helping, but there will always be a point
>when you really are thrashing - at which point, I think the best
>solution is to suspend processes alternately until the problem is
>resolved.

I've got an even better idea.  Monitor each process's "working set" - i.e.
the set of unique pages it regularly uses or pages in over some period of
(real) time.  In the event of thrashing, each process should be reserved an
amount of physical RAM equal to its working set, except for processes
which have "unreasonably large" working sets.  The latter should be given
some arbitrarily small, fixed allowance instead - they will perform about as
badly as if nothing had been done, but everything else runs *far* better.

The parameters for the above algorithm would be threefold (a rough sketch in
C follows the list):

- The time over which the working set is calculated (a decaying weight
would probably work)
- Determining "unreasonably large" (probably can be done by taking the
largest working set(s) on the system and penalising those until the total
working set is within the physical limit of the machine)
- How small the "fixed working set" should be for penalised processes (such
as a fair proportion of the un-reserved physical memory)
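
As a rough illustration only - the names and numbers below are mine, and a
real implementation would live in the page-replacement code rather than in a
stand-alone program - here is a small user-space C sketch of the two halves:
folding each sampling interval into a decayed per-process estimate, and
clamping the largest estimates to a small fixed allowance until the total
fits in physical memory:

#include <stdio.h>

/* Hypothetical per-process bookkeeping: an exponentially decaying
   estimate of the number of distinct pages touched recently. */
struct proc_ws {
	const char *name;
	double ws_pages;        /* decayed working-set estimate, in pages */
	double reserve_pages;   /* physical pages reserved for this process */
};

/* Fold one sampling interval into the decayed estimate.  "referenced"
   is the number of pages seen referenced since the last sample; "decay"
   is the weight given to history (e.g. 0.75). */
static void ws_update(struct proc_ws *p, double referenced, double decay)
{
	p->ws_pages = decay * p->ws_pages + (1.0 - decay) * referenced;
}

/* Reserve each process its working set; if the sum exceeds physical
   memory, repeatedly clamp the largest consumer to the fixed penalty
   allowance until the rest fits (or everything is penalised). */
static void ws_balance(struct proc_ws *p, int n, double phys_pages,
		       double penalty_pages)
{
	int i, worst;

	for (i = 0; i < n; i++)
		p[i].reserve_pages = p[i].ws_pages;

	for (;;) {
		double total = 0.0;

		for (i = 0; i < n; i++)
			total += p[i].reserve_pages;
		if (total <= phys_pages)
			return;

		worst = 0;
		for (i = 1; i < n; i++)
			if (p[i].reserve_pages > p[worst].reserve_pages)
				worst = i;
		if (p[worst].reserve_pages <= penalty_pages)
			return;		/* everyone is already penalised */
		p[worst].reserve_pages = penalty_pages;
	}
}

int main(void)
{
	struct proc_ws procs[] = {
		{ "xmms",     400,   0 },
		{ "bash",     150,   0 },
		{ "netscape", 90000, 0 },	/* the runaway memory hog */
	};
	int i, n = sizeof(procs) / sizeof(procs[0]);

	/* one sampling step: netscape touched another 120000 pages */
	ws_update(&procs[2], 120000, 0.75);

	ws_balance(procs, n, 32768 /* 128MB of 4K pages */, 1024);

	for (i = 0; i < n; i++)
		printf("%-10s ws=%8.0f reserved=%8.0f pages\n",
		       procs[i].name, procs[i].ws_pages,
		       procs[i].reserve_pages);
	return 0;
}

With those made-up figures, the runaway "netscape" entry gets clamped to the
1024-page allowance while the small working sets are reserved in full.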

If this is done properly, well-behaved processes like XMMS, shells and
system monitors can continue working normally even if a ton of "memory
hogs" attempt to thrash the system to it's knees.  I suspect even a runaway
Netscape would be handled sensibly by this technique.  Best of all, this
technique doesn't involve arbitrarily killing or suspending processes.

It is still possible, mostly on small systems, to have *every* active
process thrashing in this manner.  However, I would submit that if it gets
this far, the system can safely be considered overloaded.  :)  It has
certainly got to be an improvement over the current situation, where just 3
or 4 runaway processes can bring down my well-endowed machine, and Netscape
can crunch a typical desktop.

The disadvantage, of course, is that some record has to be kept of each
process's working set over time.  This adds some storage overhead, but
probably not much of a CPU burden.
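
To put a purely illustrative number on it (my figures, not measurements):
keeping one byte of decaying "age" information per resident 4 KB page costs
1/4096 of the memory being tracked - about 0.025% - while a scheme that keeps
only a single decayed counter per process costs just a few bytes per task.
The sampling itself could piggyback on the referenced-bit scanning the VM
already does for page ageing.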

Interestingly, the "working set" calculation yielded by this method, if
made available to userland, could possibly aid optimisation by application
programmers.  It is well known to systems programmers that if the working
set exceeds the size of the largest CPU cache on the system, performance is
limited to the speed and latency of DRAM (both of which are exceptionally
poor) - but it is considerably less well known to applications programmers.
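
For example - and this is entirely hypothetical, since no such interface
exists - if the kernel exported the figure as a single number of bytes in
/proc/self/working_set, an application could check itself against the cache
size (hard-coded below as an assumption) like this:

#include <stdio.h>

/* Hypothetical: assume the kernel exposed the decayed working-set
   estimate, in bytes, as a single number in /proc/self/working_set.
   The cache size is hard-coded purely for illustration. */
#define ASSUMED_L2_CACHE_BYTES	(256 * 1024)

int main(void)
{
	unsigned long ws_bytes = 0;
	FILE *f = fopen("/proc/self/working_set", "r");

	if (f && fscanf(f, "%lu", &ws_bytes) == 1 &&
	    ws_bytes > ASSUMED_L2_CACHE_BYTES)
		fprintf(stderr, "warning: working set (%lu bytes) exceeds "
			"the L2 cache - expect DRAM-speed performance\n",
			ws_bytes);
	if (f)
		fclose(f);
	return 0;
}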

As for how to actually implement this...  don't ask me.  I'm still a kernel
newbie, really!

--------------------------------------------------------------
from:     Jonathan "Chromatix" Morton
mail:     chromi@cyberspace.org  (not for attachments)
big-mail: chromatix@penguinpowered.com
uni-mail: j.d.morton@lancaster.ac.uk

The key to knowledge is not to rely on people to teach you it.

Get VNC Server for Macintosh from http://www.chromatix.uklinux.net/vnc/

-----BEGIN GEEK CODE BLOCK-----
Version 3.12
GCS$/E/S dpu(!) s:- a20 C+++ UL++ P L+++ E W+ N- o? K? w--- O-- M++$ V? PS
PE- Y+ PGP++ t- 5- X- R !tv b++ DI+++ D G e+ h+ r++ y+(*)
-----END GEEK CODE BLOCK-----

