From: Craig Kulesa <ckulesa@loke.as.arizona.edu>
To: linux-mm@kvack.org, linux-kernel@vger.rutgers.edu
Subject: Re: PATCH: Possible solution to VM problems (take 2)
Date: Thu, 18 May 2000 03:17:25 -0700 (MST)	[thread overview]
Message-ID: <Pine.LNX.4.21.0005180221450.7333-100000@loke.as.arizona.edu> (raw)
In-Reply-To: <Pine.LNX.4.21.0005140101390.4107-100000@loke.as.arizona.edu>


[Regarding Juan Quintela's wait_buffers_02.patch against pre9-2]

Wow. Much better!

The system no longer ties itself in a CPU-thrashing knot every time an app
runs the used+cache allocations up to the limit of physical memory.  The
cache relinquishes pages gracefully, and disk activity is dramatically
lower.  kswapd is quiet again, whereas under pre8 it was at times eating a
quarter as much integrated CPU time as X11.

I'm also no longer seeing the "cache content problems" I wrote about a few
days ago.  Netscape, for example, is now perfectly content to load from
cache in 32 MB of RAM with room to spare.  General VM behavior has a
pretty decent "feel" from 16 MB to 128 MB on four systems, from a
486DX2/66 to a PIII/500, under normal development load.

In contrast, doing _anything_ while building a kernel on a 32 MB
Pentium/75 with pre8 was nothing short of a hair-pulling
experience.  [20 seconds for a bloody xterm?!]  It's smooth and
responsive now, even when assembling 40 MB RPM packages. Paging remains
gentle and not too distracting. Good. 

A stubborn problem that remains is the behavior when lots of
dirty pages pile up quickly.  Doing a giant 'dd' from /dev/zero to a
file on disk still causes gaps of unresponsiveness.  Here's a short vmstat
session on a 128 MB PIII system performing a 'dd if=/dev/zero of=dummy.dat
bs=1024k count=256':

   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  0  0   1392 100844    320  14000   0   0     0     0  186   409   0   0 100
 1  0  1   1392  53652    420  60080   0   0    12  3195  169   133   1  30  69
 0  1  1   1392  27572    444  85324   0   0     0  3487  329   495   0  18  82
 0  1  1   1392  15376    456  97128   0   0     0  3251  314   468   0   9  91
 0  1  2   1392   2332    472 109716   0   0    17  3089  308   466   0  11  89
 2  1  1   2820   2220    144 114392 380 1676   663 26644 20977 31578   0  10  90
 1  2  0   3560   2796    160 114220 284 792   303  9168 6542  7826   0  11  89
 4  2  1   3948   2824    168 114748 388 476   536 12975 9753 14203   1  11  88
 0  5  0   3944   2744    244 114496 552  88   791  4667 3827  4721   1   3  96
 2  0  0   3944   1512    416 115544  72   0   370     0  492  1417   0   3  97
 0  2  0   3916   2668    556 113800 132  36   330     9  415  1845   6   8  86
 1  0  0   3916   1876    720 114172   0   0   166     0  308  1333  14   6  80
 1  0  0   3912   2292    720 114244  76   0    19     0  347  1126   2   2  96
 2  0  0   3912   2292    720 114244   0   0     0     0  136   195   0   0 100

Guess the line where UI responsiveness was lost. :)

Yup.  Nothing abnormal happens until free memory drops to nearly nothing,
and then the excrement hits the fan (albeit fairly briefly in this test).
After the first wave of dirty pages is written out and the cache
stabilizes, user responsiveness seems to smooth out again.
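
For anyone who wants to reproduce this: the test is nothing more than the
dd and vmstat invocations already quoted, but a throwaway wrapper along
these lines (the log filename, the settle time and the one-second sampling
interval are arbitrary choices of mine, nothing blessed) keeps the timing
straight:

  #!/bin/sh
  # Sketch: write 256 MB of zeroes while sampling vmstat once per second.
  vmstat 1 > vmstat.log &              # background sampler
  sampler=$!

  dd if=/dev/zero of=dummy.dat bs=1024k count=256
  sync                                 # flush the remaining dirty pages

  sleep 10                             # let things settle, then stop logging
  kill $sampler
  rm -f dummy.dat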

On the plus side...
It's worth noting that this same test caused rather reliable OOM
terminations of XFree86 from pre7-x (if not earlier) right up until this
patch.  With it applied I haven't been able to generate any OOM process
kills yet, and I've tried to be very imaginative. :)
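
For a flavour of "imaginative": the sketch below is the sort of crude
memory hog I mean.  It's an illustration only, not an exact record of what
I ran; it just doubles a shell string until RAM and swap run out, which is
about the cheapest way to lean on the OOM logic from a prompt:

  # Crude memory hog: geometric growth of a shell variable.  Run as an
  # unprivileged user on a box you don't mind bogging down for a while.
  sh -c 'x=0123456789; while true; do x="$x$x"; done'

Kill it with ^C (or from another terminal) once you've seen enough.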

There's still some work to be done, but Juan's patch is clearly pushing
the VM's behavior in the right direction.  Great job, guys, and thanks!


Respectfully,
Craig Kulesa

