linux-mm.kvack.org archive mirror
* Re: PATCH: Possible solution to VM problems (take 2)
@ 2000-05-18  5:58 Neil Schemenauer
  0 siblings, 0 replies; 12+ messages in thread
From: Neil Schemenauer @ 2000-05-18  5:58 UTC (permalink / raw)
  To: linux-mm; +Cc: quintela, riel

Rik van Riel:
> I am now testing the patch on my small test machine and must
> say that things look just *great*. I can start up a gimp while
> bonnie is running without having much impact on the speed of
> either.
> 
> Interactive performance is nice and stability seems to be
> great as well.

Are we using the same patch?  I applied wait_buffers_02.patch from
Juan's site to pre9-2.  Running "Bonnie -s 250" on a 128 MB
machine causes extremely poor interactive performance.  The
machine is totally unresponsive for up to a minute at a time.

    Neil

* [dirtypatch] quickhack to make pre8/9 behave (fwd)
@ 2000-05-16 19:32 Rik van Riel
  2000-05-17  0:28 ` PATCH: less dirty (Re: [dirtypatch] quickhack to make pre8/9 behave (fwd)) Juan J. Quintela
  0 siblings, 1 reply; 12+ messages in thread
From: Rik van Riel @ 2000-05-16 19:32 UTC (permalink / raw)
  To: linux-mm; +Cc: Linus Torvalds, Stephen C. Tweedie

[ARGHHH, this time -with- patch, thanks RogerL]

Hi,

with the quick&dirty patch below, the system:
- gracefully (more or less) survives mmap002
- has good performance on mmap002

To me this patch shows that we really want to wait
for dirty page IO to finish before randomly evicting
the (wrong) clean pages and dying horribly.

This is a dirty hack which should be replaced by whichever
solution people think should be implemented to have the
allocator wait for dirty pages to be flushed out.

regards,

Rik
--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

Wanna talk about the kernel?  irc.openprojects.net / #kernelnewbies
http://www.conectiva.com/		http://www.surriel.com/



--- fs/buffer.c.orig	Mon May 15 09:49:46 2000
+++ fs/buffer.c	Tue May 16 14:53:08 2000
@@ -2124,11 +2124,16 @@
 static void sync_page_buffers(struct buffer_head *bh)
 {
 	struct buffer_head * tmp;
+	static int rand = 0;
+	if (++rand > 64)
+		rand = 0;
 
 	tmp = bh;
 	do {
 		struct buffer_head *p = tmp;
 		tmp = tmp->b_this_page;
+		if (buffer_locked(p) && !rand)
+			__wait_on_buffer(p);
 		if (buffer_dirty(p) && !buffer_locked(p))
 			ll_rw_block(WRITE, 1, &p);
 	} while (tmp != bh);
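
An annotated reading of the hunk above, as a sketch of how the patched
function behaves (the comments are added here and are not part of the
patch): sync_page_buffers() keeps a static call counter, and on roughly
every 65th invocation it first blocks on any locked buffer in the page
before starting write-out for the dirty ones, so the reclaim path is
periodically forced to wait for in-flight buffer I/O instead of evicting
more clean pages.

/*
 * Sketch of sync_page_buffers() with the hack applied (comments added
 * for this archive; not part of the original patch).
 */
static void sync_page_buffers(struct buffer_head *bh)
{
	struct buffer_head * tmp;
	static int rand = 0;	/* crude call counter, not actually random */

	if (++rand > 64)
		rand = 0;	/* rand == 0 on roughly every 65th call */

	tmp = bh;
	do {
		struct buffer_head *p = tmp;
		tmp = tmp->b_this_page;
		/* on the throttled call, sleep until in-flight I/O on
		 * this buffer completes before scanning further */
		if (buffer_locked(p) && !rand)
			__wait_on_buffer(p);
		/* start write-out for dirty, unlocked buffers as before */
		if (buffer_dirty(p) && !buffer_locked(p))
			ll_rw_block(WRITE, 1, &p);
	} while (tmp != bh);
}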


* Summary of recent VM behavior [2.3.99-pre8]
@ 2000-05-14  9:48 Craig Kulesa
  2000-05-18 10:17 ` PATCH: Possible solution to VM problems (take 2) Craig Kulesa
  0 siblings, 1 reply; 12+ messages in thread
From: Craig Kulesa @ 2000-05-14  9:48 UTC (permalink / raw)
  To: linux-mm, linux-kernel


Greetings...

Below is a summary of issues that I've encountered in the pre7 and pre8
kernels (at least on mid-range hardware).  I'd appreciate comments, any
enlightening information or pointers to documentation so I can answer the
questions myself. :) Also consider me a guinea pig for patches... 


1)  Unnecessary OOM situations, killing of processes
    (pathological)

Example:  On a 64 MB box, dd'ing >64 MB from /dev/zero to a file
on disk runs the kernel aground, usually killing a large RSS process 
like X11. This has been a consistent problem since pre6(-7?). This
behavior seems quite broken.  

I assume this is in the mmap code.  Cache increases as the file is written,
but when the limit of physical memory is reached, problems ensue.  The CPU
is consumed ("hijacked") by kswapd or other internal kernel operations, as
though mmap'ed allocations can't be shrunk effectively (or quickly).  (A
minimal reproduction is sketched after this item.)

Not a problem w/ classzone.
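
As a concrete illustration of the scenario above, a minimal hypothetical
reproduction case: a small program that writes more zero-filled data to
disk than the machine has RAM.  (The original report simply used dd; the
filename and the 128 MB figure below are illustrative choices, picked to
exceed the 64 MB box described above.)

#include <stdio.h>

int main(void)
{
	static char buf[1 << 20];	/* 1 MB, zero-filled because static */
	FILE *f = fopen("bigfile", "w");
	int i;

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* write 128 MB in total -- comfortably more than the 64 MB of
	 * RAM in the report above */
	for (i = 0; i < 128; i++)
		if (fwrite(buf, 1, sizeof(buf), f) != sizeof(buf)) {
			perror("fwrite");
			break;
		}
	fclose(f);
	return 0;
}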


2)  What's in the cache anyways?
    (puzzling)

Example: Play mp3's on an otherwise unloaded 64 MB system until cache
fills the rest of physical RAM. Then open an xterm (or GNU emacs,
or...).  After less than 10 MB of mp3 data goes by, close the
xterm. Open a new one. The xterm code is not in cache but is loaded from
scratch from disk, with a flurry of disk I/O (but no swapped pages). 
Why? The cache allocation is almost 50 MB -- *why* isn't it in there
somewhere?

One might imagine that the previous mp3's are solidly in cache, yet
loading an mp3 from only 15 MB earlier in the queue... comes from disk and
not from cache!  Why?

Another example on a 40 MB system: Open a lightweight X11/WindowMaker
session. Open Netscape 4.72 (Navigator). Close it. Log out. Login again,
load Netscape. X, the window manager, and Netscape all seem to come
straight from disk, with no swapped pages.  But the buffer cache is
25 MB!  What's in there if the applications aren't? 

This is also seen on a 32 MB system by simply opening Navigator, closing
it, and opening it again. In kernel 2.2.xx and 2.3.99-pre5 (or with
classzone), it comes quickly out of cache.  In pre8, there's substantial
disk I/O, and about half of the pages are read from disk and not the
cache.  (??)

Before pre6 and with AA's classzone patch, a 25 MB cache seemed to contain
the "last" 25 MB of mmap'd files or I/O buffers. This doesn't seem true
anymore (?!), and it's an impediment to performance on at least
lower-end hardware.


3) Slow I/O performance

Disk access seems to incur large CPU overhead once physical memory must be
shared between "application" memory and cache.  kswapd is invoked
excessively, applications that stream data from disk hesitate, even the
mouse pointer becomes jumpy.  The system load is ~50% higher under heavy
disk access than in earlier 2.2 and 2.3 kernels.

Untarring the kernel source is a good example of this. Even a 128 MB
system doesn't do this smoothly in pre8. 

The overall memory usage in pre6 and later seems good -- there is no
gratuitous swapping as seen in pre5 (and earlier in pre2-3 etc).  But the
general impression is that in the mmap code (or somewhere else?), a LOT of
pages are moved around or scanned, which incurs expensive system
overhead.

Before adopting an "improved" means of handling VM pages (like the
active/inactive lists that Rik is working on), surely the current code in
vmscan and filemap (etc.) should be shown to be fast and not conducive to
this puzzling, even pathological, behavior?


4)  Confusion about inode_cache and dentry_cache

I'm surely confused here, but in kernel 2.3 the inode_cache and
dentry_cache are not as limited as in kernel 2.2.  Thus,
sample applications like Redhat's 'slocate' daemon or any global use of
the "find" command will cause these slab caches to fill quickly. These
caches are effectively released under memory pressure. No problem.

But why do these "caches" show up as "used app memory" and not cache in
common tools like 'free' (or /proc/meminfo)?  This looks like a recipe for
lots of confused souls once kernel 2.4 is adopted by major distributions. 
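
One way to see the accounting in question is to print the dentry_cache and
inode_cache rows from /proc/slabinfo next to the Buffers:/Cached: lines of
/proc/meminfo; the slab memory appears in neither of the latter, which is
why tools like 'free' lump it into "used".  The helper below is a
hypothetical sketch, not from the original mail; the cache names follow
this thread, and the field layout of /proc/slabinfo varies between kernel
versions.

/*
 * Hypothetical helper: show where the dentry/inode slab caches are
 * accounted.  They show up in /proc/slabinfo but not in the Buffers:
 * or Cached: lines of /proc/meminfo.
 */
#include <stdio.h>
#include <string.h>

static void grep_file(const char *path, const char *const keys[], int nkeys)
{
	char line[256];
	FILE *f = fopen(path, "r");
	int i;

	if (!f) {
		perror(path);
		return;
	}
	while (fgets(line, sizeof(line), f))
		for (i = 0; i < nkeys; i++)
			if (strncmp(line, keys[i], strlen(keys[i])) == 0)
				printf("%-15s %s", path, line);
	fclose(f);
}

int main(void)
{
	static const char *const slab_keys[] = { "dentry_cache", "inode_cache" };
	static const char *const mem_keys[]  = { "Buffers", "Cached", "MemFree" };

	grep_file("/proc/slabinfo", slab_keys, 2);
	grep_file("/proc/meminfo", mem_keys, 3);
	return 0;
}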

Thoughts?


Craig Kulesa
Steward Observatory, Tucson AZ
ckulesa@as.arizona.edu




Thread overview: 12+ messages
2000-05-18  5:58 PATCH: Possible solution to VM problems (take 2) Neil Schemenauer
  -- strict thread matches above, loose matches on Subject: below --
2000-05-16 19:32 [dirtypatch] quickhack to make pre8/9 behave (fwd) Rik van Riel
2000-05-17  0:28 ` PATCH: less dirty (Re: [dirtypatch] quickhack to make pre8/9 behave (fwd)) Juan J. Quintela
2000-05-17 20:45   ` PATCH: Possible solution to VM problems Juan J. Quintela
2000-05-17 23:31     ` PATCH: Possible solution to VM problems (take 2) Juan J. Quintela
2000-05-18  0:12       ` Juan J. Quintela
2000-05-18  1:07         ` Rik van Riel
2000-05-21  8:14         ` Linus Torvalds
2000-05-21 16:01           ` Rik van Riel
2000-05-21 17:15             ` Linus Torvalds
2000-05-21 19:02               ` Rik van Riel
2000-05-14  9:48 Summary of recent VM behavior [2.3.99-pre8] Craig Kulesa
2000-05-18 10:17 ` PATCH: Possible solution to VM problems (take 2) Craig Kulesa
2000-05-18 10:59   ` Jan Niehusmann
2000-05-18 13:41     ` Rik van Riel
2000-05-18 13:49       ` Stephen C. Tweedie
