linux-mm.kvack.org archive mirror
* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
       [not found] ` <Pine.LNX.3.95.981126094159.5186D-100000@penguin.transmeta.com>
@ 1998-11-27 16:02   ` Stephen C. Tweedie
  1998-11-27 17:19     ` Chip Salzenberg
                       ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Stephen C. Tweedie @ 1998-11-27 16:02 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Stephen C. Tweedie, Benjamin Redelings I, linux-kernel, linux-mm

Hi,

Looks like I have a handle on what's wrong with the 2.1.130 vm (in
particular, its tendency to cache too much at the expense of
swapping).

The real problem seems to be that shrink_mmap() can fail for two
completely separate reasons.  First of all, we might fail to find a
free page because all of the cache pages we find are recently
referenced.  Secondly, we might fail to find a cache page at all.

The first case is an example of an overactive, large cache; the second
is an example of a very small cache.  Currently, however, we treat
these two cases pretty much the same.  In the second case, the correct
reaction is to swap, and 2.1.130 is sufficiently good at swapping that
we do so, heavily.  In the first case, high cache throughput, what we
really _should_ be doing is to age the pages more quickly.  What we
actually do is to swap.

On reflection, there is a completely natural way of distinguishing
between these two cases, and that is to extend the size of the
shrink_mmap() pass whenever we encounter many recently touched pages.
This is easy to do: simply restricting the "count_min" accounting in
shrink_mmap to avoid including salvageable but recently-touched pages
will automatically cause us to age faster as we encounter more touched
pages in the cache.

The patch below both makes sense from this perspective and seems to
work, which is always a good sign!  Moreover, it is inherently
self-tuning.  The more recently-accessed cache pages we encounter, the
faster we will age the cache.

On an 8MB boot, it is just as fast as plain 2.1.130 at doing a make
defrag over NFS (which means it is still 25% faster than 2.0.36 at
this job even on low-memory NFS).  On 64MB, I've been running emacs
with two or three 10MB mail folders loaded, netscape running a bunch
of pages, a couple of large 16-bit images under xv and a kernel build
all on different X pages, and switching between them is all extremely
fast.  It still has the great responsiveness under this sort of load
that first came with 2.1.130.  

The good news, however, is that an extra filesystem load of "wc
/usr/bin/*" does not cause a swap storm on the new kernel: it is
finally able to distinguish a cache which is too active and a cache
which is too small.  The system does still swap well once it decides
it needs to, giving swapout burst rates of 2--3MB/sec during this
test.

Note that older kernels did not have this problem because in 1.2, the
buffer cache would never ever grow into the space used by the rest of
the kernel, and in 2.0, the presence of page aging in the swapper but
not in the cache caused us to run around the shrink_mmap() loop far
too much anyway, at the expense of good swap performance.

--Stephen

----------------------------------------------------------------
--- mm/filemap.c.~1~	Thu Nov 26 18:48:52 1998
+++ mm/filemap.c	Fri Nov 27 12:45:03 1998
@@ -200,8 +200,8 @@
 	struct page * page;
 	int count_max, count_min;
 
-	count_max = (limit<<1) >> (priority>>1);
-	count_min = (limit<<1) >> (priority);
+	count_max = limit;
+	count_min = (limit<<2) >> (priority);
 
 	page = mem_map + clock;
 	do {
@@ -214,7 +214,15 @@
 		if (shrink_one_page(page, gfp_mask))
 			return 1;
 		count_max--;
-		if (page->inode || page->buffers)
+		/* 
+		 * If the page we looked at was recyclable but we didn't
+		 * reclaim it (presumably due to PG_referenced), don't
+		 * count it as scanned.  This way, the more referenced
+		 * page cache pages we encounter, the more rapidly we
+		 * will age them. 
+		 */
+		if (atomic_read(&page->count) != 1 ||
+		    (!page->inode && !page->buffers))
 			count_min--;
 		page++;
 		clock++;
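
For concreteness, here is the arithmetic for one illustrative case: limit =
16384 pages (64MB of 4K pages) at priority 6.  These numbers are only an
example to show the shape of the change, not measured or recommended values:

#include <stdio.h>

int main(void)
{
	unsigned long limit = 16384;	/* illustrative: 64MB of 4K pages */
	unsigned long priority = 6;	/* illustrative priority value    */

	/* budgets before the patch */
	unsigned long old_max = (limit << 1) >> (priority >> 1);  /*  4096 */
	unsigned long old_min = (limit << 1) >> priority;         /*   512 */

	/* budgets after the patch */
	unsigned long new_max = limit;                            /* 16384 */
	unsigned long new_min = (limit << 2) >> priority;         /*  1024 */

	printf("old: count_max=%lu count_min=%lu\n", old_max, old_min);
	printf("new: count_max=%lu count_min=%lu\n", new_max, new_min);
	return 0;
}

The larger count_max raises the hard ceiling on a pass, and because count_min
is now only decremented for pages that cannot be recycled anyway, the minimum
scan stretches automatically whenever the cache is full of referenced pages.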

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 16:02   ` [2.1.130-3] Page cache DEFINATELY too persistant... feature? Stephen C. Tweedie
@ 1998-11-27 17:19     ` Chip Salzenberg
  1998-11-27 18:31     ` Linus Torvalds
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 18+ messages in thread
From: Chip Salzenberg @ 1998-11-27 17:19 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Linus Torvalds, Benjamin Redelings I, linux-kernel, linux-mm

According to Stephen C. Tweedie:
> On reflection, there is a completely natural way of distinguishing
> between these two cases, and that is to extend the size of the
> shrink_mmap() pass whenever we encounter many recently touched pages.

This patch has _vastly_ improved my subjective impression of the VM
behavior of 130-pre3.  My computer is a laptop with 32M and a fairly
slow (non-DMA) hard drive; after this patch, things that used to be
quite slow -- especially Navigator -- seem much more snappy.

Thanks!
-- 
Chip Salzenberg      - a.k.a. -      <chip@perlsupport.com>
      "When do you work?"   "Whenever I'm not busy."

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 16:02   ` [2.1.130-3] Page cache DEFINATELY too persistant... feature? Stephen C. Tweedie
  1998-11-27 17:19     ` Chip Salzenberg
@ 1998-11-27 18:31     ` Linus Torvalds
  1998-11-27 19:58     ` Zlatko Calusic
  1998-11-28  7:31     ` Eric W. Biederman
  3 siblings, 0 replies; 18+ messages in thread
From: Linus Torvalds @ 1998-11-27 18:31 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Benjamin Redelings I, linux-kernel, linux-mm



On Fri, 27 Nov 1998, Stephen C. Tweedie wrote:
> 
> The patch below both makes sense from this perspective and seems to
> work, which is always a good sign!  Moreover, it is inherently
> self-tuning.  The more recently-accessed cache pages we encounter, the
> faster we will age the cache.

Looks sane to me. The previous counters never had any good reason behind
them either; this at least tries to reason about it. Applied.

		Linus


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 16:02   ` [2.1.130-3] Page cache DEFINATELY too persistant... feature? Stephen C. Tweedie
  1998-11-27 17:19     ` Chip Salzenberg
  1998-11-27 18:31     ` Linus Torvalds
@ 1998-11-27 19:58     ` Zlatko Calusic
  1998-11-30 11:15       ` Stephen C. Tweedie
  1998-11-30 12:37       ` Rik van Riel
  1998-11-28  7:31     ` Eric W. Biederman
  3 siblings, 2 replies; 18+ messages in thread
From: Zlatko Calusic @ 1998-11-27 19:58 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Linus Torvalds, Benjamin Redelings I, linux-kernel, linux-mm

"Stephen C. Tweedie" <sct@redhat.com> writes:

> Hi,
> 
> Looks like I have a handle on what's wrong with the 2.1.130 vm (in
> particular, its tendency to cache too much at the expense of
> swapping).
> 
> The real problem seems to be that shrink_mmap() can fail for two
> completely separate reasons.  First of all, we might fail to find a
> free page because all of the cache pages we find are recently
> referenced.  Secondly, we might fail to find a cache page at all.
> 
> The first case is an example of an overactive, large cache; the second
> is an example of a very small cache.  Currently, however, we treat
> these two cases pretty much the same.  In the second case, the correct
> reaction is to swap, and 2.1.130 is sufficiently good at swapping that
> we do so, heavily.  In the first case, high cache throughput, what we
> really _should_ be doing is to age the pages more quickly.  What we
> actually do is to swap.
> 
> On reflection, there is a completely natural way of distinguishing
> between these two cases, and that is to extend the size of the
> shrink_mmap() pass whenever we encounter many recently touched pages.
> This is easy to do: simply restricting the "count_min" accounting in
> shrink_mmap to avoid including salvageable but recently-touched pages
> will automatically cause us to age faster as we encounter more touched
> pages in the cache.
> 
> The patch below both makes sense from this perspective and seems to
> work, which is always a good sign!  Moreover, it is inherently
> self-tuning.  The more recently-accessed cache pages we encounter, the
> faster we will age the cache.
> 

Hi!

Yesterday, I was trying to understand the very same problem you're
speaking of. Sometimes kswapd decides to swapout lots of things,
sometimes not.

I applied your patch, but it didn't solve the problem.
To be honest, things are now even slightly worse. :(

Sample output of vmstat 1, while copying lots of stuff to /dev/null:

procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 1 1 0 23696  1656  3276 25128   0   0 6425   62  304  284  20  34  46
 2 0 1 23696  1444  3276 25344   0   0 9265    0  325  315  26  49  26
 2 0 1 23696  1384  3276 25408   0   0 10507    0  333  365  20  55  25
 3 0 1 23696  1408  3276 25388   0   0 10758    0  334  336  23  55  23
 2 0 0 23696  1672  3276 25132   0   0 9965    0  321  328  23  50  27
 3 0 1 23692  1408  3276 25384   4   0 9582    5  315  339  23  45  32
 2 0 1 23692  1400  3276 25392   0   0 9794    0  323  336  21  47  32
 4 0 1 23788  1436  3276 25460   0  96 9146   24  335  325  24  44  32
 2 0 1 23788  1152  3276 25736   0   0 9763    0  321  326  23  46  31
 1 1 1 24760  1356  3276 26504   4 976 1326  244  349  247  21  14  65
 2 0 1 25916   932  3276 28092  16 1192 1621  306  371  271  23   8  69
 1 1 1 26888   976  3276 29012  12 1056  993  264  335  289  19   9  72
 2 0 0 28208  1552  3276 29756   0 1320  750  330  380  276  10   6  84
 1 1 1 29224  1140  3276 31176   4 1040 1444  260  357  270  33  13  54
 2 0 1 30412  1200  3276 32296   8 1196 1131  304  405  274  20   8  73
 3 0 1 31412  1112  3276 33384   0 1000 1092  250  344  269  18  11  71
 2 0 1 32396   532  3276 34948   0 984 1570  246  359  242  19  11  70
 0 3 1 33504  1476  3276 35128   0 1128  197  282  314  279  15   4  81
 3 0 1 35080   648  3276 37520   0 1612 2443  403  299  325  24  13  63
 2 0 1 37116   736  3276 39468   4 2276 2077  575  314  352   8  14  78
 1 1 1 39368  1352  3276 41092   0 2300 1793  575  299  352  36  13  51
 1 1 1 41516   644  3276 43940   0 2356 3071  589  317  353  20  18  62
 1 0 2 43696  1220  3276 45544   4 2420 1848  605  321  354  20  12  68
 0 2 1 44980   532  3276 47512  16 1628 2306  407  318  328  22  14  64
 3 0 1 46512  1000  3276 48576  24 1832 1353  459  314  344  22  12  66
 2 1 0 46932  1648  3340 48284  88 888 3131  222  344  379  23  13  64
 2 1 0 46672  1656  3276 48068 108   0 6313    0  476  369  19  30  51
 3 1 0 46592 19812  3276 29840 156   0 4054    0  324  357  37  22  41


I'll do some more investigation this night.

Regards,
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
		  So much time, and so little to do.

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 16:02   ` [2.1.130-3] Page cache DEFINATELY too persistant... feature? Stephen C. Tweedie
                       ` (2 preceding siblings ...)
  1998-11-27 19:58     ` Zlatko Calusic
@ 1998-11-28  7:31     ` Eric W. Biederman
  1998-11-30 11:13       ` Stephen C. Tweedie
  3 siblings, 1 reply; 18+ messages in thread
From: Eric W. Biederman @ 1998-11-28  7:31 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: linux-mm

>>>>> "ST" == Stephen C Tweedie <sct@redhat.com> writes:

ST> Hi,
ST> Looks like I have a handle on what's wrong with the 2.1.130 vm (in
ST> particular, its tendency to cache too much at the expense of
ST> swapping).

I really should look and play with this but I have one question.

Why does it make sense when we want memory, to write every page
we can to swap before we free any memory?

I can't see how the policy of sticking with a particular method of freeing
pages will hurt in the other cases, where if we say we freed a page we
actually did; but in the current swap-out case it worries me.

Would a limit on the number of pages we try to write to swap before we
start trying to reclaim pages be reasonable?

Eric


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-28  7:31     ` Eric W. Biederman
@ 1998-11-30 11:13       ` Stephen C. Tweedie
  1998-11-30 15:08         ` Rik van Riel
                           ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Stephen C. Tweedie @ 1998-11-30 11:13 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: Stephen C. Tweedie, linux-mm

Hi,

On 28 Nov 1998 01:31:00 -0600, ebiederm+eric@ccr.net (Eric W. Biederman)
said:

>>>>>> "ST" == Stephen C Tweedie <sct@redhat.com> writes:
ST> Hi,
ST> Looks like I have a handle on what's wrong with the 2.1.130 vm (in
ST> particular, its tendency to cache too much at the expense of
ST> swapping).

> I really should look and play with this but I have one question.

> Why does it make sense when we want memory, to write every page
> we can to swap before we free any memory?

What makes you think we do?

2.1.130 tries to shrink cache until a shrink_mmap() pass fails.  Then it
gives the swapper a chance, swapping a batch of pages and unlinking them
from the ptes.  The pages so released still stay in the page cache at
this point, btw, and will be picked up again from memory if they get
referenced before the page finally gets discarded.  We then go back to
shrink_mmap(), hopefully with a larger population of recyclable pages as
a result of the swapout, and we start using that again.

We only run one batch of swapouts before returning to shrink_mmap.
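
As a toy illustration of that cycle, here is a small self-contained C model.
It is purely illustrative, not 2.1.130 source; the helpers only mimic the
behaviour described above (a cache pass that either recycles a page or fails,
and a swap batch that turns process pages into recyclable cache pages):

#include <stdio.h>

static int recyclable = 2;	/* cache pages a cache pass can free now     */
static int swappable  = 8;	/* process pages a swap batch could push out */

static int toy_shrink_mmap(void)	/* returns 1 if it freed a page */
{
	if (recyclable > 0) {
		recyclable--;
		return 1;
	}
	return 0;
}

static int toy_swap_out(void)	/* one batch: queue pages for swap I/O */
{
	int batch = swappable < 4 ? swappable : 4;
	swappable  -= batch;
	recyclable += batch;	/* swapped pages stay in the page cache,
				 * so they become recyclable next pass  */
	return batch;
}

int main(void)
{
	int freed = 0, wanted = 8;

	while (freed < wanted) {
		if (toy_shrink_mmap()) {	/* cache pass succeeded   */
			freed++;
			continue;
		}
		if (!toy_swap_out())		/* cache pass failed: one */
			break;			/* swap batch, then retry */
	}
	printf("freed %d of %d pages\n", freed, wanted);
	return 0;
}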

--Stephen

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 19:58     ` Zlatko Calusic
@ 1998-11-30 11:15       ` Stephen C. Tweedie
  1998-11-30 23:13         ` Zlatko Calusic
  1998-11-30 12:37       ` Rik van Riel
  1 sibling, 1 reply; 18+ messages in thread
From: Stephen C. Tweedie @ 1998-11-30 11:15 UTC (permalink / raw)
  To: Zlatko.Calusic
  Cc: Stephen C. Tweedie, Linus Torvalds, Benjamin Redelings I,
	linux-kernel, linux-mm

Hi,

On 27 Nov 1998 20:58:38 +0100, Zlatko Calusic <Zlatko.Calusic@CARNet.hr>
said:

> Yesterday, I was trying to understand the very same problem you're
> speaking of. Sometimes kswapd decides to swapout lots of things,
> sometimes not.

> I applied your patch, but it didn't solve the problem.
> To be honest, things are now even slightly worse. :(

Well, after a few days of running with the patched 2.1.130, I have never
seen evil cache growth and the performance has been great throughout.
If you can give me a reproducible way of observing bad worst-case
behaviour, I'd love to see it, but right now, things like

	wc /usr/bin/*

run just fine with no swapping of any running apps.

--Stephen


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-27 19:58     ` Zlatko Calusic
  1998-11-30 11:15       ` Stephen C. Tweedie
@ 1998-11-30 12:37       ` Rik van Riel
  1998-11-30 15:12         ` Zlatko Calusic
  1 sibling, 1 reply; 18+ messages in thread
From: Rik van Riel @ 1998-11-30 12:37 UTC (permalink / raw)
  To: Zlatko Calusic
  Cc: Stephen C. Tweedie, Linus Torvalds, Benjamin Redelings I,
	linux-kernel, linux-mm

On 27 Nov 1998, Zlatko Calusic wrote:
> "Stephen C. Tweedie" <sct@redhat.com> writes:
> 
> > The real problem seems to be that shrink_mmap() can fail for two
> > completely separate reasons.  First of all, we might fail to find a
> > free page because all of the cache pages we find are recently
> > referenced.  Secondly, we might fail to find a cache page at all.
> 
> Yesterday, I was trying to understand the very same problem you're
> speaking of. Sometimes kswapd decides to swapout lots of things,
> sometimes not.
> 
> I applied your patch, but it didn't solve the problem.
> To be honest, things are now even slightly worse. :(

The 'fix' is to lower the borrow percentages for both
the buffer cache and the page cache. If we don't do
that (or abolish the percentages completely) kswapd
doesn't have an incentive to switch from a successful
round of swap_out() -- which btw doesn't free any
actual memory so kswapd just continues doing that --
to shrink_mmap().

Another thing we might want to try is inserting the
following test in do_try_to_free_page():

if (atomic_read(&nr_async_pages) >= pager_daemon.swap_cluster)
	state = 0;

This will switch kswapd to shrink_mmap() when we have enough
pages queued for efficient swap I/O. Of course this 'fix'
decreases swap throughput so we might want to think up something
more clever instead...
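
As a toy illustration of what that check buys us (again purely illustrative
C, not kernel code; SWAP_CLUSTER stands in for pager_daemon.swap_cluster):
each "successful" swapout frees no memory immediately, so without some such
check kswapd has no reason to stop queueing swap I/O; with it, the queue is
bounded before we fall back to shrink_mmap():

#include <stdio.h>

#define SWAP_CLUSTER 32		/* stand-in for pager_daemon.swap_cluster */

int main(void)
{
	int nr_async_pages = 0;	/* pages queued for async swap I/O       */
	int state = 2;		/* 2 == "swap_out" in the 2.1.130 switch */
	int passes;

	for (passes = 0; passes < 1000 && state == 2; passes++) {
		nr_async_pages++;	/* swap_out() queues one more page */
		if (nr_async_pages >= SWAP_CLUSTER)
			state = 0;	/* the proposed test: back to shrink_mmap() */
	}
	printf("queued %d pages, then switched to state %d\n",
	       nr_async_pages, state);
	return 0;
}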

regards,

Rik -- now completely used to dvorak kbd layout...
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 11:13       ` Stephen C. Tweedie
@ 1998-11-30 15:08         ` Rik van Riel
  1998-11-30 21:40         ` Eric W. Biederman
  1998-11-30 22:00         ` Eric W. Biederman
  2 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 1998-11-30 15:08 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: Eric W. Biederman, linux-mm

On Mon, 30 Nov 1998, Stephen C. Tweedie wrote:
> On 28 Nov 1998 01:31:00 -0600, ebiederm+eric@ccr.net (Eric W. Biederman)
> said:
> 
> > Why does it make sense when we want memory, to write every page
> > we can to swap before we free any memory?
> 
> What makes you think we do?

What makes you think we don't? Apart from the buffer
and cache borrow percentages kswapd doesn't have any incentive
to switch back from swap_out() to shrink_mmap()...

> 2.1.130 tries to shrink cache until a shrink_mmap() pass fails. 
> Then it gives the swapper a chance, swapping a batch of pages and
> unlinking them from the ptes.  The pages so released still stay in
> the page cache at this point, btw, and will be picked up again from
> memory if they get referenced before the page finally gets
> discarded.  We then go back to shrink_mmap(), hopefully with a
                 ^^^^
The real question is _when_? Is it soon enough to keep the
system in a sane state?

> larger population of recyclable pages as a result of the swapout,
> and we start using that again. 
>
> We only run one batch of swapouts before returning to shrink_mmap.

It's just that this batch can grow so large that it isn't
any fun and kills performance. We _do_ want to fix this...

cheers,

Rik -- hoping that this post makes my point clear...
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 12:37       ` Rik van Riel
@ 1998-11-30 15:12         ` Zlatko Calusic
  1998-11-30 19:29           ` Rik van Riel
  1998-11-30 20:20           ` Andrea Arcangeli
  0 siblings, 2 replies; 18+ messages in thread
From: Zlatko Calusic @ 1998-11-30 15:12 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Stephen C. Tweedie, Linus Torvalds, Benjamin Redelings I,
	linux-kernel, linux-mm

Rik van Riel <H.H.vanRiel@phys.uu.nl> writes:

> On 27 Nov 1998, Zlatko Calusic wrote:
> > "Stephen C. Tweedie" <sct@redhat.com> writes:
> > 
> > > The real problem seems to be that shrink_mmap() can fail for two
> > > completely separate reasons.  First of all, we might fail to find a
> > > free page because all of the cache pages we find are recently
> > > referenced.  Secondly, we might fail to find a cache page at all.
> > 
> > Yesterday, I was trying to understand the very same problem you're
> > speaking of. Sometimes kswapd decides to swapout lots of things,
> > sometimes not.
> > 
> > I applied your patch, but it didn't solve the problem.
> > To be honest, things are now even slightly worse. :(
> 
> The 'fix' is to lower the borrow percentages for both
> the buffer cache and the page cache. If we don't do
> that (or abolish the percentages completely) kswapd
> doesn't have an incentive to switch from a successful
> round of swap_out() -- which btw doesn't free any
> actual memory so kswapd just continues doing that --
> to shrink_mmap().

Yep, this is the conclusion of my experiments, too.

> 
> Another thing we might want to try is inserting the
> following test in do_try_to_free_page():
> 
> if (atomic_read(&nr_async_pages) >= pager_daemon.swap_cluster)
> 	state = 0;
> 
> This will switch kswapd to shrink_mmap() when we have enough
> pages queued for efficient swap I/O. Of course this 'fix'
> decreases swap throughput so we might want to think up something
> more clever instead...
> 

Exactly.

It is funny how we tried the same things in order to find a solution. :)

I made the following change in do_try_to_free_page():

(writing from memory, so take this as the concept rather than exact code)

...
		case 2:
>>			swapouts++;
>>			if (swapouts > pager_daemon.swap_cluster) {
>>				swapouts = 0;
>>				state = 3;
>>			}
			if (swap_out(i, gfp_mask))
				return 1;
			state = 3;
		case 3:
			shrink_dcache_memory(i, gfp_mask);
			state = 0;
		i--;
		} while (i >= 0);


Unfortunately, this really killed swapout performance, so I dropped
the idea. Even letting swap_out do more passes, before changing state, 
didn't feel good.

One other idea I had, was to replace (code at the very beginning of
do_try_to_free_page()):

	if (buffer_over_borrow() || pgcache_over_borrow())
		shrink_mmap(i, gfp_mask);

with:

	if (buffer_over_borrow() || pgcache_over_borrow())
		state = 0;

While this looks like a good idea, in practice it makes kswapd a CPU hog
and doesn't help performance either, because it turns the limits into hard
limits and leaves the system slightly imbalanced.

I'll keep hacking... :)
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
	  If you don't think women are explosive, drop one!

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 15:12         ` Zlatko Calusic
@ 1998-11-30 19:29           ` Rik van Riel
  1998-11-30 22:27             ` Zlatko Calusic
  1998-11-30 20:20           ` Andrea Arcangeli
  1 sibling, 1 reply; 18+ messages in thread
From: Rik van Riel @ 1998-11-30 19:29 UTC (permalink / raw)
  To: Zlatko Calusic
  Cc: Stephen C. Tweedie, Linus Torvalds, Benjamin Redelings I,
	Linux Kernel, Linux MM

On 30 Nov 1998, Zlatko Calusic wrote:
> Rik van Riel <H.H.vanRiel@phys.uu.nl> writes:

> > that (or abolish the percentages completely) kswapd
> > doesn't have an incentive to switch from a successful
> > round of swap_out() -- which btw doesn't free any
> > actual memory so kswapd just continues doing that --
> > to shrink_mmap().
> 
> Yep, this is the conclusion of my experiments, too.

> I made the following change in do_try_to_free_page():

[SNIP]

> Unfortunately, this really killed swapout performance, so I dropped
> the idea. Even letting swap_out do more passes, before changing state, 
> didn't feel good.
> 
> One other idea I had, was to replace (code at the very beginning of
> do_try_to_free_page()):
> 
> 	if (buffer_over_borrow() || pgcache_over_borrow())
> 		shrink_mmap(i, gfp_mask);
> 
> with:
> 
> 	if (buffer_over_borrow() || pgcache_over_borrow())
> 		state = 0;

I am now trying:
	if (buffer_over_borrow() || pgcache_over_borrow() ||
			atomic_read(&nr_async_pages))
		shrink_mmap(i, gfp_mask);

Note that this doesn't stop kswapd from swapping out so
swapout performance shouldn't suffer. It does however
free up memory so kswapd should _terminate_ and keep the
amount of I/O done to a sane level.

Note that I'm running with my experimental swapin readahead
patch enabled, so the system should be stressed even more
than normal :)

cheers,

Rik -- now completely used to dvorak kbd layout...
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 15:12         ` Zlatko Calusic
  1998-11-30 19:29           ` Rik van Riel
@ 1998-11-30 20:20           ` Andrea Arcangeli
  1998-11-30 22:28             ` Zlatko Calusic
  1 sibling, 1 reply; 18+ messages in thread
From: Andrea Arcangeli @ 1998-11-30 20:20 UTC (permalink / raw)
  To: Zlatko Calusic
  Cc: Rik van Riel, Stephen C. Tweedie, Linus Torvalds,
	Benjamin Redelings I, linux-kernel, linux-mm

On 30 Nov 1998, Zlatko Calusic wrote:

>One other idea I had, was to replace (code at the very beginning of

Hey, this idea has been mine for ages! ;-)

>do_try_to_free_page()):
>
>	if (buffer_over_borrow() || pgcache_over_borrow())
>		shrink_mmap(i, gfp_mask);
>
>with:
>
>	if (buffer_over_borrow() || pgcache_over_borrow())
>		state = 0;

This should be the change that fixed the problem for people (or at least
arca-39 fixed it ;). I now have good reports on the arca-39 mm from the
people who sent the mm bug reports to linux-kernel. I guess the only
interesting part of my patch for this issue is the change you mention above
(which I have been using for a long time; since I started hacking the mm
myself I have never had mm problems again, btw ;). With the change above,
Stephen's patch is not needed; I don't know whether it might still be
helpful, though (since I have not read it in detail yet).

Andrea Arcangeli


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 11:13       ` Stephen C. Tweedie
  1998-11-30 15:08         ` Rik van Riel
@ 1998-11-30 21:40         ` Eric W. Biederman
  1998-11-30 22:00         ` Eric W. Biederman
  2 siblings, 0 replies; 18+ messages in thread
From: Eric W. Biederman @ 1998-11-30 21:40 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: linux-mm

>>>>> "ST" == Stephen C Tweedie <sct@redhat.com> writes:

ST> Hi,
ST> Looks like I have a handle on what's wrong with the 2.1.130 vm (in
ST> particular, its tendency to cache too much at the expense of
ST> swapping).

>> I really should look and play with this but I have one question.

>> Why does it make sense when we want memory, to write every page
>> we can to swap before we free any memory?

ST> What makes you think we do?

Reading the code, and a test I just performed.
The limit on the page cache size appears to be the only thing that throttles
this at all.

ST> 2.1.130 tries to shrink cache until a shrink_mmap() pass fails.  Then it
ST> gives the swapper a chance, swapping a batch of pages and unlinking them
ST> from the ptes.  The pages so released still stay in the page cache at
ST> this point, btw, and will be picked up again from memory if they get
ST> referenced before the page finally gets discarded.  We then go back to
ST> shrink_mmap(), hopefully with a larger population of recyclable pages as
ST> a result of the swapout, and we start using that again.

ST> We only run one batch of swapouts before returning to shrink_mmap.

As has been noted elsewhere, the size of a batch of swapouts appears to be
controlled by chance, so there is no bound on its worst-case behavior.

I have just performed a small test based upon my observations of the code.
The practical result of this appears to be that we spend way too much time
in kswapd.

This is my test program.
When it runs, kswapd takes 30%-50% of the processor.
Memory is maxed out.
And its resident set size gets absolutely huge: 12M on a 32M box.

I won't argue that all of this is broken, but I will say that this does seem
to be a little too much time spent in the swap_out routines.

Further, this program takes about 0.02 seconds if the write to memory
is disabled.

0.08user 56.68system 3:36.40elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (6936major+32788minor)pagefaults 30430swaps

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define GET_SIZE (128*1024*1024)

int main(int argc, char **argv)
{
	char *buffer;
	int i;

	/* Map 128MB of anonymous memory... */
	buffer = mmap(NULL, GET_SIZE, PROT_WRITE | PROT_READ | PROT_EXEC,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buffer == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	/* ...then touch one byte per page so every page really gets
	 * instantiated, pushing the box well past its physical memory. */
	for(i = 0; i < GET_SIZE; i+= PAGE_SIZE) {
		buffer[i] = '\0';
	}
	return 0;
}



* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 11:13       ` Stephen C. Tweedie
  1998-11-30 15:08         ` Rik van Riel
  1998-11-30 21:40         ` Eric W. Biederman
@ 1998-11-30 22:00         ` Eric W. Biederman
  2 siblings, 0 replies; 18+ messages in thread
From: Eric W. Biederman @ 1998-11-30 22:00 UTC (permalink / raw)
  To: Stephen C. Tweedie; +Cc: linux-mm


I just performed one more test on
pre-linux-2.1.130 + Stephen Tweedie's vm patch.

I went into /proc and changed pagecache from
5 30 75 to 0 100 100,
then ran my test program.
Until it was done running, the system was locked up.

The same results happened with
0 75 75.

With 0 30 75, however, I can finish composing this email.

I now know definitely that this is an autobalancing problem.

My suggestion would be to drop in a call to shrink_mmap() immediately
after swap_out(), with the same size, and to ignore its return code.

But however we do it, in cases of heavy swapping we need to call shrink_mmap
more often.

Perhaps this evening I can try some more.

Eric

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 19:29           ` Rik van Riel
@ 1998-11-30 22:27             ` Zlatko Calusic
  1998-11-30 23:11               ` Rik van Riel
  0 siblings, 1 reply; 18+ messages in thread
From: Zlatko Calusic @ 1998-11-30 22:27 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Stephen C. Tweedie, Linus Torvalds, Benjamin Redelings I,
	Linux Kernel, Linux MM

Rik van Riel <H.H.vanRiel@phys.uu.nl> writes:

> I am now trying:
> 	if (buffer_over_borrow() || pgcache_over_borrow() ||
> 			atomic_read(&nr_async_pages))
> 		shrink_mmap(i, gfp_mask);
> 
> Note that this doesn't stop kswapd from swapping out so
> swapout performance shouldn't suffer. It does however
> free up memory so kswapd should _terminate_ and keep the
> amount of I/O done to a sane level.

This still slows down swapping somewhat (20-30%) in my tests.

> 
> Note that I'm running with my experimental swapin readahead
> patch enabled, so the system should be stressed even more
> than normal :)
> 

I tried your swapin_readahead patch but it didn't work right:

swap_duplicate at c012054b: entry 00011904, unused page 
swap_duplicate at c012054b: entry 002c8c00, unused page 
swap_duplicate at c012054b: entry 00356700, unused page 
swap_duplicate at c012054b: entry 00370f00, unused page 
swap_duplicate at c012054b: entry 0038d000, unused page 
swap_duplicate at c012054b: entry 0039d100, unused page 
swap_duplicate at c012054b: entry 0000b500, unused page 

c012054b is read_swap_cache_async()

Memory gets eaten when I bang on the MM, and after some time the system
blocks. I also had one FS corruption thanks to that. I didn't investigate
further.

Do you have a newer version of the patch?

Regards,
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
	       Multitasking attempted. System confused.

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 20:20           ` Andrea Arcangeli
@ 1998-11-30 22:28             ` Zlatko Calusic
  0 siblings, 0 replies; 18+ messages in thread
From: Zlatko Calusic @ 1998-11-30 22:28 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Rik van Riel, Stephen C. Tweedie, Linus Torvalds,
	Benjamin Redelings I, linux-kernel, linux-mm

Andrea Arcangeli <andrea@e-mind.com> writes:

> On 30 Nov 1998, Zlatko Calusic wrote:
> 
> >One other idea I had, was to replace (code at the very beginning of
> 
> Hey, this idea has been mine for ages! ;-)

Yep, my apologies. It looked so perfect that I forgot where I saw it
for the first time. :)

Regards,
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
RTFM in Unix: read the fine manual; RTFM in Win32: reboot the fine machine.

* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 22:27             ` Zlatko Calusic
@ 1998-11-30 23:11               ` Rik van Riel
  0 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 1998-11-30 23:11 UTC (permalink / raw)
  To: Zlatko Calusic; +Cc: Stephen C. Tweedie, Linux MM

On 30 Nov 1998, Zlatko Calusic wrote:
> Rik van Riel <H.H.vanRiel@phys.uu.nl> writes:
> 
> > I am now trying:
> > 	if (buffer_over_borrow() || pgcache_over_borrow() ||
> > 			atomic_read(&nr_async_pages))
> > 		shrink_mmap(i, gfp_mask);
> 
> This still slows down swapping somewhat (20-30%) in my tests.

I changed it in the next version of the patch (attached).
There are also a few swapin readahead and kswapd fixes in
it.

> > Note that I'm running with my experimental swapin readahead
> > patch enabled, so the system should be stressed even more
> > than normal :)
> 
> I tried your swapin_readahead patch but it didn't work right:
> 
> swap_duplicate at c012054b: entry 00011904, unused page 

I get those too, but I don't know why, since I use the same
test to decide whether or not to call read_swap_cache_async()...

> Memory gets eaten when I bang on the MM, and after some time the system
> blocks. I also had one FS corruption thanks to that. I didn't
> investigate further.

The system blockage was most likely caused by swapping in
stuff while we were tight on memory. This should be fixed
now.

have fun,

Rik -- now completely used to dvorak kbd layout...
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+

--- linux/mm/page_alloc.c.orig	Thu Nov 26 11:26:49 1998
+++ linux/mm/page_alloc.c	Mon Nov 30 23:14:16 1998
@@ -370,9 +370,28 @@
 	pte_t * page_table, unsigned long entry, int write_access)
 {
 	unsigned long page;
+	int i;
 	struct page *page_map;
+	unsigned long offset = SWP_OFFSET(entry);
+	struct swap_info_struct *swapdev = SWP_TYPE(entry) + swap_info;
 	
 	page_map = read_swap_cache(entry);
+
+	/*
+	 * Primitive swap readahead code. We simply read the
+	 * next 16 entries in the swap area. The break below
+	 * is needed or else the request queue will explode :)
+	 */
+	for (i = 1; i++ < 16;) {
+		offset++;
+		if (!swapdev->swap_map[offset] || offset >= swapdev->max
+			|| nr_free_pages - atomic_read(&nr_async_pages) <
+				(freepages.high + freepages.low)/2)
+			break;
+		read_swap_cache_async(SWP_ENTRY(SWP_TYPE(entry), offset),
+0);
+			break;
+	}
 
 	if (pte_val(*page_table) != entry) {
 		if (page_map)
--- linux/mm/page_io.c.orig	Thu Nov 26 11:26:49 1998
+++ linux/mm/page_io.c	Thu Nov 26 11:30:43 1998
@@ -60,7 +60,7 @@
 	}
 
 	/* Don't allow too many pending pages in flight.. */
-	if (atomic_read(&nr_async_pages) > SWAP_CLUSTER_MAX)
+	if (atomic_read(&nr_async_pages) > pager_daemon.swap_cluster)
 		wait = 1;
 
 	p = &swap_info[type];
--- linux/mm/vmscan.c.orig	Thu Nov 26 11:26:50 1998
+++ linux/mm/vmscan.c	Mon Nov 30 23:11:09 1998
@@ -430,7 +430,9 @@
 	/* Always trim SLAB caches when memory gets low. */
 	kmem_cache_reap(gfp_mask);
 
-	if (buffer_over_borrow() || pgcache_over_borrow())
+	if (buffer_over_borrow() || pgcache_over_borrow() ||
+			atomic_read(&nr_async_pages) > 
+			(pager_daemon.swap_cluster * 3) / 4)
 		shrink_mmap(i, gfp_mask);
 
 	switch (state) {
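
One small note on the readahead loop in the page_alloc.c hunk above: the
header "for (i = 1; i++ < 16;)" allows at most 15 passes, not 16.  A
standalone check of just that loop bound (illustrative only):

#include <stdio.h>

int main(void)
{
	int i, passes = 0;

	for (i = 1; i++ < 16;)	/* same loop header as in the hunk above */
		passes++;

	printf("passes = %d\n", passes);	/* prints 15 */
	return 0;
}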


* Re: [2.1.130-3] Page cache DEFINATELY too persistant... feature?
  1998-11-30 11:15       ` Stephen C. Tweedie
@ 1998-11-30 23:13         ` Zlatko Calusic
  0 siblings, 0 replies; 18+ messages in thread
From: Zlatko Calusic @ 1998-11-30 23:13 UTC (permalink / raw)
  To: Stephen C. Tweedie
  Cc: Linus Torvalds, Benjamin Redelings I, linux-kernel, linux-mm

"Stephen C. Tweedie" <sct@redhat.com> writes:

> Hi,
> 
> On 27 Nov 1998 20:58:38 +0100, Zlatko Calusic <Zlatko.Calusic@CARNet.hr>
> said:
> 
> > Yesterday, I was trying to understand the very same problem you're
> > speaking of. Sometimes kswapd decides to swapout lots of things,
> > sometimes not.
> 
> > I applied your patch, but it didn't solve the problem.
> > To be honest, things are now even slightly worse. :(
> 
> Well, after a few days of running with the patched 2.1.130, I have never
> seen evil cache growth and the performance has been great throughout.
> If you can give me a reproducible way of observing bad worst-case
> behaviour, I'd love to see it, but right now, things like

Be my guest. :)

wc /usr/bin/* never caused any problems for me, but:

{atlas} [/image]% ls -al
total 411290
drwxrwxrwt   5 root     root         1024 Dec  1 00:00 .
drwxr-xr-x  22 root     root         1024 Nov 17 03:04 ..
-rw-r--r--   1 zcalusic users    419430400 Nov 30 23:53 400MB
-rwxr-xr-x   1 zcalusic users         438 Dec  1 00:00 testing-mm
{atlas} [/image]% cat testing-mm 
#! /bin/sh
echo "starting vmstat 1 in background" > report-log
vmstat 1 >> report-log &
sleep 5
echo "starting xemacs" >> report-log; xemacs &
sleep 1
echo "starting netscape" >> report-log; netscape &
sleep 1
echo "starting gimp" >> report-log; gimp &
sleep 1
echo "sleep 45 started" >> report-log
sleep 45
echo "sleep 45 done" >> report-log
echo "cp 400MB /dev/null" >> report-log; cp 400MB /dev/null
kill `pidof vmstat`
{atlas} [/image]% ./testing-mm
{atlas} [/image]% cat report-log
starting vmstat 1 in background
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 1 0 0     0 19424  3660 19420  35  65  582  590  166  173   7  10  83
 0 0 0     0 19376  3660 19420   0   0    0    0  105    8   2   2  96
 0 0 0     0 19376  3660 19420   0   0    0    0  104    6   1   2  97
 0 0 0     0 19376  3660 19420   0   0    0    0  106   10   2   1  97
 0 0 0     0 19376  3660 19420   0   0    0    0  104    6   1   2  97
starting xemacs
 1 0 0     0 19120  3660 19500   0   0   62   21  133   37   5   4  91
starting netscape
 2 0 0     0 17680  3660 20312   0   0  806    0  232  243  10  10  81
starting gimp
 5 0 0     0 14448  3660 20896   0   0  546    0  198  694  65  15  19
 5 0 0     0 13308  3660 21388   0   0  412    0  176  829  67  13  20
sleep 45 started
 3 0 0     0 11396  3660 22232   0   0  579    0  247  540  83  17   0
 3 0 0     0  5520  3660 25012   0   0 2667    7  292  318  86  14   0
 3 0 0     0  1752  3276 27040   0   0 2389    0  275  417  85  15   0
 1 1 0     0  1532  3276 24144   0   0 3035    0  320 1460  53  23  24
 2 0 0     0  1620  3276 22872   0   0 1042    1  250  283  29   5  66
 0 3 0     0  1580  3276 22644   0   0  493  118  222  253  15   5  80
 2 0 0     0  1536  3276 22620   0   0  106   53  237  139   2   5  93
 3 0 0     0  1592  3276 22372   0   0  680    0  194  530  62  10  29
 2 0 0     0  1796  3276 21872   0   0  984    0  173  374  66  10  24
 3 0 0     0  1592  3276 21688   0   0  634    8  205 1523  37  13  50
 2 0 0     0  1624  3276 21196   0   0  656    0  209 3495  59  19  22
 2 0 0     0  1588  3276 20808   0   0  407   77  216  287  35   9  57
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 2 0 0     0  1576  3276 20200   0   0  404    0  176 1265  62  11  28
 1 0 0     0  1652  3276 19520   0   0  516    0  190 1833  64  14  21
 2 0 0     0  1576  3276 19344   0   0  507    0  188  864  27  10  63
 0 0 0     0  1572  3276 18112   0   0  152   83  139  452  21   8  71
 0 0 0     0  1572  3276 18112   0   0    0    0  145   14   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   20   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0   19  114   17   2   2  96
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   12   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    1  105   17   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   12   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   29   3   1  96
 0 0 0     0  1572  3276 18112   0   0    0    1  105   12   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   12   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 0 0 0     0  1572  3276 18112   0   0    0    0  107   14   2   1  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    9  110   14   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  105   12   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  106   13   1   2  97
 0 0 0     0  1572  3276 18112   0   0    0    0  105   19   1   3  96
 0 0 0     0  1572  3276 18112   0   0    0    0  104   13   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
 0 0 0     0  1572  3276 18112   0   0    0    0  104   10   0   3  97
sleep 45 done
cp 400MB /dev/null
 0 1 1   276  1008  3284 18980   0 276 5711   69  206  120   2  20  78
 1 0 0  1020  1664  3276 19076   4 752 8253  190  270  161   2  40  58
 1 0 1  1028  1464  3276 19296   0   8 12540    2  304  187   1  47  52
 0 1 1  3488  1504  3276 21712   0 2460 5450  615  267  204   1  28  71
 0 1 1  6372   648  3276 25452   0 2892 3766  723  242  186   2  13  85
 1 0 1  9336  1172  3276 27892   0 3016 2502  754  250  208   3  12  85
 1 0 1 12136  1372  3276 30492   0 2848 2658  723  241  170   1  10  89
 0 1 1 15016  1140  3276 33604   0 2912 3153  728  250  220   0  12  88
 0 1 1 18016   644  3276 37100   0 3028 3541  757  247  192   2  13  86
 0 0 2 21316  1664  3276 39380   0 3340 2329  835  257  240   1  13  87
 1 0 1 23916  1204  3276 42440   0 2656 3125  664  244  182   2  14  84
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 0 1 1 26468  1000  3276 45196   0 2612 2827  655  233  173   0  18  82
 0 1 0 29260   776  3276 48212   0 2848 3084  712  244  178   2  15  83
 1 0 1 31692   760  3276 50660   0 2544 2570  636  246  165   0  14  86
 1 0 0 32236  1636  3276 50356   0 648 9547  162  285  182   1  35  64
 1 3 0 32192  1656  3276 50316  88   0 9256    0  263  190   2  30  68
 2 1 0 32140  1668  3276 50244 124   0 4982    0  222  168   1  21  78
 1 0 0 32112  1668  3292 50224  60   0 7761    0  240  158   1  34  65
 1 0 1 32112  1448  3276 50460   0   0 13588    0  322  202   2  54  44
 0 2 0 32104  1664  3300 50212  12   0 12123    0  266  188   3  55  42
 1 0 1 32088  1412  3276 50484  24   0 10550    0  284  181   2  40  58
 1 0 1 32068  1504  3276 50392  32   0 10710    5  282  181   2  40  58
 1 0 1 32068  1408  3276 50488   0   0 13200    0  299  205   3  54  43
 1 0 0 32068  1664  3340 50168   0   0 13107    0  303  184   1  47  52
 2 0 1 32068  1464  3276 50444   0   0 12544    0  299  188   1  52  47
 2 0 1 32068  1412  3276 50496   0   0 12385    1  302  195   2  50  48
 2 0 0 32068  1668  3276 50240   0   0 12337    0  296  171   1  52  47
 2 0 1 32064  1372  3276 50532  24   0 10319    0  276  184   2  41  57
 2 0 1 32064  1412  3276 50492   0   0 11822    0  283  181   1  45  54
 2 0 0 32064  1560  3276 50380   0   0 12336    0  296  178   1  48  51
 0 2 0 32024  1656  3276 50252  56   0 5516    3  216  144   2  20  78
 1 1 0 32016  1656  3276 50244  52   0 5237    0  205  129   1  21  78
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 1 1 0 32012  1544  3276 50352  56   0 5362    0  207  131   0  19  81
 1 1 0 32008  1656  3276 50236  76   0 6091    0  231  166   3  25  72
 1 1 0 32000  1668  3328 50164  36   0 5715    0  213  133   0  25  75
 1 1 0 31992  1668  3276 50208  76   0 5970    0  221  165   1  18  81
 1 1 1 31980  1404  3276 50460  44   0 5532    0  215  140   1  22  77
 0 3 1 31960  1388  3276 50468  72   0 5002    0  219  153   1  18  81
 0 3 0 31916  1644  3276 50172  64   0 5027    0  222  157   2  19  79
 1 1 0 31896  1640  3276 50140  28   0 5783    0  226  155   2  24  74
 1 1 0 31892  1624  3276 50160  64   0 6746    2  220  143   1  25  74
 1 1 0 31888  1568  3276 50212  56   0 7306    0  225  153   2  24  74
 1 1 1 31872  1412  3340 50280  48   0 7168    0  247  186   2  25  73
 1 1 1 31836  1408  3276 50312  56   0 5717    0  216  147   1  20  79
 2 1 0 31820  1640  3276 50080  32   0 10039    0  242  231   1  43  56
 3 0 1 31788  1412  3276 50272  80   0 5964    2  220  158   0  27  73
 1 1 0 31752  1608  3276 50036  48   0 5525    0  218  152   1  23  76
 1 1 0 31736  1664  3276 49964  12   0 4950    0  201  124   1  20  79
 1 1 0 31704  1648  3276 49948  56   0 4997    0  210  128   2  16  82
 1 1 0 31700  1588  3276 50004  64   0 6557    0  227  152   3  27  70
 2 0 0 31660  1656  3340 49840  48   0 6109    1  232  162   2  23  75
 1 1 0 31628  1668  3276 49864  56   0 7531    0  227  150   1  35  64
 1 1 0 31608  1572  3276 49940  16   0 6866    0  224  166   1  27  72
 procs                  memory    swap        io    system         cpu
 r b w  swpd  free  buff cache  si  so   bi   bo   in   cs  us  sy  id
 1 1 0 31540  1656  3276 49792  48   0 6320    0  237  167   2  28  70
 0 2 0 31536  1668  3276 49772  28   0 7218    2  229  169   1  29  70
 1 1 0 31524  1668  3276 49800  40   0 4933    0  250  121   2  24  74
 2 0 0 31524  1668  3276 49800  12   0 7151    0  252  143   1  28  71
 1 1 0 31512  1656  3276 49800  20   0 6010    0  229  136   1  25  74
 0 3 0 30600  1616  3296 49756 156   0 2573    0  223  206   1  14  85
 1 2 0 30448  1592  3276 49588 308   0  479    2  250  309   2   1  97

The machine in question has 64MB of RAM, most of which was used up by
firing up xemacs, netscape & gimp. Copying 400MB to /dev/null swapped out
32MB (almost all of the used memory) in a matter of seconds.

Your patch is applied, of course. :)

Comments?
-- 
Posted by Zlatko Calusic           E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
		Oops. My brain just hit a bad sector.


Thread overview: 18+ messages
     [not found] <199811261236.MAA14785@dax.scot.redhat.com>
     [not found] ` <Pine.LNX.3.95.981126094159.5186D-100000@penguin.transmeta.com>
1998-11-27 16:02   ` [2.1.130-3] Page cache DEFINATELY too persistant... feature? Stephen C. Tweedie
1998-11-27 17:19     ` Chip Salzenberg
1998-11-27 18:31     ` Linus Torvalds
1998-11-27 19:58     ` Zlatko Calusic
1998-11-30 11:15       ` Stephen C. Tweedie
1998-11-30 23:13         ` Zlatko Calusic
1998-11-30 12:37       ` Rik van Riel
1998-11-30 15:12         ` Zlatko Calusic
1998-11-30 19:29           ` Rik van Riel
1998-11-30 22:27             ` Zlatko Calusic
1998-11-30 23:11               ` Rik van Riel
1998-11-30 20:20           ` Andrea Arcangeli
1998-11-30 22:28             ` Zlatko Calusic
1998-11-28  7:31     ` Eric W. Biederman
1998-11-30 11:13       ` Stephen C. Tweedie
1998-11-30 15:08         ` Rik van Riel
1998-11-30 21:40         ` Eric W. Biederman
1998-11-30 22:00         ` Eric W. Biederman
