* Re: [PATCH] fix for VM test9-pre,
@ 2000-10-02 19:01 Ying Chen/Almaden/IBM
0 siblings, 0 replies; 7+ messages in thread
From: Ying Chen/Almaden/IBM @ 2000-10-02 19:01 UTC (permalink / raw)
To: Rik van Riel; +Cc: linux-mm, Andrea Arcangeli, Ingo Molnar
>> Example 2: I ran SPEC SFS tests to stress the Linux box. During
>> the tests, the lower memory will be filled up with inode cache
>> and dcache entries, while HIGHMEM is not quite used at all. Once
>> this happens, again, any interactive commands would take forever
>> to finish... Eventually, SPEC SFS would timeout and fail.
>> Sometimes, if I managed to kill some processes, I could
>> temporarily get some other applications to run. But most of the
>> applications would get stuck somewhere very quickly later on.
> However, I have no idea why your buffers and pagecache pages
> aren't bounced into the HIGHMEM zone ... They /should/ just
> be moved to the HIGHMEM zone where they don't bother the rest
> of the system, but for some reason it looks like that doesn't
> work right on your system ...
In the second example (running SPEC SFS), not much buffer space is used,
though. All of the memory in NORMAL is taken by inode cache and dcache
entries, so it doesn't seem that bouncing buffers to highmem would help
much in that case.
Also, I don't think it's the case that the buffers and page cache are not
being bounced to highmem. For example, I tried sticking
shrink_icache_memory() and shrink_dcache_memory() below
refill_inactive_scan(xx) in kswapd(), to let it clean up some inode and
dcache entries even when there is no memory pressure (every 1 second, I
think). This seems to make the system go back to normal. When this
happens, the system was able to use all the available space, both HIGH
and LOW, for page cache and buffers.
But as you can see, this fix doesn't quite make sense, at least not to
me, and I don't know whether it would break anything else.
Ying
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/
* Re: [PATCH] fix for VM test9-pre,
2000-10-02 19:52 ` Andrea Arcangeli
@ 2000-10-02 19:56 ` Rik van Riel
0 siblings, 0 replies; 7+ messages in thread
From: Rik van Riel @ 2000-10-02 19:56 UTC (permalink / raw)
To: Andrea Arcangeli; +Cc: Ying Chen/Almaden/IBM, linux-mm, Ingo Molnar
On Mon, 2 Oct 2000, Andrea Arcangeli wrote:
> > I can dig out the bug report if you want ;)
>
> I read one that you sent to TYTSO and I believe classzone should
> take care of that highmem problem.
If that is the case, could you extract the bugfix from
the classzone code and send it to the list?
regards,
Rik
--
"What you're running that piece of shit Gnome?!?!"
-- Miguel de Icaza, UKUUG 2000
http://www.conectiva.com/ http://www.surriel.com/
* Re: [PATCH] fix for VM test9-pre,
2000-10-02 19:28 ` Rik van Riel
@ 2000-10-02 19:52 ` Andrea Arcangeli
2000-10-02 19:56 ` Rik van Riel
0 siblings, 1 reply; 7+ messages in thread
From: Andrea Arcangeli @ 2000-10-02 19:52 UTC (permalink / raw)
To: Rik van Riel; +Cc: Ying Chen/Almaden/IBM, linux-mm, Ingo Molnar
On Mon, Oct 02, 2000 at 04:28:48PM -0300, Rik van Riel wrote:
> Yup, indeed. I guess we need some extra logic to prevent the
> system from trying to fill all of low memory with dirty
> pages just because all of the highmem pages are free.
A dirty page is allocated in HIGHMEM immediately, because it's allocated
with GFP_HIGHMEM (see the page_cache_alloc() macro). Only the I/O is
slower then (compared to a non-highmem machine), because we need bounce
buffers for it (and that thrashes the memory bus and makes the I/O
slower, but it's not a matter of virtual memory balancing as far as I can
see).
> Unfortunately, I DID get a few bug reports about
> 2.4.0-test6 and earlier kernels that DID show this
> bug ...
So that may be yet another MM bug, since I remember Ying said he didn't
see the bad behaviour in test6.
> I can dig out the bug report if you want ;)
I read one that you sent to TYTSO and I believe classzone should take care of
that highmem problem.
Andrea
* Re: [PATCH] fix for VM test9-pre,
2000-10-02 19:25 ` Andrea Arcangeli
@ 2000-10-02 19:28 ` Rik van Riel
2000-10-02 19:52 ` Andrea Arcangeli
0 siblings, 1 reply; 7+ messages in thread
From: Rik van Riel @ 2000-10-02 19:28 UTC (permalink / raw)
To: Andrea Arcangeli; +Cc: Ying Chen/Almaden/IBM, linux-mm, Ingo Molnar
On Mon, 2 Oct 2000, Andrea Arcangeli wrote:
> On Mon, Oct 02, 2000 at 02:07:51PM -0300, Rik van Riel wrote:
> > However, I have no idea why your buffers and pagecache pages
> > aren't bounced into the HIGHMEM zone ... They /should/ just
>
> buffers/dcache/icache can't be allocated in the HIGHMEM zone. Only
> page cache can live in HIGHMEM, by using bounce buffers for doing
> the I/O.
Yup, indeed. I guess we need some extra logic to prevent the
system from trying to fill all of low memory with dirty
pages just because all of the highmem pages are free.
(also, having more than say 200MB in the write-behind queue
probably doesn't make any sense)
> > be moved to the HIGHMEM zone where they don't bother the rest
> > of the system, but for some reason it looks like that doesn't
> > work right on your system ...
>
> That shouldn't be the problem; the bounce buffer logic hasn't
> changed since test6, which is reported not to show the bad
> behaviour.
Unfortunately, I DID get a few bug reports about
2.4.0-test6 and earlier kernels that DID show this
bug ...
I can dig out the bug report if you want ;)
regards,
Rik
--
"What you're running that piece of shit Gnome?!?!"
-- Miguel de Icaza, UKUUG 2000
http://www.conectiva.com/ http://www.surriel.com/
* Re: [PATCH] fix for VM test9-pre,
2000-10-02 17:07 ` Rik van Riel
@ 2000-10-02 19:25 ` Andrea Arcangeli
2000-10-02 19:28 ` Rik van Riel
0 siblings, 1 reply; 7+ messages in thread
From: Andrea Arcangeli @ 2000-10-02 19:25 UTC (permalink / raw)
To: Rik van Riel; +Cc: Ying Chen/Almaden/IBM, linux-mm, Ingo Molnar
On Mon, Oct 02, 2000 at 02:07:51PM -0300, Rik van Riel wrote:
> However, I have no idea why your buffers and pagecache pages
> aren't bounced into the HIGHMEM zone ... They /should/ just
buffers/dcache/icache can't be allocated in the HIGHMEM zone. Only page
cache can live in HIGHMEM, by using bounce buffers for doing the I/O.
> be moved to the HIGHMEM zone where they don't bother the rest
> of the system, but for some reason it looks like that doesn't
> work right on your system ...
That shouldn't be the problem; the bounce buffer logic hasn't changed
since test6, which is reported not to show the bad behaviour.
Andrea
* Re: [PATCH] fix for VM test9-pre,
2000-10-02 16:40 Ying Chen/Almaden/IBM
@ 2000-10-02 17:07 ` Rik van Riel
2000-10-02 19:25 ` Andrea Arcangeli
0 siblings, 1 reply; 7+ messages in thread
From: Rik van Riel @ 2000-10-02 17:07 UTC (permalink / raw)
To: Ying Chen/Almaden/IBM; +Cc: linux-mm, Andrea Arcangeli, Ingo Molnar
On Mon, 2 Oct 2000, Ying Chen/Almaden/IBM wrote:
> There are a couple of strange behaviors I saw with this VM patch
> on my box. I ran Linux test9-pre7 with the newest VM patch on a
> Dell PowerEdge with 2 GB memory.
>
> This patch seems to make interactive applications run with very very long
> response times when memory is in short supply.
> Example 1: If I do mke2fs on a 90GB file system, after halfway
> through making the file system, the lower 1GB is filled up. mkfs
> takes practically forever to finish. I checked the sysrq-m
> output: DMA has only 512K buffer, NORMAL has 1020K, HIGHMEM has
> 1 GB. When this happens, I basically cannot do anything, not
> even ls, df, top, etc. They all take forever to run. If I kill
> mkfs (closing the telnet session that mkfs was in), things
> start to come back alive. It almost feels like something got
> stuck somewhere.
> Example 2: I ran SPEC SFS tests to stress the Linux box. During
> the tests, the lower memory will be filled up with inode cache
> and dcache entries, while HIGHMEM is not quite used at all. Once
> this happens, again, any interactive commands would take forever
> to finish... Eventually, SPEC SFS would timeout and fail.
> Sometimes, if I managed to kill some processes, I could
> temporarily get some other applications to run. But most of the
> applications would get stuck somewhere very quickly later on.
>
> I don't see such behavior in test6 though.
> Any ideas?
This is a balancing issue. Since you have 1GB of free memory,
the system tries to use that memory.
However, I have no idea why your buffers and pagecache pages
aren't bounced into the HIGHMEM zone ... They /should/ just
be moved to the HIGHMEM zone where they don't bother the rest
of the system, but for some reason it looks like that doesn't
work right on your system ...
Andrea, Ingo? Do you have any idea what could be going wrong here?
regards,
Rik
--
"What you're running that piece of shit Gnome?!?!"
-- Miguel de Icaza, UKUUG 2000
http://www.conectiva.com/ http://www.surriel.com/
* Re: [PATCH] fix for VM test9-pre,
@ 2000-10-02 16:40 Ying Chen/Almaden/IBM
2000-10-02 17:07 ` Rik van Riel
0 siblings, 1 reply; 7+ messages in thread
From: Ying Chen/Almaden/IBM @ 2000-10-02 16:40 UTC (permalink / raw)
To: Rik van Riel; +Cc: linux-mm
Hi,
There are a couple of strange behaviors I saw with this VM patch on my
box. I ran Linux test9-pre7 with the newest VM patch on a Dell PowerEdge
with 2 GB memory.
This patch seems to make interactive applications run with very very long
response times when memory is in short supply.
Example 1: If I do mke2fs on a 90GB file system, after halfway through
making the file system, the lower 1GB is filled up. mkfs takes
practically forever to finish. I checked the sysrq-m output: DMA has only
512K buffer, NORMAL has 1020K, HIGHMEM has 1 GB. When this happens, I
basically cannot do anything, not even ls, df, top, etc. They all take
forever to run. If I kill mkfs (closing the telnet session that mkfs was
in), things start to come back alive. It almost feels like something got
stuck somewhere.
Example 2: I ran SPEC SFS tests to stress the Linux box. During the
tests, the lower memory will be filled up with inode cache and dcache
entries, while HIGHMEM is not quite used at all. Once this happens,
again, any interactive commands would take forever to finish...
Eventually, SPEC SFS would timeout and fail. Sometimes, if I managed to
kill some processes, I could temporarily get some other applications to
run. But most of the applications would get stuck somewhere very quickly
later on.
I don't see such behavior in test6, though.
Any ideas?
Ying