linux-mm.kvack.org archive mirror
* Re: [PATCH] Re: simple FS application that hangs 2.4-test5, mem mgmt problem or FS buffer cache mgmt problem?
@ 2000-09-22 23:59 Ying Chen/Almaden/IBM
  0 siblings, 0 replies; 4+ messages in thread
From: Ying Chen/Almaden/IBM @ 2000-09-22 23:59 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Theodore Y. Ts'o

Hi, Rik,

I think I may have found the cause of the memory problem that I
mentioned to you a while back. Correct me if I'm wrong.

The problem seems to be that when I ran SPEC SFS with large IOPS tests,
it created millions of files and directories. Linux uses a huge amount
of memory for the inode and dentry caches (close to 1.5 GB). The rest
of the memory (I had 2 GB in total) is used for the read/write buffer
caches, some kernel nfsd thread code pages, etc. When memory is
exhausted, kswapd kicks in to free up pages. However, in some cases
do_try_to_free_pages() is called from a context that cannot do IO; I
think the calls were made from __alloc_pages() in the networking code.
Since not much memory is used for the buffer cache and mmaps,
shrink_mmap() does not return anything useful when try_to_free_pages()
is called. Yet because GFP_IO is not turned on, there is no way to free
the memory used for the inode and dentry caches. So the memory
allocation for the NIC driver fails, and I got "IP: queue_glue: no
memory available" kinds of messages on the console.
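
To spell out what I mean, here is the shape of the reclaim path as I
read mm/vmscan.c and fs/dcache.c (a paraphrase, not the exact -test
source; the function name below is mine, and the __GFP_IO check may
actually live inside the shrinkers rather than around the calls):

    /*
     * Paraphrase of the reclaim loop, not the exact 2.4-test source.
     * shrink_mmap() (page cache) runs for every caller, but pruning
     * the dentry/inode caches (and swapping) needs IO, so an atomic
     * caller from the network code never gets that far.
     */
    static int free_pages_sketch(int priority, unsigned int gfp_mask)
    {
            do {
                    if (shrink_mmap(priority, gfp_mask))
                            return 1;       /* freed a page cache page */

                    if (gfp_mask & __GFP_IO) {
                            shrink_dcache_memory(priority, gfp_mask);
                            shrink_icache_memory(priority, gfp_mask);
                            if (swap_out(priority, gfp_mask))
                                    return 1;
                    }
                    /* with !__GFP_IO and an empty page cache there is
                     * nothing left to free, so the allocation fails */
            } while (--priority >= 0);

            return 0;
    }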

I printed out some messages from the VM code. I can see that the system
ran into an infinite loop of some sort, which I don't quite understand
yet. I would have thought I'd get a system crash at some point, since
running out of memory should only make individual operations fail, but
I have not tracked down why it went into the infinite loop. Sysrq-m
tells me that I have run out of memory in both the DMA and NORMAL
zones. For HIGHMEM, I still have 800 MB available, but most of it is
from the 2K pool, with only a few pages from the other pools. I can't
quite explain this either; it seems that I should have run out of
HIGHMEM also....

Any ideas?

BTW, the tests were run against test6.

Ying

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

^ permalink raw reply	[flat|nested] 4+ messages in thread
* Re: [PATCH] Re: simple FS application that hangs 2.4-test5, mem mgmt problem or FS buffer cache mgmt problem?
@ 2000-09-26  1:53 Ying Chen/Almaden/IBM
  0 siblings, 0 replies; 4+ messages in thread
From: Ying Chen/Almaden/IBM @ 2000-09-26  1:53 UTC (permalink / raw)
  To: ak; +Cc: linux-mm

Andi,

It does not seem to be a fragmentation problem. I did a bit more
investigation on it. It turns out that when __alloc_pages() was called,
the zonelist passed in had only two zones (DMA and LOWMEM/NORMAL), and
both of those zones had been exhausted, so __alloc_pages() returned
NULL. However, my HIGHMEM zone still had 800 MB free, and that memory
is not fragmented in any way (it has lots of 2MB buffers). I was wrong
in my last email when I said HIGHMEM had 2K buffers; it should really
have been 2048KB buffers. Sorry about that, I misread the console
output. try_to_free_pages() was not able to return anything useful for
the DMA and NORMAL zones, since all the memory used in those zones was
for the inode cache and the directory (dentry) cache. Unless GFP_IO is
turned on, do_try_to_free_pages() will not be able to free any of that
memory, I think, despite the almost 1 GB of memory left in the HIGHMEM
zone.

Why did the zonelist contain only the first two zones and not all
three? I'm trying to find the answer myself from the source.....
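
My working guess so far, from a first pass over build_zonelists() in
mm/page_alloc.c, is that every gfp_mask value gets its own pre-built
zonelist, and the HighMem zone is only put on the list when
__GFP_HIGHMEM is set in the mask. Paraphrased (the function name below
is mine, and the __GFP_DMA and empty-zone cases are omitted, so don't
take this as the exact source):

    /*
     * Paraphrase of build_zonelists(), not the exact source.  One
     * zonelist per gfp_mask value; HighMem is only included for
     * callers that pass __GFP_HIGHMEM.
     */
    static void build_zonelists_sketch(pg_data_t *pgdat)
    {
            int i;

            for (i = 0; i < NR_GFPINDEX; i++) {
                    zonelist_t *zonelist = pgdat->node_zonelists + i;
                    int j = 0;

                    if (i & __GFP_HIGHMEM)
                            zonelist->zones[j++] =
                                    pgdat->node_zones + ZONE_HIGHMEM;
                    zonelist->zones[j++] =
                            pgdat->node_zones + ZONE_NORMAL;
                    zonelist->zones[j++] =
                            pgdat->node_zones + ZONE_DMA;
                    zonelist->zones[j] = NULL;      /* terminator */
            }
    }

If that reading is right, GFP_ATOMIC/GFP_KERNEL allocations from the
network code can only ever see NORMAL and DMA, which would explain the
two-zone zonelist I saw.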


Ying
---------------------- Forwarded by Ying Chen/Almaden/IBM on 09/25/2000
06:34 PM ---------------------------

Ying Chen
09/22/2000 04:59 PM

To:   Rik van Riel <riel@conectiva.com.br>
cc:   "Theodore Y. Ts'o" <tytso@mit.edu>
From: Ying Chen/Almaden/IBM@IBMUS
Subject:  Re: [PATCH] Re: simple FS application that hangs 2.4-test5, mem
      mgmt problem or FS buffer cache mgmt problem?  (Document link: Ying
      Chen)

Hi, Rik,

I think I may have found the cause of the memory problem that I
mentioned to you a while back. Correct me if I'm wrong.

The problem seems to be that when I ran SPEC SFS with large IOPS tests,
it created millions of files and directories. Linux uses a huge amount
of memory for the inode and dentry caches (close to 1.5 GB). The rest
of the memory (I had 2 GB in total) is used for the read/write buffer
caches, some kernel nfsd thread code pages, etc. When memory is
exhausted, kswapd kicks in to free up pages. However, in some cases
do_try_to_free_pages() is called from a context that cannot do IO; I
think the calls were made from __alloc_pages() in the networking code.
Since not much memory is used for the buffer cache and mmaps,
shrink_mmap() does not return anything useful when try_to_free_pages()
is called. Yet because GFP_IO is not turned on, there is no way to free
the memory used for the inode and dentry caches. So the memory
allocation for the NIC driver fails, and I got "IP: queue_glue: no
memory available" kinds of messages on the console.

I printed out some messages from the VM code. I can see that the system
ran into an infinite loop of some sort, which I don't quite understand
yet. I would have thought I'd get a system crash at some point, since
running out of memory should only make individual operations fail, but
I have not tracked down why it went into the infinite loop. Sysrq-m
tells me that I have run out of memory in both the DMA and NORMAL
zones. For HIGHMEM, I still have 800 MB available, but most of it is
from the 2K pool, with only a few pages from the other pools. I can't
quite explain this either; it seems that I should have run out of
HIGHMEM also....

Any ideas?

BTW, the tests were run against test6.

Ying



--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

^ permalink raw reply	[flat|nested] 4+ messages in thread
[parent not found: <200009121926.e8CJQGN28377@trampoline.thunk.org>]
* Re: [PATCH] Re: simple FS application that hangs 2.4-test5, mem mgmt problem or FS buffer cache mgmt problem?
@ 2000-09-05 17:02 Ying Chen/Almaden/IBM
  0 siblings, 0 replies; 4+ messages in thread
From: Ying Chen/Almaden/IBM @ 2000-09-05 17:02 UTC (permalink / raw)
  To: Rik van Riel

OK, I got some alt-sysrq-m output from my SPEC SFS test, but the
problem was a new one. Here is a description of the problem and the
alt-sysrq-m output.

I was trying to run a SPEC SFS test with high IOPS like I did before.
This time it was not the SPEC SFS server that died, but the client,
which ran 2.4-test6 SMP. The client machine is a 2-way IBM
Intellistation M Pro with 400 MHz PII processors and 1 GB of memory.
The 2-way did not seem to die hard; the VM was still trying to kill
various processes, and I got console messages like "VM: killing process
sfs" once in a while (since I have multiple sfs threads, I guess). But
I could not do anything while it was spitting those messages out.
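
As far as I can tell, that message comes from the out-of-memory exit of
the i386 page fault handler: when handle_mm_fault() cannot get the
memory it needs, the faulting task is simply killed, which would
explain why the sfs threads get picked off one by one instead of the
machine going down at once. Roughly (paraphrased from
arch/i386/mm/fault.c, not the exact source):

    /*
     * Paraphrase of the fault handler's out-of-memory path, not the
     * exact 2.4-test source.  With "Free swap: 0kB" (see the sysrq
     * output below) there is nothing left to reclaim, so
     * handle_mm_fault() fails and we land here.
     */
    out_of_memory:
            up(&mm->mmap_sem);
            printk("VM: killing process %s\n", tsk->comm);
            if (error_code & 4)         /* fault came from user mode */
                    do_exit(SIGKILL);
            goto no_context;            /* kernel-mode fault: oops path */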

I did alt-sysrq-m then. Here is the output:

SysRq: Show Memory
Mem-info:
Free Pages: 1740 kB (0 kB HighMem)
(Free: 435, lru-cache: 2818 (256 512 768) )
 M11 DMA: 4 * 4kB 3 * 8kB 2 * 16 kB 2 * 32 kB 3 * 64 kB 1 * 128 kB 1 * 256
kB 0 * 1024 KB 0 * 2048 kB = 712 kB)
 M 11 Normal: 3 * 4 kB 1 * 8 kB 1 * 16 kB 1 * 32 kB 1 * 64 kB  1 * 128 kB 1
* 256 kB 1 * 512 kB 0 * 1024 kB 0 * 2048 kB = 1024 kB)
 L00 HighMem = 0kB)
Swap cache: add 37995, delete 37995, find 1274/7279
Free swap: 0kB
229376 pages of RAM
0 pages of HIGHMEM
5194 reserved pages
170 pages shared
0 pages swap cached
0 pages in page table cache
Buffer memory: 80 kB
     CLEAN: 26 buffers, 26 kbyte, 9 used (last = 11), 0 locked, 0
protected, 0 dirty
     LOCKED: 54 buffers, 54 kbyte, 31 used (last = 54), 0 locked, 0
protected, 0 dirty


Ying Chen

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux.eu.org/Linux-MM/

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2000-09-26  1:53 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2000-09-22 23:59 [PATCH] Re: simple FS application that hangs 2.4-test5, mem mgmt problem or FS buffer cache mgmt problem? Ying Chen/Almaden/IBM
  -- strict thread matches above, loose matches on Subject: below --
2000-09-26  1:53 Ying Chen/Almaden/IBM
     [not found] <200009121926.e8CJQGN28377@trampoline.thunk.org>
2000-09-12 21:04 ` Rik van Riel
2000-09-05 17:02 Ying Chen/Almaden/IBM
