* oom-killer why ?
@ 2008-08-25 11:25 Marco Nietz
2008-08-25 15:56 ` Christoph Lameter
2008-08-26 11:11 ` Balbir Singh
0 siblings, 2 replies; 18+ messages in thread
From: Marco Nietz @ 2008-08-25 11:25 UTC (permalink / raw)
To: linux-mm
Today I met the oom-killer for the first time, and I could not
understand why it happened.
Swap and highmem look fine. Could this be a lowmem problem with the
bigmem (PAE) kernel?
It's a machine with 2x4 Xeon cores and 16GB of physical memory, running
Debian Etch with kernel 2.6.18-6-686-bigmem.
Here is the dmesg output:
oom-killer: gfp_mask=0x84d0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c014a439>] __pte_alloc+0x11/0x9e
[<c014b864>] copy_page_range+0x155/0x3da
[<c01ba1d8>] vsnprintf+0x419/0x457
[<c011c184>] copy_process+0xa73/0x10a9
[<c011ca1f>] do_fork+0x91/0x17a
[<c0124d67>] do_gettimeofday+0x31/0xce
[<c01012c2>] sys_clone+0x28/0x2d
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:171
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:26
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:29
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5949076kB (5941820kB HighMem)
Active:1102100 inactive:1373666 dirty:4831 writeback:0 unstable:0
free:1487269 slab:35543 mapped:139487 pagetables:152485
DMA free:3592kB min:68kB low:84kB high:100kB active:24kB inactive:16kB
present:16384kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941820kB min:512kB low:18148kB high:35784kB
active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 2*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3592kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331931*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941820kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
7012372 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35576 pages slab
142180 pages pagetables
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
6977702 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35609 pages slab
138447 pages pagetables
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
6901408 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35576 pages slab
134910 pages pagetables
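For what it's worth, the gfp_mask in the trace header can be decoded to see which zones the allocation was allowed to use. A small sketch; the flag values are taken from 2.6-era include/linux/gfp.h (an assumption, check the exact kernel's headers):

```python
# Decode an oom-killer gfp_mask into flag names.
# Bit values assumed from 2.6-era include/linux/gfp.h.
FLAGS = {
    0x01: "__GFP_DMA", 0x02: "__GFP_HIGHMEM", 0x10: "__GFP_WAIT",
    0x20: "__GFP_HIGH", 0x40: "__GFP_IO", 0x80: "__GFP_FS",
    0x400: "__GFP_REPEAT", 0x8000: "__GFP_ZERO",
}

def decode(mask):
    return [name for bit, name in FLAGS.items() if mask & bit]

print(decode(0x84d0))
# -> ['__GFP_WAIT', '__GFP_IO', '__GFP_FS', '__GFP_REPEAT', '__GFP_ZERO']
```

Note that 0x84d0 carries no __GFP_HIGHMEM bit, so the allocation had to be satisfied from the small DMA/Normal (lowmem) zones, no matter how much HighMem was free.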
Thanks in advance,
Marco
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: oom-killer why ?
2008-08-25 11:25 oom-killer why ? Marco Nietz
@ 2008-08-25 15:56 ` Christoph Lameter
2008-08-25 16:18 ` Marco Nietz
2008-08-25 17:36 ` Larry Woodman
2008-08-26 11:11 ` Balbir Singh
1 sibling, 2 replies; 18+ messages in thread
From: Christoph Lameter @ 2008-08-25 15:56 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
Marco Nietz wrote:
> DMA32: empty
> Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
> 1*1024kB 1*2048kB 0*4096kB = 3664kB
If the flags are for a regular allocation then you have something that
leaks kernel memory (a device driver?). Can you get us the output of
/proc/meminfo and /proc/vmstat?
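A quick way to snapshot those so successive runs can be compared; this assumes a standard Linux /proc and is only a sketch, not part of the original request:

```shell
# Snapshot /proc/meminfo and /proc/vmstat with a timestamp.
# On a bigmem/PAE kernel, LowFree and PageTables are the numbers to watch.
ts=$(date +%Y%m%d-%H%M%S)
cat /proc/meminfo > "meminfo-$ts.txt"
cat /proc/vmstat  > "vmstat-$ts.txt"
grep -E '^(MemFree|LowTotal|LowFree|PageTables|Slab)' "meminfo-$ts.txt" || true
```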
* Re: oom-killer why ?
2008-08-25 15:56 ` Christoph Lameter
@ 2008-08-25 16:18 ` Marco Nietz
2008-08-25 16:29 ` Christoph Lameter
2008-08-25 17:36 ` Larry Woodman
1 sibling, 1 reply; 18+ messages in thread
From: Marco Nietz @ 2008-08-25 16:18 UTC (permalink / raw)
To: linux-mm
Hi,
here's meminfo:
MemTotal: 16629224 kB
MemFree: 384516 kB
Buffers: 936 kB
Cached: 14711232 kB
SwapCached: 60 kB
Active: 3154296 kB
Inactive: 12669472 kB
HighTotal: 15854912 kB
HighFree: 20872 kB
LowTotal: 774312 kB
LowFree: 363644 kB
SwapTotal: 7815612 kB
SwapFree: 7811560 kB
Dirty: 64208 kB
Writeback: 0 kB
AnonPages: 1111428 kB
Mapped: 303440 kB
Slab: 157620 kB
PageTables: 238648 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 16130224 kB
Committed_AS: 2329552 kB
VmallocTotal: 118776 kB
VmallocUsed: 8596 kB
VmallocChunk: 110060 kB
and here's vmstat:
nr_anon_pages 282519
nr_mapped 75910
nr_file_pages 3673378
nr_slab 39474
nr_page_table_pages 61177
nr_dirty 4911
nr_writeback 0
nr_unstable 0
nr_bounce 0
pgpgin 1878625233
pgpgout 594256837
pswpin 111708
pswpout 112242
pgalloc_dma 6685603
pgalloc_dma32 0
pgalloc_normal 1137887133
pgalloc_high 3076312085
pgfree 4220981603
pgactivate 3168847062
pgdeactivate 1804783249
pgfault 2209247031
pgmajfault 109378
pgrefill_dma 2202
pgrefill_dma32 0
pgrefill_normal 7741916
pgrefill_high 2086015597
pgsteal_dma 0
pgsteal_dma32 0
pgsteal_normal 0
pgsteal_high 0
pgscan_kswapd_dma 7857
pgscan_kswapd_dma32 0
pgscan_kswapd_normal 31078435
pgscan_kswapd_high 1109005504
pgscan_direct_dma 3
pgscan_direct_dma32 0
pgscan_direct_normal 25210
pgscan_direct_high 2507040
pginodesteal 0
slabs_scanned 363079168
kswapd_steal 1135004729
kswapd_inodesteal 15276762
pageoutrun 8748970
allocstall 2976
pgrotated 410023
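A 32-bit bigmem kernel has only ~880MB of lowmem, and the meminfo above suggests it is largely consumed by page tables and slab. A back-of-the-envelope check, with the values (in kB) copied from the meminfo dump:

```python
# Values (kB) taken from the /proc/meminfo dump above.
low_total_kb   = 774312   # LowTotal: the zone pte and slab allocations must fit in
page_tables_kb = 238648   # PageTables
slab_kb        = 157620   # Slab
used = page_tables_kb + slab_kb
print(f"pagetables+slab: {used} kB, {used / low_total_kb:.0%} of lowmem")
# -> pagetables+slab: 396268 kB, 51% of lowmem
```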
Christoph Lameter wrote:
> Marco Nietz wrote:
>
>> DMA32: empty
>> Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
>> 1*1024kB 1*2048kB 0*4096kB = 3664kB
>
> If the flags are for a regular allocation then you have something that
> leaks kernel memory (a device driver?). Can you get us the output of
> /proc/meminfo and /proc/vmstat?
>
>
* Re: oom-killer why ?
2008-08-25 16:18 ` Marco Nietz
@ 2008-08-25 16:29 ` Christoph Lameter
2008-08-25 17:26 ` Marco Nietz
0 siblings, 1 reply; 18+ messages in thread
From: Christoph Lameter @ 2008-08-25 16:29 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
The output seems to have been from another run.
Can you reproduce the oom? Which kernel version is this? The full dmesg
output may help.
* Re: oom-killer why ?
2008-08-25 16:29 ` Christoph Lameter
@ 2008-08-25 17:26 ` Marco Nietz
2008-08-25 18:55 ` Christoph Lameter
2008-08-26 10:45 ` Larry Woodman
0 siblings, 2 replies; 18+ messages in thread
From: Marco Nietz @ 2008-08-25 17:26 UTC (permalink / raw)
To: linux-mm
It should be possible to reproduce the oom, but it's a production server.
The oom happened after I increased the maximum connections and
shared buffers for the Postgres database server on that machine.
The kernel is 2.6.18-6-686-bigmem on a Debian Etch server.
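That change is a plausible trigger: every Postgres backend maps the whole shared-buffers segment, and with PAE each mapped 4kB page costs an 8-byte page-table entry in lowmem. A rough model; the shared_buffers size and connection count below are hypothetical illustrations, not values from this server:

```python
PAGE_SIZE = 4096               # bytes per page
PTE_SIZE  = 8                  # bytes per page-table entry (assumption: x86 PAE)
shared_buffers = 2 * 1024**3   # hypothetical 2 GiB shared_buffers
connections    = 400           # hypothetical max_connections

# Page-table bytes needed by one backend just to map shared_buffers.
per_backend = shared_buffers // PAGE_SIZE * PTE_SIZE
total_mib = per_backend * connections / 1024**2
print(f"{per_backend // 1024} kB of page tables per backend, "
      f"~{total_mib:.0f} MiB across {connections} backends")
# -> 4096 kB of page tables per backend, ~1600 MiB across 400 backends
```

Under these illustrative numbers the page tables alone exceed the ~880MB lowmem zone, which would match the pte-allocation failures in the traces.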
And here is the Complete dmesg:
oom-killer: gfp_mask=0xd0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c024c64e>] tcp_sendmsg+0x4da/0x98a
[<c0125ba1>] __mod_timer+0x99/0xa3
[<c026381b>] inet_sendmsg+0x35/0x3f
[<c02204c3>] sock_sendmsg+0xce/0xe8
[<c012e01d>] autoremove_wake_function+0x0/0x2d
[<c0258d60>] tcp_v4_do_rcv+0x25/0x2b4
[<f8fe8114>] ip_confirm+0x27/0x2c [ip_conntrack]
[<c025b18d>] tcp_v4_rcv+0x8d2/0x925
[<c0220a20>] sys_sendto+0x116/0x140
[<c0225a29>] __alloc_skb+0x49/0xf2
[<c022650b>] __netdev_alloc_skb+0x12/0x2a
[<f885bb5f>] e1000_alloc_rx_buffers_ps+0xf3/0x1ff [e1000]
[<f885e0b7>] e1000_clean_rx_irq_ps+0x48a/0x4a2 [e1000]
[<c02252e3>] kfree_skbmem+0x8/0x63
[<c0220a63>] sys_send+0x19/0x1d
[<c0221d88>] sys_socketcall+0xd2/0x181
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:161
cpu 0 cold: high 62, batch 15 used:47
cpu 1 hot: high 186, batch 31 used:170
cpu 1 cold: high 62, batch 15 used:51
cpu 2 hot: high 186, batch 31 used:156
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:33
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:77
cpu 4 cold: high 62, batch 15 used:60
cpu 5 hot: high 186, batch 31 used:82
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:105
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:36
cpu 7 cold: high 62, batch 15 used:56
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:110
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:141
cpu 1 cold: high 62, batch 15 used:8
cpu 2 hot: high 186, batch 31 used:25
cpu 2 cold: high 62, batch 15 used:13
cpu 3 hot: high 186, batch 31 used:25
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:82
cpu 4 cold: high 62, batch 15 used:2
cpu 5 hot: high 186, batch 31 used:153
cpu 5 cold: high 62, batch 15 used:12
cpu 6 hot: high 186, batch 31 used:98
cpu 6 cold: high 62, batch 15 used:3
cpu 7 hot: high 186, batch 31 used:34
cpu 7 cold: high 62, batch 15 used:7
Free pages: 47480kB (40200kB HighMem)
Active:2252481 inactive:1698658 dirty:5471 writeback:10 unstable:0
free:11870 slab:35170 mapped:140818 pagetables:152848
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:12kB
present:16384kB pages_scanned:9 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3692kB min:3756kB low:4692kB high:5632kB active:112kB
inactive:276kB present:901120kB pages_scanned:899 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:40200kB min:512kB low:18148kB high:35784kB active:9009812kB
inactive:6794344kB present:16908288kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3692kB
HighMem: 132*4kB 581*8kB 1039*16kB 191*32kB 184*64kB 2*128kB 1*256kB
0*512kB 0*1024kB 0*2048kB 0*4096kB = 40200kB
Swap cache: add 216611, delete 216318, find 112681/129891, race 0+3
Free swap = 7803264kB
Total swap = 7815612kB
Free swap: 7803264kB
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
15946987 pages shared
293 pages swap cached
5580 pages dirty
10 pages writeback
140818 pages mapped
35170 pages slab
152848 pages pagetables
Out of Memory: Kill process 27934 (postmaster) score 22552458 and children.
Out of memory: Killed process 27937 (postmaster).
oom-killer: gfp_mask=0x84d0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c014a439>] __pte_alloc+0x11/0x9e
[<c014a576>] __handle_mm_fault+0xb0/0xa1f
[<f8e12356>] start_next_msg+0xc/0x91 [ipmi_si]
[<f8e126b5>] smi_event_handler+0x2da/0x338 [ipmi_si]
[<c0125a90>] lock_timer_base+0x15/0x2f
[<c01155b7>] do_page_fault+0x23b/0x59a
[<c0117c15>] try_to_wake_up+0x355/0x35f
[<c011537c>] do_page_fault+0x0/0x59a
[<c01037f5>] error_code+0x39/0x40
[<c013f966>] file_read_actor+0x27/0xca
[<c014016e>] do_generic_mapping_read+0x177/0x42a
[<c0140c60>] __generic_file_aio_read+0x16b/0x1b2
[<c013f93f>] file_read_actor+0x0/0xca
[<f9057e1d>] xfs_read+0x26f/0x2d8 [xfs]
[<c015538e>] shmem_nopage+0x9d/0xad
[<f9054e1b>] xfs_file_aio_read+0x5c/0x64 [xfs]
[<c015906f>] do_sync_read+0xb6/0xf1
[<c012e01d>] autoremove_wake_function+0x0/0x2d
[<c0158fb9>] do_sync_read+0x0/0xf1
[<c0159978>] vfs_read+0x9f/0x141
[<c0159dc4>] sys_read+0x3c/0x63
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:7
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:133
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:11
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:30
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5949196kB (5941944kB HighMem)
Active:1102138 inactive:1373656 dirty:4831 writeback:0 unstable:0
free:1487299 slab:35543 mapped:139487 pagetables:152485
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:40kB
present:16384kB pages_scanned:40 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941944kB min:512kB low:18148kB high:35784kB
active:4408272kB inactive:5494340kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331962*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941944kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
oom-killer: gfp_mask=0x84d0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c014a439>] __pte_alloc+0x11/0x9e
[<c014b864>] copy_page_range+0x155/0x3da
[<c01ba1d8>] vsnprintf+0x419/0x457
[<c011c184>] copy_process+0xa73/0x10a9
[<c011ca1f>] do_fork+0x91/0x17a
[<c0124d67>] do_gettimeofday+0x31/0xce
[<c01012c2>] sys_clone+0x28/0x2d
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:133
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:9
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:30
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5948948kB (5941696kB HighMem)
Active:1102194 inactive:1373658 dirty:4831 writeback:0 unstable:0
free:1487237 slab:35543 mapped:139487 pagetables:152485
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:40kB
present:16384kB pages_scanned:40 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941696kB min:512kB low:18148kB high:35784kB
active:4408496kB inactive:5494348kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331900*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941696kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
oom-killer: gfp_mask=0xd0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c015653f>] cache_alloc_refill+0x293/0x487
[<c015630f>] cache_alloc_refill+0x63/0x487
[<c01562a3>] kmem_cache_alloc+0x32/0x3b
[<c011b799>] copy_process+0x88/0x10a9
[<c012bfad>] alloc_pid+0x1ba/0x211
[<c011ca1f>] do_fork+0x91/0x17a
[<c0124d67>] do_gettimeofday+0x31/0xce
[<c01012c2>] sys_clone+0x28/0x2d
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:133
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:7
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:29
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5948948kB (5941696kB HighMem)
Active:1102194 inactive:1373660 dirty:4831 writeback:0 unstable:0
free:1487237 slab:35543 mapped:139487 pagetables:152485
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:40kB
present:16384kB pages_scanned:40 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941696kB min:512kB low:18148kB high:35784kB
active:4408496kB inactive:5494356kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331900*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941696kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
7272664 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35543 pages slab
152485 pages pagetables
Out of Memory: Kill process 27934 (postmaster) score 28021408 and children.
Out of memory: Killed process 28957 (postmaster).
oom-killer: gfp_mask=0x84d0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c014a439>] __pte_alloc+0x11/0x9e
[<c014a576>] __handle_mm_fault+0xb0/0xa1f
[<f8e12356>] start_next_msg+0xc/0x91 [ipmi_si]
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
7271308 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35543 pages slab
152485 pages pagetables
[<f8e126b5>] smi_event_handler+0x2da/0x338 [ipmi_si]
[<c0125a90>] lock_timer_base+0x15/0x2f
[<c01155b7>] do_page_fault+0x23b/0x59a
[<c0117c15>] try_to_wake_up+0x355/0x35f
[<c011537c>] do_page_fault+0x0/0x59a
[<c01037f5>] error_code+0x39/0x40
[<c013f966>] file_read_actor+0x27/0xca
[<c014016e>] do_generic_mapping_read+0x177/0x42a
[<c0140c60>] __generic_file_aio_read+0x16b/0x1b2
[<c013f93f>] file_read_actor+0x0/0xca
[<f9057e1d>] xfs_read+0x26f/0x2d8 [xfs]
[<c015538e>] shmem_nopage+0x9d/0xad
[<f9054e1b>] xfs_file_aio_read+0x5c/0x64 [xfs]
[<c015906f>] do_sync_read+0xb6/0xf1
[<c012e01d>] autoremove_wake_function+0x0/0x2d
[<c0158fb9>] do_sync_read+0x0/0xf1
[<c0159978>] vfs_read+0x9f/0x141
[<c0159dc4>] sys_read+0x3c/0x63
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:171
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:28
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:29
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5949072kB (5941820kB HighMem)
Active:1102100 inactive:1373664 dirty:4831 writeback:0 unstable:0
free:1487268 slab:35543 mapped:139487 pagetables:152485
DMA free:3588kB min:68kB low:84kB high:100kB active:24kB inactive:16kB
present:16384kB pages_scanned:74 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941820kB min:512kB low:18148kB high:35784kB
active:4408096kB inactive:5494396kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331931*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941820kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
oom-killer: gfp_mask=0xd0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c015653f>] cache_alloc_refill+0x293/0x487
[<c015630f>] cache_alloc_refill+0x63/0x487
[<c01562a3>] kmem_cache_alloc+0x32/0x3b
[<c011b799>] copy_process+0x88/0x10a9
[<c012bfad>] alloc_pid+0x1ba/0x211
[<c011ca1f>] do_fork+0x91/0x17a
[<c0124d67>] do_gettimeofday+0x31/0xce
[<c01012c2>] sys_clone+0x28/0x2d
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:171
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:28
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:29
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5949072kB (5941820kB HighMem)
Active:1102100 inactive:1373664 dirty:4831 writeback:0 unstable:0
free:1487268 slab:35543 mapped:139487 pagetables:152485
DMA free:3588kB min:68kB low:84kB high:100kB active:24kB inactive:16kB
present:16384kB pages_scanned:74 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941820kB min:512kB low:18148kB high:35784kB
active:4408096kB inactive:5494396kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331931*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941820kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
7264880 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35543 pages slab
152485 pages pagetables
oom-killer: gfp_mask=0x84d0, order=0
[<c014290b>] out_of_memory+0x25/0x13a
[<c0143d74>] __alloc_pages+0x1f5/0x275
[<c014a439>] __pte_alloc+0x11/0x9e
[<c014b864>] copy_page_range+0x155/0x3da
[<c01ba1d8>] vsnprintf+0x419/0x457
[<c011c184>] copy_process+0xa73/0x10a9
[<c011ca1f>] do_fork+0x91/0x17a
[<c0124d67>] do_gettimeofday+0x31/0xce
[<c01012c2>] sys_clone+0x28/0x2d
[<c0102c0d>] sysenter_past_esp+0x56/0x79
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:128
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:30
cpu 1 cold: high 62, batch 15 used:47
cpu 2 hot: high 186, batch 31 used:35
cpu 2 cold: high 62, batch 15 used:59
cpu 3 hot: high 186, batch 31 used:79
cpu 3 cold: high 62, batch 15 used:55
cpu 4 hot: high 186, batch 31 used:8
cpu 4 cold: high 62, batch 15 used:53
cpu 5 hot: high 186, batch 31 used:162
cpu 5 cold: high 62, batch 15 used:52
cpu 6 hot: high 186, batch 31 used:181
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:9
cpu 7 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:18
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:47
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:102
cpu 2 cold: high 62, batch 15 used:7
cpu 3 hot: high 186, batch 31 used:171
cpu 3 cold: high 62, batch 15 used:7
cpu 4 hot: high 186, batch 31 used:172
cpu 4 cold: high 62, batch 15 used:14
cpu 5 hot: high 186, batch 31 used:26
cpu 5 cold: high 62, batch 15 used:14
cpu 6 hot: high 186, batch 31 used:29
cpu 6 cold: high 62, batch 15 used:2
cpu 7 hot: high 186, batch 31 used:99
cpu 7 cold: high 62, batch 15 used:3
Free pages: 5949076kB (5941820kB HighMem)
Active:1102100 inactive:1373666 dirty:4831 writeback:0 unstable:0
free:1487269 slab:35543 mapped:139487 pagetables:152485
DMA free:3592kB min:68kB low:84kB high:100kB active:24kB inactive:16kB
present:16384kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 17392
Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 132096
HighMem free:5941820kB min:512kB low:18148kB high:35784kB
active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 2*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB
1*2048kB 0*4096kB = 3592kB
DMA32: empty
Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
1*1024kB 1*2048kB 0*4096kB = 3664kB
HighMem: 331931*4kB 303446*8kB 105186*16kB 14856*32kB 432*64kB 2*128kB
1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5941820kB
Swap cache: add 216611, delete 216611, find 112681/129891, race 0+3
Free swap = 7815516kB
Total swap = 7815612kB
Free swap: 7815516kB
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
7012372 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35576 pages slab
142180 pages pagetables
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
6977702 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35609 pages slab
138447 pages pagetables
4456448 pages of RAM
4227072 pages of HIGHMEM
299142 reserved pages
6901408 pages shared
0 pages swap cached
4831 pages dirty
0 pages writeback
139487 pages mapped
35576 pages slab
134910 pages pagetables
Christoph Lameter wrote:
> The output seems to have been from another run.
>
> Can you reproduce the oom? Which kernel version is this? The full dmesg output
> may help.
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
* Re: oom-killer why ?
2008-08-25 15:56 ` Christoph Lameter
2008-08-25 16:18 ` Marco Nietz
@ 2008-08-25 17:36 ` Larry Woodman
1 sibling, 0 replies; 18+ messages in thread
From: Larry Woodman @ 2008-08-25 17:36 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Marco Nietz, linux-mm
On Mon, 2008-08-25 at 10:56 -0500, Christoph Lameter wrote:
> Marco Nietz wrote:
>
> > DMA32: empty
> > Normal: 0*4kB 0*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 1*512kB
> > 1*1024kB 1*2048kB 0*4096kB = 3664kB
>
> If the flags are for a regular allocation then you have had something that
> leaks kernel memory (device driver?). Can you get us the output of
> /proc/meminfo and /proc/vmstat?
Unless CONFIG_HIGHPTE is not set, the allocation should be using highmem:
-----------------------------------------------------------------------
pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
struct page *pte;
#ifdef CONFIG_HIGHPTE
pte = alloc_pages(GFP_KERNEL|__GFP_HIGHMEM|__GFP_REPEAT|
__GFP_ZERO, 0);
#else
pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
#endif
if (pte)
pgtable_page_ctor(pte);
return pte;
}
-----------------------------------------------------------------------
* Re: oom-killer why ?
2008-08-25 17:26 ` Marco Nietz
@ 2008-08-25 18:55 ` Christoph Lameter
2008-08-26 6:44 ` Marco Nietz
2008-08-26 10:45 ` Larry Woodman
1 sibling, 1 reply; 18+ messages in thread
From: Christoph Lameter @ 2008-08-25 18:55 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
Marco Nietz wrote:
> It should be possible to reproduce the oom, but it's a production server.
>
> The oom happens after I've increased the maximum connections and
> shared buffers for the Postgres database server on that machine.
>
> It's kernel 2.6.18-6-686-bigmem on a Debian Etch server.
Hmmm... That should be fairly stable. I wonder how postgres handles the
buffers? If the pages are mlocked and are required to be in lowmem, then what
you saw could be related to the postgres configuration.
> And here is the Complete dmesg:
The problem is that the boot messages are cut off, so we cannot see the basic
operating system configuration and the hardware that was detected.
* Re: oom-killer why ?
2008-08-25 18:55 ` Christoph Lameter
@ 2008-08-26 6:44 ` Marco Nietz
2008-08-26 13:23 ` Christoph Lameter
0 siblings, 1 reply; 18+ messages in thread
From: Marco Nietz @ 2008-08-26 6:44 UTC (permalink / raw)
To: Christoph Lameter; +Cc: linux-mm
Christoph Lameter wrote:
> Hmmm... That should be fairly stable. I wonder how postgres handles the
> buffers? If the pages are mlocked and are required to be in lowmem, then what
> you saw could be related to the postgres configuration.
I don't know exactly, but I will try to find out. And yes, the machine
was fairly stable until I raised the shared buffers.
> The problem is that the boot messages are cut off we cannot see the basic
> operating system configuration and the hardware that was detected.
I haven't got more information than what I've already posted, sorry.
Maybe this short overview helps:
It's a Dell PowerEdge 1950, dual quad-core at 2.66GHz with 16GB of RAM. The
machine has two RAID controllers: one used for the OS and the other one
for a direct-attached storage array (MD-3000). This storage is managed with
multipath-tools and used for database storage.
Best Regards
Marco
* Re: oom-killer why ?
2008-08-25 17:26 ` Marco Nietz
2008-08-25 18:55 ` Christoph Lameter
@ 2008-08-26 10:45 ` Larry Woodman
1 sibling, 0 replies; 18+ messages in thread
From: Larry Woodman @ 2008-08-26 10:45 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
On Mon, 2008-08-25 at 19:26 +0200, Marco Nietz wrote:
> It should be possible to reproduce the oom, but it's a production server.
> [<c014290b>] out_of_memory+0x25/0x13a
> [<c0143d74>] __alloc_pages+0x1f5/0x275
> [<c014a439>] __pte_alloc+0x11/0x9e
> [<c014a576>] __handle_mm_fault+0xb0/0xa1f
> pagetables:152485
> Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
> inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
If it is allocating lowmem for pte pages, CONFIG_HIGHPTE is not set, so it
exhausts the Normal zone with wired pte pages and eventually OOM kills.
-----------------------------------------------------------------------
pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
struct page *pte;
#ifdef CONFIG_HIGHPTE
pte = alloc_pages(GFP_KERNEL|__GFP_HIGHMEM|__GFP_REPEAT|
__GFP_ZERO, 0);
#else
pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
#endif
if (pte)
pgtable_page_ctor(pte);
return pte;
}
-----------------------------------------------------------------------
* Re: oom-killer why ?
2008-08-25 11:25 oom-killer why ? Marco Nietz
2008-08-25 15:56 ` Christoph Lameter
@ 2008-08-26 11:11 ` Balbir Singh
2008-08-26 12:00 ` Marco Nietz
1 sibling, 1 reply; 18+ messages in thread
From: Balbir Singh @ 2008-08-26 11:11 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
Marco Nietz wrote:
> Today, I've met the oom-killer for the first time, but I could not
> understand why this happened.
>
> Swap and highmem are OK. Could this be a problem of lowmem and the bigmem
> (PAE) kernel?
>
> It's a machine with 2x4 Xeon cores and 16GB of physical memory running
> Debian Etch with kernel 2.6.18-6-686-bigmem.
>
> Here is the dmesg output:
>
> oom-killer: gfp_mask=0x84d0, order=0
> [<c014290b>] out_of_memory+0x25/0x13a
> [<c0143d74>] __alloc_pages+0x1f5/0x275
> [<c014a439>] __pte_alloc+0x11/0x9e
> [<c014b864>] copy_page_range+0x155/0x3da
> [<c01ba1d8>] vsnprintf+0x419/0x457
> [<c011c184>] copy_process+0xa73/0x10a9
> [<c011ca1f>] do_fork+0x91/0x17a
> [<c0124d67>] do_gettimeofday+0x31/0xce
> [<c01012c2>] sys_clone+0x28/0x2d
> [<c0102c0d>] sysenter_past_esp+0x56/0x79
> Mem-info:
> DMA per-cpu:
> cpu 0 hot: high 0, batch 1 used:0
> cpu 0 cold: high 0, batch 1 used:0
> cpu 1 hot: high 0, batch 1 used:0
> cpu 1 cold: high 0, batch 1 used:0
> cpu 2 hot: high 0, batch 1 used:0
> cpu 2 cold: high 0, batch 1 used:0
> cpu 3 hot: high 0, batch 1 used:0
> cpu 3 cold: high 0, batch 1 used:0
> cpu 4 hot: high 0, batch 1 used:0
> cpu 4 cold: high 0, batch 1 used:0
> cpu 5 hot: high 0, batch 1 used:0
> cpu 5 cold: high 0, batch 1 used:0
> cpu 6 hot: high 0, batch 1 used:0
> cpu 6 cold: high 0, batch 1 used:0
> cpu 7 hot: high 0, batch 1 used:0
> cpu 7 cold: high 0, batch 1 used:0
> DMA32 per-cpu: empty
> Normal per-cpu:
> cpu 0 hot: high 186, batch 31 used:128
> cpu 0 cold: high 62, batch 15 used:48
> cpu 1 hot: high 186, batch 31 used:30
> cpu 1 cold: high 62, batch 15 used:47
> cpu 2 hot: high 186, batch 31 used:35
> cpu 2 cold: high 62, batch 15 used:59
> cpu 3 hot: high 186, batch 31 used:79
> cpu 3 cold: high 62, batch 15 used:55
> cpu 4 hot: high 186, batch 31 used:8
> cpu 4 cold: high 62, batch 15 used:53
> cpu 5 hot: high 186, batch 31 used:162
> cpu 5 cold: high 62, batch 15 used:52
> cpu 6 hot: high 186, batch 31 used:181
> cpu 6 cold: high 62, batch 15 used:57
> cpu 7 hot: high 186, batch 31 used:9
> cpu 7 cold: high 62, batch 15 used:58
> HighMem per-cpu:
> cpu 0 hot: high 186, batch 31 used:18
> cpu 0 cold: high 62, batch 15 used:9
> cpu 1 hot: high 186, batch 31 used:47
> cpu 1 cold: high 62, batch 15 used:1
> cpu 2 hot: high 186, batch 31 used:102
> cpu 2 cold: high 62, batch 15 used:7
> cpu 3 hot: high 186, batch 31 used:171
> cpu 3 cold: high 62, batch 15 used:7
> cpu 4 hot: high 186, batch 31 used:172
> cpu 4 cold: high 62, batch 15 used:14
> cpu 5 hot: high 186, batch 31 used:26
> cpu 5 cold: high 62, batch 15 used:14
> cpu 6 hot: high 186, batch 31 used:29
> cpu 6 cold: high 62, batch 15 used:2
> cpu 7 hot: high 186, batch 31 used:99
> cpu 7 cold: high 62, batch 15 used:3
> Free pages: 5949076kB (5941820kB HighMem)
> Active:1102100 inactive:1373666 dirty:4831 writeback:0 unstable:0
> free:1487269 slab:35543 mapped:139487 pagetables:152485
> DMA free:3592kB min:68kB low:84kB high:100kB active:24kB inactive:16kB
> present:16384kB pages_scanned:0 all_unreclaimable? no
> lowmem_reserve[]: 0 0 880 17392
pages_scanned is 0
> DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
> present:0kB pages_scanned:0 all_unreclaimable? no
> lowmem_reserve[]: 0 0 880 17392
pages_scanned is 0
> Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
> inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
> lowmem_reserve[]: 0 0 0 132096
pages_scanned is 593 and all_unreclaimable is yes
> HighMem free:5941820kB min:512kB low:18148kB high:35784kB
> active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
> all_unreclaimable? no
pages_scanned is 0
Do you have CONFIG_HIGHPTE set? I suspect you don't (I don't really know the
Debian Etch configuration). I suspect you've run out of Normal zone pages to
allocate.
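On a 32-bit highmem kernel, the remaining Normal-zone headroom is visible in the
LowTotal/LowFree fields of /proc/meminfo. A minimal sketch of reading them
(field names as on 2.6-era i386 kernels; the sample values below mirror the
report above, not live output):

```python
def low_mem_kb(meminfo_text):
    """Parse the LowTotal/LowFree fields (in kB) out of /proc/meminfo
    text; these fields only exist on 32-bit kernels with highmem."""
    fields = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        if name in ("LowTotal", "LowFree"):
            fields[name] = int(rest.split()[0])  # value is "<n> kB"
    return fields

# Sample resembling the report: ~880MB of lowmem, almost none of it free.
sample = "MemTotal: 16629224 kB\nLowTotal: 901120 kB\nLowFree: 3664 kB\n"
print(low_mem_kb(sample))  # {'LowTotal': 901120, 'LowFree': 3664}
# On the machine itself: low_mem_kb(open("/proc/meminfo").read())
```

A LowFree of a few MB against a LowTotal of ~880MB is exactly the exhaustion
pattern seen in the OOM dump.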
[snip]
--
Balbir
* Re: oom-killer why ?
2008-08-26 11:11 ` Balbir Singh
@ 2008-08-26 12:00 ` Marco Nietz
2008-08-26 13:15 ` Balbir Singh
0 siblings, 1 reply; 18+ messages in thread
From: Marco Nietz @ 2008-08-26 12:00 UTC (permalink / raw)
To: balbir; +Cc: linux-mm
Balbir Singh wrote:
>> DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
>> present:0kB pages_scanned:0 all_unreclaimable? no
>> lowmem_reserve[]: 0 0 880 17392
>
> pages_scanned is 0
Isn't this zone irrelevant for a 32-bit kernel?
>> Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
>> inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
>> lowmem_reserve[]: 0 0 0 132096
>
> pages_scanned is 593 and all_unreclaimable is yes
Reclaimable means that the pages are reusable for other purposes, or not?
>> HighMem free:5941820kB min:512kB low:18148kB high:35784kB
>> active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
>> all_unreclaimable? no
>
> pages_scanned is 0
> Do you have CONFIG_HIGHPTE set? I suspect you don't (I don't really know the
> debian etch configuration)
No, it's not set in the running Debian Kernel.
* Re: oom-killer why ?
2008-08-26 12:00 ` Marco Nietz
@ 2008-08-26 13:15 ` Balbir Singh
2008-08-26 13:18 ` Balbir Singh
0 siblings, 1 reply; 18+ messages in thread
From: Balbir Singh @ 2008-08-26 13:15 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
Marco Nietz wrote:
> Balbir Singh wrote:
>
>>> DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
>>> present:0kB pages_scanned:0 all_unreclaimable? no
>>> lowmem_reserve[]: 0 0 880 17392
>> pages_scanned is 0
>
> Isn't this zone irrelevant for a 32-bit kernel?
>
Doesn't matter, since you have 0 present pages.
>>> Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
>>> inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
>>> lowmem_reserve[]: 0 0 0 132096
>> pages_scanned is 593 and all_unreclaimable is yes
>
> Reclaimable means that the pages are reusable for other purposes, or not?
>
It is set by a background routine that tries to reclaim pages (balance_pgdat()),
to indicate that it was unable to reclaim any pages from the zone, even though
it did a certain amount of work to do so.
>>> HighMem free:5941820kB min:512kB low:18148kB high:35784kB
>>> active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
>>> all_unreclaimable? no
>> pages_scanned is 0
>
>> Do you have CONFIG_HIGHPTE set? I suspect you don't (I don't really know the
>> debian etch configuration)
>
> No, it's not set in the running Debian Kernel.
Looks like CONFIG_HIGHPTE=y would have helped allocate pages since you do have
pages in HighMem available.
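One way to confirm whether a kernel was built with CONFIG_HIGHPTE is to look at
its config file (on Debian, typically /boot/config-$(uname -r)). A small sketch
that parses the usual Kconfig text format (the sample content is illustrative,
not taken from the actual Etch config):

```python
def config_option(config_text, option):
    """Return 'y'/'m'/value if the option is set in a kernel .config,
    or None if it is absent or explicitly '... is not set'."""
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith(option + "="):
            return line.split("=", 1)[1]
        if line == "# " + option + " is not set":
            return None
    return None

sample = "CONFIG_HIGHMEM64G=y\n# CONFIG_HIGHPTE is not set\n"
print(config_option(sample, "CONFIG_HIGHPTE"))     # None
print(config_option(sample, "CONFIG_HIGHMEM64G"))  # y
```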
--
Balbir
* Re: oom-killer why ?
2008-08-26 13:15 ` Balbir Singh
@ 2008-08-26 13:18 ` Balbir Singh
2008-08-26 19:09 ` Larry Woodman
0 siblings, 1 reply; 18+ messages in thread
From: Balbir Singh @ 2008-08-26 13:18 UTC (permalink / raw)
To: Marco Nietz; +Cc: balbir, linux-mm
Balbir Singh wrote:
> Marco Nietz wrote:
>> Balbir Singh wrote:
>>
>>>> DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
>>>> present:0kB pages_scanned:0 all_unreclaimable? no
>>>> lowmem_reserve[]: 0 0 880 17392
>>> pages_scanned is 0
>> Is'nt this zone irrelevant for a 32bit Kernel ?
>>
>
> Doesn't matter, since you have 0 present pages.
>
>>>> Normal free:3664kB min:3756kB low:4692kB high:5632kB active:280kB
>>>> inactive:244kB present:901120kB pages_scanned:593 all_unreclaimable? yes
>>>> lowmem_reserve[]: 0 0 0 132096
>>> pages_scanned is 593 and all_unreclaimable is yes
>> Reclaimable means, that the Pages are reusable for other Purposes, or not ?
>>
>
> It is set by a background routine that tries to reclaim pages (balance_pgdat()),
> to indicate that it was unable to reclaim any pages from the zone, even though
> it did a certain amount of work to do so.
>
>>>> HighMem free:5941820kB min:512kB low:18148kB high:35784kB
>>>> active:4408096kB inactive:5494404kB present:16908288kB pages_scanned:0
>>>> all_unreclaimable? no
>>> pages_scanned is 0
>>> Do you have CONFIG_HIGHPTE set? I suspect you don't (I don't really know the
>>> debian etch configuration)
>> No, it's not set in the running Debian Kernel.
>
> Looks like CONFIG_HIGHPTE=y would have helped allocate pages since you do have
> pages in HighMem available.
>
Looking closely, maybe there is a leak like Christoph suggested (most of the
pages have been consumed by the kernel): only 280kB+244kB is in use by user
pages. The rest has either leaked or is in use by the kernel.
--
Balbir
* Re: oom-killer why ?
2008-08-26 6:44 ` Marco Nietz
@ 2008-08-26 13:23 ` Christoph Lameter
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2008-08-26 13:23 UTC (permalink / raw)
To: Marco Nietz; +Cc: linux-mm
Marco Nietz wrote:
>
> It's a Dell Poweredge 1950, Dual Quad Core with 2.66GHz and 16G Ram. The
> Machine has two Raid-Controllers. One used for the OS and the other one
> for a Direct Attached Storage (MD-3000). This Storage is controlled with
> multipath-tools and used for Database Storage.
I'd strongly suggest to go to 64 bit for that machine.
* Re: oom-killer why ?
2008-08-26 13:18 ` Balbir Singh
@ 2008-08-26 19:09 ` Larry Woodman
2008-08-27 2:32 ` Balbir Singh
0 siblings, 1 reply; 18+ messages in thread
From: Larry Woodman @ 2008-08-26 19:09 UTC (permalink / raw)
To: balbir; +Cc: Marco Nietz, linux-mm
On Tue, 2008-08-26 at 18:48 +0530, Balbir Singh wrote:
> Balbir Singh wrote:
>
> Looking closely, may be there is a leak like Christoph suggested (most of the
> pages have been consumed by the kernel) - only 280kB+244kB is in use by user
> pages. The rest has either leaked or in use by the kernel.
>
There is no leak. Between the pte pages (pagetables:152485), the
memmap (4456448 pages of RAM * 32 bytes = 34816 pages) and the
slab cache (slab:35543) you can account for ~99% of the Normal zone, and
it's wired. You simply can't run a large database without hugepages and
without CONFIG_HIGHPTE set and not exhaust lowmem on a 16GB x86 system.
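This accounting checks out against the numbers in the OOM report; a quick
arithmetic sketch (the 32-byte struct page size is the figure assumed above):

```python
PAGE_SIZE = 4096      # i386 page size, bytes
STRUCT_PAGE = 32      # per-page struct page size assumed above, bytes

ram_pages = 4456448   # "4456448 pages of RAM"
pagetables = 152485   # "pagetables:152485"
slab = 35543          # "slab:35543"
normal_kb = 901120    # Normal zone "present:901120kB"

memmap_pages = ram_pages * STRUCT_PAGE // PAGE_SIZE
normal_pages = normal_kb * 1024 // PAGE_SIZE
wired = pagetables + memmap_pages + slab

print(memmap_pages)                          # 34816, matching the post
print(round(100 * wired / normal_pages, 1))  # 98.9 -- ~99% of the Normal zone
```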
* Re: oom-killer why ?
2008-08-26 19:09 ` Larry Woodman
@ 2008-08-27 2:32 ` Balbir Singh
2008-08-27 6:21 ` Marco Nietz
0 siblings, 1 reply; 18+ messages in thread
From: Balbir Singh @ 2008-08-27 2:32 UTC (permalink / raw)
To: Larry Woodman; +Cc: Marco Nietz, Linux Memory Management List
Larry Woodman wrote:
> On Tue, 2008-08-26 at 18:48 +0530, Balbir Singh wrote:
>> Balbir Singh wrote:
>>
>> Looking closely, may be there is a leak like Christoph suggested (most of the
>> pages have been consumed by the kernel) - only 280kB+244kB is in use by user
>> pages. The rest has either leaked or in use by the kernel.
>>
>
> There is no leak. Between the ptepages(pagetables:152485), the
> memmap(4456448 pages of RAM * 32bytes = 34816 pages) and the
> slabcache(slab:35543) you can account for ~99% of the Normal zone and
> its wired. You simply cant run a large database without hugepages and
> without CONFIG_HIGHPTE set and not exhaust Lowmem on a 16GB x86 system.
Thanks for looking at it more closely. Yes, we do need to have CONFIG_HIGHPTE
enabled.
--
Balbir
* Re: oom-killer why ?
2008-08-27 2:32 ` Balbir Singh
@ 2008-08-27 6:21 ` Marco Nietz
2008-08-30 18:19 ` Rik van Riel
0 siblings, 1 reply; 18+ messages in thread
From: Marco Nietz @ 2008-08-27 6:21 UTC (permalink / raw)
To: Linux Memory Management List
Thank you all for your help.
My first guess, that the oom was caused by running out of lowmem, was
confirmed, and the solution is to upgrade the server to a 64-bit OS.
All right up to that point, but why was this affected by the raised
shared buffers from Postgres? Are shared buffers preferred to be in lowmem?
With the smaller buffer size (256MB) we haven't had any problems with
that machine.
> Larry Woodman wrote:
>> On Tue, 2008-08-26 at 18:48 +0530, Balbir Singh wrote:
>>> Balbir Singh wrote:
>>>
>>> Looking closely, may be there is a leak like Christoph suggested (most of the
>>> pages have been consumed by the kernel) - only 280kB+244kB is in use by user
>>> pages. The rest has either leaked or in use by the kernel.
>>>
>> There is no leak. Between the ptepages(pagetables:152485), the
>> memmap(4456448 pages of RAM * 32bytes = 34816 pages) and the
>> slabcache(slab:35543) you can account for ~99% of the Normal zone and
>> its wired. You simply cant run a large database without hugepages and
>> without CONFIG_HIGHPTE set and not exhaust Lowmem on a 16GB x86 system.
>
> Thanks for looking at it more closely, Yes, we do need to have CONFIG_HIGHPTE
> enabled.
>
* Re: oom-killer why ?
2008-08-27 6:21 ` Marco Nietz
@ 2008-08-30 18:19 ` Rik van Riel
0 siblings, 0 replies; 18+ messages in thread
From: Rik van Riel @ 2008-08-30 18:19 UTC (permalink / raw)
To: Marco Nietz; +Cc: Linux Memory Management List
On Wed, 27 Aug 2008 08:21:28 +0200
Marco Nietz <m.nietz-mm@iplabs.de> wrote:
> My first guess, that the oom was caused by running out of lowmem, was
> confirmed, and the solution is to upgrade the server to a 64-bit OS.
Indeed.
> All right up to that point, but why was this affected by the raised
> shared buffers from Postgres? Are shared buffers preferred to be in lowmem?
>
> With the smaller buffer size (256MB) we haven't had any problems with
> that machine.
No, but the page tables used to map the shared buffer are in lowmem.
Page tables take up 0.5% of the data size, per process.
This means that if you have 200 processes mapping 1GB of data, you
would need 1GB of page tables. You do not have that much lowmem :)
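Rik's rule of thumb can be put into a tiny calculation (0.5% = 1/200 of the
mapped size, per process; the exact ratio depends on PTE size, e.g. PAE uses
8-byte PTEs, so treat this as an estimate rather than an exact figure):

```python
def pagetable_overhead_bytes(mapped_bytes, nprocs):
    """Rik's rule of thumb: page tables cost ~0.5% (1/200) of the
    mapped data size, per process that maps it."""
    return mapped_bytes * nprocs // 200

GB = 1024 ** 3
# 200 processes each mapping 1GB of shared buffers:
print(pagetable_overhead_bytes(1 * GB, 200) // GB)  # 1 -- a full gigabyte
# ...all of which must live in ~880MB of lowmem without CONFIG_HIGHPTE.
```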
--
All rights reversed.
Thread overview: 18+ messages
2008-08-25 11:25 oom-killer why ? Marco Nietz
2008-08-25 15:56 ` Christoph Lameter
2008-08-25 16:18 ` Marco Nietz
2008-08-25 16:29 ` Christoph Lameter
2008-08-25 17:26 ` Marco Nietz
2008-08-25 18:55 ` Christoph Lameter
2008-08-26 6:44 ` Marco Nietz
2008-08-26 13:23 ` Christoph Lameter
2008-08-26 10:45 ` Larry Woodman
2008-08-25 17:36 ` Larry Woodman
2008-08-26 11:11 ` Balbir Singh
2008-08-26 12:00 ` Marco Nietz
2008-08-26 13:15 ` Balbir Singh
2008-08-26 13:18 ` Balbir Singh
2008-08-26 19:09 ` Larry Woodman
2008-08-27 2:32 ` Balbir Singh
2008-08-27 6:21 ` Marco Nietz
2008-08-30 18:19 ` Rik van Riel