* 4.2rc1 odd looking page allocator failure stats
From: Dave Jones @ 2015-07-08 20:43 UTC (permalink / raw)
To: linux-mm
I've got a box with 4GB of RAM that I've driven into oom (so much so that e1000 can't
alloc a single page, so I can't even ping it). But over serial console I noticed this..
[158831.710001] DMA32 free:1624kB min:6880kB low:8600kB high:10320kB active_anon:407004kB inactive_anon:799300kB active_file:516kB inactive_file:6644kB unevictable:0kB
isolated(anon):0kB isolated(file):0kB present:3127220kB managed:3043108kB mlocked:0kB dirty:6680kB writeback:64kB mapped:31544kB shmem:1146792kB
slab_reclaimable:46812kB slab_unreclaimable:388364kB kernel_stack:2288kB pagetables:2076kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
free_cma:0kB writeback_tmp:0kB pages_scanned:70152496980 all_unreclaimable? yes
How come that 'pages_scanned' number is greater than the number of pages in the system ?
Does kswapd iterate over the same pages a number of times each time the page allocator fails ?
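(For scale, assuming 4kB pages and that pages_scanned is a raw page count: managed:3043108kB is roughly 760,000 pages, while pages_scanned is about 70 billion, so this zone would have been walked on the order of 90,000 times over.)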
I've managed to hit this a couple times this week, where the oom killer kicks in, kills some
processes, but then the machine goes into a death spiral of looping in the page allocator.
Once that begins, it never tries to oom kill again, just hours of page allocation failure messages.
Dave
* Re: 4.2rc1 odd looking page allocator failure stats
From: David Rientjes @ 2015-07-09 0:42 UTC (permalink / raw)
To: Dave Jones; +Cc: linux-mm
On Wed, 8 Jul 2015, Dave Jones wrote:
> I've got a box with 4GB of RAM that I've driven into oom (so much so that e1000 can't
> alloc a single page, so I can't even ping it). But over serial console I noticed this..
>
> [158831.710001] DMA32 free:1624kB min:6880kB low:8600kB high:10320kB active_anon:407004kB inactive_anon:799300kB active_file:516kB inactive_file:6644kB unevictable:0kB
> isolated(anon):0kB isolated(file):0kB present:3127220kB managed:3043108kB mlocked:0kB dirty:6680kB writeback:64kB mapped:31544kB shmem:1146792kB
> slab_reclaimable:46812kB slab_unreclaimable:388364kB kernel_stack:2288kB pagetables:2076kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
> free_cma:0kB writeback_tmp:0kB pages_scanned:70152496980 all_unreclaimable? yes
>
> How come that 'pages_scanned' number is greater than the number of pages in the system ?
> Does kswapd iterate over the same pages a number of times each time the page allocator fails ?
>
>
> I've managed to hit this a couple times this week, where the oom killer kicks in, kills some
> processes, but then the machine goes into a death spiral of looping in the page allocator.
> Once that begins, it never tries to oom kill again, just hours of page allocation failure messages.
>
We don't have the full oom log to see if there's any indication of a
problem, but pages_scanned can grow very large: it is only reset when
memory is actually freed back to the zone, either directly or via the
periodic per-cpu pagelist flush. Neither is happening here (note free_pcp
is 0kB above), so pages_scanned never gets cleared and keeps accumulating
across reclaim attempts.
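
To make that concrete, here is a minimal userspace sketch of the accounting
pattern described above. It is illustrative only, not the actual mm/vmscan.c
or mm/page_alloc.c code; the zone size is taken from the report above. The
scan counter is bumped on every reclaim pass and only zeroed on the free
path, so if reclaim keeps running while nothing is freed back to the zone,
the counter grows without bound.

  /* Illustrative sketch only; not the real kernel code. */
  #include <stdio.h>

  static unsigned long pages_scanned;   /* per-zone counter in the kernel */

  static void reclaim_pass(unsigned long nr_scanned)
  {
          /* reclaim path: adds what was scanned, even if nothing was freed */
          pages_scanned += nr_scanned;
  }

  static void free_pages_back(unsigned long nr_freed)
  {
          /* free path: the counter is only cleared when pages actually return */
          if (nr_freed)
                  pages_scanned = 0;
  }

  int main(void)
  {
          unsigned long zone_managed_pages = 3043108UL / 4;  /* ~760k pages */

          /* allocator looping with reclaim making no progress: nothing is
           * ever freed, so the counter sails far past the zone size */
          for (int i = 0; i < 100000; i++) {
                  reclaim_pass(zone_managed_pages);
                  free_pages_back(0);      /* no progress, so no reset */
          }

          printf("pages_scanned = %lu (zone has %lu pages)\n",
                 pages_scanned, zone_managed_pages);
          return 0;
  }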