From: Michal Hocko <mhocko@kernel.org>
To: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: zone_reclaimable() leads to livelock in __alloc_pages_slowpath()
Date: Tue, 24 May 2016 09:16:19 +0200 [thread overview]
Message-ID: <20160524071619.GB8259@dhcp22.suse.cz> (raw)
In-Reply-To: <20160523151419.GA8284@redhat.com>
On Mon 23-05-16 17:14:19, Oleg Nesterov wrote:
> On 05/23, Michal Hocko wrote:
[...]
> > Could you add some tracing and see what are the numbers
> > above?
>
> with the patch below I can press Ctrl-C when it hangs, this breaks the
> endless loop and the output looks like
>
> vmscan: ZONE=ffffffff8189f180 0 scanned=0 pages=6
> vmscan: ZONE=ffffffff8189eb00 0 scanned=1 pages=0
> ...
> vmscan: ZONE=ffffffff8189eb00 0 scanned=2 pages=1
> vmscan: ZONE=ffffffff8189f180 0 scanned=4 pages=6
> ...
> vmscan: ZONE=ffffffff8189f180 0 scanned=4 pages=6
> vmscan: ZONE=ffffffff8189f180 0 scanned=4 pages=6
>
> the numbers are always small.
Small, but scanned is non-zero and constant, which means it either gets
reset repeatedly (something gets freed) or we have stopped scanning.
Which pattern do you see? I assume the swap space is full at that point
(could you add get_nr_swap_pages() to the output?). Also, zone->name
would be better than the pointer.
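Something like the following dropped into zone_reclaimable() in
mm/vmscan.c would do (just a sketch; the exact hook point and format
string are my guess, not your actual debugging patch):

	/* sketch only: print zone name, counters and remaining swap */
	pr_info("vmscan: XXX: zone:%s nr_pages_scanned:%lu reclaimable:%lu nr_swap:%ld\n",
		zone->name,
		zone_page_state(zone, NR_PAGES_SCANNED),
		zone_reclaimable_pages(zone),
		get_nr_swap_pages());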
I am trying to reproduce this, but your test case always hits the OOM
killer. This is in a qemu x86_64 virtual machine:
# free
             total       used       free     shared    buffers     cached
Mem:        490212      96788     393424          0       3196       9976
-/+ buffers/cache:      83616     406596
Swap:       138236      57740      80496
I have tried with a much larger swap space, but there was no change
except for the run time of the test, which is expected.
# grep "^processor" /proc/cpuinfo | wc -l
1
[... Skipped several previous attempts ...]
[ 695.215235] vmscan: XXX: zone:DMA32 nr_pages_scanned:0 reclaimable:20
[ 695.215245] vmscan: XXX: zone:DMA32 nr_pages_scanned:0 reclaimable:20
[ 695.215255] vmscan: XXX: zone:DMA32 nr_pages_scanned:0 reclaimable:20
[ 695.215282] vmscan: XXX: zone:DMA32 nr_pages_scanned:1 reclaimable:27
[ 695.215303] vmscan: XXX: zone:DMA32 nr_pages_scanned:5 reclaimable:27
[ 695.215327] vmscan: XXX: zone:DMA32 nr_pages_scanned:18 reclaimable:27
[ 695.215351] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215362] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215373] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215382] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215392] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215402] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215412] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215422] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215431] vmscan: XXX: zone:DMA32 nr_pages_scanned:45 reclaimable:27
[ 695.215442] vmscan: XXX: zone:DMA32 nr_pages_scanned:46 reclaimable:27
[ 695.215462] vmscan: XXX: zone:DMA32 nr_pages_scanned:48 reclaimable:27
[ 695.215482] vmscan: XXX: zone:DMA32 nr_pages_scanned:53 reclaimable:27
[ 695.215504] vmscan: XXX: zone:DMA32 nr_pages_scanned:63 reclaimable:27
[ 695.215528] vmscan: XXX: zone:DMA32 nr_pages_scanned:90 reclaimable:27
[...]
[ 695.215620] vmscan: XXX: zone:DMA32 nr_pages_scanned:91 reclaimable:27
[ 695.215640] vmscan: XXX: zone:DMA32 nr_pages_scanned:94 reclaimable:27
[ 695.215659] vmscan: XXX: zone:DMA32 nr_pages_scanned:100 reclaimable:27
[ 695.215683] vmscan: XXX: zone:DMA32 nr_pages_scanned:113 reclaimable:27
[...]
[ 695.215786] vmscan: XXX: zone:DMA32 nr_pages_scanned:140 reclaimable:27
[ 695.215797] vmscan: XXX: zone:DMA32 nr_pages_scanned:141 reclaimable:27
[ 695.215816] vmscan: XXX: zone:DMA32 nr_pages_scanned:144 reclaimable:27
[ 695.215836] vmscan: XXX: zone:DMA32 nr_pages_scanned:150 reclaimable:27
[ 695.215906] test-oleg invoked oom-killer: gfp_mask=0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), order=0, oom_score_adj=0
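For anyone following along: zone_reclaimable() treats a zone as still
worth reclaiming as long as the pages scanned stay below six times its
reclaimable pages, and while it keeps returning true direct reclaim
reports progress and the allocation slowpath keeps retrying. With
reclaimable:27 above that means nr_pages_scanned has to climb past
6 * 27 = 162 before the zone is written off. Roughly (paraphrased from
mm/vmscan.c of this era, a sketch rather than a verbatim copy):

	static unsigned long zone_reclaimable_pages(struct zone *zone)
	{
		unsigned long nr;

		nr = zone_page_state(zone, NR_ACTIVE_FILE) +
		     zone_page_state(zone, NR_INACTIVE_FILE);

		if (get_nr_swap_pages() > 0)
			nr += zone_page_state(zone, NR_ACTIVE_ANON) +
			      zone_page_state(zone, NR_INACTIVE_ANON);

		return nr;
	}

	bool zone_reclaimable(struct zone *zone)
	{
		/* reclaimable until we scanned 6x the reclaimable pages */
		return zone_page_state(zone, NR_PAGES_SCANNED) <
			zone_reclaimable_pages(zone) * 6;
	}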
--
Michal Hocko
SUSE Labs