From: Mike Waychison <mikew@google.com>
To: Shaohua Li <shaohua.li@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@suse.de>, Minchan Kim <minchan.kim@gmail.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Johannes Weiner <jweiner@redhat.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Hugh Dickins <hughd@google.com>, Greg Thelen <gthelen@google.com>
Subject: Re: [PATCH] mm: Fix kswapd livelock on single core, no preempt kernel
Date: Tue, 13 Dec 2011 20:36:43 -0800 [thread overview]
Message-ID: <CAGTjWtDvmLnNqUoddUCmLVSDN0HcOjtsuFbAs+MFy24JFX-P3g@mail.gmail.com> (raw)
In-Reply-To: <1323829490.22361.395.camel@sli10-conroe>
On Tue, Dec 13, 2011 at 6:24 PM, Shaohua Li <shaohua.li@intel.com> wrote:
> On Wed, 2011-12-14 at 01:44 +0800, Mike Waychison wrote:
>> On a single core system with kernel preemption disabled, it is possible
>> for the memory system to be so taxed that kswapd cannot make any forward
>> progress. This can happen when most of system memory is tied up as
>> anonymous memory without swap enabled, causing kswapd to consistently
>> fail to achieve its watermark goals. In turn, sleeping_prematurely()
>> consistently returns true and kswapd_try_to_sleep() never invokes
>> schedule(). The kswapd thread therefore stays on the CPU in perpetuity
>> and keeps other threads from processing oom-kills to reclaim memory.
>>
>> The cond_resched() instance in balance_pgdat() is never reached: once
>> the loop that iterates from DEF_PRIORITY down to 0 has passed
>> DEF_PRIORITY, zones marked ->all_unreclaimable are no longer considered
>> in the "all_zones_ok" evaluation, so all_zones_ok always ends up true.
>>
>> This change modifies kswapd_try_to_sleep() to ensure that we enter the
>> scheduler at least once per invocation if needed. This allows kswapd to
>> get off the CPU and allows other threads to die off from the OOM killer
>> (freeing memory that is otherwise unavailable in the process).
> Your description suggests zones with all_unreclaimable set, but in that
> case sleeping_prematurely() will return false instead of true, and
> kswapd will then sleep. Is there anything I missed?
Debugging this, I didn't get a dump from oom-kill because it never ran
(until I binary-patched a cond_resched() into live hung machines --
this reproduced in a VM).
I was however able to capture the following data while it was hung:
/cloud/vmm/host/backend/perfmetric/node0/zone0/active_anon : long long = 773
/cloud/vmm/host/backend/perfmetric/node0/zone0/active_file : long long = 6
/cloud/vmm/host/backend/perfmetric/node0/zone0/anon_pages : long long = 1,329
/cloud/vmm/host/backend/perfmetric/node0/zone0/bounce : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/dirtied : long long = 4,425
/cloud/vmm/host/backend/perfmetric/node0/zone0/file_dirty : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/file_mapped : long long = 5
/cloud/vmm/host/backend/perfmetric/node0/zone0/file_pages : long long = 330
/cloud/vmm/host/backend/perfmetric/node0/zone0/free_pages : long long = 2,018
/cloud/vmm/host/backend/perfmetric/node0/zone0/inactive_anon : long long = 865
/cloud/vmm/host/backend/perfmetric/node0/zone0/inactive_file : long long = 13
/cloud/vmm/host/backend/perfmetric/node0/zone0/kernel_stack : long long = 10
/cloud/vmm/host/backend/perfmetric/node0/zone0/mlock : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/pagetable : long long = 74
/cloud/vmm/host/backend/perfmetric/node0/zone0/shmem : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/slab_reclaimable : long long = 54
/cloud/vmm/host/backend/perfmetric/node0/zone0/slab_unreclaimable : long long = 130
/cloud/vmm/host/backend/perfmetric/node0/zone0/unevictable : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/writeback : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone0/written : long long = 47,184
/cloud/vmm/host/backend/perfmetric/node0/zone1/active_anon : long long = 359,251
/cloud/vmm/host/backend/perfmetric/node0/zone1/active_file : long long = 67
/cloud/vmm/host/backend/perfmetric/node0/zone1/anon_pages : long long = 441,180
/cloud/vmm/host/backend/perfmetric/node0/zone1/bounce : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone1/dirtied : long long = 6,457,125
/cloud/vmm/host/backend/perfmetric/node0/zone1/file_dirty : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone1/file_mapped : long long = 134
/cloud/vmm/host/backend/perfmetric/node0/zone1/file_pages : long long = 38,090
/cloud/vmm/host/backend/perfmetric/node0/zone1/free_pages : long long = 1,630
/cloud/vmm/host/backend/perfmetric/node0/zone1/inactive_anon : long long = 119,779
/cloud/vmm/host/backend/perfmetric/node0/zone1/inactive_file : long long = 81
/cloud/vmm/host/backend/perfmetric/node0/zone1/kernel_stack : long long = 173
/cloud/vmm/host/backend/perfmetric/node0/zone1/mlock : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone1/pagetable : long long = 15,222
/cloud/vmm/host/backend/perfmetric/node0/zone1/shmem : long long = 1
/cloud/vmm/host/backend/perfmetric/node0/zone1/slab_reclaimable : long long = 1,677
/cloud/vmm/host/backend/perfmetric/node0/zone1/slab_unreclaimable : long long = 7,152
/cloud/vmm/host/backend/perfmetric/node0/zone1/unevictable : long long = 0
/cloud/vmm/host/backend/perfmetric/node0/zone1/writeback : long long = 8
/cloud/vmm/host/backend/perfmetric/node0/zone1/written : long long = 16,639,708
These values were static while the machine was hung up in kswapd. I
unfortunately don't have the low/min/max or lowmem watermarks handy.
From stepping through with gdb, I was able to determine that
ZONE_DMA32 would fail zone_watermark_ok_safe(), causing a scan up to
end_zone == 1. If memory serves, it would not get the
->all_unreclaimable flag. I didn't get the chance to root cause this
internal inconsistency though.
FYI, this was seen with a 2.6.39-based kernel with no-numa, no-memcg
and swap-enabled.
If I get the chance, I'll reproduce and look at this more closely to try
to root-cause why zone_reclaimable() would return true, but I won't be
able to do that until after the holidays -- sometime in January.