From: Minchan Kim <minchan@kernel.org>
To: Ying Han <yinghan@google.com>
Cc: Michal Hocko <mhocko@suse.cz>,
Balbir Singh <bsingharora@gmail.com>,
Rik van Riel <riel@redhat.com>, Hugh Dickins <hughd@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Mel Gorman <mel@csn.ul.ie>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Pavel Emelyanov <xemul@openvz.org>,
Fengguang Wu <fengguang.wu@intel.com>,
Greg Thelen <gthelen@google.com>,
linux-mm@kvack.org
Subject: Re: [PATCH 2/2] memcg: fix livelock in try charge during readahead
Date: Wed, 14 Dec 2011 20:31:20 +0900
Message-ID: <CAEwNFnDbGcbzEd1j4ctXu=WpZ7GwnV3Md1+7sQVvNBOVN6LR4A@mail.gmail.com>
In-Reply-To: <CALWz4iw1i_EtJD9y+JZb+5YnAOuZ93Bg=fO+-KGD6xR6a7znNw@mail.gmail.com>
On Wed, Dec 14, 2011 at 3:29 AM, Ying Han <yinghan@google.com> wrote:
> On Mon, Dec 12, 2011 at 10:10 PM, Minchan Kim <minchan@kernel.org> wrote:
>> On Mon, Dec 12, 2011 at 06:16:48PM -0800, Ying Han wrote:
>>> A couple of kernel dumps were triggered by watchdog timeouts. It turns out
>>> that two processes within a memcg livelock on the same page lock. We believe
>>> this is not a memcg-specific issue and that the same livelock exists in the
>>> non-memcg world as well.
>>>
>>> The sequence that triggers the livelock:
>>> 1. Task_A enters a pagefault (filemap_fault) and then starts readahead:
>>> filemap_fault
>>>  -> do_sync_mmap_readahead
>>>   -> ra_submit
>>>    -> __do_page_cache_readahead // here we allocate the readahead pages
>>>     -> read_pages
>>>      ...
>>>       -> add_to_page_cache_locked
>>> // For each page, we try the charge and then add the page into the
>>> // radix tree. If one of the try-charges fails, the task enters the
>>> // per-memcg OOM path while holding the page locks of the previous
>>> // readahead pages.
>>>
>>> // In the memcg OOM killer, it picks a task within the same memcg
>>> // and marks it TIF_MEMDIE, then goes back into the retry loop and
>>> // hopes that the task exits and frees some memory.
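>>>
>>> A hedged sketch of the problematic pattern, in the style of a ->readpages
>>> implementation (illustrative pseudocode, not actual kernel source;
>>> submit_read() is a hypothetical helper):
>>>
>>> 	list_for_each_entry_safe(page, tmp, pages, lru) {
>>> 		list_del(&page->lru);
>>> 		/* add_to_page_cache_lru() locks the page and calls
>>> 		 * mem_cgroup_cache_charge(); the charge for page N can
>>> 		 * reclaim and then loop in the per-memcg OOM path while
>>> 		 * pages 0..N-1 sit locked, with their read IO batched
>>> 		 * but not yet submitted. */
>>> 		if (!add_to_page_cache_lru(page, mapping, page->index,
>>> 					   GFP_KERNEL))
>>> 			submit_read(page);	/* page stays locked until
>>> 						 * IO completion unlocks it */
>>> 		page_cache_release(page);
>>> 	}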
>>>
>>> 2. Task_B enters a pagefault (filemap_fault) and finds the page in the
>>> radix tree (one of the readahead pages from Task_A):
>>>
>>> filemap_fault
>>>  -> __lock_page // Task_B is already marked TIF_MEMDIE, but it cannot
>>>                 // proceed, since the page lock is held by Task_A, which
>>>                 // is looping in the OOM path.
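>>>
>>> (For reference, in the affected kernel the fault path takes the lock
>>> uninterruptibly, roughly paraphrased:
>>>
>>> 	lock_page(page);	/* sleeps in TASK_UNINTERRUPTIBLE; there is
>>> 				 * no fatal-signal check, so TIF_MEMDIE does
>>> 				 * not get Task_B out of the wait */
>>>
>>> which is why the pending SIGKILL cannot help here.)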
>>>
>>> Since the TIF_MEMDIE Task_B is livelocked, it ends up blocking other tasks
>>> from making forward progress as well, since they also check the flag in
>>> select_bad_process(). The same issue exists in the non-memcg world: instead
>>> of entering OOM through mem_cgroup_cache_charge(), we might enter it through
>>> radix_tree_preload().
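>>>
>>> That is the bail-out in select_bad_process() (paraphrased from the
>>> mm/oom_kill.c of this era):
>>>
>>> 	if (test_tsk_thread_flag(p, TIF_MEMDIE))
>>> 		return ERR_PTR(-1UL);	/* a victim is already dying; abort
>>> 					 * this OOM attempt and wait for it */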
>>>
>>> The proposed fix here is to pass a __GFP_NORETRY gfp_mask into the
>>> try-charge under readahead. Then we skip entering the memcg OOM killer,
>>> which eliminates the case where a task OOMs on one page while holding the
>>> locks of other pages. It seems safe to do that, since both filemap_fault()
>>> and do_generic_file_read() handle the "no_cached_page" fallback case.
>>>
>>> Note:
>>> After this patch, we might see some charge failures for readahead pages
>>> (since we no longer enter OOM). But that sounds sane compared to letting the
>>> system try extremely hard to charge a readahead page by reclaiming and then
>>> OOMing; the latter also triggers the livelock described above.
>>>
>>> Signed-off-by: Greg Thelen <gthelen@google.com>
>>> Signed-off-by: Ying Han <yinghan@google.com>
>>
>> Nice catch.
>>
>> The concern is that GFP_KERNEL != "avoid OOM":
>> although it works that way now, it could change.
>>
>> As an alternative idea, we could use an explicit oom_killer_disable() with
>> __GFP_NOWARN, but that wouldn't work as-is since oom_killer_disabled isn't a
>> reference-counted variable. Of course, we could change it to a
>> reference-counted atomic variable. The benefit is that it's more explicit
>> and doesn't depend on the __GFP_NORETRY implementation. So I don't have a
>> better idea than the above.
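>>
>> Something like this (a hypothetical sketch of that change, turning the
>> existing boolean into a counter; not existing code):
>>
>> static atomic_t oom_killer_disabled = ATOMIC_INIT(0);
>>
>> void oom_killer_disable(void)
>> {
>> 	atomic_inc(&oom_killer_disabled);
>> }
>>
>> void oom_killer_enable(void)
>> {
>> 	atomic_dec(&oom_killer_disabled);
>> }
>>
>> /* out_of_memory() would then bail out while the count is non-zero. */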
>
>> If you want the __GFP_NORETRY patch, the best we can do is at least add
>> detailed comments on both sides: here at add_to_page_cache_lru() and there
>> at __GFP_NORETRY in include/linux/gfp.h.
>
> Correct me if I missed something, but it looks like I want to backport
> the "x86,mm: make pagefault killable" patch, and then we might be able to
> solve the livelock without changing the readahead code.
>
I missed the lock_page_or_retry() that Kame pointed out.
So the backport should solve the problem.
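
For reference, upstream filemap_fault() takes the page lock via
lock_page_or_retry(); with the "make pagefault killable" change the wait
becomes killable, so the OOM killer's SIGKILL can break Task_B out of it.
A paraphrased sketch of the upstream pattern (hedged, from memory):

	if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags)) {
		/* The wait was interrupted by a fatal signal, or
		 * mmap_sem was dropped for a blocking lock; back out
		 * and let the fault be retried instead of livelocking
		 * on the page lock. */
		page_cache_release(page);
		return ret | VM_FAULT_RETRY;
	}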
--
Kind regards,
Minchan Kim