From: Minchan Kim <minchan.kim@gmail.com>
To: Andrew Barry <abarry@cray.com>
Cc: linux-mm <linux-mm@kvack.org>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.
Date: Tue, 17 May 2011 19:34:47 +0900	[thread overview]
Message-ID: <BANLkTikiXUzbsUkzaKZsZg+5ugruA2JdMA@mail.gmail.com> (raw)
In-Reply-To: <4DCDA347.9080207@cray.com>

On Sat, May 14, 2011 at 6:31 AM, Andrew Barry <abarry@cray.com> wrote:
> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> get stuck endlessly looping, even when lots of memory is available.
>
> Running an I/O and memory intensive stress-test I see a 0-order page allocation
> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> Right about the same time that the stress-test gets killed by the OOM-killer,
> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> though most of the system's memory was freed by the oom-kill of the stress-test.
>
> The utility ends up looping from the rebalance label down through the
> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> skips the call to get_page_from_freelist. Because all of the reclaimable memory
> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> repeats infinitely.
>
> Is there a reason that this loop is set up this way for order-0 allocations? I
> applied the below patch, and the problem corrects itself. Does anyone have any
> thoughts on the patch, or on a better way to address this situation?
>
> The test case is pretty pathological. Running a mix of I/O stress-tests that do
> a lot of fork() and consume all of the system memory, I can pretty reliably hit
> this on 600 nodes, in about 12 hours. 32GB/node.
>

It's amazing.
I think it's _very_ rare, but it is possible if the test program killed
by the OOM killer held only anonymous pages and the allocating tasks are
trying to allocate order-0 pages with GFP_NOFS.

When the [in]active lists suddenly become empty (though I am not sure how
that situation comes about) and we are reclaiming for an order-0
allocation, neither compaction nor __alloc_pages_direct_reclaim can help.
Compaction doesn't run because this is an order-0 request, and
__alloc_pages_direct_reclaim only makes progress if there are pages on
the [in]active LRU lists, but unfortunately there are none left.
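
For reference, the path involved looks roughly like this (a heavily
condensed sketch of the ~2.6.38 __alloc_pages_slowpath() from memory,
not the exact source):

rebalance:
        /* order == 0, so compaction bails out immediately, roughly:
         * if (!order || compaction_deferred(preferred_zone)) return NULL; */
        page = __alloc_pages_direct_compact(gfp_mask, order, ...);
        if (page)
                goto got_pg;

        /* the LRU lists are empty, so try_to_free_pages() reports no
         * progress and direct reclaim returns NULL before reaching its
         * internal get_page_from_freelist() retry */
        page = __alloc_pages_direct_reclaim(gfp_mask, order, ...,
                                                &did_some_progress);
        if (page)
                goto got_pg;

        /* GFP_NOFS means no __GFP_FS, so the __alloc_pages_may_oom()
         * block is skipped entirely */

        if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
                wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50);
                goto rebalance;         /* never calls get_page_from_freelist() */
        }
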
So the last resort is the following code in do_try_to_free_pages():

        /* top priority shrink_zones still had more to do? don't OOM, then */
        if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
                return 1;

But it has a problem, too. all_unreclaimable() checks
zone->all_unreclaimable, and zone->all_unreclaimable is only set once the
zone stops looking reclaimable, i.e. once the following check fails:

        zone->pages_scanned < zone_reclaimable_pages(zone) * 6

If the LRU lists are completely empty, shrink_zone() does nothing, so
zone->pages_scanned stays zero. But as we know, zone_page_state() isn't
exact because of the per_cpu_pageset counters, so zone_reclaimable_pages()
can still report a small positive value. As a result, zone_reclaimable()
always returns true, kswapd never sets zone->all_unreclaimable, and the
last resort above becomes a no-op.
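
The checks in question look roughly like this (again a sketch of the
mm/vmscan.c of that era from memory, not quoted verbatim):

static bool zone_reclaimable(struct zone *zone)
{
        /* With empty LRU lists, zone->pages_scanned stays 0, while
         * zone_reclaimable_pages() is built from zone_page_state()
         * per-cpu counters and can still read as a small positive
         * number, so this stays true forever. */
        return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
}

/* kswapd (balance_pgdat) only marks a zone unreclaimable when the
 * check above fails, which it never does here: */
        if (nr_slab == 0 && !zone_reclaimable(zone))
                zone->all_unreclaimable = 1;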

In this case, the current allocation never gets a chance to call
get_page_from_freelist(), as Andrew Barry said.

Does that make sense?
If it does, how about this?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ebc7faa..4f64355 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2105,6 +2105,7 @@ restart:
                first_zones_zonelist(zonelist, high_zoneidx, NULL,
                                        &preferred_zone);

+rebalance:
        /* This is the last chance, in general, before the goto nopage. */
        page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
                        high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
@@ -2112,7 +2113,6 @@ restart:
        if (page)
                goto got_pg;

-rebalance:
        /* Allocate without watermarks if the context allows */
        if (alloc_flags & ALLOC_NO_WATERMARKS) {
                page = __alloc_pages_high_priority(gfp_mask, order,
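
The idea is that with the rebalance label moved above
get_page_from_freelist(), every pass through the retry loop re-attempts
the freelist allocation before trying compaction and reclaim again, so
once the OOM kill has actually freed memory the looping task can pick it
up even though reclaim and compaction themselves still report no progress.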


> Thanks
> Andrew Barry
>
> ---
>  mm/page_alloc.c |    5 ++++-
>  1 files changed, 4 insertions(+), 1 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9f8a97b..c719664 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2158,7 +2158,10 @@ rebalance:
>        if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
>                /* Wait for some write requests to complete then retry */
>                wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50);
> -               goto rebalance;
> +               if (did_some_progress)
> +                       goto rebalance;
> +               else
> +                       goto restart;
>        } else {
>                /*
>                 * High-order allocations do not necessarily loop after
>



-- 
Kind regards,
Minchan Kim
