From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@suse.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	aarcange@redhat.com, rientjes@google.com, hannes@cmpxchg.org,
	mjaggi@caviumnetworks.com, oleg@redhat.com,
	vdavydov.dev@gmail.com
Subject: Re: [PATCH] mm,oom: use ALLOC_OOM for OOM victim's last second allocation
Date: Thu, 7 Dec 2017 20:59:34 +0900
Message-ID: <201712072059.HAJ04643.QSJtVMFLFOOOHF@I-love.SAKURA.ne.jp>
In-Reply-To: <20171207115127.GH20234@dhcp22.suse.cz>

Michal Hocko wrote:
> On Thu 07-12-17 20:42:20, Tetsuo Handa wrote:
> > Manish Jaggi noticed that running LTP oom01/oom02 ltp tests with high core
> > count causes random kernel panics when an OOM victim which consumed memory
> > in a way the OOM reaper does not help was selected by the OOM killer [1].
> > Since commit 696453e66630ad45 ("mm, oom: task_will_free_mem should skip
> > oom_reaped tasks") changed task_will_free_mem(current) in out_of_memory()
> > to return false as soon as MMF_OOM_SKIP is set, many threads sharing the
> > victim's mm were not able to try allocation from memory reserves after the
> > OOM reaper gave up reclaiming memory.
> > 
> > Therefore, this patch allows OOM victims to use ALLOC_OOM watermark for
> > last second allocation attempt.
> > 
> > [1] http://lkml.kernel.org/r/e6c83a26-1d59-4afd-55cf-04e58bdde188@caviumnetworks.com
> > 
> > Fixes: 696453e66630ad45 ("mm, oom: task_will_free_mem should skip oom_reaped tasks")
> > Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > Reported-by: Manish Jaggi <mjaggi@caviumnetworks.com>
> > Acked-by: Michal Hocko <mhocko@suse.com>
> 
> I haven't acked _this_ patch! I will have a look but the patch is
> different enough from the original that keeping any acks or reviews is
> inappropriate. Do not do it again!

I see. But nothing has changed in the allocation attempt itself; the only
difference is that it is now made before invoking the OOM killer. I assumed
that this was a trivial change.
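
To make the intention clearer: the only behavioural difference is that an
OOM victim can now be granted ALLOC_OOM by __gfp_pfmemalloc_flags() for this
attempt. A simplified sketch of that selection (paraphrased from my reading
of mm/page_alloc.c in current -mm; not the exact source, and the softirq
branch is omitted) looks like:

  static inline int __gfp_pfmemalloc_flags(gfp_t gfp_mask)
  {
          /* Callers which explicitly opt out never touch the reserves. */
          if (unlikely(gfp_mask & __GFP_NOMEMALLOC))
                  return 0;
          /* __GFP_MEMALLOC callers may ignore watermarks completely. */
          if (gfp_mask & __GFP_MEMALLOC)
                  return ALLOC_NO_WATERMARKS;
          if (!in_interrupt()) {
                  if (current->flags & PF_MEMALLOC)
                          return ALLOC_NO_WATERMARKS;
                  /* OOM victims may dip into a part of the reserves. */
                  if (oom_reserves_allowed(current))
                          return ALLOC_OOM;
          }
          return 0;
  }

so alloc_pages_before_oomkill() simply reuses that decision: when the caller
is an OOM victim, reserve_flags becomes ALLOC_OOM and replaces the
ALLOC_WMARK_HIGH | ALLOC_CPUSET flags; otherwise the last second attempt
behaves exactly as before.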

> 
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: Oleg Nesterov <oleg@redhat.com>
> > Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > ---
> >  mm/page_alloc.c | 39 +++++++++++++++++++++++++++++----------
> >  1 file changed, 29 insertions(+), 10 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 73f5d45..5d054a4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3309,6 +3309,10 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
> >  	return page;
> >  }
> >  
> > +static struct page *alloc_pages_before_oomkill(gfp_t gfp_mask,
> > +					       unsigned int order,
> > +					       const struct alloc_context *ac);
> > +
> >  static inline struct page *
> >  __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
> >  	const struct alloc_context *ac, unsigned long *did_some_progress)
> > @@ -3334,16 +3338,7 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
> >  		return NULL;
> >  	}
> >  
> > -	/*
> > -	 * Go through the zonelist yet one more time, keep very high watermark
> > -	 * here, this is only to catch a parallel oom killing, we must fail if
> > -	 * we're still under heavy pressure. But make sure that this reclaim
> > -	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
> > -	 * allocation which will never fail due to oom_lock already held.
> > -	 */
> > -	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
> > -				      ~__GFP_DIRECT_RECLAIM, order,
> > -				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
> > +	page = alloc_pages_before_oomkill(gfp_mask, order, ac);
> >  	if (page)
> >  		goto out;
> >  
> > @@ -3755,6 +3750,30 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> >  	return !!__gfp_pfmemalloc_flags(gfp_mask);
> >  }
> >  
> > +static struct page *alloc_pages_before_oomkill(gfp_t gfp_mask,
> > +					       unsigned int order,
> > +					       const struct alloc_context *ac)
> > +{
> > +	/*
> > +	 * Go through the zonelist yet one more time, keep very high watermark
> > +	 * here, this is only to catch a parallel oom killing, we must fail if
> > +	 * we're still under heavy pressure. But make sure that this reclaim
> > +	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
> > +	 * allocation which will never fail due to oom_lock already held.
> > +	 * Also, make sure that OOM victims can try ALLOC_OOM watermark
> > +	 * in case they haven't tried ALLOC_OOM watermark.
> > +	 */
> > +	int alloc_flags = ALLOC_CPUSET | ALLOC_WMARK_HIGH;
> > +	int reserve_flags;
> > +
> > +	gfp_mask |= __GFP_HARDWALL;
> > +	gfp_mask &= ~__GFP_DIRECT_RECLAIM;
> > +	reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
> > +	if (reserve_flags)
> > +		alloc_flags = reserve_flags;
> > +	return get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> > +}
> > +
> >  /*
> >   * Checks whether it makes sense to retry the reclaim to make a forward progress
> >   * for the given allocation request.
> > -- 
> > 1.8.3.1
> > 
> 
> -- 
> Michal Hocko
> SUSE Labs
> 
