From: David Rientjes <rientjes@google.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, thp: do not cause memcg oom for thp
Date: Tue, 20 Mar 2018 13:25:23 -0700 (PDT)
Message-ID: <alpine.DEB.2.20.1803201321430.167205@chino.kir.corp.google.com>
In-Reply-To: <20180320071624.GB23100@dhcp22.suse.cz>

On Tue, 20 Mar 2018, Michal Hocko wrote:

> > Commit 2516035499b9 ("mm, thp: remove __GFP_NORETRY from khugepaged and
> > madvised allocations") changed the page allocator to no longer detect thp
> > allocations based on __GFP_NORETRY.
> > 
> > It did not, however, modify the mem cgroup try_charge() path to avoid oom
> > kill for either khugepaged collapsing or thp faulting.  It is never
> > expected to oom kill a process to allocate a hugepage for thp; reclaim is
> > governed by the thp defrag mode and MADV_HUGEPAGE, but allocations (and
> > charging) should fall back instead of oom killing processes.
> 
> For some reason I thought that the charging path simply bails out for
> costly orders, effectively the same thing as for the global OOM killer.
> But it does not. Is there any reason not to do that, though? Why don't
> we simply do
> 

I'm not sure what the expectation is for high-order memcg charging without 
__GFP_NORETRY.  I only know that khugepaged can now cause memcg oom kills 
when trying to collapse memory, and I subsequently found that the same 
situation exists for thp faults, which should fall back to small pages 
instead.
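
To illustrate the fallback behavior I mean, here is a minimal sketch of the 
anonymous thp fault path (condensed and renamed for illustration, not the 
actual kernel code; thp_fault_alloc_sketch is a hypothetical helper):

/*
 * Sketch only: a thp fault charges a huge page against the memcg and
 * falls back to small pages on failure rather than invoking the memcg
 * oom killer.
 */
static int thp_fault_alloc_sketch(struct vm_fault *vmf, gfp_t gfp)
{
	struct mem_cgroup *memcg;
	struct page *page;

	/* order-9 allocation for the huge page itself */
	page = alloc_pages(gfp | __GFP_NORETRY, HPAGE_PMD_ORDER);
	if (!page)
		return VM_FAULT_FALLBACK;	/* retry with small pages */

	/*
	 * The charge must avoid the memcg oom path as well: with
	 * __GFP_NORETRY, try_charge() bails out instead of oom killing,
	 * and we fall back here too.
	 */
	if (mem_cgroup_try_charge(page, vmf->vma->vm_mm,
				  gfp | __GFP_NORETRY, &memcg, true)) {
		put_page(page);
		return VM_FAULT_FALLBACK;
	}

	/* map the huge page, commit the charge, etc. */
	return 0;
}

A fallback fault is transparent to the application; an oom kill obviously 
is not.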

> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d1a917b5b7b7..08accbcd1a18 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1493,7 +1493,7 @@ static void memcg_oom_recover(struct mem_cgroup *memcg)
>  
>  static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
>  {
> -	if (!current->memcg_may_oom)
> +	if (!current->memcg_may_oom || order > PAGE_ALLOC_COSTLY_ORDER)
>  		return;
>  	/*
>  	 * We are in the middle of the charge context here, so we

That may make sense as an additional patch, but for thp allocations we 
don't want to retry reclaim nr_retries times anyway; we want the old 
__GFP_NORETRY behavior from before commit 2516035499b9.  So the above 
would be a follow-up patch that complements mine rather than replacing it.
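
To make the difference concrete, here is a heavily condensed sketch of the 
try_charge() control flow in mm/memcontrol.c (not the verbatim code; 
charge_would_succeed is a hypothetical stand-in for the page_counter 
checks):

static int try_charge_sketch(struct mem_cgroup *memcg, gfp_t gfp_mask,
			     unsigned int nr_pages)
{
	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;	/* 5 */

retry:
	if (charge_would_succeed(memcg, nr_pages))
		return 0;

	try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);

	/*
	 * Old thp behavior, pre-2516035499b9: __GFP_NORETRY gives up
	 * here after a single reclaim attempt and the caller falls
	 * back to small pages.
	 */
	if (gfp_mask & __GFP_NORETRY)
		return -ENOMEM;

	if (nr_retries--)
		goto retry;

	/*
	 * Your check would make this a no-op for orders above
	 * PAGE_ALLOC_COSTLY_ORDER, but only after all of the reclaim
	 * retries above have been spent.
	 */
	mem_cgroup_oom(memcg, gfp_mask, get_order(nr_pages * PAGE_SIZE));
	return -ENOMEM;
}

So your check avoids the oom kill but still pays for nr_retries rounds of 
reclaim; __GFP_NORETRY avoids both, which is what thp wants.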

Thread overview: 8+ messages
2018-03-19 21:10 David Rientjes
2018-03-20  7:16 ` Michal Hocko
2018-03-20 20:25   ` David Rientjes [this message]
2018-03-21  8:22     ` Michal Hocko
2018-03-21 19:37       ` David Rientjes
2018-03-21 20:53         ` Michal Hocko
2018-03-21 21:27           ` David Rientjes
2018-03-22  8:11             ` Michal Hocko
