From: Michal Hocko <mhocko@suse.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	Andrea Arcangeli <aarcange@redhat.com>
Subject: Re: [PATCH 1/3] mm,oom: Move last second allocation to inside the OOM killer.
Date: Tue, 5 Dec 2017 14:02:15 +0100
Message-ID: <20171205130215.bxkgzbzo25sljmgd@dhcp22.suse.cz>
In-Reply-To: <20171205104601.GA1898@cmpxchg.org>

On Tue 05-12-17 10:46:01, Johannes Weiner wrote:
> On Fri, Dec 01, 2017 at 05:38:30PM +0100, Michal Hocko wrote:
[...]
> > So are you saying that the existing last allocation attempt is more
> > reasonable? I've tried to remove it [1] and you were against that.
> > 
> > All I am trying to say is that _if_ we want to have something like
> > the last-moment allocation after reclaim has given up, then it should
> > happen closer to the kill, the actual disruptive operation. The current
> > attempt in __alloc_pages_may_oom makes very little sense to me.
> 
> Yes, you claim that, but you're not making a convincing case to me.
> 
> That last attempt serializes OOM conditions. It doesn't matter where
> it is before the OOM kill as long as it's inside the OOM lock, because
> these are the outcomes from the locked section:
> 
> 	1. It's the first invocation, nothing is on the freelist, no
> 	task has TIF_MEMDIE set. Choose a victim and kill.
> 
> 	2. It's the second invocation, the first invocation is still
> 	active. The trylock fails and we retry.
> 
> 	3. It's the second invocation, a victim has been dispatched
> 	but nothing has been freed. TIF_MEMDIE is found, we retry.
> 
> 	4. It's the second invocation, a victim has died (or been
> 	reaped) and freed memory. The allocation succeeds.
> 
> That's how the OOM state machine works in the presence of multiple
> allocating threads, and the code as is makes perfect sense to me.
> 
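For reference, a minimal sketch of that locked section, paraphrased from
__alloc_pages_may_oom() in mm/page_alloc.c around this time; the struct
oom_control setup and a few early bail-outs are elided, so take it as an
illustration of the flow rather than the exact code:

static inline struct page *
__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
		      const struct alloc_context *ac,
		      unsigned long *did_some_progress)
{
	struct oom_control oc = { /* zonelist, nodemask, gfp_mask, order */ };
	struct page *page;

	*did_some_progress = 0;

	/*
	 * Case 2: somebody else is already handling the OOM. Back off
	 * and let the caller retry.
	 */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * Case 4: a previous victim has exited (or been reaped) and
	 * freed memory, so the last allocation attempt succeeds and no
	 * further kill is needed.
	 */
	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	if (page)
		goto out;

	/*
	 * Cases 1 and 3: either select and kill a new victim, or notice
	 * an existing TIF_MEMDIE victim and bail out; either way the
	 * caller is told to retry.
	 */
	if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
		*did_some_progress = 1;
out:
	mutex_unlock(&oom_lock);
	return page;
}

Every non-success path in the sketch ends with *did_some_progress set
(lock contention, a fresh kill, or an existing victim still exiting), so
the allocating task loops back and retries rather than failing outright.
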
> Your argument for moving the allocation attempt closer to the kill is
> that the OOM kill is destructive and we don't want it to happen when
> something unrelated happens to free memory during the victim
> selection. I do understand that.
> 
> My argument against doing that is that the OOM kill is destructive and
> we want it tied to memory pressure as determined by reclaim, not
> random events we don't have control over, so that users can test the
> safety of the memory pressure created by their applications before
> putting them into production environments.
> 
> We'd give up a certain amount of determinism and reproducibility, and
> introduce unexpected implementation-defined semantics (currently the
> sampling window for pressure is reclaim time, afterwards it would
> include OOM victim selection time), in an attempt to probabilistically
> reduce OOM kills under severe memory pressure by an unknown factor.
> 
> This might sound intriguing when you only focus on the split second
> between the last reclaim attempt and when we issue the kill - "hey,
> look, here is one individual instance of a kill I could have avoided
> by exploiting a race condition."
> 
> But it's bad system behavior. For most users OOM kills are extremely
> disruptive. Literally the only way to make them any worse is by making
> them unpredictable and less reproducible.

Thanks for the extended clarification. I understand your concern much
better now. I do not fully agree, though.

The OOM killer has always had that "try to prevent killing a victim"
approach in it, and in some cases it is a good thing. Basically, any
time there is a reasonable chance of forward progress, an avoided kill
might save a workload and user data. That is something we really care
about much more than determinism, which is quite limited anyway by the
fact that memory reclaim itself cannot be deterministic, because there
are way too many parties interacting on a highly complex system.

On the other hand, we used to have some back-off heuristics which
promised forward progress, yet they had some rough edges and were too
prone to livelocks. So this is definitely a tricky area.

> I do understand the upsides you're advocating for - although you
> haven't quantified them. They're just not worth the downsides.

OK, fair enough. Let's drop the patch then. There is no _strong_
justification for it and what I've seen as "nice to have" is indeed
really hard to quantify and not really worth merging without a full
consensus.

Thanks!
-- 
Michal Hocko
SUSE Labs


Thread overview: 17+ messages
2017-11-25 10:52 Tetsuo Handa
2017-11-25 10:52 ` [PATCH 2/3] mm,oom: Use ALLOC_OOM for OOM victim's last second allocation Tetsuo Handa
2017-11-25 10:52 ` [PATCH 3/3] mm,oom: Remove oom_lock serialization from the OOM reaper Tetsuo Handa
2017-11-28 13:04 ` [PATCH 1/3] mm,oom: Move last second allocation to inside the OOM killer Michal Hocko
2017-12-01 14:33 ` Johannes Weiner
2017-12-01 14:46   ` Michal Hocko
2017-12-01 14:56     ` Johannes Weiner
2017-12-01 15:17       ` Michal Hocko
2017-12-01 15:57         ` Johannes Weiner
2017-12-01 16:38           ` Michal Hocko
2017-12-05 10:46             ` Johannes Weiner
2017-12-05 13:02               ` Michal Hocko [this message]
2017-12-05 13:17                 ` Tetsuo Handa
2017-12-05 13:42                   ` Michal Hocko
2017-12-05 14:07                     ` Tetsuo Handa
2017-12-05 14:30                       ` Michal Hocko
2017-12-01 16:52           ` Tetsuo Handa
