linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: linux-mm@kvack.org, hannes@cmpxchg.org, rientjes@google.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] oom_reaper: close race without using oom_lock
Date: Wed, 26 Jul 2017 13:46:39 +0200	[thread overview]
Message-ID: <20170726114638.GL2981@dhcp22.suse.cz> (raw)
In-Reply-To: <201707262033.JGE65600.MOtQFFLOJOSFVH@I-love.SAKURA.ne.jp>

On Wed 26-07-17 20:33:21, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Sun 23-07-17 09:41:50, Tetsuo Handa wrote:
> > > So, how can we verify the above race a real problem?
> > 
> > Try to simulate a _real_ workload and see whether we kill more tasks
> > than necessary. 
> 
> Whether the workload is _real_ or not is not an answer.
> 
> If somebody is trying to allocate hundreds or thousands of pages after the
> memory of an OOM victim has been reaped, avoiding this race window makes no
> sense; the next OOM victim will be selected anyway. But if somebody is trying
> to allocate only one page and then plans to release a lot of memory, avoiding
> this race window can save that somebody from being OOM-killed needlessly.
> Whether this race window matters depends on what the threads are about to do,
> not on whether the workload is natural or artificial.

And with a desperate lack of a crystal ball we cannot do much about that,
really.
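
For context, the window being discussed opens once the oom_reaper finishes
a victim and sets MMF_OOM_SKIP on its mm: from then on the victim scores
zero and the next OOM invocation picks somebody else, even if the reaped
memory would have satisfied the allocation. A simplified sketch of that
check, paraphrasing oom_badness() in mm/oom_kill.c (~v4.12, not verbatim):

/*
 * Hedged paraphrase of the victim-skipping logic in oom_badness()
 * (mm/oom_kill.c, ~v4.12); not verbatim kernel code.
 */
static bool oom_skip_task(struct task_struct *p)
{
	bool skip = false;

	p = find_lock_task_mm(p);	/* task_lock() held on success */
	if (!p)
		return true;		/* no mm left, nothing to kill */

	if (p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN ||
	    test_bit(MMF_OOM_SKIP, &p->mm->flags) ||	/* already reaped */
	    in_vfork(p))
		skip = true;

	task_unlock(p);
	return skip;
}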

> My question is: how can users tell that somebody was OOM-killed needlessly
> because MMF_OOM_SKIP was allowed to race?

Is it really important to know that the race is due to MMF_OOM_SKIP?
Isn't it sufficient to see that we kill too many tasks and then debug it
further once somebody hits it?

[...]
> Is it guaranteed that __node_reclaim() never (even indirectly) waits for
> __GFP_DIRECT_RECLAIM && !__GFP_NORETRY memory allocation?

This is direct reclaim, which can go down into the slab shrinkers with all
the usual fun...
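
To make the point concrete: any registered shrinker runs from this path,
and nothing structurally prevents its scan callback from blocking on, or
issuing, exactly that kind of allocation. A hypothetical shrinker, made up
for illustration (the struct shrinker callbacks and register_shrinker()
are the real ~v4.12 API):

#include <linux/shrinker.h>
#include <linux/slab.h>

static unsigned long demo_count(struct shrinker *s,
				struct shrink_control *sc)
{
	return 64;			/* pretend there is work to do */
}

static unsigned long demo_scan(struct shrinker *s,
			       struct shrink_control *sc)
{
	/*
	 * Nothing forbids a GFP_KERNEL allocation here, i.e. a
	 * __GFP_DIRECT_RECLAIM && !__GFP_NORETRY request issued from
	 * inside direct reclaim itself -- precisely the dependency the
	 * question above asks about (and a good way to deadlock).
	 */
	void *p = kmalloc(128, GFP_KERNEL);

	kfree(p);
	return p ? 1 : SHRINK_STOP;	/* one object "freed" */
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count,
	.scan_objects	= demo_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* register_shrinker(&demo_shrinker) from module init would wire it up. */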

> >                                      Such races are unfortunate but
> > unavoidable unless we synchronize the oom kill with any memory freeing,
> > which smells like a no-go to me. We can try a last allocation attempt
> > right before we go and kill something (which still wouldn't be race
> > free), but that might cause other issues - e.g. prolonged thrashing
> > without ever killing anything - which I haven't evaluated, to be honest.
> 
> Yes, postponing the last get_page_from_freelist() attempt until just
> before oom_kill_process() would be the best we could afford.

As I've said, this would have to be evaluated very carefully, and a strong
use case would have to be shown.
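
For reference, mainline (~v4.12) already makes one such last-ditch attempt
in __alloc_pages_may_oom() before entering out_of_memory(); the proposal
above would repeat it right before oom_kill_process(). A hedged sketch:
the helper name and placement are hypothetical, and plumbing the
alloc_context down to the kill point is exactly the part that does not
exist today; only the get_page_from_freelist() call mirrors mainline:

/*
 * Hypothetical helper sketching the idea discussed above; not mainline
 * code. Repeating this right before oom_kill_process() would shrink --
 * not close -- the MMF_OOM_SKIP race window.
 */
static struct page *last_ditch_attempt(gfp_t gfp_mask, unsigned int order,
				       const struct alloc_context *ac)
{
	/*
	 * High watermark and no direct reclaim: succeed only if memory
	 * really has been freed (e.g. by the oom_reaper) in the meantime.
	 */
	return get_page_from_freelist(
			(gfp_mask | __GFP_HARDWALL) & ~__GFP_DIRECT_RECLAIM,
			order, ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
}

The downside named above would then need measuring: a barely-above-
watermark success here could keep the system thrashing just short of a
kill indefinitely.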
-- 
Michal Hocko
SUSE Labs



Thread overview: 21+ messages
2017-07-18 14:06 Tetsuo Handa
2017-07-18 14:16 ` Michal Hocko
2017-07-18 20:51   ` Tetsuo Handa
2017-07-20 14:11     ` Michal Hocko
2017-07-20 21:47       ` Tetsuo Handa
2017-07-21 15:00         ` Michal Hocko
2017-07-21 15:18           ` Tetsuo Handa
2017-07-21 15:33             ` Michal Hocko
2017-07-23  0:41               ` Tetsuo Handa
2017-07-23  3:03                 ` Tetsuo Handa
2017-07-24  6:38                 ` Michal Hocko
2017-07-26 11:33                   ` Tetsuo Handa
2017-07-26 11:46                     ` Michal Hocko [this message]
2017-08-05  1:02                       ` Tetsuo Handa
2017-08-07  6:02                         ` Michal Hocko
2017-08-08  2:14                           ` penguin-kernel
2017-08-10 11:34                             ` Michal Hocko
2017-08-10 12:10                               ` Tetsuo Handa
2017-08-10 12:36                                 ` Michal Hocko
2017-08-10 14:28                                   ` Tetsuo Handa
2017-07-18 14:17 ` Johannes Weiner
