From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@suse.com
Cc: aarcange@redhat.com, hannes@cmpxchg.org,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	rientjes@google.com, mjaggi@caviumnetworks.com, mgorman@suse.de,
	oleg@redhat.com, vdavydov.dev@gmail.com, vbabka@suse.cz
Subject: Re: [RFC PATCH 2/2] mm,oom: Try last second allocation after selecting an OOM victim.
Date: Fri, 20 Oct 2017 23:18:19 +0900	[thread overview]
Message-ID: <201710202318.IJE26050.SFVFMOLHQJOOtF@I-love.SAKURA.ne.jp> (raw)
In-Reply-To: <20171020124009.joie5neol3gbdmxe@dhcp22.suse.cz>

Michal Hocko wrote:
> On Tue 17-10-17 22:04:59, Tetsuo Handa wrote:
> > Below is updated patch. The motivation of this patch is to guarantee that
> > the thread (it can be SCHED_IDLE priority) calling out_of_memory() can use
> > enough CPU resource by saving CPU resource wasted by threads (they can be
> > !SCHED_IDLE priority) waiting for out_of_memory(). Thus, replace
> > mutex_trylock() with mutex_lock_killable().
> 
> So what exactly guanratees SCHED_IDLE running while other high priority
> processes keep preempting it while it holds the oom lock? Not everybody
> is inside the allocation path to get out of the way.

I think that is worrying too much. If you are worried about such a
possibility, then the current assumption
	/*
	 * Acquire the oom lock.  If that fails, somebody else is
	 * making progress for us.
	 */

is horribly broken as well. Also, high priority threads that keep preempting
would prevent low priority threads from even reaching __alloc_pages_may_oom(),
because the preemption happens not only while a low priority thread is
holding oom_lock but also while oom_lock is not held. We could try to
reduce preemption while oom_lock is held by scattering
preempt_disable()/preempt_enable() around, but you said you don't want to
disable preemption during the OOM kill operation when I proposed such a
patch, didn't you?

So, I think that worrying about high priority threads starving the low
priority thread that holds oom_lock is going too far. It is sufficient to
prevent the high priority threads waiting for oom_lock from wasting CPU time
and thereby disturbing the low priority thread that holds oom_lock.
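
To make the intent concrete, the core of the change is roughly the following
(a simplified sketch of the idea, not the exact hunk from the patch; the
surrounding error handling in __alloc_pages_may_oom() stays as it is):

	/*
	 * Sketch: sleep on oom_lock instead of backing off, so that waiters
	 * stop burning the CPU time which the lock holder needs in order to
	 * finish out_of_memory().
	 */
	if (mutex_lock_killable(&oom_lock)) {
		/* Got a fatal signal while waiting; go back and retry the allocation. */
		*did_some_progress = 1;
		return NULL;
	}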

If you don't like that, the only alternative will be to offload the work to
a dedicated kernel thread (like the OOM reaper) so that allocating threads
are no longer blocked by oom_lock at all. That would be a big change.

> > 
> > By replacing mutex_trylock() with mutex_lock_killable(), it might prevent
> > the OOM reaper from start reaping immediately. Thus, remove mutex_lock() from
> > the OOM reaper.
> 
> oom_lock shouldn't be necessary in oom_reaper anymore and that is worth
> a separate patch.

I'll propose it as a separate patch after we either apply "mm, oom:
task_will_free_mem(current) should ignore MMF_OOM_SKIP for once." or
start calling __alloc_pages_slowpath() with oom_lock held.
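
What I have in mind for that separate patch is simply dropping the oom_lock
dance from the reaper side; roughly like below (only a sketch from memory,
not the actual hunk):

	/*
	 * Sketch: __oom_reap_task_mm() no longer takes oom_lock before
	 * reaping. The last second allocation attempt done after selecting
	 * an OOM victim takes over the job of not needlessly selecting the
	 * next victim.
	 */
	if (!down_read_trylock(&mm->mmap_sem)) {
		/* mmap_sem is contended; let the OOM reaper retry later. */
		return false;
	}
	/* ... unmap_page_range() based reaping as before ... */
	up_read(&mm->mmap_sem);
	return true;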

>  
> > By removing mutex_lock() from the OOM reaper, the race window of needlessly
> > selecting next OOM victim becomes wider, for the last second allocation
> > attempt no longer waits for the OOM reaper. Thus, do the really last
> > allocation attempt after selecting an OOM victim using the same watermark.
> > 
> > Can we go with this direction?
> 
> The patch is just too cluttered. You do not want to use
> __alloc_pages_slowpath. get_page_from_freelist would be more
> appropriate. Also doing alloc_pages_before_oomkill two times seems to be
> excessive.

This patch intentionally calls __alloc_pages_slowpath() because it handles
ALLOC_OOM via __gfp_pfmemalloc_flags(). If this patch called only
get_page_from_freelist(), we would fail to try ALLOC_OOM both before
calling out_of_memory() (when the current thread is selected as the OOM
victim while waiting for oom_lock) and just before sending SIGKILL (when
task_will_free_mem(current) in out_of_memory() returns false because
MMF_OOM_SKIP was set before the ALLOC_OOM allocation was attempted), unless
we apply "mm, oom: task_will_free_mem(current) should ignore MMF_OOM_SKIP
for once.".

> 
> That being said, make sure you adrress all the concerns brought up by
> Andrea and Johannes in the above email thread first.

I don't think there are remaining concerns if we wait for oom_lock.
The only concern would be that we must not depend on __GFP_DIRECT_RECLAIM
allocations while oom_lock is held. Andrea and Johannes, what are your concerns?

