From: Michal Hocko <mhocko@suse.com>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: aarcange@redhat.com, hannes@cmpxchg.org,
akpm@linux-foundation.org, linux-mm@kvack.org,
rientjes@google.com, mjaggi@caviumnetworks.com, mgorman@suse.de,
oleg@redhat.com, vdavydov.dev@gmail.com, vbabka@suse.cz
Subject: Re: [RFC PATCH 2/2] mm,oom: Try last second allocation after selecting an OOM victim.
Date: Fri, 20 Oct 2017 14:40:09 +0200
Message-ID: <20171020124009.joie5neol3gbdmxe@dhcp22.suse.cz>
In-Reply-To: <201710172204.AGG30740.tVHJFFOQLMSFOO@I-love.SAKURA.ne.jp>
On Tue 17-10-17 22:04:59, Tetsuo Handa wrote:
[...]
> I checked http://lkml.kernel.org/r/20160128163802.GA15953@dhcp22.suse.cz but
> I didn't find a reason to use the high watermark for the last second
> allocation attempt. The only thing required for avoiding livelock would be
> "do not depend on a __GFP_DIRECT_RECLAIM allocation while oom_lock is held".
Andrea tried to explain it in http://lkml.kernel.org/r/20160128190204.GJ12228@redhat.com:
"
: Elaborating the comment: the reason for the high wmark is to reduce
: the likelihood of livelocks and be sure to invoke the OOM killer if
: we're still under pressure and reclaim just failed. The high wmark is
: used to be sure the failure of reclaim isn't going to be ignored. If
: using the min wmark like you propose, there's a risk of livelock, or
: at least of a delayed OOM killer invocation.
:
: The reason for doing one last wmark check (regardless of the wmark
: used) before invoking the oom killer was just to be sure another OOM
: killer invocation hasn't already freed a ton of memory while we were
: stuck in reclaim. A lot of free memory generated by the OOM killer
: won't make a parallel reclaim more likely to succeed; it just creates
: free memory, but reclaim only succeeds when it finds "freeable" memory
: and makes progress in converting it to free memory. So for the
: purpose of this last check, the high wmark would work fine, as lots of
: free memory would have been generated in such a case.
"
I've had some problems with this reasoning for the current OOM killer
logic but I haven't been convincing enough. Maybe you will have better
luck.
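For reference, the check being discussed is the last second attempt in
__alloc_pages_may_oom(). At the time of this thread it looked roughly
like this (a simplified sketch, error handling and surrounding code
elided):

	/* mm/page_alloc.c, __alloc_pages_may_oom(), simplified */
	if (!mutex_trylock(&oom_lock)) {
		/* somebody else is handling the OOM; back off and retry */
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * One more attempt with a very high watermark and no direct
	 * reclaim: this only succeeds if a parallel OOM kill has already
	 * freed a lot of memory, which is exactly the case this check is
	 * meant to catch.
	 */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	if (page)
		goto out;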
> Below is the updated patch. The motivation of this patch is to guarantee
> that the thread calling out_of_memory() (which can be running at SCHED_IDLE
> priority) gets enough CPU time, by saving the CPU time wasted by threads
> (which can be running at !SCHED_IDLE priority) waiting for out_of_memory().
> Thus, replace mutex_trylock() with mutex_lock_killable().
So what exactly guarantees that the SCHED_IDLE thread keeps running while
other high priority processes keep preempting it while it holds the oom
lock? Not everybody is inside the allocation path to get out of the way.
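For those following the thread, the change under discussion essentially
replaces the trylock based back off with a killable sleep on the lock
(a sketch of the idea, not the exact patch):

	/* before: give up the lock attempt and retry the allocation */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/* after: sleep on the lock, bailing out only when killed */
	if (mutex_lock_killable(&oom_lock)) {
		/* fatal signal pending; no point in invoking the OOM killer */
		*did_some_progress = 1;
		return NULL;
	}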
>
> By replacing mutex_trylock() with mutex_lock_killable(), it might prevent
> the OOM reaper from starting to reap immediately. Thus, remove mutex_lock()
> from the OOM reaper.
oom_lock shouldn't be necessary in oom_reaper anymore and that is worth
a separate patch.
> By removing mutex_lock() from the OOM reaper, the race window for
> needlessly selecting the next OOM victim becomes wider, because the last
> second allocation attempt no longer waits for the OOM reaper. Thus, do the
> really last allocation attempt after selecting an OOM victim, using the
> same watermark.
>
> Can we go in this direction?
The patch is just too cluttered. You do not want to use
__alloc_pages_slowpath for this; get_page_from_freelist would be more
appropriate. Also, calling alloc_pages_before_oomkill twice seems
excessive.
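To illustrate, the really last attempt could probe the freelists
directly rather than re-enter the slowpath (a hypothetical sketch; the
oc->ac pointer is an assumption, as oom_control does not carry the
alloc_context in the current tree):

	/*
	 * Hypothetical helper: one really last attempt after the OOM
	 * victim has been selected, using the same high watermark and
	 * no direct reclaim, without going through
	 * __alloc_pages_slowpath() again.
	 */
	static struct page *alloc_pages_before_oomkill(const struct oom_control *oc)
	{
		return get_page_from_freelist((oc->gfp_mask | __GFP_HARDWALL) &
					      ~__GFP_DIRECT_RECLAIM, oc->order,
					      ALLOC_WMARK_HIGH | ALLOC_CPUSET,
					      oc->ac);
	}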
That being said, make sure you address all the concerns brought up by
Andrea and Johannes in the above email thread first.
--
Michal Hocko
SUSE Labs