linux-mm.kvack.org archive mirror
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: akpm@linux-foundation.org, rientjes@google.com, mgorman@suse.de,
	oleg@redhat.com, torvalds@linux-foundation.org, hughd@google.com,
	andrea@kernel.org, riel@redhat.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1
Date: Thu, 18 Feb 2016 19:45:45 +0900	[thread overview]
Message-ID: <201602181945.EDI35454.MVOHLQSOFFJOtF@I-love.SAKURA.ne.jp> (raw)
In-Reply-To: <20160217133242.GJ29196@dhcp22.suse.cz>

Michal Hocko wrote:
> On Wed 17-02-16 19:36:36, Tetsuo Handa wrote:
> > From 0b36864d4100ecbdcaa2fc2d1927c9e270f1b629 Mon Sep 17 00:00:00 2001
> > From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > Date: Wed, 17 Feb 2016 16:37:59 +0900
> > Subject: [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1
> >
> > Currently, out_of_memory() does not wait for existing TIF_MEMDIE threads
> > if /proc/sys/vm/oom_kill_allocating_task is set to 1. This can result in
> > killing more OOM victims than needed. We can wait for the OOM reaper to
> > reap memory used by existing TIF_MEMDIE threads if possible. If the OOM
> > reaper is not available, the system remains OOM-stalled until an
> > OOM-unkillable thread makes a GFP_FS allocation request and takes the
> > oom_kill_allocating_task == 0 path.
> >
> > This patch changes oom_kill_allocating_task == 1 case to call
> > select_bad_process() in order to wait for existing TIF_MEMDIE threads.
>
> The primary motivation for oom_kill_allocating_task was to reduce the
> overhead of select_bad_process. See fe071d7e8aae ("oom: add
> oom_kill_allocating_task sysctl"). So this basically defeats the whole
> purpose of the feature.
>

I didn't know that. But I think that printk()ing all candidates degrades
performance far more than scanning the tasklist does. It would be nice if
setting /proc/sys/vm/oom_dump_tasks = N (N > 1) showed only the top N
memory-hog processes, as in the sketch below.
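
A minimal sketch of that idea, assuming dump_tasks() in mm/oom_kill.c were
taught to treat oom_dump_tasks values above 1 as a limit. The helper name
dump_top_n_tasks() and the fixed-size top-N array are hypothetical, not a
submitted patch:

#define DUMP_TOP_MAX 8

struct oom_cand {
	pid_t pid;
	unsigned long rss;
	char comm[TASK_COMM_LEN];
};

/* Print only the "limit" biggest memory consumers instead of everything. */
static void dump_top_n_tasks(struct mem_cgroup *memcg,
			     const nodemask_t *nodemask, int limit)
{
	struct oom_cand top[DUMP_TOP_MAX] = {};
	struct task_struct *p;
	int i;

	if (limit > DUMP_TOP_MAX)
		limit = DUMP_TOP_MAX;

	rcu_read_lock();
	for_each_process(p) {
		struct task_struct *t;
		unsigned long rss;

		if (oom_unkillable_task(p, memcg, nodemask))
			continue;
		t = find_lock_task_mm(p);
		if (!t)
			continue;
		rss = get_mm_rss(t->mm);
		/* insert into the small array, keeping it sorted by rss */
		for (i = 0; i < limit; i++) {
			if (rss > top[i].rss) {
				memmove(&top[i + 1], &top[i],
					(limit - i - 1) * sizeof(top[0]));
				top[i].pid = task_pid_nr(t);
				top[i].rss = rss;
				memcpy(top[i].comm, t->comm, TASK_COMM_LEN);
				break;
			}
		}
		task_unlock(t);
	}
	rcu_read_unlock();

	pr_info("[ pid ]    rss name\n");
	for (i = 0; i < limit && top[i].rss; i++)
		pr_info("[%5d] %6lu %s\n", top[i].pid, top[i].rss, top[i].comm);
}

A single pass over the tasklist keeps the cost bounded regardless of N,
which matters given that avoiding expensive scans is the whole point of
oom_kill_allocating_task.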

> I am not a user of this knob because it behaves absolutely randomly but
> IMHO we should simply do something like the following. It would be more
> compliant with the documentation and prevent the livelock which is
> currently possible (albeit very unlikely) when a single task consumes
> all the memory reserves and we keep looping over out_of_memory without
> any progress.
>
> But as I've said, I have no idea whether somebody relies on the current
> behavior, so this is more thinking out loud than proposing an actual
> patch at this point in time.

Maybe try a warning message for finding out whether somebody is actually
using oom_kill_allocating_task? Something like the sketch below.
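
A minimal sketch, assuming the nag were placed at the point in
out_of_memory() where the knob takes effect (the wording and the request
to report use cases are illustrative):

	if (sysctl_oom_kill_allocating_task && current->mm &&
	    !oom_unkillable_task(current, NULL, oc->nodemask) &&
	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
		/*
		 * One-time nag so that anybody who still depends on this
		 * behavior surfaces in bug reports before it is changed.
		 */
		pr_warn_once("%s (%d): oom_kill_allocating_task is in use; please report your use case to linux-mm@kvack.org\n",
			     current->comm, task_pid_nr(current));
		get_task_struct(current);
		oom_kill_process(oc, current, 0, totalpages, NULL,
				 "Out of memory (oom_kill_allocating_task)");
		return true;
	}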

> ---
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 078e07ec0906..7de84fb2dd03 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -706,6 +706,9 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
>  	pr_err("%s: Kill process %d (%s) score %u or sacrifice child\n",
>  		message, task_pid_nr(p), p->comm, points);
>
> +	if (sysctl_oom_kill_allocating_task)
> +		goto kill;
> +

We have

  "Out of memory (oom_kill_allocating_task)"
  "Out of memory"
  "Memory cgroup out of memory"

but we don't have

  "Memory cgroup out of memory (oom_kill_allocating_task)"

I don't know whether we should apply this condition to the memcg OOM
case as well.

>  	/*
>  	 * If any of p's children has a different mm and is eligible for kill,
>  	 * the one with the highest oom_badness() score is sacrificed for its
> @@ -734,6 +737,7 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
>  	}
>  	read_unlock(&tasklist_lock);
>
> +kill:
>  	p = find_lock_task_mm(victim);
>  	if (!p) {
>  		put_task_struct(victim);
> @@ -888,6 +892,9 @@ bool out_of_memory(struct oom_control *oc)
>  	if (sysctl_oom_kill_allocating_task && current->mm &&
>  	    !oom_unkillable_task(current, NULL, oc->nodemask) &&
>  	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
> +		if (test_thread_flag(TIF_MEMDIE))
> +			panic("Out of memory (oom_kill_allocating_task) not able to make a forward progress");
> +

If the current thread has already got TIF_MEMDIE, it will not call
out_of_memory() again, because it will leave the allocation path (unless
__GFP_NOFAIL is used) thanks to ALLOC_NO_WATERMARKS.

This condition becomes true only when "some OOM-unkillable thread called
out_of_memory() and chose current as the OOM victim" && "current was
running between gfp_to_alloc_flags() in __alloc_pages_slowpath() and the
!mutex_trylock(&oom_lock) check in __alloc_pages_may_oom()", which is
almost impossible to trigger. If we ever hit this condition, I think it
will have been hit by unlucky timing (rather than by a real inability to
make forward progress).
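
For reference, here is a heavily abridged, from-memory outline of the
~v4.4 allocator slowpath (not the verbatim source; irrelevant steps are
elided) showing the window in question:

	/* __alloc_pages_slowpath(), heavily abridged: */
retry:
	alloc_flags = gfp_to_alloc_flags(gfp_mask);	/* (1) samples TIF_MEMDIE */

	if (alloc_flags & ALLOC_NO_WATERMARKS) {
		/* a TIF_MEMDIE thread dips into memory reserves here ... */
		page = get_page_from_freelist(gfp_mask, order,
					      ALLOC_NO_WATERMARKS, ac);
		if (page)
			goto got_pg;
	}

	/* ... and otherwise fails the allocation instead of looping */
	if (test_thread_flag(TIF_MEMDIE) && !(gfp_mask & __GFP_NOFAIL))
		goto nopage;

	/*
	 * (2) Only a thread that became an OOM victim after (1) but before
	 * this point reaches out_of_memory() with TIF_MEMDIE already set,
	 * via __alloc_pages_may_oom() -> mutex_trylock(&oom_lock).
	 */
	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
	if (page)
		goto got_pg;
	if (did_some_progress)
		goto retry;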

>  		get_task_struct(current);
>  		oom_kill_process(oc, current, 0, totalpages, NULL,
>  				 "Out of memory (oom_kill_allocating_task)");

Anyway, we can forget about this [PATCH 6/6] for now.


Thread overview: 29+ messages
2016-02-17 10:28 [PATCH 0/6] preparation for merging the OOM reaper Tetsuo Handa
2016-02-17 10:29 ` [PATCH 1/6] mm,oom: exclude TIF_MEMDIE processes from candidates Tetsuo Handa
2016-02-17 12:41   ` Michal Hocko
2016-02-17 16:40     ` Tetsuo Handa
2016-02-17 17:33       ` Michal Hocko
2016-02-17 20:55         ` Tetsuo Handa
2016-02-17 10:30 ` [PATCH 2/6] mm,oom: don't abort on exiting processes when selecting a victim Tetsuo Handa
2016-02-17 12:54   ` Michal Hocko
2016-02-17 13:07     ` Tetsuo Handa
2016-02-17 14:00       ` Michal Hocko
2016-02-17 14:39         ` Tetsuo Handa
2016-02-17 15:01           ` Michal Hocko
2016-02-17 15:29             ` Tetsuo Handa
2016-02-17 16:17               ` Michal Hocko
2016-02-18 11:21                 ` Tetsuo Handa
2016-02-17 10:32 ` [PATCH 3/6] mm,oom: exclude oom_task_origin processes if they are OOM victims Tetsuo Handa
2016-02-17 13:02   ` Michal Hocko
2016-02-17 10:33 ` [PATCH 4/6] mm,oom: exclude oom_task_origin processes if they are OOM-unkillable Tetsuo Handa
2016-02-17 13:10   ` Michal Hocko
2016-02-17 13:36     ` Tetsuo Handa
2016-02-17 13:44       ` Michal Hocko
2016-02-17 10:34 ` [PATCH 5/6] mm,oom: Re-enable OOM killer using timers Tetsuo Handa
2016-02-17 13:20   ` Michal Hocko
2016-04-09 14:00     ` Tetsuo Handa
2016-04-09 14:04       ` Tetsuo Handa
2016-02-17 10:36 ` [PATCH 6/6] mm,oom: wait for OOM victims when using oom_kill_allocating_task == 1 Tetsuo Handa
2016-02-17 13:32   ` Michal Hocko
2016-02-18 10:45     ` Tetsuo Handa [this message]
2016-02-18 12:20       ` Michal Hocko
