linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Yafang Shao <laoar.shao@gmail.com>
Cc: David Rientjes <rientjes@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm, oom: don't invoke oom killer if current has been reaped
Date: Mon, 13 Jul 2020 14:45:03 +0200
Message-ID: <20200713124503.GF16783@dhcp22.suse.cz>
In-Reply-To: <CALOAHbAXpg0X_9TLMjv+e8RXD3ExF1+1x8vYNpAeZCMOLyrrsQ@mail.gmail.com>

On Mon 13-07-20 20:24:07, Yafang Shao wrote:
> On Mon, Jul 13, 2020 at 2:21 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Mon 13-07-20 08:01:57, Michal Hocko wrote:
> > > On Fri 10-07-20 23:18:01, Yafang Shao wrote:
> > [...]
> > > > Many threads of a multi-threaded task are running in parallel in a
> > > > container on many CPUs. Then many of them trigger OOM at the same time:
> > > >
> > > > CPU-1               CPU-2         ...        CPU-n
> > > > thread-1        thread-2      ...        thread-n
> > > >
> > > > wait oom_lock   wait oom_lock ...        hold oom_lock
> > > >
> > > >                                          (sigkill received)
> > > >
> > > >                                          select current as victim
> > > >                                          and wakeup oom reaper
> > > >
> > > >                                          release oom_lock
> > > >
> > > >                                          (MMF_OOM_SKIP set by oom reaper)
> > > >
> > > >                                          (lots of pages are freed)
> > > > hold oom_lock
> > >
> > > Could you be more specific, please? The page allocator never waits for
> > > the oom_lock and keeps retrying instead. Also, __alloc_pages_may_oom
> > > retries the allocation with the lock held.
> >
> > I suspect that you are looking at the memcg oom killer.
> 
> Right, these threads were waiting on the oom_lock in mem_cgroup_out_of_memory().
> 
> > That is because we do not do a
> > trylock there, for a reason I do not immediately remember off the top
> > of my head. If this is really the case, then I would recommend looking
> > into how the page allocator implements this and following the same
> > pattern for memcg as well.
> >
> 
> That is a good suggestion.
> But we can't simply trylock the global oom_lock here, because a task
> OOMing in memcg foo may not help the tasks in memcg bar.

I do not follow. The oom_lock is not about forward progress. It is a big
lock to synchronize against the oom_disable logic.
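
For reference, the oom_disable side is oom_killer_disable(), used on the
suspend/hibernation path. A simplified sketch, paraphrased from
mm/oom_kill.c around this time (not the verbatim source):

bool oom_killer_disable(signed long timeout)
{
	signed long ret;

	/*
	 * Take oom_lock so we cannot race with an ongoing oom kill;
	 * this lock is the synchronization point between the two paths.
	 */
	if (mutex_lock_killable(&oom_lock))
		return false;
	oom_killer_disabled = true;
	mutex_unlock(&oom_lock);

	/* Wait for already selected victims to exit. */
	ret = wait_event_interruptible_timeout(oom_victims_wait,
			!atomic_read(&oom_victims), timeout);
	if (ret <= 0) {
		oom_killer_enable();
		return false;
	}

	return true;
}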

I have this in mind:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 248e6cad0095..29d1f8c2d968 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1563,8 +1563,9 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	};
 	bool ret;
 
-	if (mutex_lock_killable(&oom_lock))
+	if (!mutex_trylock(&oom_lock))
 		return true;
+
 	/*
 	 * A few threads which were not waiting at mutex_lock_killable() can
 	 * fail to bail out. Therefore, check again after holding oom_lock.
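
For context, this trylock-and-bail pattern is what the page allocator has
done for a long time in __alloc_pages_may_oom(). A simplified sketch,
paraphrased from mm/page_alloc.c around this time (not the verbatim
source):

static inline struct page *
__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
		      const struct alloc_context *ac,
		      unsigned long *did_some_progress)
{
	struct page *page;

	*did_some_progress = 0;

	/*
	 * If somebody else holds oom_lock they are making progress for
	 * us, so report progress and let the caller retry the
	 * allocation instead of sleeping on the lock.
	 */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * With the lock held, retry the allocation once more at a very
	 * high watermark to catch a parallel oom kill which might have
	 * freed memory already.
	 */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	if (page)
		goto out;

	/* ... otherwise select a victim and invoke the oom killer ... */
out:
	mutex_unlock(&oom_lock);
	return page;
}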

But as I've said, I would need to double-check the history on why we
differ here. Btw, I suspect that the mem_cgroup_out_of_memory() call in
mem_cgroup_oom_synchronize() is bogus and can no longer trigger after
commit 29ef680ae7c21, but this needs double-checking as well.

> IOW, we need to introduce a per-memcg oom_lock, like below,

I do not see why. Besides, we already do have a per-hierarchy memcg oom
lock.
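
That lock is the per-memcg oom_lock flag, taken across the whole subtree
by mem_cgroup_oom_trylock(). A simplified sketch, paraphrased from
mm/memcontrol.c around this time (not the verbatim source):

static bool mem_cgroup_oom_trylock(struct mem_cgroup *memcg)
{
	struct mem_cgroup *iter, *failed = NULL;

	spin_lock(&memcg_oom_lock);

	/* Try to take the oom lock for every memcg in the subtree. */
	for_each_mem_cgroup_tree(iter, memcg) {
		if (iter->oom_lock) {
			/* Part of the subtree is already locked. */
			failed = iter;
			mem_cgroup_iter_break(memcg, iter);
			break;
		}
		iter->oom_lock = true;
	}

	if (failed) {
		/* Roll back the flags set before the failure point. */
		for_each_mem_cgroup_tree(iter, memcg) {
			if (iter == failed) {
				mem_cgroup_iter_break(memcg, iter);
				break;
			}
			iter->oom_lock = false;
		}
	}

	spin_unlock(&memcg_oom_lock);

	return !failed;
}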

-- 
Michal Hocko
SUSE Labs

