From: David Rientjes <rientjes@google.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@suse.cz>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
Date: Tue, 10 Mar 2020 16:02:23 -0700 (PDT)
Message-ID: <alpine.DEB.2.21.2003101556270.177273@chino.kir.corp.google.com>
In-Reply-To: <20200310221019.GE8447@dhcp22.suse.cz>
On Tue, 10 Mar 2020, Michal Hocko wrote:
> > When a process is oom killed as a result of memcg limits and the victim
> > is waiting to exit, nothing ends up actually yielding the processor back
> > to the victim on UP systems with preemption disabled. Instead, the
> > charging process simply loops in memcg reclaim and eventually soft
> > lockups.
> >
> > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > ...
> > Call Trace:
> > shrink_node+0x40d/0x7d0
> > do_try_to_free_pages+0x13f/0x470
> > try_to_free_mem_cgroup_pages+0x16d/0x230
> > try_charge+0x247/0xac0
> > mem_cgroup_try_charge+0x10a/0x220
> > mem_cgroup_try_charge_delay+0x1e/0x40
> > handle_mm_fault+0xdf2/0x15f0
> > do_user_addr_fault+0x21f/0x420
> > page_fault+0x2f/0x40
> >
> > Make sure that something ends up actually yielding the processor back to
> > the victim to allow for memory freeing. The most appropriate place appears
> > to be shrink_node_memcgs(), where the iteration of all descendant memcgs
> > could be particularly lengthy.
>
> There is a cond_resched in shrink_lruvec and another one in
> shrink_page_list. Why doesn't any of them hit? Is it because there are
> no pages on the LRU list? Because rss data suggests there should be
> enough pages to go that path. Or maybe it is shrink_slab path that takes
> too long?
>
I think it can be any of a number of cases, most notably the
mem_cgroup_protected() checks, which is why the cond_resched() is added
above them. Rather than adding cond_resched() only for MEMCG_PROT_MIN and
for certain MEMCG_PROT_LOW cases, it is added above the switch clause
because the iteration itself may be very lengthy.
We could also do it in shrink_zones() or the priority-based
do_try_to_free_pages() loop, but I'd be nervous about the lengthy memcg
iteration in shrink_node_memcgs() independent of this.
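For context, the priority loop in do_try_to_free_pages() looks roughly like
this (abridged sketch, elided with "..."); a yield there would only trigger
once per priority level, not once per descendant memcg:

```c
/* Abridged sketch of the do_try_to_free_pages() priority loop. */
static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
					  struct scan_control *sc)
{
	...
	do {
		sc->nr_scanned = 0;
		/*
		 * shrink_zones() eventually reaches shrink_node_memcgs(),
		 * which walks every descendant memcg of the reclaim target.
		 */
		shrink_zones(zonelist, sc);

		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;
		...
	} while (--sc->priority >= 0);
	...
}
```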
Any other ideas on how to ensure we actually try to resched for the
benefit of an oom victim to prevent this soft lockup?
> The patch itself makes sense to me but I would like to see more
> explanation on how that happens.
>
> Thanks.
>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: stable@vger.kernel.org
> > Signed-off-by: David Rientjes <rientjes@google.com>
> > ---
> >  mm/vmscan.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2637,6 +2637,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >  		unsigned long reclaimed;
> >  		unsigned long scanned;
> >  
> > +		cond_resched();
> > +
> >  		switch (mem_cgroup_protected(target_memcg, memcg)) {
> >  		case MEMCG_PROT_MIN:
> >  			/*
>
> --
> Michal Hocko
> SUSE Labs
>