* [PATCH stable v5.8] mm: memcg: fix memcg reclaim soft lockup
@ 2020-09-18 1:19 Julius Hemanth Pitti
2020-09-21 16:12 ` Greg KH
0 siblings, 1 reply; 3+ messages in thread
From: Julius Hemanth Pitti @ 2020-09-18 1:19 UTC (permalink / raw)
To: akpm, xlpang, mhocko, vdavydov.dev, ktkhai, hannes
Cc: stable, linux-mm, linux-kernel, xe-linux-external,
Linus Torvalds, Julius Hemanth Pitti
From: Xunlei Pang <xlpang@linux.alibaba.com>
commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.
We have hit a soft lockup with CONFIG_PREEMPT_NONE=y when the target
memcg does not have any reclaimable memory.
It can be easily reproduced as below:
watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
Call Trace:
shrink_lruvec+0x49f/0x640
shrink_node+0x2a6/0x6f0
do_try_to_free_pages+0xe9/0x3e0
try_to_free_mem_cgroup_pages+0xef/0x1f0
try_charge+0x2c1/0x750
mem_cgroup_charge+0xd7/0x240
__add_to_page_cache_locked+0x2fd/0x370
add_to_page_cache_lru+0x4a/0xc0
pagecache_get_page+0x10b/0x2f0
filemap_fault+0x661/0xad0
ext4_filemap_fault+0x2c/0x40
__do_fault+0x4d/0xf9
handle_mm_fault+0x1080/0x1790
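A rough sketch of how a memcg can be driven into that state (this is not
the original reproducer: it just confines a process to a hard-limited
memcg on a swapless system and faults in anonymous memory past the limit,
so the memcg ends up with nothing reclaimable; the cgroup path, name and
sizes are illustrative, and it assumes cgroup v2 and root privileges):

/*
 * Illustrative only: create a memcg with a small hard limit, join it,
 * and fault in more anonymous memory than the limit allows.  With no
 * swap configured, none of the charged pages are reclaimable, so every
 * further charge attempt sends reclaim scanning a memcg that has
 * nothing it can free.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                exit(1);
        }
        close(fd);
}

int main(void)
{
        size_t len = 256UL << 20;       /* fault in 256M against a 64M limit */
        char pid[32];
        char *buf;
        size_t i;

        mkdir("/sys/fs/cgroup/memcg_test", 0755);
        write_file("/sys/fs/cgroup/memcg_test/memory.max", "67108864");

        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file("/sys/fs/cgroup/memcg_test/cgroup.procs", pid);

        buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                exit(1);
        }
        for (i = 0; i < len; i += 4096)
                buf[i] = 1;     /* charges beyond the limit force reclaim */
        return 0;
}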
It only happens on our 1-vCPU instances, because the OOM reaper never
gets a chance to run and reclaim memory from the to-be-killed process.
Add a cond_resched() to the memcg iteration loop in shrink_node_memcgs()
to solve this issue.  This gives us a scheduling point for each memcg in
the reclaimed hierarchy, regardless of how much reclaimable memory that
memcg has, which makes the behavior more predictable.
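For context, a heavily simplified sketch of where the new scheduling
point sits in shrink_node_memcgs(); the memory.min/memory.low protection
handling, scan accounting and proportional reclaim of the real function
in mm/vmscan.c are elided here:

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
        struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
        struct mem_cgroup *memcg;

        memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
        do {
                struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

                /*
                 * One pass per memcg in the hierarchy.  The loop can
                 * stay CPU-bound when the memcgs being scanned are not
                 * eligible for reclaim; without a scheduling point a
                 * !PREEMPT CPU can trip the soft lockup watchdog.
                 */
                cond_resched();

                /* ... memory.min/memory.low protection checks ... */

                shrink_lruvec(lruvec, sc);
                shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
                            sc->priority);

                /* ... reclaim/scan accounting ... */
        } while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}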
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Chris Down <chris@chrisdown.name>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
Cc: stable@vger.kernel.org
Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
---
mm/vmscan.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749d239c62b2..8b97bc615d8c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2619,6 +2619,14 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
unsigned long reclaimed;
unsigned long scanned;
+ /*
+ * This loop can become CPU-bound when target memcgs
+ * aren't eligible for reclaim - either because they
+ * don't have any reclaimable pages, or because their
+ * memory is explicitly protected. Avoid soft lockups.
+ */
+ cond_resched();
+
switch (mem_cgroup_protected(target_memcg, memcg)) {
case MEMCG_PROT_MIN:
/*
--
2.17.1
* Re: [PATCH stable v5.8] mm: memcg: fix memcg reclaim soft lockup
2020-09-18 1:19 [PATCH stable v5.8] mm: memcg: fix memcg reclaim soft lockup Julius Hemanth Pitti
@ 2020-09-21 16:12 ` Greg KH
2020-09-21 16:15 ` Julius Hemanth Pitti (jpitti)
0 siblings, 1 reply; 3+ messages in thread
From: Greg KH @ 2020-09-21 16:12 UTC (permalink / raw)
To: Julius Hemanth Pitti
Cc: akpm, xlpang, mhocko, vdavydov.dev, ktkhai, hannes, stable,
linux-mm, linux-kernel, xe-linux-external, Linus Torvalds
On Thu, Sep 17, 2020 at 06:19:13PM -0700, Julius Hemanth Pitti wrote:
> From: Xunlei Pang <xlpang@linux.alibaba.com>
>
> commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.
>
> We've met softlockup with "CONFIG_PREEMPT_NONE=y", when the target memcg
> doesn't have any reclaimable memory.
>
> It can be easily reproduced as below:
>
> watchdog: BUG: soft lockup - CPU#0 stuck for 111s![memcg_test:2204]
> CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
> Call Trace:
> shrink_lruvec+0x49f/0x640
> shrink_node+0x2a6/0x6f0
> do_try_to_free_pages+0xe9/0x3e0
> try_to_free_mem_cgroup_pages+0xef/0x1f0
> try_charge+0x2c1/0x750
> mem_cgroup_charge+0xd7/0x240
> __add_to_page_cache_locked+0x2fd/0x370
> add_to_page_cache_lru+0x4a/0xc0
> pagecache_get_page+0x10b/0x2f0
> filemap_fault+0x661/0xad0
> ext4_filemap_fault+0x2c/0x40
> __do_fault+0x4d/0xf9
> handle_mm_fault+0x1080/0x1790
>
> It only happens on our 1-vcpu instances, because there's no chance for
> oom reaper to run to reclaim the to-be-killed process.
>
> Add a cond_resched() at the upper shrink_node_memcgs() to solve this
> issue, this will mean that we will get a scheduling point for each memcg
> in the reclaimed hierarchy without any dependency on the reclaimable
> memory in that memcg thus making it more predictable.
>
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Acked-by: Chris Down <chris@chrisdown.name>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
> ---
> mm/vmscan.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
The Fixes: tag you show here goes back to 4.19, can you provide a 4.19.y
and 5.4.y version of this as well?
thanks,
greg k-h
* Re: [PATCH stable v5.8] mm: memcg: fix memcg reclaim soft lockup
2020-09-21 16:12 ` Greg KH
@ 2020-09-21 16:15 ` Julius Hemanth Pitti (jpitti)
0 siblings, 0 replies; 3+ messages in thread
From: Julius Hemanth Pitti (jpitti) @ 2020-09-21 16:15 UTC (permalink / raw)
To: greg
Cc: vdavydov.dev, linux-kernel, xlpang, linux-mm, torvalds, stable,
hannes, akpm, xe-linux-external (mailer list),
mhocko, ktkhai
On Mon, 2020-09-21 at 18:12 +0200, Greg KH wrote:
> On Thu, Sep 17, 2020 at 06:19:13PM -0700, Julius Hemanth Pitti wrote:
> > From: Xunlei Pang <xlpang@linux.alibaba.com>
> >
> > commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.
> >
> > We've met softlockup with "CONFIG_PREEMPT_NONE=y", when the target
> > memcg
> > doesn't have any reclaimable memory.
> >
> > It can be easily reproduced as below:
> >
> > watchdog: BUG: soft lockup - CPU#0 stuck for
> > 111s![memcg_test:2204]
> > CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
> > Call Trace:
> > shrink_lruvec+0x49f/0x640
> > shrink_node+0x2a6/0x6f0
> > do_try_to_free_pages+0xe9/0x3e0
> > try_to_free_mem_cgroup_pages+0xef/0x1f0
> > try_charge+0x2c1/0x750
> > mem_cgroup_charge+0xd7/0x240
> > __add_to_page_cache_locked+0x2fd/0x370
> > add_to_page_cache_lru+0x4a/0xc0
> > pagecache_get_page+0x10b/0x2f0
> > filemap_fault+0x661/0xad0
> > ext4_filemap_fault+0x2c/0x40
> > __do_fault+0x4d/0xf9
> > handle_mm_fault+0x1080/0x1790
> >
> > It only happens on our 1-vcpu instances, because there's no chance
> > for
> > oom reaper to run to reclaim the to-be-killed process.
> >
> > Add a cond_resched() at the upper shrink_node_memcgs() to solve
> > this
> > issue, this will mean that we will get a scheduling point for each
> > memcg
> > in the reclaimed hierarchy without any dependency on the
> > reclaimable
> > memory in that memcg thus making it more predictable.
> >
> > Suggested-by: Michal Hocko <mhocko@suse.com>
> > Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> > Acked-by: Chris Down <chris@chrisdown.name>
> > Acked-by: Michal Hocko <mhocko@suse.com>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > Link:
> > http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
> > Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> > Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged
> > shrinkers during memcg shrink_slab()")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
> > ---
> > mm/vmscan.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
>
> The Fixes: tag you show here goes back to 4.19, can you provide a
> 4.19.y
> and 5.4.y version of this as well?
Sure. Will send for both 5.4.y and 4.19.y.
>
> thanks,
>
> greg k-h