linux-mm.kvack.org archive mirror
* [PATCH stable v5.4] mm: memcg: fix memcg reclaim soft lockup
@ 2020-09-21 18:05 Julius Hemanth Pitti
From: Julius Hemanth Pitti @ 2020-09-21 18:05 UTC
  To: gregkh, akpm, xlpang, mhocko, vdavydov.dev, ktkhai, hannes
  Cc: stable, linux-mm, linux-kernel, xe-linux-external,
	Linus Torvalds, Julius Hemanth Pitti

From: Xunlei Pang <xlpang@linux.alibaba.com>

commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream

We've hit a soft lockup with CONFIG_PREEMPT_NONE=y when the target memcg
doesn't have any reclaimable memory: with no preemption points, the
reclaim loop then monopolizes the CPU.

It can be easily reproduced as below:

  watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
  CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
  Call Trace:
    shrink_lruvec+0x49f/0x640
    shrink_node+0x2a6/0x6f0
    do_try_to_free_pages+0xe9/0x3e0
    try_to_free_mem_cgroup_pages+0xef/0x1f0
    try_charge+0x2c1/0x750
    mem_cgroup_charge+0xd7/0x240
    __add_to_page_cache_locked+0x2fd/0x370
    add_to_page_cache_lru+0x4a/0xc0
    pagecache_get_page+0x10b/0x2f0
    filemap_fault+0x661/0xad0
    ext4_filemap_fault+0x2c/0x40
    __do_fault+0x4d/0xf9
    handle_mm_fault+0x1080/0x1790

It only happens on our 1-vcpu instances, because there is no chance for
the oom reaper to run and reclaim the to-be-killed process's memory.
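
For reference, a minimal sketch of the kind of userspace reproducer that
drives this path (hypothetical, not the original test program; the memcg
name, the 10M limit, swap being disabled, and "testfile" are all
assumptions):

  /*
   * Hypothetical sketch only -- not the original reproducer.  Assumes
   * a cgroup-v2 memcg set up along these lines (names and sizes made up):
   *
   *   mkdir /sys/fs/cgroup/memcg_test
   *   echo 10M > /sys/fs/cgroup/memcg_test/memory.max
   *   echo $$ > /sys/fs/cgroup/memcg_test/cgroup.procs
   *
   * with swap disabled, and "testfile" an existing file a few MB large.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
  	long pagesz = sysconf(_SC_PAGESIZE);
  	volatile unsigned long sum = 0;
  	size_t anon_size = 9UL << 20;	/* just under memory.max */
  	struct stat st;
  	char *anon, *file;
  	off_t i;
  	int fd;

  	/* Fill the memcg with anonymous memory; with no swap, none of
  	 * this is reclaimable. */
  	anon = malloc(anon_size);
  	if (!anon)
  		return 1;
  	memset(anon, 0xa5, anon_size);

  	fd = open("testfile", O_RDONLY);
  	if (fd < 0 || fstat(fd, &st) < 0) {
  		perror("open/fstat");
  		return 1;
  	}
  	file = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  	if (file == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}

  	/* Touch one byte per page.  Each minor fault charges a page
  	 * cache page (filemap_fault -> add_to_page_cache_lru ->
  	 * mem_cgroup_charge -> try_charge, as in the trace above); at
  	 * the limit, try_charge() loops in direct reclaim, which finds
  	 * nothing to reclaim and, on 1 vcpu with CONFIG_PREEMPT_NONE=y,
  	 * can spin long enough to trip the soft-lockup watchdog. */
  	for (i = 0; i < st.st_size; i += pagesz)
  		sum += file[i];

  	return 0;
  }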

Add a cond_resched() in the upper shrink_node_memcgs() loop to solve
this issue. This way we get a scheduling point for each memcg in the
reclaimed hierarchy, with no dependency on the amount of reclaimable
memory in that memcg, making the behavior more predictable.
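
More generally, any long-running loop in kernel context on a
non-preemptible kernel needs such a voluntary scheduling point. A toy
module sketch of the pattern (illustrative only, not part of this patch;
all names and the loop bound are made up):

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/sched.h>

  static int __init resched_demo_init(void)
  {
  	unsigned long i;

  	/* Stand-in for a long-running work loop.  On a
  	 * CONFIG_PREEMPT_NONE=y kernel nothing preempts code running in
  	 * kernel context, so a loop that never sleeps must offer a
  	 * voluntary scheduling point itself -- which is exactly what
  	 * cond_resched() provides. */
  	for (i = 0; i < (1UL << 24); i++)
  		cond_resched();

  	pr_info("resched_demo: done\n");
  	return 0;
  }

  static void __exit resched_demo_exit(void)
  {
  }

  module_init(resched_demo_init);
  module_exit(resched_demo_exit);
  MODULE_LICENSE("GPL");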

[jpitti@cisco.com:
   - backported to v5.4.y
   - The upstream patch applies the fix in shrink_node_memcgs(), which
     is not present in v5.4.y. Applied it to shrink_node() instead.]

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Chris Down <chris@chrisdown.name>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
Cc: stable@vger.kernel.org
Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
---
 mm/vmscan.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7fde5f904c8d..6db9176d8c63 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2775,6 +2775,14 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			unsigned long reclaimed;
 			unsigned long scanned;
 
+			/*
+			 * This loop can become CPU-bound when target memcgs
+			 * aren't eligible for reclaim - either because they
+			 * don't have any reclaimable pages, or because their
+			 * memory is explicitly protected. Avoid soft lockups.
+			 */
+			cond_resched();
+
 			switch (mem_cgroup_protected(root, memcg)) {
 			case MEMCG_PROT_MIN:
 				/*
-- 
2.17.1




* Re: [PATCH stable v5.4] mm: memcg: fix memcg reclaim soft lockup
From: Greg KH @ 2020-09-25 10:55 UTC
  To: Julius Hemanth Pitti
  Cc: akpm, xlpang, mhocko, vdavydov.dev, ktkhai, hannes, stable,
	linux-mm, linux-kernel, xe-linux-external, Linus Torvalds

On Mon, Sep 21, 2020 at 11:05:08AM -0700, Julius Hemanth Pitti wrote:
> From: Xunlei Pang <xlpang@linux.alibaba.com>
> 
> commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream
> 
> [...]
> 
> [jpitti@cisco.com:
>    - backported to v5.4.y
>    - The upstream patch applies the fix in shrink_node_memcgs(), which
>      is not present in v5.4.y. Applied it to shrink_node() instead.]

Thanks for this, now queued up here and for 4.19

greg k-h


