Date: Tue, 10 Mar 2020 16:02:23 -0700 (PDT)
From: David Rientjes
To: Michal Hocko
cc: Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
In-Reply-To: <20200310221019.GE8447@dhcp22.suse.cz>
References: <20200310221019.GE8447@dhcp22.suse.cz>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)

On Tue, 10 Mar 2020, Michal Hocko wrote:

> > When a process is oom killed as a result of memcg limits and the victim
> > is waiting to exit, nothing ends up actually yielding the processor back
> > to the victim on UP systems with preemption disabled.  Instead, the
> > charging process simply loops in memcg reclaim and eventually soft
> > lockups.
> >
> > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > ...
> > Call Trace:
> >  shrink_node+0x40d/0x7d0
> >  do_try_to_free_pages+0x13f/0x470
> >  try_to_free_mem_cgroup_pages+0x16d/0x230
> >  try_charge+0x247/0xac0
> >  mem_cgroup_try_charge+0x10a/0x220
> >  mem_cgroup_try_charge_delay+0x1e/0x40
> >  handle_mm_fault+0xdf2/0x15f0
> >  do_user_addr_fault+0x21f/0x420
> >  page_fault+0x2f/0x40
> >
> > Make sure that something ends up actually yielding the processor back to
> > the victim to allow for memory freeing.  The most appropriate place appears
> > to be shrink_node_memcgs(), where the iteration over all descendant memcgs
> > could be particularly lengthy.
>
> There is a cond_resched in shrink_lruvec and another one in
> shrink_page_list. Why doesn't any of them hit? Is it because there are
> no pages on the LRU list? Because rss data suggests there should be
> enough pages to go that path. Or maybe it is shrink_slab path that takes
> too long?
>

I think it can be a number of cases, most notably the mem_cgroup_protected()
checks, which is why the cond_resched() is added above them.  Rather than add
a cond_resched() only for MEMCG_PROT_MIN and for certain MEMCG_PROT_LOW cases,
it is added above the switch clause because the iteration itself can be very
lengthy.

We could also do it in shrink_zones() or in the priority-based
do_try_to_free_pages() loop, but I'd be nervous about the lengthy memcg
iteration in shrink_node_memcgs() independent of this.

Any other ideas on how to ensure we actually try to resched for the benefit
of an oom victim, to prevent this soft lockup?
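
To make that more concrete, this is roughly what the iteration looks like
with the patch applied (abridged by hand, not a verbatim copy of
mm/vmscan.c): whenever mem_cgroup_protected() returns MEMCG_PROT_MIN, or
MEMCG_PROT_LOW without sc->memcg_low_reclaim set, we "continue" straight to
the next mem_cgroup_iter() and never reach the existing cond_resched()s in
shrink_lruvec() or shrink_page_list().

/* Abridged sketch of shrink_node_memcgs(), details trimmed. */
static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
        struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
        struct mem_cgroup *memcg;

        memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
        do {
                struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

                cond_resched();  /* proposed: guaranteed resched point per memcg */

                switch (mem_cgroup_protected(target_memcg, memcg)) {
                case MEMCG_PROT_MIN:
                        /* Hard protection: skip without touching the LRUs. */
                        continue;
                case MEMCG_PROT_LOW:
                        if (!sc->memcg_low_reclaim) {
                                sc->memcg_low_skipped = 1;
                                /* Soft protection: also skipped entirely. */
                                continue;
                        }
                        break;
                case MEMCG_PROT_NONE:
                        break;
                }

                /* Only the reclaim itself has existing cond_resched() points. */
                shrink_lruvec(lruvec, sc);
                shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
        } while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}

So a deep hierarchy of protected or empty memcgs can be walked end to end
without a single reschedule point, which is why the cond_resched() sits
above the switch rather than in the individual cases.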
> The patch itself makes sense to me but I would like to see more
> explanation on how that happens.
>
> Thanks.
>
> > Cc: Vlastimil Babka
> > Cc: Michal Hocko
> > Cc: stable@vger.kernel.org
> > Signed-off-by: David Rientjes
> > ---
> >  mm/vmscan.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2637,6 +2637,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >  		unsigned long reclaimed;
> >  		unsigned long scanned;
> >
> > +		cond_resched();
> > +
> >  		switch (mem_cgroup_protected(target_memcg, memcg)) {
> >  		case MEMCG_PROT_MIN:
> >  			/*
>
> --
> Michal Hocko
> SUSE Labs
>
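
For reference, the workload behind the log above is just two processes
faulting anonymous memory inside a small, hard-limited memcg on a UP kernel
built without preemption.  The exact "repro" binary isn't included in this
thread; a minimal sketch along those lines (assuming cgroup v2 mounted at
/sys/fs/cgroup with the memory controller enabled for child groups, root
privileges, and an arbitrary "repro" cgroup name and 32MB limit) would be:

/* Hypothetical reproducer sketch, not the actual "repro" binary. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *buf)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, buf, strlen(buf)) < 0) {
                perror(path);
                exit(1);
        }
        close(fd);
}

static void hog(size_t size)
{
        /* Fault in anonymous memory well past memory.max. */
        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                exit(1);
        }
        memset(p, 0xff, size);
}

int main(void)
{
        char pid[16];

        /* 32MB hard limit; both tasks end up attached to the same memcg. */
        mkdir("/sys/fs/cgroup/repro", 0755);
        write_file("/sys/fs/cgroup/repro/memory.max", "33554432");
        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file("/sys/fs/cgroup/repro/cgroup.procs", pid);

        /*
         * Two chargers: one becomes the oom victim, the other keeps
         * faulting and looping in try_charge()/memcg reclaim.
         */
        if (fork() == 0) {
                hog(64 << 20);
                _exit(0);
        }
        hog(64 << 20);
        return 0;
}

One of the two gets oom killed, and without the cond_resched() the surviving
charger can spin in try_to_free_mem_cgroup_pages() on the lone CPU without
ever letting the victim run and free its memory.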