From: Michal Hocko <mhocko@suse.com>
To: Tetsuo Handa
Cc: Aaron Tomlin, Waiman Long, Shakeel Butt, Linux MM, Andrew Morton, Vlastimil Babka, LKML
Date: Tue, 8 Jun 2021 18:17:25 +0200
Subject: Re: [RFC PATCH] mm/oom_kill: allow oom kill allocating task for non-global case
In-Reply-To: <931bbf2e-19e3-c598-c244-ae5e7d00dfb0@i-love.sakura.ne.jp>
On Wed 09-06-21 00:22:13, Tetsuo Handa wrote:
> On 2021/06/08 22:58, Michal Hocko wrote:
> > I do not see this message to be ever printed on 4.18 for memcg oom:
> > 	/* Found nothing?!?! Either we hang forever, or we panic. */
> > 	if (!oc->chosen && !is_sysrq_oom(oc) && !is_memcg_oom(oc)) {
> > 		dump_header(oc, NULL);
> > 		panic("Out of memory and no killable processes...\n");
> > 	}
> > 
> > So how come it got triggered here? Is it possible that there is a global
> > oom killer somehow going on along with the memcg OOM? Because the below
> > stack clearly points to a memcg OOM and a new one AFAICS.
> 
> 4.18 does print this message, and panic() will be called if global OOM
> killer invocation were in progress.
> 
> 4.18.0-193.51.1.el8.x86_64 is doing
> 
> ----------
> 	select_bad_process(oc);
> 	/* Found nothing?!?! */
> 	if (!oc->chosen) {
> 		dump_header(oc, NULL);
> 		pr_warn("Out of memory and no killable processes...\n");
> 		/*
> 		 * If we got here due to an actual allocation at the
> 		 * system level, we cannot survive this and will enter
> 		 * an endless loop in the allocator. Bail out now.
> 		 */
> 		if (!is_sysrq_oom(oc) && !is_memcg_oom(oc))
> 			panic("System is deadlocked on memory\n");
> 	}
> ----------

Ahh, OK. That would explain it. I have only looked at the vanilla 4.18
kernel; I do not have the RHEL sources handy, and I have not checked the
4.18 stable tree either. Thanks for the clarification!

[...]
> Since dump_tasks() from dump_header(oc, NULL) does not exclude tasks
> which already have MMF_OOM_SKIP set, it is possible that the last OOM
> killable victim was already OOM killed but the OOM reaper failed to
> reclaim memory and set MMF_OOM_SKIP. (Well, maybe we want to exclude
> (or annotate) MMF_OOM_SKIP tasks when showing OOM victim candidates...)

Well, the allocating task was clearly alive, and whether it has been
reaped or not is not all that important, because as an oom victim it
should force the charge anyway.
This is actually the most puzzling part. Either the allocating task is
not a preexisting OOM victim, and therefore could become one, or it is,
and should have skipped the memcg killer altogether. But I fail to see
how it could be missed completely while looking for a victim.
-- 
Michal Hocko
SUSE Labs