From: Yafang Shao
Date: Mon, 13 Jul 2020 20:24:07 +0800
Subject: Re: [PATCH] mm, oom: don't invoke oom killer if current has been reapered
To: Michal Hocko
Cc: David Rientjes, Andrew Morton, Linux MM
In-Reply-To: <20200713062132.GB16783@dhcp22.suse.cz>
References: <1594437481-11144-1-git-send-email-laoar.shao@gmail.com> <20200713060154.GA16783@dhcp22.suse.cz> <20200713062132.GB16783@dhcp22.suse.cz>

On Mon, Jul 13, 2020 at 2:21 PM Michal Hocko wrote:
>
> On Mon 13-07-20 08:01:57, Michal Hocko wrote:
> > On Fri 10-07-20 23:18:01, Yafang Shao wrote:
> [...]
> > > Many threads of a multi-threaded task are running in parallel in a
> > > container on many CPUs. Then many of them triggered OOM at the same time:
> > >
> > >   CPU-1            CPU-2            ...    CPU-n
> > >   thread-1         thread-2         ...    thread-n
> > >
> > >   wait oom_lock    wait oom_lock    ...    hold oom_lock
> > >
> > >                                            (sigkill received)
> > >
> > >                                            select current as victim
> > >                                            and wakeup oom reaper
> > >
> > >                                            release oom_lock
> > >
> > > (MMF_OOM_SKIP set by oom reaper)
> > >
> > > (lots of pages are freed)
> > > hold oom_lock
> >
> > Could you be more specific please? The page allocator never waits for
> > the oom_lock and keeps retrying instead. Also __alloc_pages_may_oom
> > tries to allocate with the lock held.
>
> I suspect that you are looking at the memcg oom killer.

Right, these threads were waiting on the oom_lock in
mem_cgroup_out_of_memory().

> Because we do not do
> trylock there for some reason I do not immediately remember off the top
> of my head. If this is really the case then I would recommend looking
> into how the page allocator implements this and following the same
> pattern for memcg as well.
That is a good suggestion. But we can't simply trylock the global
oom_lock here, because a task OOMing in memcg foo may not help the tasks
in memcg bar. IOW, we need to introduce a per-memcg oom_lock, like below:

mem_cgroup_out_of_memory():

+	/* another task is already handling OOM for this memcg; back off */
+	if (!mutex_trylock(&memcg->oom_lock))
+		return true;
+
 	if (mutex_lock_killable(&oom_lock))
 		return true;

And the memcg tree should also be considered.

-- 
Thanks
Yafang