Date: Mon, 14 Aug 2017 13:03:49 +0100
From: Roman Gushchin <guro@fb.com>
Subject: Re: [v4 2/4] mm, oom: cgroup-aware OOM killer
Message-ID: <20170814120349.GA24393@castle.DHCP.thefacebook.com>
To: David Rientjes
Cc: Michal Hocko, linux-mm@kvack.org, Vladimir Davydov, Johannes Weiner, Tetsuo Handa, Tejun Heo, kernel-team@fb.com, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org

On Tue, Aug 08, 2017 at 04:06:38PM -0700, David Rientjes wrote:
> On Tue, 1 Aug 2017, Roman Gushchin wrote:
>
> > > To the rest of the patch. I have to say I do not quite like how it is
> > > implemented. I was hoping for something much simpler which would hook
> > > into oom_evaluate_task. If a task belongs to a memcg with kill-all flag
> > > then we would update the cumulative memcg badness (more specifically the
> > > badness of the topmost parent with kill-all flag).
> > > Memcg will then
> > > compete with existing self contained tasks (oom_badness will have to
> > > tell whether points belong to a task or a memcg to allow the caller to
> > > deal with it). But it shouldn't be much more complex than that.
> >
> > I'm not sure it will be any simpler. Basically I'm doing the same:
> > the difference is that you want to iterate over tasks and for each
> > task traverse the memcg tree, update the per-cgroup oom score and find
> > the corresponding memcg(s) with the kill-all flag. I'm doing the opposite:
> > traverse the cgroup tree, and for each leaf cgroup iterate over processes.
> >
> > Also, please note that even without the kill-all flag the decision is made
> > on a per-cgroup level (except for tasks in the root cgroup).
>
> I think your implementation is preferred and is actually quite simple to
> follow, and I would encourage you to follow through with it. It has a
> similar implementation to what we have done for years to kill a process
> from a leaf memcg.

Hi David!

Thank you for the support.

> I did notice that oom_kill_memcg_victim() calls directly into
> __oom_kill_process(), however, so we lack the traditional oom killer
> output that shows memcg usage and the potential tasklist. I think we
> should still be dumping this information to the kernel log so that we
> can see a breakdown of charged memory.

I think the existing output is too verbose for the case when we kill a
cgroup with many processes inside. But I absolutely agree that we need
some debug output; I'll add it in v5.

Thanks!