From: Yafang Shao
Date: Thu, 9 Jul 2020 09:57:22 +0800
Subject: Re: [PATCH] mm, oom: make the calculation of oom badness more accurate
To: Michal Hocko
Cc: David Rientjes, Andrew Morton, Linux MM

On Thu, Jul 9, 2020 at 12:09 AM Michal Hocko wrote:
>
> On Wed 08-07-20 23:11:43, Yafang Shao wrote:
> > On Wed, Jul 8, 2020 at 10:28 PM Michal Hocko wrote:
> > >
> > > On Wed 08-07-20 09:24:09, Yafang Shao wrote:
> > > > Recently we found an issue in our production environment: when memcg
> > > > oom is triggered, the oom killer doesn't choose the process with the
> > > > largest resident memory but the first scanned process. Note that all
> > > > processes in this memcg have the same oom_score_adj, so the oom
> > > > killer should choose the process with the largest resident memory.
> > > >
> > > > Below is part of the oom info, which is enough to analyze this issue.
> > > > [7516987.983223] memory: usage 16777216kB, limit 16777216kB, failcnt 52843037
> > > > [7516987.983224] memory+swap: usage 16777216kB, limit 9007199254740988kB, failcnt 0
> > > > [7516987.983225] kmem: usage 301464kB, limit 9007199254740988kB, failcnt 0
> > > > [...]
> > > > [7516987.983293] [  pid  ]   uid    tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
> > > > [7516987.983510] [   5740]     0    5740      257        1    32768        0          -998 pause
> > > > [7516987.983574] [  58804]     0   58804     4594      771    81920        0          -998 entry_point.bas
> > > > [7516987.983577] [  58908]     0   58908     7089      689    98304        0          -998 cron
> > > > [7516987.983580] [  58910]     0   58910    16235     5576   163840        0          -998 supervisord
> > > > [7516987.983590] [  59620]     0   59620    18074     1395   188416        0          -998 sshd
> > > > [7516987.983594] [  59622]     0   59622    18680     6679   188416        0          -998 python
> > > > [7516987.983598] [  59624]     0   59624  1859266     5161   548864        0          -998 odin-agent
> > > > [7516987.983600] [  59625]     0   59625   707223     9248   983040        0          -998 filebeat
> > > > [7516987.983604] [  59627]     0   59627   416433    64239   774144        0          -998 odin-log-agent
> > > > [7516987.983607] [  59631]     0   59631   180671    15012   385024        0          -998 python3
> > > > [7516987.983612] [  61396]     0   61396   791287     3189   352256        0          -998 client
> > > > [7516987.983615] [  61641]     0   61641  1844642    29089   946176        0          -998 client
> > > > [7516987.983765] [   9236]     0    9236     2642      467    53248        0          -998 php_scanner
> > > > [7516987.983911] [  42898]     0   42898    15543      838   167936        0          -998 su
> > > > [7516987.983915] [  42900]  1000   42900     3673      867    77824        0          -998 exec_script_vr2
> > > > [7516987.983918] [  42925]  1000   42925    36475    19033   335872        0          -998 python
> > > > [7516987.983921] [  57146]  1000   57146     3673      848    73728        0          -998 exec_script_J2p
> > > > [7516987.983925] [  57195]  1000   57195   186359    22958   491520        0          -998 python2
> > > > [7516987.983928] [  58376]  1000   58376   275764    14402   290816        0          -998 rosmaster
> > > > [7516987.983931] [  58395]  1000   58395   155166     4449   245760        0          -998 rosout
> > > > [7516987.983935] [  58406]  1000   58406 18285584  3967322 37101568        0          -998 data_sim
> > > > [7516987.984221] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=3aa16c9482ae3a6f6b78bda68a55d32c87c99b985e0f11331cddf05af6c4d753,mems_allowed=0-1,oom_memcg=/kubepods/podf1c273d3-9b36-11ea-b3df-246e9693c184,task_memcg=/kubepods/podf1c273d3-9b36-11ea-b3df-246e9693c184/1f246a3eeea8f70bf91141eeaf1805346a666e225f823906485ea0b6c37dfc3d,task=pause,pid=5740,uid=0
> > > > [7516987.984254] Memory cgroup out of memory: Killed process 5740 (pause) total-vm:1028kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB
> > > > [7516988.092344] oom_reaper: reaped process 5740 (pause), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
> > > >
> > > > We can find that the first scanned process, 5740 (pause), was killed,
> > > > but its rss is only one page. That is because, when we calculate the
> > > > oom badness in oom_badness(), we always ignore negative points and
> > > > convert all of them to 1. Now, as the oom_score_adj of all the
> > > > processes in this targeted memcg has the same value, -998, the points
> > > > of these processes are all negative. As a result, the first scanned
> > > > process is killed.
> > >
> > > Such a large bias can skew results quite considerably.
> >
> > Right.
> > Please refer to the kubernetes doc[1] for more information about this
> > large bias.
> >
> > [1]. https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/
> >
> > > > The oom_score_adj (-998) in this memcg is set by kubelet, because it
> > > > is a Guaranteed pod, which has higher priority to prevent it from
> > > > being killed by a system oom.
> > >
> > > This is really interesting! I assume that the oom_score_adj is set to
> > > protect from the global oom situation right?
> >
> > Right. See also the kubernetes doc.
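
To make the clamping described above concrete, here is a condensed sketch
of the pre-patch scoring path, modeled on mm/oom_kill.c around v5.7; the
eligibility checks (OOM_SCORE_ADJ_MIN, unkillable tasks, find_lock_task_mm)
are omitted and details may differ slightly across kernel versions:

/*
 * Condensed sketch of the pre-patch oom_badness(), not the verbatim
 * kernel function. @totalpages is the usable memory of the oom domain
 * (here the memcg limit), in pages.
 */
unsigned long oom_badness(struct task_struct *p, unsigned long totalpages)
{
	long points, adj;

	adj = (long)p->signal->oom_score_adj;

	/* Base score: resident set + swap entries + page tables, in pages. */
	points = get_mm_rss(p->mm) +
		 get_mm_counter(p->mm, MM_SWAPENTS) +
		 mm_pgtables_bytes(p->mm) / PAGE_SIZE;

	/* oom_score_adj biases the score by per-mille of usable memory. */
	adj *= totalpages / 1000;
	points += adj;

	/* The problematic clamp: every negative score collapses to 1. */
	return points > 0 ? points : 1;
}

Plugging in the numbers from the report above (assuming 4 KiB pages):
totalpages = 16777216kB / 4kB = 4194304, so the -998 bias is
-998 * (4194304 / 1000) = -4185612 pages, roughly 16GB. Even data_sim, with
an rss of 3967322 pages plus about 9058 pages of page tables, scores
3976380 - 4185612 < 0 and is clamped to 1, exactly the same score as the
single resident page of pause, so scan order alone picks the victim.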
> >
> > > I am struggling to
> > > understand what the expected behavior is when the oom is internal to
> > > such a group though. Is killing a single task from such a group a
> > > sensible choice? I am not really familiar with kubelet but can it cope
> > > with data_sim going away from under it while the rest would still run?
> > > Wouldn't it make more sense to simply tear down the whole thing?
> >
> > There are two containers in one kubernetes pod, one of which is a
> > pause-container, which has only one process - the pause, which is
> > managing the netns - and the other is the docker-init-container, in
> > which all the other processes are running.
> > Once the pause process is killed, the kubelet will rebuild all the
> > containers in this pod, while if one of the processes in the
> > docker-init-container is killed, the kubelet will only try to re-run it.
> > So tearing down the whole thing is more costly than re-running a single
> > process.
> > I'm not that familiar with kubernetes either; that is my understanding.
>
> Thanks for the clarification!
>
> [...]
> > > oom_score has very coarse units because it maps all the consumed
> > > memory onto a 0-1000 scale, effectively per-mille of the usable
> > > memory. oom_score_adj acts on top of that as a bias. This is
> > > exported to the userspace and I do not think we can change that (see
> > > Documentation/filesystems/proc.rst), unfortunately.
> >
> > In this doc, I only find that oom_score and oom_score_adj are exposed
> > to the userspace.
> > This patch only changes oom_control->chosen_points, which is for oom
> > internal use only.
> > So I see no reason we can't change oom_control->chosen_points.
>
> Unless I am misreading the patch, you are allowing negative values to be
> returned from oom_badness(), and that is used by proc_oom_score, which is
> exported to the userspace.

Thanks for pointing it out. I missed it.

--
Thanks
Yafang
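
P.S. For reference, the userspace hook Michal points at reads roughly as
follows; this is a condensed sketch of proc_oom_score() from fs/proc/base.c
of that era, with locking and error handling omitted:

static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
			  struct pid *pid, struct task_struct *task)
{
	unsigned long totalpages = totalram_pages() + total_swap_pages;
	unsigned long points = 0;

	/*
	 * oom_badness() feeds directly into the 0-1000 value shown in
	 * /proc/<pid>/oom_score; a negative return value would wrap
	 * around in this unsigned arithmetic.
	 */
	points = oom_badness(task, totalpages) * 1000 / totalpages;
	seq_printf(m, "%lu\n", points);
	return 0;
}

So oom_badness() is not purely internal: any change to its return
convention is visible through /proc/<pid>/oom_score, which is why letting
it go negative needs a matching fix on the /proc side.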