Date: Thu, 8 Dec 2022 15:23:56 +0100
From: Michal Hocko <mhocko@suse.com>
To: 程垲涛 Chengkaitao Cheng
Cc: chengkaitao, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org,
    corbet@lwn.net, roman.gushchin@linux.dev, shakeelb@google.com,
    akpm@linux-foundation.org, songmuchun@bytedance.com,
    viro@zeniv.linux.org.uk, zhengqi.arch@bytedance.com,
    ebiederm@xmission.com, Liam.Howlett@oracle.com,
    chengzhihao1@huawei.com, haolee.swjtu@gmail.com, yuzhao@google.com,
    willy@infradead.org, vasily.averin@linux.dev, vbabka@suse.cz,
    surenb@google.com, sfr@canb.auug.org.au, mcgrof@kernel.org,
    sujiaxun@uniontech.com, feng.tang@intel.com, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2] mm: memcontrol: protect the memory in cgroup from being oom killed
In-Reply-To: <3E260DAC-2E2F-48B7-98BB-036EF0A423DC@didiglobal.com>

On Thu 08-12-22 14:07:06, 程垲涛 Chengkaitao Cheng wrote:
> At 2022-12-08 16:14:10, "Michal Hocko" wrote:
> >On Thu 08-12-22 07:59:27, 程垲涛 Chengkaitao Cheng wrote:
> >> At 2022-12-08 15:33:07, "Michal Hocko" wrote:
> >> >On Thu 08-12-22 11:46:44, chengkaitao wrote:
> >> >> From: chengkaitao
> >> >>
> >> >> We created a new interface for memory. If the OOM killer is invoked
> >> >> under a parent memory cgroup and the memory usage of a child cgroup
> >> >> is within its effective oom.protect boundary, the cgroup's tasks
> >> >> won't be OOM killed unless there are no unprotected tasks in the
> >> >> other children cgroups. It draws on the logic of in the inheritance
> >> >> relationship.
> >> >>
> >> >> It has the following advantages:
> >> >> 1. We have the ability to protect more important processes when a
> >> >> memcg OOM killer is invoked. The oom.protect only takes effect in
> >> >> the local memcg and does not affect the OOM killer of the host.
> >> >> 2. Historically, we could often use oom_score_adj to control a group
> >> >> of processes, but it requires that all processes in the cgroup share
> >> >> a common parent process, and we have to set that parent process's
> >> >> oom_score_adj before it forks all the children. That makes it very
> >> >> difficult to apply in other situations. oom.protect has no such
> >> >> restrictions, so we can protect a cgroup of processes more easily.
> >> >> The cgroup can keep some memory even if the OOM killer has to be
> >> >> called.
> >> >>
> >> >> Signed-off-by: chengkaitao
> >> >> ---
> >> >> v2: Modify the formula of the process request memcg protection quota.
> >> >
> >> >The new formula doesn't really address the concerns expressed
> >> >previously. Please read my feedback carefully again and follow up
> >> >with questions if something is not clear.
> >>
> >> The previous discussion was quite scattered. Can you help me summarize
> >> your concerns again?
> >
> >The most important part is
> >http://lkml.kernel.org/r/Y4jFnY7kMdB8ReSW@dhcp22.suse.cz
> >: Let me just emphasise that we are talking about a fundamental disconnect.
> >: Rss based accounting has been used for the OOM killer selection because
> >: the memory gets unmapped and _potentially_ freed when the process goes
> >: away. Memcg charges are bound to the object life time and, as said, in
> >: many cases there is no direct relation with any process life time.
>
> We need to discuss the relationship between a memcg's memory and a
> process's memory:
>
> task_usage  = task_anon (rss_anon) + task_mapped_file (rss_file)
>             + task_mapped_share (rss_share) + task_pgtables + task_swapents
>
> memcg_usage = memcg_anon + memcg_file + memcg_pgtables + memcg_share
>             = all_task_anon + all_task_mapped_file + all_task_mapped_share
>             + all_task_pgtables + unmapped_file + unmapped_share
>             = all_task_usage + unmapped_file + unmapped_share - all_task_swapents

You are missing all the kernel charged objects (aka __GFP_ACCOUNT
allocations, resp. SLAB_ACCOUNT for slab caches). Depending on the
workload this can really be a very noticeable portion. So no, this is
not just about unmapped cache or shm.
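As a rough illustration (not code from this patch; the struct and cache
names below are invented for the example), this is how kernel-side
allocations get charged to the allocating task's memcg without ever
showing up in that task's rss:

/* Illustration only: how kernel objects get charged to a memcg. */
#include <linux/slab.h>
#include <linux/gfp.h>
#include <linux/types.h>
#include <linux/errno.h>

/* Hypothetical object type, just for the example. */
struct foo_record {
        u64 key;
        u64 value;
};

static struct kmem_cache *foo_cache;

static int foo_cache_init(void)
{
        /*
         * SLAB_ACCOUNT: every object from this cache is charged to the
         * memcg of the task that allocates it, for as long as the
         * object lives, independently of that task's lifetime.
         */
        foo_cache = kmem_cache_create("foo_record",
                                      sizeof(struct foo_record),
                                      0, SLAB_ACCOUNT, NULL);
        return foo_cache ? 0 : -ENOMEM;
}

static void *foo_alloc_buffer(size_t size)
{
        /*
         * GFP_KERNEL_ACCOUNT == GFP_KERNEL | __GFP_ACCOUNT: the buffer
         * is charged to the current task's memcg; killing the task does
         * not by itself release the charge.
         */
        return kmalloc(size, GFP_KERNEL_ACCOUNT);
}

Such charges are only released when the objects themselves are freed,
which is exactly what an rss based discount cannot see.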
> >That is to the per-process discount based on rss or any per-process
> >memory metrics.
> >
> >Another really important question is the actual configurability. The
> >hierarchical protection has to be enforced, and that means that, same
> >as the memory reclaim protection, it has to be enforced top-to-bottom
> >in the cgroup hierarchy. That makes the oom protection rather
> >non-trivial to configure without having a good picture of a larger
> >part of the cgroup hierarchy, as it cannot be tuned based on reclaim
> >feedback.
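To spell out what top-to-bottom enforcement means, here is a simplified
userspace sketch (my illustration, not the formula from the patch): a
cgroup's configured protection can only be honoured up to what its
parent effectively grants, so the effective value has to be derived
from the root down, much like the effective memory.low calculation.

#include <stdio.h>

struct cgroup_sketch {
        const char *name;
        unsigned long long oom_protect; /* configured value, bytes */
        unsigned long long eprotect;    /* effective value, derived */
        struct cgroup_sketch *parent;
};

static unsigned long long min_ull(unsigned long long a,
                                  unsigned long long b)
{
        return a < b ? a : b;
}

/* The parent's effective value must already be known (top-down order). */
static void compute_eprotect(struct cgroup_sketch *cg)
{
        if (!cg->parent) {
                cg->eprotect = cg->oom_protect;
                return;
        }
        /* A child can never be better protected than its parent allows. */
        cg->eprotect = min_ull(cg->oom_protect, cg->parent->eprotect);
}

int main(void)
{
        struct cgroup_sketch root = { "root", 8ULL << 30, 0, NULL };
        struct cgroup_sketch workload = { "workload", 16ULL << 30, 0, &root };

        compute_eprotect(&root);
        compute_eprotect(&workload);

        /* The child asks for 16G but can only ever get the parent's 8G. */
        printf("%s: effective protection = %llu bytes\n",
               workload.name, workload.eprotect);
        return 0;
}

A real implementation would additionally have to distribute the
parent's budget among siblings, and that is exactly the part which is
hard to get right without a view of a larger portion of the tree.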
> There is an essential difference between reclaim and oom killer.

The oom killer is memory reclaim of the last resort. So yes, there is
some difference, but fundamentally it is about releasing some memory.
And long term we have learned that the more clever it tries to be, the
more likely corner cases can happen. It is simply impossible to know
the best candidate, so this is just a best effort. We try to aim for
predictability at least.

> The reclaim
> cannot be directly perceived by users,

I very strongly disagree with this statement. First, direct reclaim is
a direct source of latencies because the work is done on behalf of the
allocating process. There are side effects possible as well because
refaults have their cost too.

> so memcg needs to count indicators
> similar to pgscan_(kswapd/direct). However, when the user process is killed
> by the oom killer, users can clearly perceive and count it (such as the
> number of restarts of a certain type of process). At the same time, the
> kernel also has memory.events to count some information about the oom
> killer, which can also be used for feedback adjustment.

Yes, we have those metrics already. I suspect I haven't made myself
clear. I didn't say there are no measures to see how the oom killer
behaves. What I have said is that I _suspect_ the oom protection would
be really hard to configure correctly because, unlike memory reclaim,
which happens during normal operation, oom is a relatively rare event
and it is quite hard to use it for any feedback mechanism.

But I am really open to being convinced otherwise, and this is in fact
what I have been asking for since the beginning. I would love to see
some examples of a reasonable configuration for a practical usecase.
It is one thing to say that you can set the protection to a certain
value and a different one to have a way to determine that value. See
my point?
-- 
Michal Hocko
SUSE Labs