From: "程垲涛 Chengkaitao Cheng" <chengkaitao@didiglobal.com>
To: "roman.gushchin@linux.dev" <roman.gushchin@linux.dev>
Cc: Tao pilgrim <pilgrimtao@gmail.com>,
	"tj@kernel.org" <tj@kernel.org>,
	"lizefan.x@bytedance.com" <lizefan.x@bytedance.com>,
	"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"corbet@lwn.net" <corbet@lwn.net>,
	"shakeelb@google.com" <shakeelb@google.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@suse.com>,
	"songmuchun@bytedance.com" <songmuchun@bytedance.com>,
	"cgel.zte@gmail.com" <cgel.zte@gmail.com>,
	"ran.xiaokai@zte.com.cn" <ran.xiaokai@zte.com.cn>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
	"zhengqi.arch@bytedance.com" <zhengqi.arch@bytedance.com>,
	"ebiederm@xmission.com" <ebiederm@xmission.com>,
	"Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
	"chengzhihao1@huawei.com" <chengzhihao1@huawei.com>,
	"haolee.swjtu@gmail.com" <haolee.swjtu@gmail.com>,
	"yuzhao@google.com" <yuzhao@google.com>,
	"willy@infradead.org" <willy@infradead.org>,
	"vasily.averin@linux.dev" <vasily.averin@linux.dev>,
	"vbabka@suse.cz" <vbabka@suse.cz>,
	"surenb@google.com" <surenb@google.com>,
	"sfr@canb.auug.org.au" <sfr@canb.auug.org.au>,
	"mcgrof@kernel.org" <mcgrof@kernel.org>,
	"sujiaxun@uniontech.com" <sujiaxun@uniontech.com>,
	"feng.tang@intel.com" <feng.tang@intel.com>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	Bagas Sanjaya <bagasdotme@gmail.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: Re: [PATCH] mm: memcontrol: protect the memory in cgroup from being oom killed
Date: Thu, 1 Dec 2022 07:49:04 +0000
Message-ID: <5019F6D4-D341-4A5E-BAA1-1359A090114A@didiglobal.com>
In-Reply-To: <E5A5BCC3-460E-4E81-8DD3-88B4A2868285@didiglobal.com>

At 2022-12-01 07:29:11, "Roman Gushchin" <roman.gushchin@linux.dev> wrote:
>On Wed, Nov 30, 2022 at 03:01:58PM +0800, chengkaitao wrote:
>> From: chengkaitao <pilgrimtao@gmail.com>
>> 
>> We created a new interface <memory.oom.protect> for memory. If the
>> OOM killer is invoked under a parent memory cgroup, and the memory
>> usage of a child cgroup is within its effective oom.protect boundary,
>> the cgroup's tasks won't be OOM killed unless there are no
>> unprotected tasks in other child cgroups. It draws on the logic of
>> <memory.min/low> in the inheritance relationship.
>> 
>> It has the following advantages:
>> 1. We gain the ability to protect more important processes when a
>> memcg OOM killer is invoked. oom.protect only takes effect in the
>> local memcg and does not affect the OOM killer of the host.
>> 2. Historically, we could only use oom_score_adj to control a group
>> of processes, which requires that all processes in the cgroup share a
>> common parent process: we have to set that parent's oom_score_adj
>> before it forks all of its children, so it is very difficult to apply
>> in other situations. oom.protect has no such restriction, so we can
>> protect a cgroup of processes more easily, and the cgroup can keep
>> some memory even if the OOM killer has to be called.
>
>It reminds me of our attempts to provide a more sophisticated
>cgroup-aware oom killer.

As you said, I also like simple strategies and concise code, so in the
design of oom.protect we reuse the evaluation method of oom_score and
draw on the logic of <memory.min/low> for the inheritance relationship.
memory.min/low have been proven in practice for a long time; I did this
to reduce the burden on the kernel.
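
To make the inheritance idea concrete, here is a rough sketch (not the
patch's actual code) of how a child's effective oom.protect could be
capped by its parent's effective value, in the same spirit as
memory.min/low; the proportional split and the numbers are only
illustrative assumptions:

  # Illustrative only: cap a child's declared protection by a
  # proportional share of what its parent can pass down.
  def effective_protect(declared, parent_effective, siblings_declared_sum):
      if siblings_declared_sum == 0:
          return 0
      share = parent_effective * declared // siblings_declared_sum
      return min(declared, share)

  MiB = 1 << 20
  # Parent can pass down 512M; children declare 400M and 300M.
  print(effective_protect(400 * MiB, 512 * MiB, 700 * MiB) // MiB)  # ~292
  print(effective_protect(300 * MiB, 512 * MiB, 700 * MiB) // MiB)  # ~219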

>The problem is that the decision of which process(es) to kill or
>preserve is individual to a specific workload (and can even be
>time-dependent for a given workload).

Killing the process with the heaviest memory footprint is correct, but
it may not be the most appropriate choice. I think the specific process
to kill needs to be decided by the user, and that is the original
intention of the oom_score_adj design.
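
For reference, the per-process knob we have today is
/proc/<pid>/oom_score_adj; a minimal sketch of using it (the -500 value
is just an example) looks like this:

  import os

  # Values range from -1000 (never OOM kill) to +1000 (kill first).
  def set_oom_score_adj(pid, value):
      with open(f"/proc/{pid}/oom_score_adj", "w") as f:
          f.write(str(value))

  set_oom_score_adj(os.getpid(), -500)  # example value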

>So it's really hard to come up with an in-kernel
>mechanism which is at the same time flexible enough to work for the majority
>of users and reliable enough to serve as the last oom resort measure (which
>is the basic goal of the kernel oom killer).
>
Our goal is to find a method that is minimally intrusive to the
existing mechanisms of the kernel, and that offers a more reasonable
supplement to, or alternative for, the limitations of oom_score_adj.

>Previously the consensus was to keep the in-kernel oom killer dumb and reliable
>and implement complex policies in userspace (e.g. systemd-oomd etc).
>
>Is there a reason why such approach can't work in your case?

I think that as kernel developers we should try our best to provide
users with simpler and more powerful interfaces. The current oom_score
mechanism clearly has many limitations. Users need to do a lot of timed
loop detection in userspace to accomplish what the oom_score mechanism
was meant to do, or develop a new mechanism just to bypass the
imperfect oom_score mechanism. This is inefficient and forces extra
work onto users.
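
By "timed loop detection" I mean something like the userspace sketch
below: the cgroup path and interval are made up, the files are the
standard cgroup v2 and procfs interfaces, and the loop has to run
forever because tasks forked between passes start out unprotected:

  import time

  CGROUP = "/sys/fs/cgroup/critical"  # hypothetical cgroup

  while True:
      # Re-apply protection to every task currently in the cgroup.
      with open(f"{CGROUP}/cgroup.procs") as f:
          pids = f.read().split()
      for pid in pids:
          try:
              with open(f"/proc/{pid}/oom_score_adj", "w") as g:
                  g.write("-1000")
          except FileNotFoundError:
              pass  # task exited between listing and writing
      time.sleep(1)  # arbitrary polling interval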

Thanks for your comment!


Thread overview: 21+ messages
2022-11-30  7:01 chengkaitao
2022-11-30  8:41 ` Bagas Sanjaya
2022-11-30 11:33   ` Tao pilgrim
2022-11-30 12:43     ` Bagas Sanjaya
2022-11-30 13:25       ` 程垲涛 Chengkaitao Cheng
2022-11-30 15:46     ` 程垲涛 Chengkaitao Cheng
2022-11-30 16:27       ` Michal Hocko
2022-12-01  4:52         ` 程垲涛 Chengkaitao Cheng
2022-12-01  7:49           ` 程垲涛 Chengkaitao Cheng [this message]
2022-12-01  9:02             ` Michal Hocko
2022-12-01 13:05               ` 程垲涛 Chengkaitao Cheng
2022-12-01  8:49           ` Michal Hocko
2022-12-01 10:52             ` 程垲涛 Chengkaitao Cheng
2022-12-01 12:44               ` Michal Hocko
2022-12-01 13:08                 ` Michal Hocko
2022-12-01 14:30                   ` 程垲涛 Chengkaitao Cheng
2022-12-01 15:17                     ` Michal Hocko
2022-12-02  8:37                       ` 程垲涛 Chengkaitao Cheng
2022-11-30 13:15 ` Michal Hocko
2022-11-30 23:29 ` Roman Gushchin
2022-12-01 20:18   ` Mina Almasry
