From: Yafang Shao <laoar.shao@gmail.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm, memcg: clear page protection when memcg oom group happens
Date: Mon, 25 Nov 2019 20:17:15 +0800 [thread overview]
Message-ID: <CALOAHbCdE+xhVG4JNPf2t=s7VAfeb4F5bO2A+BCcwwcipkFXWQ@mail.gmail.com> (raw)
In-Reply-To: <20191125115409.GJ31714@dhcp22.suse.cz>
On Mon, Nov 25, 2019 at 7:54 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 25-11-19 19:37:59, Yafang Shao wrote:
> > On Mon, Nov 25, 2019 at 7:08 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 25-11-19 05:14:53, Yafang Shao wrote:
> > > > We set memory.oom.group so that all processes in this memcg are killed by
> > > > the OOM killer to free more pages. In that case, it doesn't make sense to
> > > > keep protecting the pages with memory.{min,low} if they are set.
> > >
> > > I do not see why. What does group OOM killing have to do with
> > > the reclaim protection? What is the actual problem you are trying to
> > > solve?
> > >
> >
> > The cgroup is treated as an indivisible workload when memory.oom.group
> > is set and the OOM killer is trying to kill a process in this cgroup.
>
> Yes this is true.
>
> > We set memory.oom.group to guarantee the workload's integrity; now
> > that the processes are all killed, why keep the page cache here?
>
> Because an administrator has configured the reclaim protection in a
> certain way and hopefully had a good reason to do that. We are not going
> to override that configuration just because the OOM killer was invoked
> and killed tasks in that memcg. The workload might get restarted, and it
> would suddenly run under different constraints, which is not
> expected.
>
> In short, the kernel should never silently change a configuration made
> by an administrator.
Understood.
So what about the changes below? We don't override the admin setting,
but we reclaim the page cache from the memcg once it has been OOM-killed.
Something like,
mem_cgroup_protected()
{
	...
+	if (!cgroup_is_populated(memcg->css.cgroup) &&
+	    mem_cgroup_under_oom_group_kill(memcg))
+		return MEMCG_PROT_NONE;
+
	usage = page_counter_read(&memcg->memory);
	if (!usage)
		return MEMCG_PROT_NONE;
	...
}