From: Souptick Joarder <jrdr.linux@gmail.com>
To: Yafang Shao <laoar.shao@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Linux-MM <linux-mm@kvack.org>, Roman Gushchin <guro@fb.com>,
Randy Dunlap <rdunlap@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@suse.com>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Subject: Re: [PATCH] mm, memcg: skip killing processes under memcg protection at first scan
Date: Sun, 18 Aug 2019 22:41:06 +0530
Message-ID: <CAFqt6zZOeoK0s6gP_-me1fJ_ymRN=QXj3mfKXNQ-i5_coK21iQ@mail.gmail.com>
In-Reply-To: <1566102294-14803-1-git-send-email-laoar.shao@gmail.com>
On Sun, Aug 18, 2019 at 9:55 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> In the current memory.min design, when the system runs out of free
> memory it goes straight to OOM instead of reclaiming the reclaimable
> pages protected by memory.min. Under this condition, the OOM killer
> may then kill processes inside the very memcgs that memory.min is
> supposed to protect, which is counter-intuitive.
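
Aside for context, not part of the patch: memory.min is the cgroup v2
hard memory protection knob, and from userspace it is just a file in
the memcg's directory. A minimal sketch of setting it, where the mount
point and the "protected" memcg name are illustrative assumptions:

	/* Illustrative only: give the "protected" memcg 512M of memory.min. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const char buf[] = "512M";	/* memparse accepts K/M/G suffixes */
		int fd = open("/sys/fs/cgroup/protected/memory.min", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, buf, sizeof(buf) - 1) < 0)
			perror("write");
		close(fd);
		return 0;
	}
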
> To make this more reasonable, this patch changes the OOM killer to do
> a two-round scan: the first round skips processes under memcg
> protection, and only if that round cannot kill any process does the
> killer rescan all processes.
>
> Regarding the overhead this change may add, I don't think it will be
> a problem, because the second round only happens when the system is
> under memory pressure and the OOM killer cannot find any suitable
> victim outside of memcg protection.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
> include/linux/memcontrol.h | 6 ++++++
> mm/memcontrol.c | 16 ++++++++++++++++
> mm/oom_kill.c | 23 +++++++++++++++++++++--
> 3 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 44c4146..58bd86b 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -337,6 +337,7 @@ static inline bool mem_cgroup_disabled(void)
>
> enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
> struct mem_cgroup *memcg);
> +int task_under_memcg_protection(struct task_struct *p);
>
> int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> gfp_t gfp_mask, struct mem_cgroup **memcgp,
> @@ -813,6 +814,11 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
> return MEMCG_PROT_NONE;
> }
>
> +static inline int task_under_memcg_protection(struct task_struct *p)
> +{
> + return 0;
> +}
> +
> static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> gfp_t gfp_mask,
> struct mem_cgroup **memcgp,
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index cdbb7a8..c4d8e53 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6030,6 +6030,22 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
> return MEMCG_PROT_NONE;
> }
>
> +int task_under_memcg_protection(struct task_struct *p)
> +{
> + struct mem_cgroup *memcg;
> + int protected;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_task(p);
> + if (memcg != root_mem_cgroup && memcg->memory.min)
> + protected = 1;
> + else
> + protected = 0;
> + rcu_read_unlock();
> +
> + return protected;
I think returning a bool type would be more appropriate.
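Something like this untested sketch, same logic with a bool return:

	bool task_under_memcg_protection(struct task_struct *p)
	{
		struct mem_cgroup *memcg;
		bool protected;

		rcu_read_lock();
		/* Protected iff the task's memcg is not root and sets memory.min. */
		memcg = mem_cgroup_from_task(p);
		protected = memcg != root_mem_cgroup && memcg->memory.min;
		rcu_read_unlock();

		return protected;
	}

(The declaration in memcontrol.h and the !CONFIG_MEMCG stub would need
the same bool treatment.)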
> +}
> +
> /**
> * mem_cgroup_try_charge - try charging a page
> * @page: page to charge
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index eda2e2a..259dd2c 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -368,11 +368,30 @@ static void select_bad_process(struct oom_control *oc)
> mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc);
> else {
> struct task_struct *p;
> + int memcg_check = 0;
> + int memcg_skip = 0;
> + int selected = 0;
>
> rcu_read_lock();
> - for_each_process(p)
> - if (oom_evaluate_task(p, oc))
> +retry:
> + for_each_process(p) {
> + if (!memcg_check && task_under_memcg_protection(p)) {
> + memcg_skip = 1;
> + continue;
> + }
> + selected = oom_evaluate_task(p, oc);
> + if (selected)
> break;
> + }
> +
> + if (!selected) {
> + if (memcg_skip) {
> + if (!oc->chosen || oc->chosen == (void *)-1UL) {
> + memcg_check = 1;
> + goto retry;
> + }
> + }
> + }
> rcu_read_unlock();
> }
> }
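
For readers not steeped in oom_kill.c: oom_evaluate_task() returns
nonzero, and sets oc->chosen to (void *)-1UL, when it aborts because an
existing victim is already exiting; !oc->chosen means no victim was
found. With more descriptive names the flow above reads roughly like
this (an illustrative restatement, not the patch itself):

	bool first_round = true, skipped = false;
	int stop = 0;

retry:
	for_each_process(p) {
		if (first_round && task_under_memcg_protection(p)) {
			skipped = true;	/* round 1: spare protected memcgs */
			continue;
		}
		stop = oom_evaluate_task(p, oc);
		if (stop)
			break;	/* an existing victim is already exiting */
	}

	/* Rescan only if round 1 skipped someone yet chose no victim. */
	if (!stop && skipped &&
	    (!oc->chosen || oc->chosen == (void *)-1UL)) {
		first_round = false;	/* round 2: consider every task */
		goto retry;
	}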
> --
> 1.8.3.1
>
>