From: Yafang Shao <laoar.shao@gmail.com>
To: Qian Cai <cai@lca.pw>
Cc: Michal Hocko <mhocko@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linux MM <linux-mm@kvack.org>,
David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v3] mm, memcg: fix the stupid OOM killer when shrinking memcg hard limit
Date: Thu, 28 Nov 2019 12:59:56 +0800 [thread overview]
Message-ID: <CALOAHbB7jhmrUFSL83H6U0=_9EdESkBNGGuY5BKEdujBBPs7+g@mail.gmail.com> (raw)
In-Reply-To: <4807E3C5-990F-4A5E-B7D9-22357A4B2845@lca.pw>
On Thu, Nov 28, 2019 at 12:28 PM Qian Cai <cai@lca.pw> wrote:
>
>
>
> > On Nov 27, 2019, at 11:17 PM, Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > When there are no more processes in a memcg (e.g., due to OOM
> > group), we can still have file pages in the page cache.
> >
> > If these pages are protected by memory.min, they can't be reclaimed.
> > Especially if there won't be another process in this memcg and the memcg
> > is kept online, we do want to drop these pages from the page cache.
> >
> > By dropping these page caches we can prevent reclaimers (e.g., kswapd or
> > direct reclaim) from scanning and reclaiming pages from all memcgs in the
> > system - under memory pressure the reclaimers try to reclaim fairly from
> > all memcgs in the system.
> >
> > By setting the hard limit of such a memcg to 0, we can drop the page
> > cache of such memcgs. Unfortunately, this may invoke the OOM killer
> > and generate a lot of output. The OOM output is not expected by an admin
> > who wants to drop these caches and knows that there are no processes in
> > this memcg anymore.
> >
> > Therefore, if a memcg is not populated, we should not invoke the OOM
> > killer - there is nothing to kill. The next time a new process is
> > started in the memcg and the "max" is still below usage, the OOM killer
> > will be invoked and the new process will be killed.
> >
> > [ The above commit log was contributed by David ]
> >
> > What makes this issue worse is what happens once there are killable tasks
> > and the OOM killer has killed the last of them. At that point nr_reclaims
> > is already 0 and drained is already true, so the OOM killer keeps being
> > invoked even though it knows it has already killed the last task and there
> > is nothing left to kill, which is pointless behavior.
> >
> > One might worry that admins will not see that the memcg went OOM due to
> > the limit change. But this is not an issue: an admin who changes the limit
> > must check the result of the change anyway, and reading
> > memory.{max, current, stat} provides all the needed information.
> >
> > Cc: David Hildenbrand <david@redhat.com>
> > Nacked-by: Michal Hocko <mhocko@suse.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> Surely too big a turkey to swallow? The changelog contains unprofessional
> wording, and the patch carries a NACK from one of the maintainers.
>
I'm sorry if there was any unprofessional wording.
I just want to make the kernel more stable.
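
For what it's worth, below is a minimal userspace sketch in C of the admin
workflow described in the changelog above: shrink memory.max of an
(assumed) empty memcg to 0, then check memory.current, memory.stat and
memory.events. The cgroup v2 mount point /sys/fs/cgroup, the group name
"example" and the helper names are illustrative assumptions, not part of
the patch.

/*
 * Sketch: drop the page cache of an assumed-empty memcg by shrinking
 * its hard limit to 0, then verify the result as an admin would.
 * Assumes a cgroup v2 hierarchy at /sys/fs/cgroup and a memcg "example".
 */
#include <stdio.h>

/* Write a value into a cgroup control file, e.g. memory.max. */
static int write_cg_file(const char *path, const char *val)
{
	int ret = 0;
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	if (fputs(val, f) == EOF)
		ret = -1;
	if (fclose(f) == EOF)
		ret = -1;
	if (ret)
		perror(path);
	return ret;
}

/* Print the contents of a cgroup control file. */
static void show_cg_file(const char *path)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	printf("==> %s\n", path);
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
}

int main(void)
{
	const char *cg = "/sys/fs/cgroup/example";	/* hypothetical memcg */
	char path[256];

	/* Shrink the hard limit to 0 to drop the remaining page cache. */
	snprintf(path, sizeof(path), "%s/memory.max", cg);
	if (write_cg_file(path, "0"))
		return 1;

	/* The admin then checks the result of the limit change. */
	snprintf(path, sizeof(path), "%s/memory.current", cg);
	show_cg_file(path);
	snprintf(path, sizeof(path), "%s/memory.stat", cg);
	show_cg_file(path);
	snprintf(path, sizeof(path), "%s/memory.events", cg);
	show_cg_file(path);

	return 0;
}

With the change discussed above, such a write on an unpopulated memcg
should simply drop the reclaimable cache without producing OOM killer
output.
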
Thanks
Yafang