linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Roman Gushchin <guro@fb.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH v2] mm: memcg: update memcg OOM messages on cgroup2
Date: Tue, 7 Aug 2018 19:54:10 +0200	[thread overview]
Message-ID: <20180807175410.GI10003@dhcp22.suse.cz> (raw)
In-Reply-To: <20180807150944.GA3978217@devbig004.ftw2.facebook.com>

On Tue 07-08-18 08:09:44, Tejun Heo wrote:
> On Tue, Aug 07, 2018 at 09:13:32AM +0200, Michal Hocko wrote:
> > > * It's the same information as memory.stat but would be in a different
> > >   format and will likely be a bit of an eyeful.
> > >
> > > * It can easily become a really long line.  Each kernel log can be ~1k
> > >   in length and there can be other limits in the log pipeline
> > >   (e.g. netcons).
> > 
> > Are we getting close to those limits?
> 
> Yeah, I think the stats we have can already go close to or over 500
> bytes easily, which is already pushing the netcons udp packet size
> limit.
> 
> > > * The information is already multi-line and cgroup oom kills don't
> > >   take down the system, so there's no need to worry about scroll back
> > >   that much.  Also, not printing recursive info means the output is
> > >   well-bound.
> > 
> > Well, on the other hand you can have a lot of memcgs under OOM and then
> > swamp the log a lot.
> 
> idk, the info dump is already multi-line.  If we have a lot of memcgs
> under OOM, we're already kinda messed up (e.g. we can't tell which
> line is for which oom). 

Well, I am not really worried about interleaved oom reports, because they
are serialized by oom_lock, so that shouldn't be a problem. I just meant
that a lot of memcg ooms will swamp the log, and making each report span
more lines doesn't really help with that.

That being said, I will not push this hard. If there is a general
consensus on this output, I will not stand in the way. But I believe
that a more compact oom report is both nicer and easier to read. At
least from my POV, and I have processed a countless number of those.
-- 
Michal Hocko
SUSE Labs


Thread overview: 14+ messages
2018-08-03 17:57 [PATCH] " Tejun Heo
2018-08-03 20:30 ` Roman Gushchin
2018-08-06 15:55   ` Johannes Weiner
2018-08-06 15:48 ` Johannes Weiner
2018-08-06 16:15 ` [PATCH v2] " Tejun Heo
2018-08-06 16:34   ` Roman Gushchin
2018-08-06 18:08   ` Andrew Morton
2018-08-06 18:19     ` Tejun Heo
2018-09-19 17:17       ` Johannes Weiner
2018-08-06 20:06   ` Michal Hocko
2018-08-06 20:19     ` Tejun Heo
2018-08-07  7:13       ` Michal Hocko
2018-08-07 15:09         ` Tejun Heo
2018-08-07 17:54           ` Michal Hocko [this message]
