From: Michal Hocko <mhocko@kernel.org>
To: PINTU KUMAR <pintu.k@samsung.com>
Cc: akpm@linux-foundation.org, minchan@kernel.org, dave@stgolabs.net,
koct9i@gmail.com, rientjes@google.com, hannes@cmpxchg.org,
penguin-kernel@i-love.sakura.ne.jp, bywxiaobai@163.com,
mgorman@suse.de, vbabka@suse.cz, js1304@gmail.com,
kirill.shutemov@linux.intel.com, alexander.h.duyck@redhat.com,
sasha.levin@oracle.com, cl@linux.com, fengguang.wu@intel.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
cpgs@samsung.com, pintu_agarwal@yahoo.com, pintu.ping@gmail.com,
vishnu.ps@samsung.com, rohit.kr@samsung.com,
c.rajkumar@samsung.com, sreenathd@samsung.com
Subject: Re: [PATCH 1/1] mm: vmstat: Add OOM kill count in vmstat counter
Date: Thu, 8 Oct 2015 16:18:51 +0200 [thread overview]
Message-ID: <20151008141851.GD426@dhcp22.suse.cz> (raw)
In-Reply-To: <023601d1010f$787696b0$6963c410$@samsung.com>
On Wed 07-10-15 20:18:16, PINTU KUMAR wrote:
[...]
> Ok, let me explain the real case that we have experienced.
> In our case, we have a low memory killer in user space itself that is invoked
> based on some memory threshold.
> Something like: below a 100MB threshold it starts killing until free memory
> comes back to 150MB.
> During our long-duration ageing test (more than 72 hours) we observed that
> many applications were killed.
> Now, we were not sure whether the killing happened in user space or kernel space.
> When we checked the kernel logs, it had generated many files, such as
> /var/log/{messages, messages.0, messages.1, messages.2, messages.3, etc.}
> But none of these logs contained kernel OOM messages, although there were some
> LMK kills in user space.
> Then in another round of tests we kept dumping _dmesg_ output to a file after
> each iteration.
> After 3 days of testing, this time we observed that the dmesg dumps contained
> many kernel OOM messages.
I am confused. So you suspect that the OOM report didn't get to
/var/log/messages while it was in dmesg?
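(As an aside, counting OOM reports in such a captured dmesg dump does not have to be done by hand. A throwaway sketch; the patterns below only approximate the kernel's OOM messages, whose exact wording varies between kernel versions:)

```python
import re

# Patterns approximating the kernel OOM killer's two key messages;
# adjust for the kernel version actually under test.
OOM_INVOKED = re.compile(r"invoked oom-killer:")
OOM_KILLED = re.compile(r"Killed process \d+")

def count_oom_events(dmesg_text):
    """Return (invocations, kills) found in a dmesg dump."""
    invoked = killed = 0
    for line in dmesg_text.splitlines():
        if OOM_INVOKED.search(line):
            invoked += 1
        if OOM_KILLED.search(line):
            killed += 1
    return invoked, killed
```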
> Now, this dumping is not feasible every time. And instead of counting manually
> in the log files, we wanted to know the number of OOM kills that happened
> during these tests.
> So we decided to add a counter in /proc/vmstat to track kernel oom kills, and
> monitor it during our ageing test.
>
> Basically, we wanted to tune our user-space LMK for different threshold
> values, so that we can completely avoid the kernel OOM killer.
> Just by looking at this counter, we would be able to tune the LMK threshold
> values without depending on the kernel log messages.
Wouldn't a tracepoint suit you better for this particular use case,
considering that this is a testing environment?
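Consuming a tracepoint from user space needs nothing more than the ftrace files. A minimal sketch, assuming a suitable event exists under events/oom/ (event names differ across kernel versions, so check that directory first; `oom_score_adj_update` is used here purely as an example):

```python
# ftrace mount point; on newer systems it may be /sys/kernel/tracing.
TRACING = "/sys/kernel/debug/tracing"

def enable_oom_tracing(event="oom/oom_score_adj_update"):
    # Turn on a single trace event (needs root and a mounted debugfs).
    with open(f"{TRACING}/events/{event}/enable", "w") as f:
        f.write("1")

def count_trace_events(trace_text, event_name):
    # Trace lines look like:
    #   task-pid  [cpu] flags  timestamp: event_name: fields...
    # so counting occurrences of " event_name:" is enough for tallying.
    return sum(1 for line in trace_text.splitlines()
               if f" {event_name}:" in line)
```

After the run, read the accumulated events from `{TRACING}/trace` (or stream them live from `trace_pipe`) and feed the text to `count_trace_events()`.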
> Also, on most systems /var/log/messages is not present and we just depend on
> the kernel dmesg output, whose buffer is pretty small for longer runs.
> Even if we reduce the loglevel to 4, it may not be able to capture all logs.
Hmm, I would consider a logless system considerably crippled, but I see
your point and I can imagine that especially small devices might try
to save every single byte of storage. Such a system is basically
undebuggable IMO, but it still might be interesting to see OOM killer
traces.
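For completeness, polling such a counter would be trivial, since /proc/vmstat is one "name value" pair per line. A sketch; the `oom_kill` name follows this proposal and is an assumption, so check what your kernel actually exports:

```python
def read_vmstat_counter(text, name="oom_kill"):
    # /proc/vmstat is one "name value" pair per line; return the requested
    # counter, or None if this kernel does not export it.
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key == name:
            return int(value)
    return None

# Usage on a live system (assuming the kernel exports the counter):
#   with open("/proc/vmstat") as f:
#       before = read_vmstat_counter(f.read())
#   ... run one ageing-test iteration, re-read, and diff ...
```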
> > What is even more confusing is the mixing of memcg and global oom
> > conditions. They are really different things. Memcg API will even
> > give you notification about the OOM event.
> >
> Ok, so you are suggesting dividing the oom_kill counter into 2 parts (global &
> memcg)?
> Maybe something like:
> nr_oom_victims
> nr_memcg_oom_victims
You do not need the latter. The memcg interface already provides a
notification API, and if a counter is _really_ needed then it should be
per-memcg, not a global cumulative number.
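For reference, the cgroup-v1 notification works by registering an eventfd against memory.oom_control via cgroup.event_control and then blocking on the eventfd. A sketch (the cgroup path is hypothetical, and os.eventfd needs Python 3.10+):

```python
import os

def parse_oom_control(text):
    # memory.oom_control (cgroup v1) reads like:
    #   oom_kill_disable 0
    #   under_oom 0
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = int(value)
    return fields

def register_oom_notifier(cgroup_path):
    # Register for OOM notifications on a v1 memcg: write
    # "<eventfd> <oom_control fd>" to cgroup.event_control, then block
    # on the eventfd until the memcg hits OOM.
    efd = os.eventfd(0)
    ctl = os.open(f"{cgroup_path}/memory.oom_control", os.O_RDONLY)
    with open(f"{cgroup_path}/cgroup.event_control", "w") as f:
        f.write(f"{efd} {ctl}")
    return efd  # os.eventfd_read(efd) returns once an OOM event fires
```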
--
Michal Hocko
SUSE Labs
Thread overview: 20+ messages
2015-10-01 10:48 Pintu Kumar
2015-10-01 13:29 ` Anshuman Khandual
2015-10-05 6:19 ` PINTU KUMAR
2015-10-01 13:38 ` Michal Hocko
2015-10-05 6:12 ` PINTU KUMAR
2015-10-05 12:22 ` Michal Hocko
2015-10-06 6:59 ` PINTU KUMAR
2015-10-06 15:41 ` Michal Hocko
2015-10-07 14:48 ` PINTU KUMAR
2015-10-08 14:18 ` Michal Hocko [this message]
2015-10-08 16:06 ` PINTU KUMAR
2015-10-08 16:30 ` Michal Hocko
2015-10-09 12:59 ` PINTU KUMAR
2015-10-12 13:33 ` [PATCH 1/1] mm: vmstat: Add OOM victims " Pintu Kumar
2015-10-12 14:28 ` [RESEND PATCH " Pintu Kumar
2015-10-14 3:05 ` David Rientjes
2015-10-14 13:41 ` PINTU KUMAR
2015-10-14 22:04 ` David Rientjes
2015-10-15 14:35 ` PINTU KUMAR
2015-10-12 14:44 ` [PATCH " PINTU KUMAR