From: Jianguo Wu <wujianguo@huawei.com>
To: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [question] how to figure out OOM reason? should dump slab/vmalloc info when OOM?
Date: Tue, 11 Feb 2014 12:06:48 +0800 [thread overview]
Message-ID: <52F9A1D8.7040301@huawei.com> (raw)
In-Reply-To: <alpine.DEB.2.02.1401211236520.10355@chino.kir.corp.google.com>
On 2014/1/22 4:41, David Rientjes wrote:
> On Tue, 21 Jan 2014, Jianguo Wu wrote:
>
>>> The problem is that slabinfo becomes excessively verbose and dumping it
>>> all to the kernel log often causes important messages to be lost.
>>> This is why we control things like the tasklist dump with a VM sysctl. It
>>> would be possible to dump, say, the top ten slab caches with the highest
>>> memory usage, but it will only be helpful for slab leaks. Typically there
>>> are better debugging tools available than analyzing the kernel log; if you
>>> see unusually high slab memory in the meminfo dump, you can enable it.
>>>
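The VM sysctl gating the tasklist dump mentioned above is vm.oom_dump_tasks; a minimal sketch of enabling it persistently (the drop-in path and filename are assumptions, check your distribution's sysctl.d conventions):

```
# /etc/sysctl.d/90-oom-debug.conf (sketch): dump the eligible task
# list to the kernel log whenever the OOM killer fires.
vm.oom_dump_tasks = 1
```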
>>
>> But once an OOM has happened, we can only use the kernel log; the
>> slab/vmalloc info in /proc is stale by then. Maybe we could dump
>> slab/vmalloc info under a VM sysctl, and only the top 10/20 entries?
>>
>
> You could, but it's a tradeoff between how much to dump to a general
> resource such as the kernel log and how many sysctls we add that control
> every possible thing. Slab leaks would definitely be a minority of oom
> conditions and you should normally be able to reproduce them by running
> the same workload; just use slabtop(1) or manually inspect /proc/slabinfo
> while such a workload is running for indicators. I don't think we want to
> add the information by default, though, nor do we want to add sysctls to
> control the behavior (you'd still need to reproduce the issue after
> enabling it).
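The manual inspection suggested above can be sketched from the shell. This is a hedged approximation, not a substitute for slabtop(1); it assumes the /proc/slabinfo 2.1 column layout (name, active_objs, num_objs, objsize, ...):

```shell
# Hedged sketch: rank slab caches by approximate memory footprint,
# computed as num_objs * objsize (columns 3 and 4 of /proc/slabinfo).
# Reading /proc/slabinfo typically requires root; NR > 2 skips the
# two header lines of the 2.1 format.
awk 'NR > 2 { printf "%-28s %12d\n", $1, $3 * $4 }' /proc/slabinfo \
    | sort -k2,2 -rn | head -10
```

Run while the leaking workload is active, a steadily growing cache near the top of this list is the indicator David describes.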
>
> We are currently discussing userspace oom handlers, though, that would
> allow you to run a process that would be notified and allowed to allocate
> a small amount of memory on oom conditions. It would then be trivial to
> dump any information you feel pertinent in userspace prior to killing
> something. I like to inspect heap profiles for memory hogs while
> debugging our malloc() issues, for example, and you could look more
> closely at kernel memory.
>
> I'll cc you on future discussions of that feature.
>
Hi David,
Thanks for your kind explanation. Do you have any specific plans for this?
Thanks,
Jianguo Wu.
>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 6+ messages
2014-01-20 10:36 Jianguo Wu
2014-01-21 5:34 ` David Rientjes
2014-01-21 12:40 ` Jianguo Wu
2014-01-21 20:41 ` David Rientjes
2014-02-11 4:06 ` Jianguo Wu [this message]
2014-02-12 0:28 ` David Rientjes