From: Pekka J Enberg <penberg@cs.helsinki.fi>
To: Pavel Emelianov <xemul@sw.ru>
Cc: Andrew Morton <akpm@osdl.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
devel@openvz.org, Kirill Korotaev <dev@openvz.org>,
linux-mm@kvack.org
Subject: Re: [PATCH] Show slab memory usage on OOM and SysRq-M
Date: Tue, 17 Apr 2007 17:02:46 +0300 (EEST)
Message-ID: <Pine.LNX.4.64.0704171653420.22366@sbz-30.cs.Helsinki.FI>
In-Reply-To: <4624D0C1.4090304@sw.ru>
Hi Pavel,
At some point in time, I wrote:
> > So, now we have two locks protecting cache_chain? Please explain why
> > you can't use the mutex.
On Tue, 17 Apr 2007, Pavel Emelianov wrote:
> Because OOM can actually happen with this mutex locked. For example,
> kmem_cache_create() locks it and calls kmalloc(), or a write to
> /proc/slabinfo also locks it and calls do_tune_cpu_caches(). This is a
> very rare case and the deadlock is VERY unlikely to happen, but it
> would be very disappointing if it did.
>
> Moreover, I put the call to show_slabs() into the sysrq handler, so it
> may be called from atomic context.
>
> Using mutex_trylock() is possible, but then we risk losing this info
> in case OOM happens while the mutex is held for cache shrinking (see
> cache_reap() for example)...
>
> So we have a choice: either we take an additional lock on slow and
> rare paths and show this info for sure, or we take no lock at all but
> risk losing this info.
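So, just to make sure I am reading the scenario right, the deadlock you
are worried about is roughly this call chain (a sketch, not code from
your patch):

  kmem_cache_create()
    mutex_lock(&cache_chain_mutex)
    kmalloc(GFP_KERNEL)                      /* allocation fails */
      out_of_memory()
        show_slabs()
          mutex_lock(&cache_chain_mutex)     /* already held -> deadlock */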
I don't worry about performance as much as I do about maintenance. Do you
know if mutex_trylock() is a problem in practice? Could we perhaps fix
the worst offenders that hold cache_chain_mutex for a long time?
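Setting aside the sysrq/atomic context issue you mention for a moment,
the trylock variant I have in mind is something like this completely
untested sketch (try_show_slabs() is just a name I made up here, and I
am assuming the show_slabs() helper from your patch does not take
cache_chain_mutex itself):

  /*
   * Untested sketch: dump slab usage from the OOM path only if we can
   * take cache_chain_mutex without sleeping. If someone else holds it
   * (kmem_cache_create(), a /proc/slabinfo write, cache_reap(), ...),
   * skip the dump instead of risking a deadlock.
   */
  static void try_show_slabs(void)
  {
  	if (!mutex_trylock(&cache_chain_mutex)) {
  		printk(KERN_WARNING "Slab usage unavailable: "
  		       "cache_chain_mutex is busy\n");
  		return;
  	}
  	show_slabs();
  	mutex_unlock(&cache_chain_mutex);
  }

That way we only lose the dump in the unlikely case someone is actually
creating, tuning, or shrinking caches at OOM time.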
In any case, if we do end up adding the lock, please add a BIG FAT COMMENT
explaining why we have it.
At some point in time, I wrote:
> > I would also drop the OFF_SLAB bits because they really don't matter
> > that much for your purposes. Besides, there are already the per-node
> > and per-CPU caches here, which account for much more memory on NUMA
> > setups, for example.
On Tue, 17 Apr 2007, Pavel Emelianov wrote:
> This gives us more precise information :) The gain in precision is less
> than 1%, so if nobody likes/needs it, this may be dropped.
My point is that the "precision" is useless here. We probably waste more
memory in the caches which are not accounted here. So I'd just drop it.