From: Vladimir Davydov <vdavydov@parallels.com>
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, devel@openvz.org
Subject: [PATCH -mm 0/3] slab: cleanup mem hotplug synchronization
Date: Sun, 6 Apr 2014 19:33:49 +0400
Message-ID: <cover.1396779337.git.vdavydov@parallels.com>

Hi,
kmem_cache_{create,destroy,shrink} need to get a stable value of the
cpu/node online masks, because they init/destroy/access per-cpu/per-node
kmem_cache parts, which can be allocated or destroyed on cpu/mem
hotplug. To protect against cpu hotplug, these functions use
{get,put}_online_cpus. However, they do nothing to synchronize with
memory hotplug: taking the slab_mutex does not eliminate the
possibility of a race, as described in patch 3.
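To illustrate the gap, here is a simplified sketch of what a
kmem_cache_create-style path does (not the actual slab code;
init_cache_node is a hypothetical helper standing in for the per-node
setup the real allocators perform):

	get_online_cpus();		/* cpu_online_mask is now stable */
	mutex_lock(&slab_mutex);
	/*
	 * Nothing pins node_online_map here, so the set of online
	 * nodes can change under us while we walk it; patch 3
	 * describes the resulting race with the slab memory hotplug
	 * notifier in detail.
	 */
	for_each_online_node(node)
		init_cache_node(cache, node);	/* hypothetical helper */
	list_add(&cache->list, &slab_caches);
	mutex_unlock(&slab_mutex);
	put_online_cpus();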
What we need there is something like get_online_cpus, but for memory. We
already have lock_memory_hotplug, which serves this purpose, but it's
a bit of a hammer right now, because it's backed by a mutex. As a
result, it imposes undesirable constraints on the locking order and
can't be used in the same way as get_online_cpus. I propose to turn
this mutex into an rw semaphore, which will be taken for reading in
lock_memory_hotplug and for writing in the memory hotplug code (that's
what patch 1 does).
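In other words, something like this (a sketch of the idea, not the
exact patch):

	static DECLARE_RWSEM(mem_hotplug_rwsem);  /* was mem_hotplug_mutex */

	void lock_memory_hotplug(void)
	{
		down_read(&mem_hotplug_rwsem);
	}

	void unlock_memory_hotplug(void)
	{
		up_read(&mem_hotplug_rwsem);
	}

	/* while online_pages/offline_pages and friends would do: */
	down_write(&mem_hotplug_rwsem);
	/* ... add or remove memory ... */
	up_write(&mem_hotplug_rwsem);

With that, lock_memory_hotplug behaves like get_online_cpus: any number
of readers can hold it concurrently, and hotplug waits for all of them
to finish.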
When I tried to use this rw semaphore in the slab implementation, I came
across a problem with lockdep: rw_semaphore is not marked as read
recursive although it is (at least, it looks so to me), so lockdep
complains about incorrect ordering with an internal sysfs mutex in the
slub case, because, in contrast to a recursive read lock, a
non-recursive one must always be taken in the same order relative to a
mutex. That's why in patch 2 I mark the rw semaphore read acquisition
as recursive, just like that of an rw spin lock.
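The annotation change itself should amount to a one-liner in
include/linux/lockdep.h, along these lines (sketched from memory, so
double-check against the actual patch):

	-# define rwsem_acquire_read(l, s, t, i)	lock_acquire_shared(l, s, t, NULL, i)
	+# define rwsem_acquire_read(l, s, t, i)	lock_acquire_shared_recursive(l, s, t, NULL, i)

lock_acquire_shared_recursive is, if memory serves, what
rwlock_acquire_read already uses under CONFIG_PROVE_LOCKING, which is
why lockdep treats rwlock readers as recursive.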
Thanks,
Vladimir Davydov (3):
mem-hotplug: turn mem_hotplug_mutex to rwsem
lockdep: mark rwsem_acquire_read as recursive
slab: lock_memory_hotplug for kmem_cache_{create,destroy,shrink}
include/linux/lockdep.h | 2 +-
include/linux/mmzone.h | 7 ++---
mm/memory_hotplug.c | 70 +++++++++++++++++++++--------------------------
mm/slab.c | 26 ++----------------
mm/slab.h | 1 +
mm/slab_common.c | 35 ++++++++++++++++++++++--
mm/slob.c | 3 +-
mm/slub.c | 5 ++--
mm/vmscan.c | 2 +-
9 files changed, 75 insertions(+), 76 deletions(-)
--
1.7.10.4