From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail-lb0-f169.google.com (mail-lb0-f169.google.com [209.85.217.169])
	by kanga.kvack.org (Postfix) with ESMTP id 007176B003A
	for ; Mon,  3 Feb 2014 10:54:47 -0500 (EST)
Received: by mail-lb0-f169.google.com with SMTP id q8so5500826lbi.14
	for ; Mon, 03 Feb 2014 07:54:47 -0800 (PST)
Received: from relay.parallels.com (relay.parallels.com. [195.214.232.42])
	by mx.google.com with ESMTPS id uf1si10668341lbb.4.2014.02.03.07.54.45
	for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 03 Feb 2014 07:54:46 -0800 (PST)
From: Vladimir Davydov 
Subject: [PATCH v2 4/7] memcg, slab: unregister cache from memcg before starting to destroy it
Date: Mon, 3 Feb 2014 19:54:39 +0400
Message-ID: 
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Type: text/plain
Sender: owner-linux-mm@kvack.org
List-ID: 
To: akpm@linux-foundation.org
Cc: mhocko@suse.cz, rientjes@google.com, penberg@kernel.org, cl@linux.com,
	glommer@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	devel@openvz.org

Currently, memcg_unregister_cache(), which deletes the cache being
destroyed from the memcg_slab_caches list, is called after
__kmem_cache_shutdown() (see kmem_cache_destroy()), which starts to
destroy the cache. As a result, one can access a partially destroyed
cache while traversing a memcg_slab_caches list, which can have deadly
consequences (for instance, cache_show(), called for each cache on a
memcg_slab_caches list from mem_cgroup_slabinfo_read(), will
dereference pointers to already freed data).

To fix this, let's call memcg_unregister_cache() before the cache
destruction process begins, and issue memcg_register_cache() to re-add
the cache if the destruction fails.

Signed-off-by: Vladimir Davydov 
---
 mm/memcontrol.c  | 12 ++++++------
 mm/slab_common.c |  3 ++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b94d9917707c..6a059e73212c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3247,6 +3247,7 @@ int memcg_alloc_cache_params(struct kmem_cache *s,
 		s->memcg_params->root_cache = root_cache;
 		INIT_WORK(&s->memcg_params->destroy,
 				  kmem_cache_destroy_work_func);
+		css_get(&memcg->css);
 	} else
 		s->memcg_params->is_root_cache = true;
 
@@ -3255,6 +3256,10 @@ int memcg_alloc_cache_params(struct kmem_cache *s,
 
 void memcg_free_cache_params(struct kmem_cache *s)
 {
+	if (!s->memcg_params)
+		return;
+	if (!s->memcg_params->is_root_cache)
+		css_put(&s->memcg_params->memcg->css);
 	kfree(s->memcg_params);
 }
 
@@ -3277,9 +3282,6 @@ void memcg_register_cache(struct kmem_cache *s)
 	memcg = s->memcg_params->memcg;
 	id = memcg_cache_id(memcg);
 
-	css_get(&memcg->css);
-
-
 	/*
 	 * Since readers won't lock (see cache_from_memcg_idx()), we need a
 	 * barrier here to ensure nobody will see the kmem_cache partially
@@ -3328,10 +3330,8 @@ void memcg_unregister_cache(struct kmem_cache *s)
 	 * after removing it from the memcg_slab_caches list, otherwise we can
 	 * fail to convert memcg_params_to_cache() while traversing the list.
 	 */
-	VM_BUG_ON(!root->memcg_params->memcg_caches[id]);
+	VM_BUG_ON(root->memcg_params->memcg_caches[id] != s);
 	root->memcg_params->memcg_caches[id] = NULL;
-
-	css_put(&memcg->css);
 }
 
 /*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3314fb3ead7f..4dff4bb66f19 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -310,9 +310,9 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	s->refcount--;
 	if (!s->refcount) {
 		list_del(&s->list);
+		memcg_unregister_cache(s);
 
 		if (!__kmem_cache_shutdown(s)) {
-			memcg_unregister_cache(s);
 			mutex_unlock(&slab_mutex);
 			if (s->flags & SLAB_DESTROY_BY_RCU)
 				rcu_barrier();
@@ -322,6 +322,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
 		kmem_cache_free(kmem_cache, s);
 	} else {
 		list_add(&s->list, &slab_caches);
+		memcg_register_cache(s);
 		mutex_unlock(&slab_mutex);
 		printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
 		       s->name);
-- 
1.7.10.4
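
P.S. For readers who want to see the ordering problem in isolation, here
is a minimal stand-alone sketch. This is not kernel code: struct cache,
the list head, and the helpers below are hypothetical, simplified
stand-ins for kmem_cache, the per-memcg memcg_slab_caches list,
memcg_(un)register_cache() and __kmem_cache_shutdown(). It only
illustrates why the cache must be unlinked before teardown starts, and
why it has to be re-linked if shutdown fails.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cache {
	char name[32];
	struct cache *next;	/* memcg_slab_caches linkage */
	int alive;		/* cleared once shutdown has begun */
};

static struct cache *memcg_slab_caches;	/* per-memcg list head */

static void memcg_register_cache(struct cache *s)
{
	s->next = memcg_slab_caches;
	memcg_slab_caches = s;
}

static void memcg_unregister_cache(struct cache *s)
{
	struct cache **p;

	for (p = &memcg_slab_caches; *p; p = &(*p)->next)
		if (*p == s) {
			*p = s->next;
			return;
		}
}

/* __kmem_cache_shutdown() analogue; may fail in the kernel,
 * here it always succeeds. */
static int kmem_cache_shutdown(struct cache *s)
{
	s->alive = 0;	/* past this point *s must not be observed */
	return 0;
}

/* mem_cgroup_slabinfo_read()/cache_show() analogue. */
static void slabinfo_read(void)
{
	struct cache *s;

	for (s = memcg_slab_caches; s; s = s->next)
		printf("%s%s\n", s->name,
		       s->alive ? "" : "  <-- BUG: partially destroyed");
}

/* Old ordering: shutdown first, unregister later. A reader running
 * in the window between the two finds a dying cache. */
static void kmem_cache_destroy_old(struct cache *s)
{
	kmem_cache_shutdown(s);
	slabinfo_read();	/* simulates a concurrent reader */
	memcg_unregister_cache(s);
	free(s);
}

/* Fixed ordering: unlink first so readers can no longer find s;
 * re-register on shutdown failure, as the patch does. */
static void kmem_cache_destroy_new(struct cache *s)
{
	memcg_unregister_cache(s);
	slabinfo_read();	/* the reader no longer sees s */
	if (kmem_cache_shutdown(s) == 0) {
		free(s);
		return;
	}
	memcg_register_cache(s);
}

int main(void)
{
	struct cache *s = calloc(1, sizeof(*s));

	strcpy(s->name, "dentry(2:foo)");
	s->alive = 1;
	memcg_register_cache(s);
	puts("old ordering:");
	kmem_cache_destroy_old(s);

	s = calloc(1, sizeof(*s));
	strcpy(s->name, "dentry(2:bar)");
	s->alive = 1;
	memcg_register_cache(s);
	puts("new ordering:");
	kmem_cache_destroy_new(s);
	return 0;
}

Compiled with plain gcc, the old ordering prints the BUG marker while
the new one does not; in the kernel, that same window is what lets
cache_show() chase pointers into already freed data.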