From: JoonSoo Kim
Date: Fri, 21 Sep 2012 18:28:02 +0900
Subject: Re: [PATCH v3 15/16] memcg/sl[au]b: shrink dead caches
To: Glauber Costa
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, kamezawa.hiroyu@jp.fujitsu.com, devel@openvz.org, Tejun Heo, linux-mm@kvack.org, Suleiman Souhlal, Frederic Weisbecker, Mel Gorman, David Rientjes, Christoph Lameter, Pekka Enberg, Michal Hocko, Johannes Weiner
In-Reply-To: <505C27E4.90509@parallels.com>
References: <1347977530-29755-1-git-send-email-glommer@parallels.com> <1347977530-29755-16-git-send-email-glommer@parallels.com> <505C27E4.90509@parallels.com>

Hi, Glauber.

>> 2012/9/18 Glauber Costa <glommer@parallels.com>:
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 0b68d15..9d79216 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2602,6 +2602,7 @@ redo:
>>>         } else
>>>                 __slab_free(s, page, x, addr);
>>>
>>> +       kmem_cache_verify_dead(s);
>>>  }
>>
>> As you know, I am not an expert and don't know much about memcg.
>> IMHO, this implementation may hurt system performance in some cases.
>>
>> When a memcg is destroyed, its remaining kmem_cache is marked "dead".
>> After it is marked, every free operation on this "dead" kmem_cache
>> calls kmem_cache_verify_dead() and finally kmem_cache_shrink().
>
> As long as it is restricted to that cache, this is a non-issue.
> Dead caches are exactly what their name implies: dead.
>
> It means that we actively want them to go away, and we just don't kill
> them right away because they have some in-flight objects - which we
> expect not to be too many.

Hmm..
I don't think so. We can destroy a memcg whenever we want, right? If so,
there may be many in-flight objects at the moment we destroy the memcg,
and when there are, the performance of the processes freeing those
objects can be hurt badly.

>> And I found one case where destroying a memcg's kmem_cache doesn't
>> work properly: if we destroy the memcg after all objects have been
>> freed, the current implementation never destroys the kmem_cache.
>> kmem_cache_destroy_work_func() checks
>> "cachep->memcg_params.nr_pages == 0", but in this case the check
>> fails, because the kmem_cache may still hold cpu slabs and cpu
>> partial slabs. And since all objects were already freed,
>> kmem_cache_verify_dead() is never invoked again.
>> I think we need another kmem_cache_shrink() in
>> kmem_cache_destroy_work_func().
>
> I'll take a look here. What you describe makes sense, and can
> potentially happen. I tried to handle this case with care in
> destroy_all_caches, but I may well have made a mistake...
>
> Did you see this actively happening, or are you just assuming it can
> happen from your read of the code?

Just from reading the code.

Thanks.