From: Vladimir Davydov <vdavydov@parallels.com>
To: akpm@linux-foundation.org
Cc: cl@linux.com, iamjoonsoo.kim@lge.com, rientjes@google.com,
penberg@kernel.org, hannes@cmpxchg.org, mhocko@suse.cz,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches
Date: Fri, 13 Jun 2014 00:38:22 +0400
Message-ID: <a985aec824cd35df381692fca83f7a8debc80305.1402602126.git.vdavydov@parallels.com>
In-Reply-To: <cover.1402602126.git.vdavydov@parallels.com>

Since a dead memcg cache is destroyed only after the last slab allocated
to it is freed, we must disable caching of free objects/slabs for such
caches, otherwise they will hang around forever.
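
For reference, memcg_cache_dead() comes from patch 3 of this series
("memcg: mark caches that belong to offline memcgs as dead"). A minimal
sketch of the helper, assuming the dead flag that patch adds to
memcg_cache_params (the exact field layout is an assumption here, not
part of this patch):

	static inline bool memcg_cache_dead(struct kmem_cache *s)
	{
		/* assumption: patch 3 sets memcg_params->dead when the
		 * owning memcg goes offline */
		return !is_root_cache(s) && s->memcg_params->dead;
	}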

For SLAB that means we must disable the per-cpu free object arrays and
make free_block always discard empty slabs, irrespective of the node's
free_limit.

To disable the per-cpu arrays, we free them on kmem_cache_shrink (see
drain_cpu_caches -> do_drain) and make __cache_free fall back to
free_block if there is no per-cpu array. We also have to disable
allocation of per-cpu arrays on cpu hotplug for dead caches (see
cpuup_prepare, __do_tune_cpucache).
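
Putting the pieces together: once do_drain has freed the per-cpu array,
every free on a dead cache takes the slow path straight to free_block,
which in turn discards any slab that becomes empty. A condensed sketch
of the resulting free path, simplified from the __cache_free hunk below
(debug hooks and alien cache handling omitted):

	static inline void __cache_free(struct kmem_cache *cachep, void *objp,
					unsigned long caller)
	{
		struct array_cache *ac = cpu_cache_get(cachep);

		/* dead memcg cache: do_drain freed the per-cpu array,
		 * so hand the object straight back to the slab lists */
		if (unlikely(!ac)) {
			int nodeid = page_to_nid(virt_to_page(objp));

			spin_lock(&cachep->node[nodeid]->list_lock);
			free_block(cachep, &objp, 1, nodeid);
			spin_unlock(&cachep->node[nodeid]->list_lock);
			return;
		}

		/* ... normal fast path: cache the object in ac ... */
	}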

With free object/slab caching disabled, there is no need to reap dead
caches periodically; doing so would only cause a slowdown. So we also
make cache_reap skip them.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
mm/slab.c | 31 ++++++++++++++++++++++++++++++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index b3af82419251..7e91f5f1341d 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1210,6 +1210,9 @@ static int cpuup_prepare(long cpu)
 		struct array_cache *shared = NULL;
 		struct array_cache **alien = NULL;
 
+		if (memcg_cache_dead(cachep))
+			continue;
+
 		nc = alloc_arraycache(node, cachep->limit,
 					cachep->batchcount, GFP_KERNEL);
 		if (!nc)
@@ -2411,10 +2414,18 @@ static void do_drain(void *arg)
 
 	check_irq_off();
 	ac = cpu_cache_get(cachep);
+	if (!ac)
+		return;
+
 	spin_lock(&cachep->node[node]->list_lock);
 	free_block(cachep, ac->entry, ac->avail, node);
 	spin_unlock(&cachep->node[node]->list_lock);
 	ac->avail = 0;
+
+	if (memcg_cache_dead(cachep)) {
+		cachep->array[smp_processor_id()] = NULL;
+		kfree(ac);
+	}
 }
 
 static void drain_cpu_caches(struct kmem_cache *cachep)
@@ -3368,7 +3379,8 @@ static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			if (n->free_objects > n->free_limit) {
+			if (n->free_objects > n->free_limit ||
+			    memcg_cache_dead(cachep)) {
 				n->free_objects -= cachep->num;
 				/* No need to drop any previously held
 				 * lock here, even if we have a off-slab slab
@@ -3462,6 +3474,17 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
 
 	kmemcheck_slab_free(cachep, objp, cachep->object_size);
 
+#ifdef CONFIG_MEMCG_KMEM
+	if (unlikely(!ac)) {
+		int nodeid = page_to_nid(virt_to_page(objp));
+
+		spin_lock(&cachep->node[nodeid]->list_lock);
+		free_block(cachep, &objp, 1, nodeid);
+		spin_unlock(&cachep->node[nodeid]->list_lock);
+		return;
+	}
+#endif
+
 	/*
 	 * Skip calling cache_free_alien() when the platform is not numa.
 	 * This will avoid cache misses that happen while accessing slabp (which
@@ -3803,6 +3826,9 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit,
 	struct ccupdate_struct *new;
 	int i;
 
+	if (memcg_cache_dead(cachep))
+		return 0;
+
 	new = kzalloc(sizeof(*new) + nr_cpu_ids * sizeof(struct array_cache *),
 		      gfp);
 	if (!new)
@@ -3988,6 +4014,9 @@ static void cache_reap(struct work_struct *w)
 	list_for_each_entry(searchp, &slab_caches, list) {
 		check_irq_on();
 
+		if (memcg_cache_dead(searchp))
+			continue;
+
 		/*
 		 * We only take the node lock if absolutely necessary and we
 		 * have established with reasonable certainty that
--
1.7.10.4

Thread overview: 25+ messages
2014-06-12 20:38 [PATCH -mm v3 0/8] memcg/slab: reintroduce dead cache self-destruction Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 1/8] memcg: cleanup memcg_cache_params refcnt usage Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 2/8] memcg: destroy kmem caches when last slab is freed Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 3/8] memcg: mark caches that belong to offline memcgs as dead Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 4/8] slub: don't fail kmem_cache_shrink if slab placement optimization fails Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 5/8] slub: make slab_free non-preemptable Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 6/8] memcg: wait for kfree's to finish before destroying cache Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately Vladimir Davydov
2014-06-13 16:54 ` Christoph Lameter
2014-06-24 7:50 ` Joonsoo Kim
2014-06-24 8:25 ` Vladimir Davydov
2014-06-24 9:42 ` [PATCH -mm] slub: kmem_cache_shrink: check if partial list is empty under list_lock Vladimir Davydov
2014-06-12 20:38 ` Vladimir Davydov [this message]
2014-06-12 20:41 ` [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches Vladimir Davydov
2014-06-24 7:25 ` Joonsoo Kim
2014-06-24 7:42 ` Vladimir Davydov
2014-06-24 12:28 ` [PATCH -mm] slab: set free_limit for dead caches to 0 Vladimir Davydov
2014-06-24 7:38 ` [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches Joonsoo Kim
2014-06-24 7:48 ` Vladimir Davydov
2014-06-25 13:45 ` Vladimir Davydov
2014-06-27 6:05 ` Joonsoo Kim
2014-06-30 15:49 ` Christoph Lameter
2014-07-01 7:46 ` Vladimir Davydov
2014-06-25 14:39 ` [PATCH] slab: document why cache can have no per cpu array on kfree Vladimir Davydov
2014-06-25 16:19 ` Christoph Lameter