From: Vladimir Davydov <vdavydov@parallels.com>
To: akpm@linux-foundation.org
Cc: cl@linux.com, iamjoonsoo.kim@lge.com, rientjes@google.com,
	penberg@kernel.org, hannes@cmpxchg.org, mhocko@suse.cz,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH -mm v3 5/8] slub: make slab_free non-preemptable
Date: Fri, 13 Jun 2014 00:38:19 +0400	[thread overview]
Message-ID: <0c66165d4f46fa80cd31df147e7bbcaa5fea784c.1402602126.git.vdavydov@parallels.com> (raw)
In-Reply-To: <cover.1402602126.git.vdavydov@parallels.com>

Since per memcg cache destruction is scheduled when the last slab is
freed, to avoid a use-after-free in kmem_cache_free we must either
rearrange the code in kmem_cache_free so that it never dereferences the
cache pointer after freeing the object, or wait for all in-flight
kmem_cache_free's to complete before proceeding to cache destruction.
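
To illustrate the race we are worried about (a hypothetical
interleaving, not a concrete trace):

	/*
	 * CPU 0: kmem_cache_free(s, x)        destruction worker
	 * -----------------------------       ---------------------------
	 * frees object x, which happens
	 * to be the last object of the
	 * dead cache s
	 *                                     last slab freed =>
	 *                                     destruction of s scheduled
	 *                                     s destroyed
	 * dereferences s              <==     use-after-free
	 */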

The former approach isn't a good option from a maintenance point of
view, because every future modification to kmem_cache_free would then
have to be made with great care. Hence we should provide a way to wait
for all currently executing kmem_cache_free's to finish.

This patch makes SLUB's implementation of kmem_cache_free
non-preemptable. As a result, synchronize_sched() works as a barrier
against kmem_cache_free's in flight, so issuing it before cache
destruction protects us against the use-after-free.
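
With this in place, the destruction side (introduced later in this
series) can be as simple as the following sketch; the function name is
made up for the example:

	static void memcg_cache_destroy(struct kmem_cache *s)
	{
		/*
		 * kmem_cache_free runs with preemption disabled, so once
		 * synchronize_sched() returns, no kmem_cache_free can
		 * still be dereferencing s.
		 */
		synchronize_sched();
		kmem_cache_destroy(s);
	}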

This won't affect the performance of kmem_cache_free, because we
already disable preemption there, and this patch only moves
preempt_enable to the end of the function. Nor should it affect system
latency, because kmem_cache_free is extremely short, even in its slow
path.

SLAB's version of kmem_cache_free already runs with irqs disabled, so
we only add a comment explaining why that is necessary for kmemcg
there.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 mm/slab.c |    6 ++++++
 mm/slub.c |   12 ++++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 9ca3b87edabc..b3af82419251 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3450,6 +3450,12 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
 {
 	struct array_cache *ac = cpu_cache_get(cachep);
 
+	/*
+	 * Since we free objects with irqs and therefore preemption disabled,
+	 * we can use synchronize_sched() to wait for all currently executing
+	 * kfree's to finish. This is necessary to avoid use-after-free on
+	 * per memcg cache destruction.
+	 */
 	check_irq_off();
 	kmemleak_free_recursive(objp, cachep->flags);
 	objp = cache_free_debugcheck(cachep, objp, caller);
diff --git a/mm/slub.c b/mm/slub.c
index 35741592be8c..52565a9426ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2673,18 +2673,17 @@ static __always_inline void slab_free(struct kmem_cache *s,
 
 	slab_free_hook(s, x);
 
-redo:
 	/*
-	 * Determine the currently cpus per cpu slab.
-	 * The cpu may change afterward. However that does not matter since
-	 * data is retrieved via this pointer. If we are on the same cpu
-	 * during the cmpxchg then the free will succedd.
+	 * We could make this function fully preemptable, but then we wouldn't
+	 * have a method to wait for all currently executing kfree's to finish,
+	 * which is necessary to avoid use-after-free on per memcg cache
+	 * destruction.
 	 */
 	preempt_disable();
+redo:
 	c = this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
@@ -2701,6 +2700,7 @@ redo:
 	} else
 		__slab_free(s, page, x, addr);
 
+	preempt_enable();
 }
 
 void kmem_cache_free(struct kmem_cache *s, void *x)
-- 
1.7.10.4



Thread overview: 25+ messages
2014-06-12 20:38 [PATCH -mm v3 0/8] memcg/slab: reintroduce dead cache self-destruction Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 1/8] memcg: cleanup memcg_cache_params refcnt usage Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 2/8] memcg: destroy kmem caches when last slab is freed Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 3/8] memcg: mark caches that belong to offline memcgs as dead Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 4/8] slub: don't fail kmem_cache_shrink if slab placement optimization fails Vladimir Davydov
2014-06-12 20:38 ` Vladimir Davydov [this message]
2014-06-12 20:38 ` [PATCH -mm v3 6/8] memcg: wait for kfree's to finish before destroying cache Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately Vladimir Davydov
2014-06-13 16:54   ` Christoph Lameter
2014-06-24  7:50   ` Joonsoo Kim
2014-06-24  8:25     ` Vladimir Davydov
2014-06-24  9:42     ` [PATCH -mm] slub: kmem_cache_shrink: check if partial list is empty under list_lock Vladimir Davydov
2014-06-12 20:38 ` [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches Vladimir Davydov
2014-06-12 20:41   ` Vladimir Davydov
2014-06-24  7:25   ` Joonsoo Kim
2014-06-24  7:42     ` Vladimir Davydov
2014-06-24 12:28     ` [PATCH -mm] slab: set free_limit for dead caches to 0 Vladimir Davydov
2014-06-24  7:38   ` [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches Joonsoo Kim
2014-06-24  7:48     ` Vladimir Davydov
2014-06-25 13:45     ` Vladimir Davydov
2014-06-27  6:05       ` Joonsoo Kim
2014-06-30 15:49         ` Christoph Lameter
2014-07-01  7:46           ` Vladimir Davydov
2014-06-25 14:39     ` [PATCH] slab: document why cache can have no per cpu array on kfree Vladimir Davydov
2014-06-25 16:19       ` Christoph Lameter
