From: Joonsoo Kim <js1304@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/16] mm/slab: alternative implementation for DEBUG_SLAB_LEAK
Date: Thu, 14 Jan 2016 14:24:20 +0900 [thread overview]
Message-ID: <1452749069-15334-8-git-send-email-iamjoonsoo.kim@lge.com> (raw)
In-Reply-To: <1452749069-15334-1-git-send-email-iamjoonsoo.kim@lge.com>
DEBUG_SLAB_LEAK is a debug option. Its current implementation requires a
status buffer, so we need extra memory to use it, and it makes the
kmem_cache initialization step more complex.

To remove this extra memory usage and to simplify the initialization
step, this patch implements the feature in a different way.

When a user requests slab object owner information, we mark that the
collection has started. Then, all free objects in the caches are flushed
to their corresponding slab pages. Now we can distinguish every freed
object, so we also know every allocated object. After collecting slab
object owner information for the allocated objects, the mark is checked
to verify that no free happened during the processing. If so, we can be
sure our information is correct, and it is returned to the user.

Although this approach is more complex, it has the two important
benefits mentioned above, so I think it is worth the change. One
drawback is that it takes more time to get slab object owner
information, but since this is just a debug option, that doesn't matter.

To ease review, this patch implements only the new way. A following
patch will remove the now-useless code.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
include/linux/slab_def.h | 3 ++
mm/slab.c | 85 +++++++++++++++++++++++++++++++++++-------------
2 files changed, 66 insertions(+), 22 deletions(-)
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index cf139d3..e878ba3 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -60,6 +60,9 @@ struct kmem_cache {
atomic_t allocmiss;
atomic_t freehit;
atomic_t freemiss;
+#ifdef CONFIG_DEBUG_SLAB_LEAK
+ atomic_t store_user_clean;
+#endif
/*
* If debugging is enabled, then the allocator can add additional
diff --git a/mm/slab.c b/mm/slab.c
index 7a10e18..9a64d8f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -396,20 +396,25 @@ static void set_obj_status(struct page *page, int idx, int val)
status[idx] = val;
}
-static inline unsigned int get_obj_status(struct page *page, int idx)
+static inline bool is_store_user_clean(struct kmem_cache *cachep)
{
- int freelist_size;
- char *status;
- struct kmem_cache *cachep = page->slab_cache;
+ return atomic_read(&cachep->store_user_clean) == 1;
+}
- freelist_size = cachep->num * sizeof(freelist_idx_t);
- status = (char *)page->freelist + freelist_size;
+static inline void set_store_user_clean(struct kmem_cache *cachep)
+{
+ atomic_set(&cachep->store_user_clean, 1);
+}
- return status[idx];
+static inline void set_store_user_dirty(struct kmem_cache *cachep)
+{
+ if (is_store_user_clean(cachep))
+ atomic_set(&cachep->store_user_clean, 0);
}
#else
static inline void set_obj_status(struct page *page, int idx, int val) {}
+static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
#endif
@@ -2550,6 +2555,11 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct page *page)
objp = index_to_obj(cachep, page, get_free_obj(page, page->active));
page->active++;
+#if DEBUG
+ if (cachep->flags & SLAB_STORE_USER)
+ set_store_user_dirty(cachep);
+#endif
+
return objp;
}
@@ -2725,8 +2735,10 @@ static void *cache_free_debugcheck(struct kmem_cache *cachep, void *objp,
*dbg_redzone1(cachep, objp) = RED_INACTIVE;
*dbg_redzone2(cachep, objp) = RED_INACTIVE;
}
- if (cachep->flags & SLAB_STORE_USER)
+ if (cachep->flags & SLAB_STORE_USER) {
+ set_store_user_dirty(cachep);
*dbg_userword(cachep, objp) = (void *)caller;
+ }
objnr = obj_to_index(cachep, page, objp);
@@ -4082,15 +4094,34 @@ static void handle_slab(unsigned long *n, struct kmem_cache *c,
struct page *page)
{
void *p;
- int i;
+ int i, j;
+ unsigned long v;
if (n[0] == n[1])
return;
for (i = 0, p = page->s_mem; i < c->num; i++, p += c->size) {
- if (get_obj_status(page, i) != OBJECT_ACTIVE)
+ bool active = true;
+
+ for (j = page->active; j < c->num; j++) {
+ if (get_free_obj(page, j) == i) {
+ active = false;
+ break;
+ }
+ }
+
+ if (!active)
continue;
- if (!add_caller(n, (unsigned long)*dbg_userword(c, p)))
+ /*
+ * probe_kernel_read() is used for DEBUG_PAGEALLOC. The page table
+ * mapping is established only when the object is actually
+ * allocated, so we could otherwise mistakenly access an unmapped
+ * object in the cpu cache.
+ */
+ if (probe_kernel_read(&v, dbg_userword(c, p), sizeof(v)))
+ continue;
+
+ if (!add_caller(n, v))
return;
}
}
@@ -4126,21 +4157,31 @@ static int leaks_show(struct seq_file *m, void *p)
if (!(cachep->flags & SLAB_RED_ZONE))
return 0;
- /* OK, we can do it */
+ /*
+ * Set store_user_clean and start to grab stored user information
+ * for all objects on this cache. If any alloc/free request comes
+ * during the processing, the information would be wrong, so
+ * restart the whole processing.
+ */
+ do {
+ set_store_user_clean(cachep);
+ drain_cpu_caches(cachep);
+
+ x[1] = 0;
- x[1] = 0;
+ for_each_kmem_cache_node(cachep, node, n) {
- for_each_kmem_cache_node(cachep, node, n) {
+ check_irq_on();
+ spin_lock_irq(&n->list_lock);
- check_irq_on();
- spin_lock_irq(&n->list_lock);
+ list_for_each_entry(page, &n->slabs_full, lru)
+ handle_slab(x, cachep, page);
+ list_for_each_entry(page, &n->slabs_partial, lru)
+ handle_slab(x, cachep, page);
+ spin_unlock_irq(&n->list_lock);
+ }
+ } while (!is_store_user_clean(cachep));
- list_for_each_entry(page, &n->slabs_full, lru)
- handle_slab(x, cachep, page);
- list_for_each_entry(page, &n->slabs_partial, lru)
- handle_slab(x, cachep, page);
- spin_unlock_irq(&n->list_lock);
- }
name = cachep->name;
if (x[0] == x[1]) {
/* Increase the buffer size */
--
1.9.1