From: Joonsoo Kim <js1304@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 13/16] mm/slab: make criteria for off slab determination robust and simple
Date: Thu, 14 Jan 2016 14:24:26 +0900 [thread overview]
Message-ID: <1452749069-15334-14-git-send-email-iamjoonsoo.kim@lge.com> (raw)
In-Reply-To: <1452749069-15334-1-git-send-email-iamjoonsoo.kim@lge.com>
To become an off-slab cache, there are some constraints to avoid a
bootstrapping problem and recursive calls. These can be enforced
differently by simply checking that the corresponding kmalloc cache is
ready and is not itself off-slab. This is more robust because a static
size check can be affected by cache size changes or the architecture
type, whereas a dynamic check isn't.
The check 'freelist_cache->size > cachep->size / 2' is added to verify
that going off-slab is actually beneficial, because there is now no
size constraint that guarantees an advantage when selecting off-slab.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/slab.c | 45 +++++++++++++++++----------------------------
1 file changed, 17 insertions(+), 28 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index b0f6eda..e86977e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -272,7 +272,6 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
#define CFLGS_OFF_SLAB (0x80000000UL)
#define OFF_SLAB(x) ((x)->flags & CFLGS_OFF_SLAB)
-#define OFF_SLAB_MIN_SIZE (max_t(size_t, PAGE_SIZE >> 5, KMALLOC_MIN_SIZE + 1))
#define BATCHREFILL_LIMIT 16
/*
@@ -1880,7 +1879,6 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
static size_t calculate_slab_order(struct kmem_cache *cachep,
size_t size, unsigned long flags)
{
- unsigned long offslab_limit;
size_t left_over = 0;
int gfporder;
@@ -1897,16 +1895,24 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
break;
if (flags & CFLGS_OFF_SLAB) {
+ struct kmem_cache *freelist_cache;
+ size_t freelist_size;
+
+ freelist_size = num * sizeof(freelist_idx_t);
+ freelist_cache = kmalloc_slab(freelist_size, 0u);
+ if (!freelist_cache)
+ continue;
+
/*
- * Max number of objs-per-slab for caches which
- * use off-slab slabs. Needed to avoid a possible
- * looping condition in cache_grow().
+ * Needed to avoid possible looping condition
+ * in cache_grow()
*/
- offslab_limit = size;
- offslab_limit /= sizeof(freelist_idx_t);
+ if (OFF_SLAB(freelist_cache))
+ continue;
- if (num > offslab_limit)
- break;
+ /* check if off slab has enough benefit */
+ if (freelist_cache->size > cachep->size / 2)
+ continue;
}
/* Found something acceptable - save it away */
@@ -2032,17 +2038,9 @@ static bool set_off_slab_cache(struct kmem_cache *cachep,
cachep->num = 0;
/*
- * Determine if the slab management is 'on' or 'off' slab.
- * (bootstrapping cannot cope with offslab caches so don't do
- * it too early on. Always use on-slab management when
- * SLAB_NOLEAKTRACE to avoid recursive calls into kmemleak)
+ * Always use on-slab management when SLAB_NOLEAKTRACE
+ * to avoid recursive calls into kmemleak.
*/
- if (size < OFF_SLAB_MIN_SIZE)
- return false;
-
- if (slab_early_init)
- return false;
-
if (flags & SLAB_NOLEAKTRACE)
return false;
@@ -2206,7 +2204,6 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
* sized slab is initialized in current slab initialization sequence.
*/
if (debug_pagealloc_enabled() && (flags & SLAB_POISON) &&
- !slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
size >= 256 && cachep->object_size > cache_line_size()) {
if (size < PAGE_SIZE || size % PAGE_SIZE == 0) {
size_t tmp_size = ALIGN(size, PAGE_SIZE);
@@ -2255,14 +2252,6 @@ done:
if (OFF_SLAB(cachep)) {
cachep->freelist_cache =
kmalloc_slab(cachep->freelist_size, 0u);
- /*
- * This is a possibility for one of the kmalloc_{dma,}_caches.
- * But since we go off slab only for object size greater than
- * OFF_SLAB_MIN_SIZE, and kmalloc_{dma,}_caches get created
- * in ascending order,this should not happen at all.
- * But leave a BUG_ON for some lucky dude.
- */
- BUG_ON(ZERO_OR_NULL_PTR(cachep->freelist_cache));
}
err = setup_cpu_cache(cachep, gfp);
--
1.9.1
Thread overview: 36+ messages
2016-01-14 5:24 [PATCH 00/16] mm/slab: introduce new freed objects management way, OBJFREELIST_SLAB Joonsoo Kim
2016-01-14 5:24 ` [PATCH 01/16] mm/slab: fix stale code comment Joonsoo Kim
2016-01-14 15:22 ` Christoph Lameter
2016-01-14 5:24 ` [PATCH 02/16] mm/slab: remove useless structure define Joonsoo Kim
2016-01-14 15:22 ` Christoph Lameter
2016-01-14 5:24 ` [PATCH 03/16] mm/slab: remove the checks for slab implementation bug Joonsoo Kim
2016-01-14 15:23 ` Christoph Lameter
2016-01-14 16:20 ` Joonsoo Kim
2016-01-14 5:24 ` [PATCH 04/16] mm/slab: activate debug_pagealloc in SLAB when it is actually enabled Joonsoo Kim
2016-01-14 12:09 ` Jesper Dangaard Brouer
2016-01-14 16:16 ` Joonsoo Kim
2016-01-14 5:24 ` [PATCH 05/16] mm/slab: use more appropriate condition check for debug_pagealloc Joonsoo Kim
2016-01-14 5:24 ` [PATCH 06/16] mm/slab: clean-up DEBUG_PAGEALLOC processing code Joonsoo Kim
2016-01-14 5:24 ` [PATCH 07/16] mm/slab: alternative implementation for DEBUG_SLAB_LEAK Joonsoo Kim
2016-01-14 5:24 ` [PATCH 08/16] mm/slab: remove object status buffer " Joonsoo Kim
2016-01-14 5:24 ` [PATCH 09/16] mm/slab: put the freelist at the end of slab page Joonsoo Kim
2016-01-14 15:26 ` Christoph Lameter
2016-01-14 16:21 ` Joonsoo Kim
2016-01-14 17:13 ` Christoph Lameter
2016-01-14 5:24 ` [PATCH 10/16] mm/slab: align cache size first before determination of OFF_SLAB candidate Joonsoo Kim
2016-01-14 5:24 ` [PATCH 11/16] mm/slab: clean-up cache type determination Joonsoo Kim
2016-01-14 5:24 ` [PATCH 12/16] mm/slab: do not change cache size if debug pagealloc isn't possible Joonsoo Kim
2016-01-14 5:24 ` Joonsoo Kim [this message]
2016-01-14 5:24 ` [PATCH 14/16] mm/slab: factor out slab list fixup code Joonsoo Kim
2016-01-14 5:24 ` [PATCH 15/16] mm/slab: factor out debugging initialization in cache_init_objs() Joonsoo Kim
2016-01-14 5:24 ` [PATCH 16/16] mm/slab: introduce new slab management type, OBJFREELIST_SLAB Joonsoo Kim
2016-01-14 15:32 ` Christoph Lameter
2016-01-14 16:24 ` Joonsoo Kim
2016-01-27 13:35 ` Vlastimil Babka
2016-01-27 16:48 ` Christoph Lameter
2016-01-27 17:18 ` Vlastimil Babka
2016-01-27 17:55 ` Christoph Lameter
2016-01-28 4:51 ` Joonsoo Kim
2016-01-29 15:21 ` Vlastimil Babka
2016-01-27 4:40 ` [PATCH 00/16] mm/slab: introduce new freed objects management way, OBJFREELIST_SLAB Andrew Morton
2016-01-27 4:46 ` Joonsoo Kim