From: Joonsoo Kim <js1304@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/16] mm/slab: put the freelist at the end of slab page
Date: Thu, 14 Jan 2016 14:24:22 +0900
Message-ID: <1452749069-15334-10-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1452749069-15334-1-git-send-email-iamjoonsoo.kim@lge.com>

Currently, the freelist is placed at the front of the slab page. This
requires extra space to meet the object alignment requirement. If we
put the freelist at the end of the slab page instead, objects can start
at the page boundary and are already at the correct alignment. This is
possible because the freelist itself has no alignment constraint.
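
Roughly, the on-slab layout changes like this (not to scale; 'pad' is
the padding previously needed to align the first object after the
freelist):

    before:  | freelist | pad | obj | obj | ... | leftover |
    after:   | obj | obj | ... | leftover | freelist |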

This gives us two benefits: it removes the extra memory needed to align
the freelist, and it removes the complex calculation at the cache
initialization step. I can't think of a notable drawback here. Note
that the memory saving is rather theoretical, because it applies to
very few cases.
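
As a minimal standalone sketch of how simple the on-slab estimate
becomes (illustrative only, not the kernel's cache_estimate(); it
assumes a one-byte freelist index, as used for small object sizes):

    #include <stddef.h>
    #include <stdio.h>

    typedef unsigned char freelist_idx_t;   /* assumption: byte index */

    static void estimate(size_t slab_size, size_t obj_size,
                         unsigned int *num, size_t *left_over)
    {
            /* One index per object, no padding between objects and freelist. */
            *num = slab_size / (obj_size + sizeof(freelist_idx_t));
            *left_over = slab_size % (obj_size + sizeof(freelist_idx_t));
    }

    int main(void)
    {
            unsigned int num;
            size_t left;

            estimate(4096, 32, &num, &left);     /* one 4KB page, 32-byte objects */
            printf("num=%u left_over=%zu\n", num, left);   /* num=124 left_over=4 */
            return 0;
    }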

The following are example cache geometries that would benefit from this
change (all sizes in bytes):

size  align  num  before  after
  32      8  124    4100   4092
  64      8   63    4103   4095
  88      8   46    4102   4094
 272      8   15    4103   4095
 408      8   10    4098   4090
  32     16  124    4108   4092
  64     16   63    4111   4095
  32     32  124    4124   4092
  64     32   63    4127   4095
  96     32   42    4106   4074

Here 'before' is the total space needed for the objects plus the
aligned freelist without this patch, and 'after' is the same total with
this patch applied. Since 'before' exceeds 4096 bytes (one page), the
number of objects per slab would have to decrease without this patch,
and memory would be wasted.
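
As a quick check on the first row (assuming a one-byte freelist index
for these small objects): after = 124 * (32 + 1) = 4092 bytes, which
fits in a 4KB page, and each 'before' value in the table is exactly
'align' bytes larger (e.g. 4092 + 8 = 4100), i.e. the alignment padding
that the new layout no longer needs.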

In any case, this patch removes the complex calculation, which looks
beneficial to me on its own.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/slab.c | 89 ++++++++++++++++-----------------------------------------------
1 file changed, 22 insertions(+), 67 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 02bdb91..fe17acc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -456,55 +456,12 @@ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
return this_cpu_ptr(cachep->cpu_cache);
}
-static size_t calculate_freelist_size(int nr_objs, size_t align)
-{
- size_t freelist_size;
-
- freelist_size = nr_objs * sizeof(freelist_idx_t);
- if (align)
- freelist_size = ALIGN(freelist_size, align);
-
- return freelist_size;
-}
-
-static int calculate_nr_objs(size_t slab_size, size_t buffer_size,
- size_t idx_size, size_t align)
-{
- int nr_objs;
- size_t remained_size;
- size_t freelist_size;
-
- /*
- * Ignore padding for the initial guess. The padding
- * is at most @align-1 bytes, and @buffer_size is at
- * least @align. In the worst case, this result will
- * be one greater than the number of objects that fit
- * into the memory allocation when taking the padding
- * into account.
- */
- nr_objs = slab_size / (buffer_size + idx_size);
-
- /*
- * This calculated number will be either the right
- * amount, or one greater than what we want.
- */
- remained_size = slab_size - nr_objs * buffer_size;
- freelist_size = calculate_freelist_size(nr_objs, align);
- if (remained_size < freelist_size)
- nr_objs--;
-
- return nr_objs;
-}
-
/*
* Calculate the number of objects and left-over bytes for a given buffer size.
*/
static void cache_estimate(unsigned long gfporder, size_t buffer_size,
- size_t align, int flags, size_t *left_over,
- unsigned int *num)
+ unsigned long flags, size_t *left_over, unsigned int *num)
{
- int nr_objs;
- size_t mgmt_size;
size_t slab_size = PAGE_SIZE << gfporder;
/*
@@ -512,9 +469,12 @@ static void cache_estimate(unsigned long gfporder, size_t buffer_size,
* on it. For the latter case, the memory allocated for a
* slab is used for:
*
- * - One freelist_idx_t for each object
- * - Padding to respect alignment of @align
* - @buffer_size bytes for each object
+ * - One freelist_idx_t for each object
+ *
+ * We don't need to consider alignment of freelist because
+ * freelist will be at the end of slab page. The objects will be
+ * at the correct alignment.
*
* If the slab management structure is off the slab, then the
* alignment will already be calculated into the size. Because
@@ -522,16 +482,13 @@ static void cache_estimate(unsigned long gfporder, size_t buffer_size,
* correct alignment when allocated.
*/
if (flags & CFLGS_OFF_SLAB) {
- mgmt_size = 0;
- nr_objs = slab_size / buffer_size;
-
+ *num = slab_size / buffer_size;
+ *left_over = slab_size % buffer_size;
} else {
- nr_objs = calculate_nr_objs(slab_size, buffer_size,
- sizeof(freelist_idx_t), align);
- mgmt_size = calculate_freelist_size(nr_objs, align);
+ *num = slab_size / (buffer_size + sizeof(freelist_idx_t));
+ *left_over = slab_size %
+ (buffer_size + sizeof(freelist_idx_t));
}
- *num = nr_objs;
- *left_over = slab_size - nr_objs*buffer_size - mgmt_size;
}
#if DEBUG
@@ -1921,7 +1878,7 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
* towards high-order requests, this should be changed.
*/
static size_t calculate_slab_order(struct kmem_cache *cachep,
- size_t size, size_t align, unsigned long flags)
+ size_t size, unsigned long flags)
{
unsigned long offslab_limit;
size_t left_over = 0;
@@ -1931,7 +1888,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
unsigned int num;
size_t remainder;
- cache_estimate(gfporder, size, align, flags, &remainder, &num);
+ cache_estimate(gfporder, size, flags, &remainder, &num);
if (!num)
continue;
@@ -2207,12 +2164,12 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
if (FREELIST_BYTE_INDEX && size < SLAB_OBJ_MIN_SIZE)
size = ALIGN(SLAB_OBJ_MIN_SIZE, cachep->align);
- left_over = calculate_slab_order(cachep, size, cachep->align, flags);
+ left_over = calculate_slab_order(cachep, size, flags);
if (!cachep->num)
return -E2BIG;
- freelist_size = calculate_freelist_size(cachep->num, cachep->align);
+ freelist_size = cachep->num * sizeof(freelist_idx_t);
/*
* If the slab has been placed off-slab, and we have enough space then
@@ -2223,11 +2180,6 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
left_over -= freelist_size;
}
- if (flags & CFLGS_OFF_SLAB) {
- /* really off slab. No need for manual alignment */
- freelist_size = calculate_freelist_size(cachep->num, 0);
- }
-
cachep->colour_off = cache_line_size();
/* Offset must be a multiple of the alignment. */
if (cachep->colour_off < cachep->align)
@@ -2443,6 +2395,9 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
void *freelist;
void *addr = page_address(page);
+ page->s_mem = addr + colour_off;
+ page->active = 0;
+
if (OFF_SLAB(cachep)) {
/* Slab management obj is off-slab. */
freelist = kmem_cache_alloc_node(cachep->freelist_cache,
@@ -2450,11 +2405,11 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
if (!freelist)
return NULL;
} else {
- freelist = addr + colour_off;
- colour_off += cachep->freelist_size;
+ /* We will use last bytes at the slab for freelist */
+ freelist = addr + (PAGE_SIZE << cachep->gfporder) -
+ cachep->freelist_size;
}
- page->active = 0;
- page->s_mem = addr + colour_off;
+
return freelist;
}
--
1.9.1