From: js1304@gmail.com
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Jesper Dangaard Brouer <brouer@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH v2 10/11] mm/slab: refill cpu cache through a new slab without holding a node lock
Date: Tue, 12 Apr 2016 13:51:05 +0900
Message-ID: <1460436666-20462-11-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1460436666-20462-1-git-send-email-iamjoonsoo.kim@lge.com>
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Until now, growing the cache puts the new free slab on the node's slab
list, and objects are then allocated from it. This necessarily requires
holding the node lock, which is heavily contended. If we instead refill
the cpu cache from the new slab before attaching it to the node's slab
list, we can avoid holding the node lock for most of the refill, because
the newly allocated slab is visible only to the current task. This
reduces lock contention.
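The change in the refill path is roughly the following (a simplified
sketch, not the exact code: error paths, interrupt handling and the
reload of 'ac' are omitted, see the diff below for the real flow):

	/* Before: attach the new slab, then take objects under the node lock */
	page = cache_grow_begin(cachep, gfp_exact_node(flags), node);
	cache_grow_end(cachep, page);	/* slab now visible on the node list */
	spin_lock(&n->list_lock);	/* heavily contended */
	/* ... pull up to batchcount objects into the cpu cache ... */
	spin_unlock(&n->list_lock);

	/* After: take objects while the slab is still private, then attach */
	page = cache_grow_begin(cachep, gfp_exact_node(flags), node);
	if (page)
		alloc_block(cachep, ac, page, batchcount); /* no node lock needed */
	cache_grow_end(cachep, page);	/* publish the partially-used slab */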
Below are the results for concurrent allocation/free from the slab
allocation benchmark Christoph wrote a long time ago. I have simplified
the output. The numbers are cycle counts for alloc/free respectively,
so lower is better; for example, "Average=355/750" means 355 cycles per
allocation and 750 per free.
* Before
Kmalloc N*alloc N*free(32): Average=355/750
Kmalloc N*alloc N*free(64): Average=452/812
Kmalloc N*alloc N*free(128): Average=559/1070
Kmalloc N*alloc N*free(256): Average=1176/980
Kmalloc N*alloc N*free(512): Average=1939/1189
Kmalloc N*alloc N*free(1024): Average=3521/1278
Kmalloc N*alloc N*free(2048): Average=7152/1838
Kmalloc N*alloc N*free(4096): Average=13438/2013
* After
Kmalloc N*alloc N*free(32): Average=248/966
Kmalloc N*alloc N*free(64): Average=261/949
Kmalloc N*alloc N*free(128): Average=314/1016
Kmalloc N*alloc N*free(256): Average=741/1061
Kmalloc N*alloc N*free(512): Average=1246/1152
Kmalloc N*alloc N*free(1024): Average=2437/1259
Kmalloc N*alloc N*free(2048): Average=4980/1800
Kmalloc N*alloc N*free(4096): Average=9000/2078
The results show that contention is reduced for all object sizes and
allocation performance improves by 30~40%.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/slab.c | 68 +++++++++++++++++++++++++++++++++------------------------------
1 file changed, 36 insertions(+), 32 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 2c28ad5..cf12fbd 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2852,6 +2852,30 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep,
return obj;
}
+/*
+ * Slab list should be fixed up by fixup_slab_list() for existing slab
+ * or cache_grow_end() for new slab
+ */
+static __always_inline int alloc_block(struct kmem_cache *cachep,
+ struct array_cache *ac, struct page *page, int batchcount)
+{
+ /*
+ * There must be at least one object available for
+ * allocation.
+ */
+ BUG_ON(page->active >= cachep->num);
+
+ while (page->active < cachep->num && batchcount--) {
+ STATS_INC_ALLOCED(cachep);
+ STATS_INC_ACTIVE(cachep);
+ STATS_SET_HIGH(cachep);
+
+ ac->entry[ac->avail++] = slab_get_obj(cachep, page);
+ }
+
+ return batchcount;
+}
+
static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
{
int batchcount;
@@ -2864,7 +2888,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
check_irq_off();
node = numa_mem_id();
-retry:
ac = cpu_cache_get(cachep);
batchcount = ac->batchcount;
if (!ac->touched && batchcount > BATCHREFILL_LIMIT) {
@@ -2894,21 +2917,7 @@ retry:
check_spinlock_acquired(cachep);
- /*
- * The slab was either on partial or free list so
- * there must be at least one object available for
- * allocation.
- */
- BUG_ON(page->active >= cachep->num);
-
- while (page->active < cachep->num && batchcount--) {
- STATS_INC_ALLOCED(cachep);
- STATS_INC_ACTIVE(cachep);
- STATS_SET_HIGH(cachep);
-
- ac->entry[ac->avail++] = slab_get_obj(cachep, page);
- }
-
+ batchcount = alloc_block(cachep, ac, page, batchcount);
fixup_slab_list(cachep, n, page, &list);
}
@@ -2928,21 +2937,18 @@ alloc_done:
}
page = cache_grow_begin(cachep, gfp_exact_node(flags), node);
- cache_grow_end(cachep, page);
/*
* cache_grow_begin() can reenable interrupts,
* then ac could change.
*/
ac = cpu_cache_get(cachep);
- node = numa_mem_id();
+ if (!ac->avail && page)
+ alloc_block(cachep, ac, page, batchcount);
+ cache_grow_end(cachep, page);
- /* no objects in sight? abort */
- if (!page && ac->avail == 0)
+ if (!ac->avail)
return NULL;
-
- if (!ac->avail) /* objects refilled by interrupt? */
- goto retry;
}
ac->touched = 1;
@@ -3136,14 +3142,13 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
{
struct page *page;
struct kmem_cache_node *n;
- void *obj;
+ void *obj = NULL;
void *list = NULL;
VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);
n = get_node(cachep, nodeid);
BUG_ON(!n);
-retry:
check_irq_off();
spin_lock(&n->list_lock);
page = get_first_slab(n, false);
@@ -3165,19 +3170,18 @@ retry:
spin_unlock(&n->list_lock);
fixup_objfreelist_debug(cachep, &list);
- goto done;
+ return obj;
must_grow:
spin_unlock(&n->list_lock);
page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid);
+ if (page) {
+ /* This slab isn't counted yet so don't update free_objects */
+ obj = slab_get_obj(cachep, page);
+ }
cache_grow_end(cachep, page);
- if (page)
- goto retry;
- return fallback_alloc(cachep, flags);
-
-done:
- return obj;
+ return obj ? obj : fallback_alloc(cachep, flags);
}
static __always_inline void *
--
1.9.1