From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>,
Russell King - ARM Linux <linux@arm.linux.org.uk>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Paolo Bonzini <pbonzini@redhat.com>,
Gleb Natapov <gleb@kernel.org>, Alexander Graf <agraf@suse.de>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Paul Mackerras <paulus@samba.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH v2 10/10] mm, cma: use spinlock instead of mutex
Date: Thu, 12 Jun 2014 12:21:47 +0900
Message-ID: <1402543307-29800-11-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1402543307-29800-1-git-send-email-iamjoonsoo.kim@lge.com>
Currently, we take a mutex to manipulate the CMA bitmap. The work done
under the lock is simple and short, so there is no need to sleep when
it is contended. Change the bitmap lock to a spinlock.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
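
For context, below is a minimal sketch of the locking pattern this patch
moves to. It is illustrative only: struct bitmap_area and
bitmap_area_reserve() are hypothetical names, not symbols from this series;
only the bitmap helpers and the spinlock API are real kernel interfaces.
The point is that the critical section touches nothing that can sleep, and
the lock is dropped before any sleeping work.

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for struct cma, to show the locking rule only. */
struct bitmap_area {
	unsigned long *bitmap;
	spinlock_t lock;	/* assumed initialized with spin_lock_init() */
};

/*
 * Reserve nr_bits bits: the critical section only scans and sets bits and
 * never sleeps, so a non-sleeping spinlock is sufficient.
 */
static long bitmap_area_reserve(struct bitmap_area *area, unsigned long maxno,
				unsigned long start, unsigned int nr_bits,
				unsigned long mask)
{
	unsigned long no;

	spin_lock(&area->lock);
	no = bitmap_find_next_zero_area(area->bitmap, maxno, start,
					nr_bits, mask);
	if (no >= maxno) {
		spin_unlock(&area->lock);
		return -ENOMEM;
	}
	bitmap_set(area->bitmap, no, nr_bits);
	/* Drop the lock before doing anything that might sleep. */
	spin_unlock(&area->lock);

	return no;
}

In cma_alloc() below, the part that can sleep (the migration serialized by
cma_mutex) runs only after cma->lock has been released, which is why the
conversion is safe.
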
diff --git a/mm/cma.c b/mm/cma.c
index 22a5b23..3085e8c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -27,6 +27,7 @@
 #include <linux/memblock.h>
 #include <linux/err.h>
 #include <linux/mm.h>
+#include <linux/spinlock.h>
 #include <linux/mutex.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
@@ -36,7 +37,7 @@ struct cma {
 	unsigned long count;
 	unsigned long *bitmap;
 	int order_per_bit; /* Order of pages represented by one bit */
-	struct mutex lock;
+	spinlock_t lock;
 };
 
 /*
@@ -72,9 +73,9 @@ static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
 	bitmapno = (pfn - cma->base_pfn) >> cma->order_per_bit;
 	nr_bits = cma_bitmap_pages_to_bits(cma, count);
 
-	mutex_lock(&cma->lock);
+	spin_lock(&cma->lock);
 	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
-	mutex_unlock(&cma->lock);
+	spin_unlock(&cma->lock);
 }
 
 static int __init cma_activate_area(struct cma *cma)
@@ -112,7 +113,7 @@ static int __init cma_activate_area(struct cma *cma)
 		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
 	} while (--i);
 
-	mutex_init(&cma->lock);
+	spin_lock_init(&cma->lock);
 	return 0;
 
 err:
@@ -261,11 +262,11 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 	nr_bits = cma_bitmap_pages_to_bits(cma, count);
 
 	for (;;) {
-		mutex_lock(&cma->lock);
+		spin_lock(&cma->lock);
 		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
 					bitmap_maxno, start, nr_bits, mask);
 		if (bitmapno >= bitmap_maxno) {
-			mutex_unlock(&cma->lock);
+			spin_unlock(&cma->lock);
 			break;
 		}
 		bitmap_set(cma->bitmap, bitmapno, nr_bits);
@@ -274,7 +275,7 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 		 * our exclusive use. If the migration fails we will take the
 		 * lock again and unmark it.
 		 */
-		mutex_unlock(&cma->lock);
+		spin_unlock(&cma->lock);
 
 		pfn = cma->base_pfn + (bitmapno << cma->order_per_bit);
 		mutex_lock(&cma_mutex);
--
1.7.9.5
Thread overview: 69+ messages
2014-06-12 3:21 [PATCH v2 00/10] CMA: generalize CMA reserved area management code Joonsoo Kim
2014-06-12 3:21 ` [PATCH v2 01/10] DMA, CMA: clean-up log message Joonsoo Kim
2014-06-12 4:41 ` Aneesh Kumar K.V
2014-06-12 5:53 ` Joonsoo Kim
2014-06-12 8:55 ` Michal Nazarewicz
2014-06-12 9:53 ` Michal Nazarewicz
2014-06-16 5:18 ` Joonsoo Kim
2014-06-12 5:18 ` Minchan Kim
2014-06-12 5:55 ` Joonsoo Kim
2014-06-12 8:15 ` Zhang Yanfei
2014-06-12 8:56 ` Michal Nazarewicz
2014-06-12 3:21 ` [PATCH v2 02/10] DMA, CMA: fix possible memory leak Joonsoo Kim
2014-06-12 4:43 ` Aneesh Kumar K.V
2014-06-12 5:25 ` Minchan Kim
2014-06-12 6:02 ` Joonsoo Kim
2014-06-12 8:19 ` Zhang Yanfei
2014-06-12 9:47 ` Michal Nazarewicz
2014-06-12 3:21 ` [PATCH v2 03/10] DMA, CMA: separate core cma management codes from DMA APIs Joonsoo Kim
2014-06-12 4:44 ` Aneesh Kumar K.V
2014-06-12 5:37 ` Minchan Kim
2014-06-16 5:24 ` Joonsoo Kim
2014-06-12 9:55 ` Michal Nazarewicz
2014-06-12 3:21 ` [PATCH v2 04/10] DMA, CMA: support alignment constraint on cma region Joonsoo Kim
2014-06-12 4:50 ` Aneesh Kumar K.V
2014-06-12 5:52 ` Minchan Kim
2014-06-12 6:07 ` Joonsoo Kim
2014-06-12 10:02 ` Michal Nazarewicz
2014-06-16 5:19 ` Joonsoo Kim
2014-06-12 3:21 ` [PATCH v2 05/10] DMA, CMA: support arbitrary bitmap granularity Joonsoo Kim
2014-06-12 6:06 ` Minchan Kim
2014-06-12 6:43 ` Joonsoo Kim
2014-06-12 6:42 ` Minchan Kim
2014-06-12 7:08 ` Minchan Kim
2014-06-12 7:25 ` Zhang Yanfei
2014-06-12 7:41 ` Joonsoo Kim
2014-06-12 8:28 ` Zhang Yanfei
2014-06-12 10:19 ` Michal Nazarewicz
2014-06-16 5:23 ` Joonsoo Kim
2014-06-14 10:09 ` Aneesh Kumar K.V
2014-06-12 3:21 ` [PATCH v2 06/10] CMA: generalize CMA reserved area management functionality Joonsoo Kim
2014-06-12 7:13 ` Minchan Kim
2014-06-12 7:42 ` Joonsoo Kim
2014-06-12 8:29 ` Zhang Yanfei
2014-06-14 10:06 ` Aneesh Kumar K.V
2014-06-14 10:08 ` Aneesh Kumar K.V
2014-06-14 10:16 ` Aneesh Kumar K.V
2014-06-16 5:27 ` Joonsoo Kim
2014-06-12 3:21 ` [PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework Joonsoo Kim
2014-06-14 8:53 ` Aneesh Kumar K.V
2014-06-16 5:34 ` Joonsoo Kim
2014-06-16 7:02 ` Aneesh Kumar K.V
2014-06-14 10:05 ` Aneesh Kumar K.V
2014-06-16 5:29 ` Joonsoo Kim
2014-06-12 3:21 ` [PATCH v2 08/10] mm, cma: clean-up cma allocation error path Joonsoo Kim
2014-06-12 7:16 ` Minchan Kim
2014-06-12 8:31 ` Zhang Yanfei
2014-06-12 11:34 ` Michal Nazarewicz
2014-06-14 7:18 ` Aneesh Kumar K.V
2014-06-12 3:21 ` [PATCH v2 09/10] mm, cma: move output param to the end of param list Joonsoo Kim
2014-06-12 7:19 ` Minchan Kim
2014-06-12 7:43 ` Joonsoo Kim
2014-06-12 11:38 ` Michal Nazarewicz
2014-06-14 7:20 ` Aneesh Kumar K.V
2014-06-12 3:21 ` Joonsoo Kim [this message]
2014-06-12 7:40 ` [PATCH v2 10/10] mm, cma: use spinlock instead of mutex Minchan Kim
2014-06-12 7:56 ` Joonsoo Kim
2014-06-14 7:25 ` [PATCH v2 00/10] CMA: generalize CMA reserved area management code Aneesh Kumar K.V
2014-06-16 5:32 ` Joonsoo Kim
2014-06-16 7:04 ` Aneesh Kumar K.V