* [PATCHv3 00/12] dmapool enhancements
@ 2023-01-03 19:15 Keith Busch
2023-01-03 19:15 ` [PATCHv3 01/12] dmapool: add alloc/free performance test Keith Busch
` (11 more replies)
0 siblings, 12 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Time spent in dma_pool alloc/free increases linearly with the number of
pages backing the pool. We can reduce this to constant time with minor
changes to how free pages are tracked.
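As a sketch of the idea (userspace C with hypothetical names, not the
kernel code): keep freed blocks on a single singly-linked stack,
threading the link through the free memory itself, so both alloc and
free are O(1) no matter how many pages back the pool:

	#include <stdio.h>

	struct block {
		struct block *next_free; /* link lives inside the free block */
	};

	static struct block *free_stack;

	static void block_free(void *vaddr)
	{
		struct block *b = vaddr;

		b->next_free = free_stack; /* constant-time push */
		free_stack = b;
	}

	static void *block_alloc(void)
	{
		struct block *b = free_stack;

		if (b) /* constant-time pop */
			free_stack = b->next_free;
		return b;
	}

	int main(void)
	{
		static struct block blocks[4]; /* stand-ins for pool blocks */
		int i;

		for (i = 0; i < 4; i++)
			block_free(&blocks[i]);
		for (i = 0; i < 4; i++)
			printf("allocated %p\n", block_alloc());
		return 0;
	}

Patch 11 applies this shape to dmapool, storing the block's dma handle
alongside the link.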
Changes since v2:
Added received reviews
Applied suggestions from Christoph (removed inlines, use preferred
conditional compiling style, minor changes in patch sequence, use
kzalloc)
Fixed printf formats caught by kernel test robot
Added one extra cleanup patch at the end
Keith Busch (8):
dmapool: add alloc/free performance test
dmapool: move debug code to own functions
dmapool: rearrange page alloc failure handling
dmapool: consolidate page initialization
dmapool: simplify freeing
dmapool: don't memset on free twice
dmapool: link blocks across pages
dmapool: create/destroy cleanup
Tony Battersby (4):
dmapool: remove checks for dev == NULL
dmapool: use sysfs_emit() instead of scnprintf()
dmapool: cleanup integer types
dmapool: speedup DMAPOOL_DEBUG with init_on_alloc
mm/Kconfig | 9 ++
mm/Makefile | 1 +
mm/dmapool.c | 371 ++++++++++++++++++++++------------------------
mm/dmapool_test.c | 147 ++++++++++++++++++
4 files changed, 331 insertions(+), 197 deletions(-)
create mode 100644 mm/dmapool_test.c
--
2.30.2
* [PATCHv3 01/12] dmapool: add alloc/free performance test
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 02/12] dmapool: remove checks for dev == NULL Keith Busch
` (10 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Provide a module that allocates and frees many blocks of various sizes
and reports how long it takes. This is intended to provide a consistent
way to measure how changes to the dma_pool_alloc/free routines affect
timing.
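With CONFIG_DMAPOOL_TEST=m the tests run at module load time (e.g.
"modprobe dmapool_test") and print one result line per pool
configuration to the kernel log; sample output is quoted in patch 11.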
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/Kconfig | 9 +++
mm/Makefile | 1 +
mm/dmapool_test.c | 147 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 157 insertions(+)
create mode 100644 mm/dmapool_test.c
diff --git a/mm/Kconfig b/mm/Kconfig
index ff7b209dec055..c1476384a6238 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1081,6 +1081,15 @@ comment "GUP_TEST needs to have DEBUG_FS enabled"
config GUP_GET_PXX_LOW_HIGH
bool
+config DMAPOOL_TEST
+ tristate "Enable a module to run time tests on dma_pool"
+ depends on HAS_DMA
+ help
+ Provides a test module that will allocate and free many blocks of
+ various sizes and report how long it takes. This is intended to
+ provide a consistent way to measure how changes to the
+ dma_pool_alloc/free routines affect performance.
+
config ARCH_HAS_PTE_SPECIAL
bool
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e293..3a08f5d7b1782 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -103,6 +103,7 @@ obj-$(CONFIG_MEMCG) += swap_cgroup.o
endif
obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o
obj-$(CONFIG_GUP_TEST) += gup_test.o
+obj-$(CONFIG_DMAPOOL_TEST) += dmapool_test.o
obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o
obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
diff --git a/mm/dmapool_test.c b/mm/dmapool_test.c
new file mode 100644
index 0000000000000..370fb9e209eff
--- /dev/null
+++ b/mm/dmapool_test.c
@@ -0,0 +1,147 @@
+#include <linux/device.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/ktime.h>
+#include <linux/module.h>
+
+#define NR_TESTS (100)
+
+struct dma_pool_pair {
+ dma_addr_t dma;
+ void *v;
+};
+
+struct dmapool_parms {
+ size_t size;
+ size_t align;
+ size_t boundary;
+};
+
+static const struct dmapool_parms pool_parms[] = {
+ { .size = 16, .align = 16, .boundary = 0 },
+ { .size = 64, .align = 64, .boundary = 0 },
+ { .size = 256, .align = 256, .boundary = 0 },
+ { .size = 1024, .align = 1024, .boundary = 0 },
+ { .size = 4096, .align = 4096, .boundary = 0 },
+ { .size = 68, .align = 32, .boundary = 4096 },
+};
+
+static struct dma_pool *pool;
+static struct device test_dev;
+static u64 dma_mask;
+
+static inline int nr_blocks(int size)
+{
+ return clamp_t(int, (PAGE_SIZE / size) * 512, 1024, 8192);
+}
+
+static int dmapool_test_alloc(struct dma_pool_pair *p, int blocks)
+{
+ int i;
+
+ for (i = 0; i < blocks; i++) {
+ p[i].v = dma_pool_alloc(pool, GFP_KERNEL,
+ &p[i].dma);
+ if (!p[i].v)
+ goto pool_fail;
+ }
+
+ for (i = 0; i < blocks; i++)
+ dma_pool_free(pool, p[i].v, p[i].dma);
+
+ return 0;
+
+pool_fail:
+ for (--i; i >= 0; i--)
+ dma_pool_free(pool, p[i].v, p[i].dma);
+ return -ENOMEM;
+}
+
+static int dmapool_test_block(const struct dmapool_parms *parms)
+{
+ int blocks = nr_blocks(parms->size);
+ ktime_t start_time, end_time;
+ struct dma_pool_pair *p;
+ int i, ret;
+
+ p = kcalloc(blocks, sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return -ENOMEM;
+
+ pool = dma_pool_create("test pool", &test_dev, parms->size,
+ parms->align, parms->boundary);
+ if (!pool) {
+ ret = -ENOMEM;
+ goto free_pairs;
+ }
+
+ start_time = ktime_get();
+ for (i = 0; i < NR_TESTS; i++) {
+ ret = dmapool_test_alloc(p, blocks);
+ if (ret)
+ goto free_pool;
+ if (need_resched())
+ cond_resched();
+ }
+ end_time = ktime_get();
+
+ printk("dmapool test: size:%-4zu align:%-4zu blocks:%-4d time:%llu\n",
+ parms->size, parms->align, blocks,
+ ktime_us_delta(end_time, start_time));
+
+free_pool:
+ dma_pool_destroy(pool);
+free_pairs:
+ kfree(p);
+ return ret;
+}
+
+static void dmapool_test_release(struct device *dev)
+{
+}
+
+static int dmapool_checks(void)
+{
+ int i, ret;
+
+ ret = dev_set_name(&test_dev, "dmapool-test");
+ if (ret)
+ return ret;
+
+ ret = device_register(&test_dev);
+ if (ret) {
+ printk("%s: register failed:%d\n", __func__, ret);
+ goto put_device;
+ }
+
+ test_dev.release = dmapool_test_release;
+ set_dma_ops(&test_dev, NULL);
+ test_dev.dma_mask = &dma_mask;
+ ret = dma_set_mask_and_coherent(&test_dev, DMA_BIT_MASK(64));
+ if (ret) {
+ printk("%s: mask failed:%d\n", __func__, ret);
+ goto del_device;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(pool_parms); i++) {
+ ret = dmapool_test_block(&pool_parms[i]);
+ if (ret)
+ break;
+ }
+
+del_device:
+ device_del(&test_dev);
+put_device:
+ put_device(&test_dev);
+ return ret;
+}
+
+static void dmapool_exit(void)
+{
+}
+
+module_init(dmapool_checks);
+module_exit(dmapool_exit);
+MODULE_LICENSE("GPL");
--
2.30.2
* [PATCHv3 02/12] dmapool: remove checks for dev == NULL
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
2023-01-03 19:15 ` [PATCHv3 01/12] dmapool: add alloc/free performance test Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 03/12] dmapool: use sysfs_emit() instead of scnprintf() Keith Busch
` (9 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Tony Battersby <tonyb@cybernetics.com>
dmapool originally tried to support pools without a device because
dma_alloc_coherent() supports allocations without a device. But nobody
ended up using dma pools without a device, and trying to do so will
result in an oops. So remove the checks for pool->dev == NULL since they
are unneeded bloat.
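The one check that stays is a NULL-dev test in dma_pool_create(), so a
caller passing no device fails fast at creation time instead of oopsing
later.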
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
[added check for null dev on create]
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 45 ++++++++++++++-------------------------------
1 file changed, 14 insertions(+), 31 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index a7eb5d0eb2da7..559207e1c3339 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -134,6 +134,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
size_t allocation;
bool empty = false;
+ if (!dev)
+ return NULL;
+
if (align == 0)
align = 1;
else if (align & (align - 1))
@@ -275,7 +278,7 @@ void dma_pool_destroy(struct dma_pool *pool)
mutex_lock(&pools_reg_lock);
mutex_lock(&pools_lock);
list_del(&pool->pools);
- if (pool->dev && list_empty(&pool->dev->dma_pools))
+ if (list_empty(&pool->dev->dma_pools))
empty = true;
mutex_unlock(&pools_lock);
if (empty)
@@ -284,12 +287,8 @@ void dma_pool_destroy(struct dma_pool *pool)
list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
if (is_page_busy(page)) {
- if (pool->dev)
- dev_err(pool->dev, "%s %s, %p busy\n", __func__,
- pool->name, page->vaddr);
- else
- pr_err("%s %s, %p busy\n", __func__,
- pool->name, page->vaddr);
+ dev_err(pool->dev, "%s %s, %p busy\n", __func__,
+ pool->name, page->vaddr);
/* leak the still-in-use consistent memory */
list_del(&page->page_list);
kfree(page);
@@ -351,12 +350,8 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
for (i = sizeof(page->offset); i < pool->size; i++) {
if (data[i] == POOL_POISON_FREED)
continue;
- if (pool->dev)
- dev_err(pool->dev, "%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
- else
- pr_err("%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
+ dev_err(pool->dev, "%s %s, %p (corrupted)\n",
+ __func__, pool->name, retval);
/*
* Dump the first 4 bytes even if they are not
@@ -411,12 +406,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
page = pool_find_page(pool, dma);
if (!page) {
spin_unlock_irqrestore(&pool->lock, flags);
- if (pool->dev)
- dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
- __func__, pool->name, vaddr, &dma);
- else
- pr_err("%s %s, %p/%pad (bad dma)\n",
- __func__, pool->name, vaddr, &dma);
+ dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
+ __func__, pool->name, vaddr, &dma);
return;
}
@@ -426,12 +417,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
#ifdef DMAPOOL_DEBUG
if ((dma - page->dma) != offset) {
spin_unlock_irqrestore(&pool->lock, flags);
- if (pool->dev)
- dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
- __func__, pool->name, vaddr, &dma);
- else
- pr_err("%s %s, %p (bad vaddr)/%pad\n",
- __func__, pool->name, vaddr, &dma);
+ dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+ __func__, pool->name, vaddr, &dma);
return;
}
{
@@ -442,12 +429,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
continue;
}
spin_unlock_irqrestore(&pool->lock, flags);
- if (pool->dev)
- dev_err(pool->dev, "%s %s, dma %pad already free\n",
- __func__, pool->name, &dma);
- else
- pr_err("%s %s, dma %pad already free\n",
- __func__, pool->name, &dma);
+ dev_err(pool->dev, "%s %s, dma %pad already free\n",
+ __func__, pool->name, &dma);
return;
}
}
--
2.30.2
* [PATCHv3 03/12] dmapool: use sysfs_emit() instead of scnprintf()
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
2023-01-03 19:15 ` [PATCHv3 01/12] dmapool: add alloc/free performance test Keith Busch
2023-01-03 19:15 ` [PATCHv3 02/12] dmapool: remove checks for dev == NULL Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 04/12] dmapool: cleanup integer types Keith Busch
` (8 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Tony Battersby <tonyb@cybernetics.com>
Use sysfs_emit instead of scnprintf, snprintf or sprintf.
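Unlike the open-coded scnprintf() bookkeeping, sysfs_emit() and
sysfs_emit_at() know a sysfs show() buffer is exactly PAGE_SIZE and
warn on misuse, so the hand-rolled size/next tracking can go away.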
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 23 +++++++----------------
1 file changed, 7 insertions(+), 16 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 559207e1c3339..20616b760bb9c 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -64,18 +64,11 @@ static DEFINE_MUTEX(pools_reg_lock);
static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
{
- unsigned temp;
- unsigned size;
- char *next;
+ int size;
struct dma_page *page;
struct dma_pool *pool;
- next = buf;
- size = PAGE_SIZE;
-
- temp = scnprintf(next, size, "poolinfo - 0.1\n");
- size -= temp;
- next += temp;
+ size = sysfs_emit(buf, "poolinfo - 0.1\n");
mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
@@ -90,16 +83,14 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
spin_unlock_irq(&pool->lock);
/* per-pool info, no real statistics yet */
- temp = scnprintf(next, size, "%-16s %4u %4zu %4zu %2u\n",
- pool->name, blocks,
- pages * (pool->allocation / pool->size),
- pool->size, pages);
- size -= temp;
- next += temp;
+ size += sysfs_emit_at(buf, size, "%-16s %4u %4zu %4zu %2u\n",
+ pool->name, blocks,
+ pages * (pool->allocation / pool->size),
+ pool->size, pages);
}
mutex_unlock(&pools_lock);
- return PAGE_SIZE - size;
+ return size;
}
static DEVICE_ATTR_RO(pools);
--
2.30.2
* [PATCHv3 04/12] dmapool: cleanup integer types
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (2 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 03/12] dmapool: use sysfs_emit() instead of scnprintf() Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc Keith Busch
` (7 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Tony Battersby <tonyb@cybernetics.com>
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 20616b760bb9c..ee993bb59fc27 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,10 +43,10 @@
struct dma_pool { /* the pool */
struct list_head page_list;
spinlock_t lock;
- size_t size;
struct device *dev;
- size_t allocation;
- size_t boundary;
+ unsigned int size;
+ unsigned int allocation;
+ unsigned int boundary;
char name[32];
struct list_head pools;
};
@@ -73,7 +73,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
unsigned pages = 0;
- unsigned blocks = 0;
+ size_t blocks = 0;
spin_lock_irq(&pool->lock);
list_for_each_entry(page, &pool->page_list, page_list) {
@@ -83,9 +83,10 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
spin_unlock_irq(&pool->lock);
/* per-pool info, no real statistics yet */
- size += sysfs_emit_at(buf, size, "%-16s %4u %4zu %4zu %2u\n",
+ size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2u\n",
pool->name, blocks,
- pages * (pool->allocation / pool->size),
+ (size_t) pages *
+ (pool->allocation / pool->size),
pool->size, pages);
}
mutex_unlock(&pools_lock);
@@ -133,7 +134,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
else if (align & (align - 1))
return NULL;
- if (size == 0)
+ if (size == 0 || size > INT_MAX)
return NULL;
else if (size < 4)
size = 4;
@@ -146,6 +147,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
else if ((boundary < size) || (boundary & (boundary - 1)))
return NULL;
+ boundary = min(boundary, allocation);
+
retval = kmalloc(sizeof(*retval), GFP_KERNEL);
if (!retval)
return retval;
@@ -306,7 +309,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
{
unsigned long flags;
struct dma_page *page;
- size_t offset;
+ unsigned int offset;
void *retval;
might_alloc(mem_flags);
--
2.30.2
* [PATCHv3 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (3 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 04/12] dmapool: cleanup integer types Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 06/12] dmapool: move debug code to own functions Keith Busch
` (6 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Tony Battersby <tonyb@cybernetics.com>
Avoid double-memset of the same allocated memory in dma_pool_alloc()
when both DMAPOOL_DEBUG is enabled and init_on_alloc=1.
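want_init_on_alloc() is true when either __GFP_ZERO was passed or the
init_on_alloc mitigation is enabled, so the POOL_POISON_ALLOCATED
memset is now skipped whenever the block is about to be zeroed anyway.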
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index ee993bb59fc27..eaed3ffb42aa8 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -356,7 +356,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
break;
}
}
- if (!(mem_flags & __GFP_ZERO))
+ if (!want_init_on_alloc(mem_flags))
memset(retval, POOL_POISON_ALLOCATED, pool->size);
#endif
spin_unlock_irqrestore(&pool->lock, flags);
--
2.30.2
* [PATCHv3 06/12] dmapool: move debug code to own functions
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (4 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-08 17:06 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 07/12] dmapool: rearrange page alloc failure handling Keith Busch
` (5 subsequent siblings)
11 siblings, 1 reply; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Clean up the normal path by moving the debug code outside it.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/dmapool.c | 113 +++++++++++++++++++++++++++++++--------------------
1 file changed, 68 insertions(+), 45 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index eaed3ffb42aa8..7bd8990e1913d 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -294,6 +294,38 @@ void dma_pool_destroy(struct dma_pool *pool)
}
EXPORT_SYMBOL(dma_pool_destroy);
+#ifdef DMAPOOL_DEBUG
+static void pool_check_block(struct dma_pool *pool, void *retval,
+ unsigned int offset, gfp_t mem_flags)
+{
+ int i;
+ u8 *data = retval;
+ /* page->offset is stored in first 4 bytes */
+ for (i = sizeof(offset); i < pool->size; i++) {
+ if (data[i] == POOL_POISON_FREED)
+ continue;
+ dev_err(pool->dev, "%s %s, %p (corrupted)\n",
+ __func__, pool->name, retval);
+
+ /*
+ * Dump the first 4 bytes even if they are not
+ * POOL_POISON_FREED
+ */
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1,
+ data, pool->size, 1);
+ break;
+ }
+ if (!want_init_on_alloc(mem_flags))
+ memset(retval, POOL_POISON_ALLOCATED, pool->size);
+}
+#else
+static void pool_check_block(struct dma_pool *pool, void *retval,
+ unsigned int offset, gfp_t mem_flags)
+
+{
+}
+#endif
+
/**
* dma_pool_alloc - get a block of consistent memory
* @pool: dma pool that will produce the block
@@ -336,29 +368,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
page->offset = *(int *)(page->vaddr + offset);
retval = offset + page->vaddr;
*handle = offset + page->dma;
-#ifdef DMAPOOL_DEBUG
- {
- int i;
- u8 *data = retval;
- /* page->offset is stored in first 4 bytes */
- for (i = sizeof(page->offset); i < pool->size; i++) {
- if (data[i] == POOL_POISON_FREED)
- continue;
- dev_err(pool->dev, "%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
-
- /*
- * Dump the first 4 bytes even if they are not
- * POOL_POISON_FREED
- */
- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1,
- data, pool->size, 1);
- break;
- }
- }
- if (!want_init_on_alloc(mem_flags))
- memset(retval, POOL_POISON_ALLOCATED, pool->size);
-#endif
+ pool_check_block(pool, retval, offset, mem_flags);
spin_unlock_irqrestore(&pool->lock, flags);
if (want_init_on_alloc(mem_flags))
@@ -381,6 +391,39 @@ static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
return NULL;
}
+#ifdef DMAPOOL_DEBUG
+static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
+ void *vaddr, dma_addr_t dma)
+{
+ unsigned int offset = vaddr - page->vaddr;
+ unsigned int chain = page->offset;
+
+ if ((dma - page->dma) != offset) {
+ dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+ __func__, pool->name, vaddr, &dma);
+ return true;
+ }
+
+ while (chain < pool->allocation) {
+ if (chain != offset) {
+ chain = *(int *)(page->vaddr + chain);
+ continue;
+ }
+ dev_err(pool->dev, "%s %s, dma %pad already free\n",
+ __func__, pool->name, &dma);
+ return true;
+ }
+ memset(vaddr, POOL_POISON_FREED, pool->size);
+ return false;
+}
+#else
+static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
+ void *vaddr, dma_addr_t dma)
+{
+ return false;
+}
+#endif
+
/**
* dma_pool_free - put block back into dma pool
* @pool: the dma pool holding the block
@@ -394,7 +437,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
struct dma_page *page;
unsigned long flags;
- unsigned int offset;
spin_lock_irqsave(&pool->lock, flags);
page = pool_find_page(pool, dma);
@@ -405,35 +447,16 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
return;
}
- offset = vaddr - page->vaddr;
if (want_init_on_free())
memset(vaddr, 0, pool->size);
-#ifdef DMAPOOL_DEBUG
- if ((dma - page->dma) != offset) {
+ if (pool_page_err(pool, page, vaddr, dma)) {
spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
- __func__, pool->name, vaddr, &dma);
return;
}
- {
- unsigned int chain = page->offset;
- while (chain < pool->allocation) {
- if (chain != offset) {
- chain = *(int *)(page->vaddr + chain);
- continue;
- }
- spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, dma %pad already free\n",
- __func__, pool->name, &dma);
- return;
- }
- }
- memset(vaddr, POOL_POISON_FREED, pool->size);
-#endif
page->in_use--;
*(int *)vaddr = page->offset;
- page->offset = offset;
+ page->offset = vaddr - page->vaddr;
/*
* Resist a temptation to do
* if (!is_page_busy(page)) pool_free_page(pool, page);
--
2.30.2
* [PATCHv3 07/12] dmapool: rearrange page alloc failure handling
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (5 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 06/12] dmapool: move debug code to own functions Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 08/12] dmapool: consolidate page initialization Keith Busch
` (4 subsequent siblings)
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Handle the page allocation error in its own conditional branch so the
good path stays in the normal flow.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 7bd8990e1913d..0a443c8120f62 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -222,17 +222,17 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
return NULL;
page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
&page->dma, mem_flags);
- if (page->vaddr) {
-#ifdef DMAPOOL_DEBUG
- memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
-#endif
- pool_initialise_page(pool, page);
- page->in_use = 0;
- page->offset = 0;
- } else {
+ if (!page->vaddr) {
kfree(page);
- page = NULL;
+ return NULL;
}
+#ifdef DMAPOOL_DEBUG
+ memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
+#endif
+ pool_initialise_page(pool, page);
+ page->in_use = 0;
+ page->offset = 0;
+
return page;
}
--
2.30.2
* [PATCHv3 08/12] dmapool: consolidate page initialization
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (6 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 07/12] dmapool: rearrange page alloc failure handling Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-08 17:07 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 09/12] dmapool: simplify freeing Keith Busch
` (3 subsequent siblings)
11 siblings, 1 reply; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Various fields of the dma pool page are set in different places. Move
them all into one function.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/dmapool.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 0a443c8120f62..6862b4e763891 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -202,6 +202,11 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
unsigned int offset = 0;
unsigned int next_boundary = pool->boundary;
+#ifdef DMAPOOL_DEBUG
+ memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
+#endif
+ page->in_use = 0;
+ page->offset = 0;
do {
unsigned int next = offset + pool->size;
if (unlikely((next + pool->size) >= next_boundary)) {
@@ -226,12 +231,7 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
kfree(page);
return NULL;
}
-#ifdef DMAPOOL_DEBUG
- memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
-#endif
pool_initialise_page(pool, page);
- page->in_use = 0;
- page->offset = 0;
return page;
}
--
2.30.2
* [PATCHv3 09/12] dmapool: simplify freeing
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (7 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 08/12] dmapool: consolidate page initialization Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-08 17:08 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 10/12] dmapool: don't memset on free twice Keith Busch
` (2 subsequent siblings)
11 siblings, 1 reply; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
The actions for busy and not busy are mostly the same, so combine these
and remove the unnecessary function. Also, the pool is about to be freed
so there's no need to poison the page data since we only check for
poison on alloc, which can't be done on a freed pool.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/dmapool.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 6862b4e763891..4dab48e7e0d75 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * DMA Pool allocator
+* DMA Pool allocator
*
* Copyright 2001 David Brownell
* Copyright 2007 Intel Corporation
@@ -241,18 +241,6 @@ static inline bool is_page_busy(struct dma_page *page)
return page->in_use != 0;
}
-static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
-{
- dma_addr_t dma = page->dma;
-
-#ifdef DMAPOOL_DEBUG
- memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
-#endif
- dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
- list_del(&page->page_list);
- kfree(page);
-}
-
/**
* dma_pool_destroy - destroys a pool of dma memory blocks.
* @pool: dma pool that will be destroyed
@@ -280,14 +268,14 @@ void dma_pool_destroy(struct dma_pool *pool)
mutex_unlock(&pools_reg_lock);
list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
- if (is_page_busy(page)) {
+ if (!is_page_busy(page))
+ dma_free_coherent(pool->dev, pool->allocation,
+ page->vaddr, page->dma);
+ else
dev_err(pool->dev, "%s %s, %p busy\n", __func__,
pool->name, page->vaddr);
- /* leak the still-in-use consistent memory */
- list_del(&page->page_list);
- kfree(page);
- } else
- pool_free_page(pool, page);
+ list_del(&page->page_list);
+ kfree(page);
}
kfree(pool);
--
2.30.2
* [PATCHv3 10/12] dmapool: don't memset on free twice
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (8 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 09/12] dmapool: simplify freeing Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-03 19:15 ` [PATCHv3 11/12] dmapool: link blocks across pages Keith Busch
2023-01-03 19:15 ` [PATCHv3 12/12] dmapool: create/destroy cleanup Keith Busch
11 siblings, 0 replies; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
If debug is enabled, dmapool will poison the range, so no need to clear
it to 0 immediately before writing over it.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
mm/dmapool.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 4dab48e7e0d75..d886b46c4b289 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -408,6 +408,8 @@ static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
void *vaddr, dma_addr_t dma)
{
+ if (want_init_on_free())
+ memset(vaddr, 0, pool->size);
return false;
}
#endif
@@ -435,8 +437,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
return;
}
- if (want_init_on_free())
- memset(vaddr, 0, pool->size);
if (pool_page_err(pool, page, vaddr, dma)) {
spin_unlock_irqrestore(&pool->lock, flags);
return;
--
2.30.2
* [PATCHv3 11/12] dmapool: link blocks across pages
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (9 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 10/12] dmapool: don't memset on free twice Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-08 17:08 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 12/12] dmapool: create/destroy cleanup Keith Busch
11 siblings, 1 reply; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
The allocated dmapool pages are never freed for the lifetime of the
pool. There is no need for the two-level list+stack lookup for finding a
free block since nothing is ever removed from the list. Just use a
simple stack, reducing time complexity to constant.
The implementation inserts the stack linking elements and the dma handle
of the block within itself when freed. This means the smallest possible
dmapool block is increased to at most 16 bytes to accommodate these
fields, but there are no existing users requesting a dma pool smaller
than that anyway.
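For reference, the 16-byte figure follows from the new free-block
header (assuming 64-bit pointers and a 64-bit dma_addr_t, each field is
8 bytes):

	struct dma_block {
		struct dma_block *next_block;	/* free-stack link */
		dma_addr_t dma;			/* handle returned on reuse */
	};

dma_pool_create() rounds any smaller size request up to
sizeof(struct dma_block).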
Removing the list yields a significant performance improvement. Using
the kernel's micro-benchmarking self test:
Before:
# modprobe dmapool_test
dmapool test: size:16 blocks:8192 time:57282
dmapool test: size:64 blocks:8192 time:172562
dmapool test: size:256 blocks:8192 time:789247
dmapool test: size:1024 blocks:2048 time:371823
dmapool test: size:4096 blocks:1024 time:362237
After:
# modprobe dmapool_test
dmapool test: size:16 blocks:8192 time:24997
dmapool test: size:64 blocks:8192 time:26584
dmapool test: size:256 blocks:8192 time:33542
dmapool test: size:1024 blocks:2048 time:9022
dmapool test: size:4096 blocks:1024 time:6045
The module test allocates quite a few blocks that may not accurately
represent how these pools are used in real life. For a more macro-level
benchmark, running fio high-depth + high-batched on nvme, this patch
shows submission and completion latency reduced by ~100usec each, a 1%
IOPS improvement, and the time perf record attributes to
dma_pool_alloc/free cut in half.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/dmapool.c | 221 ++++++++++++++++++++++++---------------------------
1 file changed, 106 insertions(+), 115 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index d886b46c4b289..d23747a71bff2 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -15,7 +15,7 @@
* represented by the 'struct dma_pool' which keeps a doubly-linked list of
* allocated pages. Each page in the page_list is split into blocks of at
* least 'size' bytes. Free blocks are tracked in an unsorted singly-linked
- * list of free blocks within the page. Used blocks aren't tracked, but we
+ * list of free blocks across all pages. Used blocks aren't tracked, but we
* keep a count of how many are currently allocated from each page.
*/
@@ -40,13 +40,22 @@
#define DMAPOOL_DEBUG 1
#endif
+struct dma_block {
+ struct dma_block *next_block;
+ dma_addr_t dma;
+};
+
struct dma_pool { /* the pool */
struct list_head page_list;
spinlock_t lock;
struct device *dev;
+ struct dma_block *next_block;
unsigned int size;
unsigned int allocation;
unsigned int boundary;
+ size_t nr_blocks;
+ size_t nr_active;
+ size_t nr_pages;
char name[32];
struct list_head pools;
};
@@ -55,8 +64,6 @@ struct dma_page { /* cacheable header for 'allocation' bytes */
struct list_head page_list;
void *vaddr;
dma_addr_t dma;
- unsigned int in_use;
- unsigned int offset;
};
static DEFINE_MUTEX(pools_lock);
@@ -64,30 +71,18 @@ static DEFINE_MUTEX(pools_reg_lock);
static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
{
- int size;
- struct dma_page *page;
struct dma_pool *pool;
+ unsigned size;
size = sysfs_emit(buf, "poolinfo - 0.1\n");
mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
- unsigned pages = 0;
- size_t blocks = 0;
-
- spin_lock_irq(&pool->lock);
- list_for_each_entry(page, &pool->page_list, page_list) {
- pages++;
- blocks += page->in_use;
- }
- spin_unlock_irq(&pool->lock);
-
/* per-pool info, no real statistics yet */
- size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2u\n",
- pool->name, blocks,
- (size_t) pages *
- (pool->allocation / pool->size),
- pool->size, pages);
+ size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2zu\n",
+ pool->name, pool->nr_active,
+ pool->nr_blocks, pool->size,
+ pool->nr_pages);
}
mutex_unlock(&pools_lock);
@@ -96,6 +91,25 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
static DEVICE_ATTR_RO(pools);
+static struct dma_block *pool_block_pop(struct dma_pool *pool)
+{
+ struct dma_block *block = pool->next_block;
+
+ if (block) {
+ pool->next_block = block->next_block;
+ pool->nr_active++;
+ }
+ return block;
+}
+
+static void pool_block_push(struct dma_pool *pool, struct dma_block *block,
+ dma_addr_t dma)
+{
+ block->dma = dma;
+ block->next_block = pool->next_block;
+ pool->next_block = block;
+}
+
/**
* dma_pool_create - Creates a pool of consistent memory blocks, for dma.
* @name: name of pool, for diagnostics
@@ -136,8 +150,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
if (size == 0 || size > INT_MAX)
return NULL;
- else if (size < 4)
- size = 4;
+ if (size < sizeof(struct dma_block))
+ size = sizeof(struct dma_block);
size = ALIGN(size, align);
allocation = max_t(size_t, size, PAGE_SIZE);
@@ -149,7 +163,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
boundary = min(boundary, allocation);
- retval = kmalloc(sizeof(*retval), GFP_KERNEL);
+ retval = kzalloc(sizeof(*retval), GFP_KERNEL);
if (!retval)
return retval;
@@ -162,7 +176,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
retval->size = size;
retval->boundary = boundary;
retval->allocation = allocation;
-
INIT_LIST_HEAD(&retval->pools);
/*
@@ -199,23 +212,27 @@ EXPORT_SYMBOL(dma_pool_create);
static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
{
- unsigned int offset = 0;
- unsigned int next_boundary = pool->boundary;
+ unsigned int next_boundary = pool->boundary, offset = 0;
+ struct dma_block *block;
#ifdef DMAPOOL_DEBUG
memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
#endif
- page->in_use = 0;
- page->offset = 0;
- do {
- unsigned int next = offset + pool->size;
- if (unlikely((next + pool->size) >= next_boundary)) {
- next = next_boundary;
+ while (offset + pool->size <= pool->allocation) {
+ if (offset + pool->size > next_boundary) {
+ offset = next_boundary;
next_boundary += pool->boundary;
+ continue;
}
- *(int *)(page->vaddr + offset) = next;
- offset = next;
- } while (offset < pool->allocation);
+
+ block = page->vaddr + offset;
+ pool_block_push(pool, block, page->dma + offset);
+ offset += pool->size;
+ pool->nr_blocks++;
+ }
+
+ list_add(&page->page_list, &pool->page_list);
+ pool->nr_pages++;
}
static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
@@ -231,16 +248,10 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
kfree(page);
return NULL;
}
- pool_initialise_page(pool, page);
return page;
}
-static inline bool is_page_busy(struct dma_page *page)
-{
- return page->in_use != 0;
-}
-
/**
* dma_pool_destroy - destroys a pool of dma memory blocks.
* @pool: dma pool that will be destroyed
@@ -252,7 +263,7 @@ static inline bool is_page_busy(struct dma_page *page)
void dma_pool_destroy(struct dma_pool *pool)
{
struct dma_page *page, *tmp;
- bool empty = false;
+ bool empty = false, busy = false;
if (unlikely(!pool))
return;
@@ -267,13 +278,15 @@ void dma_pool_destroy(struct dma_pool *pool)
device_remove_file(pool->dev, &dev_attr_pools);
mutex_unlock(&pools_reg_lock);
+ if (pool->nr_active) {
+ dev_err(pool->dev, "%s %s busy\n", __func__, pool->name);
+ busy = true;
+ }
+
list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
- if (!is_page_busy(page))
+ if (!busy)
dma_free_coherent(pool->dev, pool->allocation,
page->vaddr, page->dma);
- else
- dev_err(pool->dev, "%s %s, %p busy\n", __func__,
- pool->name, page->vaddr);
list_del(&page->page_list);
kfree(page);
}
@@ -283,17 +296,17 @@ void dma_pool_destroy(struct dma_pool *pool)
EXPORT_SYMBOL(dma_pool_destroy);
#ifdef DMAPOOL_DEBUG
-static void pool_check_block(struct dma_pool *pool, void *retval,
- unsigned int offset, gfp_t mem_flags)
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+ gfp_t mem_flags)
{
+ u8 *data = (void *)block;
int i;
- u8 *data = retval;
- /* page->offset is stored in first 4 bytes */
- for (i = sizeof(offset); i < pool->size; i++) {
+
+ for (i = sizeof(struct dma_block); i < pool->size; i++) {
if (data[i] == POOL_POISON_FREED)
continue;
- dev_err(pool->dev, "%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
+ dev_err(pool->dev, "%s %s, %p (corrupted)\n", __func__,
+ pool->name, block);
/*
* Dump the first 4 bytes even if they are not
@@ -303,13 +316,13 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
data, pool->size, 1);
break;
}
+
if (!want_init_on_alloc(mem_flags))
- memset(retval, POOL_POISON_ALLOCATED, pool->size);
+ memset(block, POOL_POISON_ALLOCATED, pool->size);
}
#else
-static void pool_check_block(struct dma_pool *pool, void *retval,
- unsigned int offset, gfp_t mem_flags)
-
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+ gfp_t mem_flags)
{
}
#endif
@@ -327,45 +340,41 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
dma_addr_t *handle)
{
- unsigned long flags;
+ struct dma_block *block;
struct dma_page *page;
- unsigned int offset;
- void *retval;
+ unsigned long flags;
might_alloc(mem_flags);
spin_lock_irqsave(&pool->lock, flags);
- list_for_each_entry(page, &pool->page_list, page_list) {
- if (page->offset < pool->allocation)
- goto ready;
- }
-
- /* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
- spin_unlock_irqrestore(&pool->lock, flags);
-
- page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
- if (!page)
- return NULL;
+ block = pool_block_pop(pool);
+ if (!block) {
+ /*
+ * pool_alloc_page() might sleep, so temporarily drop
+ * &pool->lock
+ */
+ spin_unlock_irqrestore(&pool->lock, flags);
- spin_lock_irqsave(&pool->lock, flags);
+ page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
+ if (!page)
+ return NULL;
- list_add(&page->page_list, &pool->page_list);
- ready:
- page->in_use++;
- offset = page->offset;
- page->offset = *(int *)(page->vaddr + offset);
- retval = offset + page->vaddr;
- *handle = offset + page->dma;
- pool_check_block(pool, retval, offset, mem_flags);
+ spin_lock_irqsave(&pool->lock, flags);
+ pool_initialise_page(pool, page);
+ block = pool_block_pop(pool);
+ }
spin_unlock_irqrestore(&pool->lock, flags);
+ *handle = block->dma;
+ pool_check_block(pool, block, mem_flags);
if (want_init_on_alloc(mem_flags))
- memset(retval, 0, pool->size);
+ memset(block, 0, pool->size);
- return retval;
+ return block;
}
EXPORT_SYMBOL(dma_pool_alloc);
+#ifdef DMAPOOL_DEBUG
static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
{
struct dma_page *page;
@@ -379,34 +388,33 @@ static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
return NULL;
}
-#ifdef DMAPOOL_DEBUG
-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
- void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
- unsigned int offset = vaddr - page->vaddr;
- unsigned int chain = page->offset;
+ struct dma_block *block = pool->next_block;
+ struct dma_page *page;
- if ((dma - page->dma) != offset) {
- dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+ page = pool_find_page(pool, dma);
+ if (!page) {
+ dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
__func__, pool->name, vaddr, &dma);
return true;
}
- while (chain < pool->allocation) {
- if (chain != offset) {
- chain = *(int *)(page->vaddr + chain);
+ while (block) {
+ if (block != vaddr) {
+ block = block->next_block;
continue;
}
dev_err(pool->dev, "%s %s, dma %pad already free\n",
__func__, pool->name, &dma);
return true;
}
+
memset(vaddr, POOL_POISON_FREED, pool->size);
return false;
}
#else
-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
- void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
if (want_init_on_free())
memset(vaddr, 0, pool->size);
@@ -425,31 +433,14 @@ static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
*/
void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
- struct dma_page *page;
+ struct dma_block *block = vaddr;
unsigned long flags;
spin_lock_irqsave(&pool->lock, flags);
- page = pool_find_page(pool, dma);
- if (!page) {
- spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
- __func__, pool->name, vaddr, &dma);
- return;
- }
-
- if (pool_page_err(pool, page, vaddr, dma)) {
- spin_unlock_irqrestore(&pool->lock, flags);
- return;
+ if (!pool_block_err(pool, vaddr, dma)) {
+ pool_block_push(pool, block, dma);
+ pool->nr_active--;
}
-
- page->in_use--;
- *(int *)vaddr = page->offset;
- page->offset = vaddr - page->vaddr;
- /*
- * Resist a temptation to do
- * if (!is_page_busy(page)) pool_free_page(pool, page);
- * Better have a few empty pages hang around.
- */
spin_unlock_irqrestore(&pool->lock, flags);
}
EXPORT_SYMBOL(dma_pool_free);
--
2.30.2
* [PATCHv3 12/12] dmapool: create/destroy cleanup
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
` (10 preceding siblings ...)
2023-01-03 19:15 ` [PATCHv3 11/12] dmapool: link blocks across pages Keith Busch
@ 2023-01-03 19:15 ` Keith Busch
2023-01-08 17:09 ` Christoph Hellwig
11 siblings, 1 reply; 18+ messages in thread
From: Keith Busch @ 2023-01-03 19:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig
Cc: Tony Battersby, Kernel Team, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Set the 'empty' bool directly from the result of the function that
determines its value instead of adding indirection logic.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
mm/dmapool.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/dmapool.c b/mm/dmapool.c
index d23747a71bff2..db4de646a3a91 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -138,7 +138,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
{
struct dma_pool *retval;
size_t allocation;
- bool empty = false;
+ bool empty;
if (!dev)
return NULL;
@@ -188,8 +188,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
*/
mutex_lock(&pools_reg_lock);
mutex_lock(&pools_lock);
- if (list_empty(&dev->dma_pools))
- empty = true;
+ empty = list_empty(&dev->dma_pools);
list_add(&retval->pools, &dev->dma_pools);
mutex_unlock(&pools_lock);
if (empty) {
@@ -263,7 +262,7 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
void dma_pool_destroy(struct dma_pool *pool)
{
struct dma_page *page, *tmp;
- bool empty = false, busy = false;
+ bool empty, busy = false;
if (unlikely(!pool))
return;
@@ -271,8 +270,7 @@ void dma_pool_destroy(struct dma_pool *pool)
mutex_lock(&pools_reg_lock);
mutex_lock(&pools_lock);
list_del(&pool->pools);
- if (list_empty(&pool->dev->dma_pools))
- empty = true;
+ empty = list_empty(&pool->dev->dma_pools);
mutex_unlock(&pools_lock);
if (empty)
device_remove_file(pool->dev, &dev_attr_pools);
--
2.30.2
* Re: [PATCHv3 06/12] dmapool: move debug code to own functions
2023-01-03 19:15 ` [PATCHv3 06/12] dmapool: move debug code to own functions Keith Busch
@ 2023-01-08 17:06 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2023-01-08 17:06 UTC (permalink / raw)
To: Keith Busch
Cc: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig,
Tony Battersby, Kernel Team, Keith Busch
> +#ifdef DMAPOOL_DEBUG
I'd drop the weird tab indent carried over from the original code here.
Also, any reason not to use a single big ifdef block instead of
multiple ones?
Otherwise looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCHv3 08/12] dmapool: consolidate page initialization
2023-01-03 19:15 ` [PATCHv3 08/12] dmapool: consolidate page initialization Keith Busch
@ 2023-01-08 17:07 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2023-01-08 17:07 UTC (permalink / raw)
To: Keith Busch
Cc: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig,
Tony Battersby, Kernel Team, Keith Busch
On Tue, Jan 03, 2023 at 11:15:47AM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> Various fields of the dma pool are set in different places. Move it all
> to one function.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCHv3 09/12] dmapool: simplify freeing
2023-01-03 19:15 ` [PATCHv3 09/12] dmapool: simplify freeing Keith Busch
@ 2023-01-08 17:08 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2023-01-08 17:08 UTC (permalink / raw)
To: Keith Busch
Cc: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig,
Tony Battersby, Kernel Team, Keith Busch
> - * DMA Pool allocator
> +* DMA Pool allocator
This got corrupted somehow.
> + if (!is_page_busy(page))
> + dma_free_coherent(pool->dev, pool->allocation,
> + page->vaddr, page->dma);
> + else
> dev_err(pool->dev, "%s %s, %p busy\n", __func__,
> pool->name, page->vaddr);
> + list_del(&page->page_list);
> + kfree(page);
I'm still not sure what the point of leaking the page in case it is
busy vs letting KASAN and friends actually catch it, but the pure
rearrangement is an improvement over the previous state, so:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCHv3 11/12] dmapool: link blocks across pages
2023-01-03 19:15 ` [PATCHv3 11/12] dmapool: link blocks across pages Keith Busch
@ 2023-01-08 17:08 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2023-01-08 17:08 UTC (permalink / raw)
To: Keith Busch
Cc: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig,
Tony Battersby, Kernel Team, Keith Busch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCHv3 12/12] dmapool: create/destroy cleanup
2023-01-03 19:15 ` [PATCHv3 12/12] dmapool: create/destroy cleanup Keith Busch
@ 2023-01-08 17:09 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2023-01-08 17:09 UTC (permalink / raw)
To: Keith Busch
Cc: linux-mm, linux-kernel, Matthew Wilcox, Christoph Hellwig,
Tony Battersby, Kernel Team, Keith Busch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
Thread overview: 18+ messages
2023-01-03 19:15 [PATCHv3 00/12] dmapool enhancements Keith Busch
2023-01-03 19:15 ` [PATCHv3 01/12] dmapool: add alloc/free performance test Keith Busch
2023-01-03 19:15 ` [PATCHv3 02/12] dmapool: remove checks for dev == NULL Keith Busch
2023-01-03 19:15 ` [PATCHv3 03/12] dmapool: use sysfs_emit() instead of scnprintf() Keith Busch
2023-01-03 19:15 ` [PATCHv3 04/12] dmapool: cleanup integer types Keith Busch
2023-01-03 19:15 ` [PATCHv3 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc Keith Busch
2023-01-03 19:15 ` [PATCHv3 06/12] dmapool: move debug code to own functions Keith Busch
2023-01-08 17:06 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 07/12] dmapool: rearrange page alloc failure handling Keith Busch
2023-01-03 19:15 ` [PATCHv3 08/12] dmapool: consolidate page initialization Keith Busch
2023-01-08 17:07 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 09/12] dmapool: simplify freeing Keith Busch
2023-01-08 17:08 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 10/12] dmapool: don't memset on free twice Keith Busch
2023-01-03 19:15 ` [PATCHv3 11/12] dmapool: link blocks across pages Keith Busch
2023-01-08 17:08 ` Christoph Hellwig
2023-01-03 19:15 ` [PATCHv3 12/12] dmapool: create/destroy cleanup Keith Busch
2023-01-08 17:09 ` Christoph Hellwig