linux-mm.kvack.org archive mirror
* [PATCH v3 3/3] dmapool: Use xarray for vaddr-to-block lookup
@ 2024-11-22 21:11 Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 2/3] dmapool: Use pool_find_block() in pool_block_err() Brian Johannesmeyer
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Brian Johannesmeyer @ 2024-11-22 21:11 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig, Andrew Morton, linux-mm,
	linux-kernel, linux-hardening
  Cc: Brian Johannesmeyer, Raphael Isemann, Cristiano Giuffrida,
	Herbert Bos, Greg KH

Optimize `dma_pool_free()` by using an xarray to map a `vaddr` to its
corresponding `block`. This eliminates the need to iterate through the
entire `page_list` for vaddr-to-block translation.

Performance results from the `DMAPOOL_TEST` test show the improvement.
Before the patch:
```
dmapool test: size:16   align:16   blocks:8192 time:34432
dmapool test: size:64   align:64   blocks:8192 time:62262
dmapool test: size:256  align:256  blocks:8192 time:238137
dmapool test: size:1024 align:1024 blocks:2048 time:61386
dmapool test: size:4096 align:4096 blocks:1024 time:75342
dmapool test: size:68   align:32   blocks:8192 time:88243
```

After the patch:
```
dmapool test: size:16   align:16   blocks:8192 time:37954
dmapool test: size:64   align:64   blocks:8192 time:40036
dmapool test: size:256  align:256  blocks:8192 time:41942
dmapool test: size:1024 align:1024 blocks:2048 time:10964
dmapool test: size:4096 align:4096 blocks:1024 time:6101
dmapool test: size:68   align:32   blocks:8192 time:41307
```

This change reduces the runtime overhead for every case except the
smallest (16-byte) blocks, with the largest gains at bigger block sizes.

Co-developed-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Brian Johannesmeyer <bjohannesmeyer@gmail.com>
---
 mm/dmapool.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index f2b96be25412..1cc2cc87ab93 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -35,6 +35,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/wait.h>
+#include <linux/xarray.h>
 
 #ifdef CONFIG_SLUB_DEBUG_ON
 #define DMAPOOL_DEBUG 1
@@ -59,6 +60,7 @@ struct dma_pool {		/* the pool */
 	unsigned int boundary;
 	char name[32];
 	struct list_head pools;
+	struct xarray block_map;
 };
 
 struct dma_page {		/* cacheable header for 'allocation' bytes */
@@ -96,23 +98,7 @@ static DEVICE_ATTR_RO(pools);
 
 static struct dma_block *pool_find_block(struct dma_pool *pool, void *vaddr)
 {
-	struct dma_page *page;
-	size_t offset, index;
-
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (vaddr < page->vaddr)
-			continue;
-		offset = vaddr - page->vaddr;
-		if (offset >= pool->allocation)
-			continue;
-
-		index = offset / pool->size;
-		if (index >= page->blocks_per_page)
-			return NULL;
-
-		return &page->blocks[index];
-	}
-	return NULL;
+	return xa_load(&pool->block_map, (unsigned long)vaddr);
 }
 
 #ifdef DMAPOOL_DEBUG
@@ -273,6 +259,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 	retval->boundary = boundary;
 	retval->allocation = allocation;
 	INIT_LIST_HEAD(&retval->pools);
+	xa_init(&retval->block_map);
 
 	/*
 	 * pools_lock ensures that the ->dma_pools list does not get corrupted.
@@ -324,6 +311,12 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 		block->dma = page->dma + offset;
 		block->next_block = NULL;
 
+		if (xa_err(xa_store(&pool->block_map, (unsigned long)block->vaddr,
+				    block, GFP_KERNEL))) {
+			pr_err("dma_pool: Failed to store block in xarray\n");
+			return;
+		}
+
 		if (last)
 			last->next_block = block;
 		else
@@ -385,6 +378,7 @@ void dma_pool_destroy(struct dma_pool *pool)
 	if (unlikely(!pool))
 		return;
 
+	xa_destroy(&pool->block_map);
 	mutex_lock(&pools_reg_lock);
 	mutex_lock(&pools_lock);
 	list_del(&pool->pools);
-- 
2.34.1




* [PATCH v3 2/3] dmapool: Use pool_find_block() in pool_block_err()
  2024-11-22 21:11 [PATCH v3 3/3] dmapool: Use xarray for vaddr-to-block lookup Brian Johannesmeyer
@ 2024-11-22 21:11 ` Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 1/3] dmapool: Move pool metadata into non-DMA memory Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 0/3] dmapool: Mitigate dev-controllable mem. corruption Brian Johannesmeyer
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Johannesmeyer @ 2024-11-22 21:11 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig, Andrew Morton, linux-mm,
	linux-kernel, linux-hardening
  Cc: Brian Johannesmeyer, Raphael Isemann, Cristiano Giuffrida,
	Herbert Bos, Greg KH

In the previous patch, the `pool_find_block()` function was added to
translate a virtual address into the corresponding `struct dma_block`. The
existing `pool_find_page()` function performs a similar role by translating
a DMA address into the `struct dma_page` containing it.

To reduce redundant code and improve consistency, remove the
`pool_find_page()` function and update `pool_block_err()` to use
`pool_find_block()` instead. Doing so eliminates duplicate functionality
and consolidates the block lookup process.

Co-developed-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Brian Johannesmeyer <bjohannesmeyer@gmail.com>
---
 mm/dmapool.c | 38 ++++++++++++--------------------------
 1 file changed, 12 insertions(+), 26 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 3790ca4a631d..f2b96be25412 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -141,39 +141,25 @@ static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
 		memset(block->vaddr, POOL_POISON_ALLOCATED, pool->size);
 }
 
-static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
-{
-	struct dma_page *page;
-
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (dma < page->dma)
-			continue;
-		if ((dma - page->dma) < pool->allocation)
-			return page;
-	}
-	return NULL;
-}
-
 static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
-	struct dma_block *block = pool->next_block;
-	struct dma_page *page;
+	struct dma_block *block = pool_find_block(pool, vaddr);
 
-	page = pool_find_page(pool, dma);
-	if (!page) {
-		dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
-			__func__, pool->name, vaddr, &dma);
+	if (!block) {
+		dev_err(pool->dev, "%s %s, invalid block %p\n",
+			__func__, pool->name, vaddr);
 		return true;
 	}
 
-	while (block) {
-		if (block->vaddr != vaddr) {
-			block = block->next_block;
-			continue;
+	struct dma_block *iter = pool->next_block;
+
+	while (iter) {
+		if (iter == block) {
+			dev_err(pool->dev, "%s %s, dma %pad already free\n",
+				__func__, pool->name, &dma);
+			return true;
 		}
-		dev_err(pool->dev, "%s %s, dma %pad already free\n",
-			__func__, pool->name, &dma);
-		return true;
+		iter = iter->next_block;
 	}
 
 	memset(vaddr, POOL_POISON_FREED, pool->size);
-- 
2.34.1




* [PATCH v3 1/3] dmapool: Move pool metadata into non-DMA memory
  2024-11-22 21:11 [PATCH v3 3/3] dmapool: Use xarray for vaddr-to-block lookup Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 2/3] dmapool: Use pool_find_block() in pool_block_err() Brian Johannesmeyer
@ 2024-11-22 21:11 ` Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 0/3] dmapool: Mitigate dev-controllable mem. corruption Brian Johannesmeyer
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Johannesmeyer @ 2024-11-22 21:11 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig, Andrew Morton, linux-mm,
	linux-kernel, linux-hardening
  Cc: Brian Johannesmeyer, Raphael Isemann, Cristiano Giuffrida,
	Herbert Bos, Greg KH

If a `struct dma_block` object resides in DMA memory, a malicious
peripheral device can corrupt its metadata --- specifically, its
`next_block` pointer, which links blocks in a DMA pool. By corrupting these
pointers, an attacker can manipulate `dma_pool_alloc()` into returning
attacker-controlled pointers, which can lead to kernel memory corruption
in any driver that calls the allocator.

To prevent this, move the `struct dma_block` metadata into non-DMA memory,
ensuring that devices cannot tamper with the internal pointers of the DMA
pool allocator. Specifically:

- Add a `vaddr` field to `struct dma_block` to point to the actual
  DMA-accessible block.
- Maintain an array of `struct dma_block` objects in `struct dma_page` to
  track the metadata of each block within an allocated page.

This change secures the DMA pool allocator by keeping its metadata in
kernel memory, inaccessible to peripheral devices, thereby preventing
potential attacks that could corrupt kernel memory through DMA operations.

**Performance Impact**

Unfortunately, performance results from the `DMAPOOL_TEST` test show this
negatively affects performance. Before the patch:
```
dmapool test: size:16   align:16   blocks:8192 time:11860
dmapool test: size:64   align:64   blocks:8192 time:11951
dmapool test: size:256  align:256  blocks:8192 time:12287
dmapool test: size:1024 align:1024 blocks:2048 time:3134
dmapool test: size:4096 align:4096 blocks:1024 time:1686
dmapool test: size:68   align:32   blocks:8192 time:12050
```

After the patch:
```
dmapool test: size:16   align:16   blocks:8192 time:34432
dmapool test: size:64   align:64   blocks:8192 time:62262
dmapool test: size:256  align:256  blocks:8192 time:238137
dmapool test: size:1024 align:1024 blocks:2048 time:61386
dmapool test: size:4096 align:4096 blocks:1024 time:75342
dmapool test: size:68   align:32   blocks:8192 time:88243
```

While the performance impact is significant, this patch provides protection
against malicious devices tampering with DMA pool metadata. A subsequent
patch in this series introduces an optimization to mitigate the runtime
overhead.

Co-developed-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Raphael Isemann <teemperor@gmail.com>
Signed-off-by: Brian Johannesmeyer <bjohannesmeyer@gmail.com>
---
 mm/dmapool.c | 62 +++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 52 insertions(+), 10 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index f0bfc6c490f4..3790ca4a631d 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,6 +43,7 @@
 struct dma_block {
 	struct dma_block *next_block;
 	dma_addr_t dma;
+	void *vaddr;
 };
 
 struct dma_pool {		/* the pool */
@@ -64,6 +65,8 @@ struct dma_page {		/* cacheable header for 'allocation' bytes */
 	struct list_head page_list;
 	void *vaddr;
 	dma_addr_t dma;
+	struct dma_block *blocks;
+	size_t blocks_per_page;
 };
 
 static DEFINE_MUTEX(pools_lock);
@@ -91,14 +94,35 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 
 static DEVICE_ATTR_RO(pools);
 
+static struct dma_block *pool_find_block(struct dma_pool *pool, void *vaddr)
+{
+	struct dma_page *page;
+	size_t offset, index;
+
+	list_for_each_entry(page, &pool->page_list, page_list) {
+		if (vaddr < page->vaddr)
+			continue;
+		offset = vaddr - page->vaddr;
+		if (offset >= pool->allocation)
+			continue;
+
+		index = offset / pool->size;
+		if (index >= page->blocks_per_page)
+			return NULL;
+
+		return &page->blocks[index];
+	}
+	return NULL;
+}
+
 #ifdef DMAPOOL_DEBUG
 static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
 			     gfp_t mem_flags)
 {
-	u8 *data = (void *)block;
+	u8 *data = (void *)block->vaddr;
 	int i;
 
-	for (i = sizeof(struct dma_block); i < pool->size; i++) {
+	for (i = 0; i < pool->size; i++) {
 		if (data[i] == POOL_POISON_FREED)
 			continue;
 		dev_err(pool->dev, "%s %s, %p (corrupted)\n", __func__,
@@ -114,7 +138,7 @@ static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
 	}
 
 	if (!want_init_on_alloc(mem_flags))
-		memset(block, POOL_POISON_ALLOCATED, pool->size);
+		memset(block->vaddr, POOL_POISON_ALLOCATED, pool->size);
 }
 
 static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
@@ -143,7 +167,7 @@ static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 	}
 
 	while (block) {
-		if (block != vaddr) {
+		if (block->vaddr != vaddr) {
 			block = block->next_block;
 			continue;
 		}
@@ -238,8 +262,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 
 	if (size == 0 || size > INT_MAX)
 		return NULL;
-	if (size < sizeof(struct dma_block))
-		size = sizeof(struct dma_block);
 
 	size = ALIGN(size, align);
 	allocation = max_t(size_t, size, PAGE_SIZE);
@@ -301,6 +323,7 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int next_boundary = pool->boundary, offset = 0;
 	struct dma_block *block, *first = NULL, *last = NULL;
+	size_t i = 0;
 
 	pool_init_page(pool, page);
 	while (offset + pool->size <= pool->allocation) {
@@ -310,7 +333,8 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 			continue;
 		}
 
-		block = page->vaddr + offset;
+		block = &page->blocks[i];
+		block->vaddr = page->vaddr + offset;
 		block->dma = page->dma + offset;
 		block->next_block = NULL;
 
@@ -322,6 +346,7 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 
 		offset += pool->size;
 		pool->nr_blocks++;
+		i++;
 	}
 
 	last->next_block = pool->next_block;
@@ -339,9 +364,18 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 	if (!page)
 		return NULL;
 
+	page->blocks_per_page = pool->allocation / pool->size;
+	page->blocks = kmalloc_array(page->blocks_per_page,
+				     sizeof(struct dma_block), GFP_KERNEL);
+	if (!page->blocks) {
+		kfree(page);
+		return NULL;
+	}
+
 	page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
 					 &page->dma, mem_flags);
 	if (!page->vaddr) {
+		kfree(page->blocks);
 		kfree(page);
 		return NULL;
 	}
@@ -383,6 +417,7 @@ void dma_pool_destroy(struct dma_pool *pool)
 		if (!busy)
 			dma_free_coherent(pool->dev, pool->allocation,
 					  page->vaddr, page->dma);
+		kfree(page->blocks);
 		list_del(&page->page_list);
 		kfree(page);
 	}
@@ -432,9 +467,9 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	*handle = block->dma;
 	pool_check_block(pool, block, mem_flags);
 	if (want_init_on_alloc(mem_flags))
-		memset(block, 0, pool->size);
+		memset(block->vaddr, 0, pool->size);
 
-	return block;
+	return block->vaddr;
 }
 EXPORT_SYMBOL(dma_pool_alloc);
 
@@ -449,9 +484,16 @@ EXPORT_SYMBOL(dma_pool_alloc);
  */
 void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
-	struct dma_block *block = vaddr;
+	struct dma_block *block;
 	unsigned long flags;
 
+	block = pool_find_block(pool, vaddr);
+	if (!block) {
+		dev_err(pool->dev, "%s %s, invalid vaddr %p\n",
+			__func__, pool->name, vaddr);
+		return;
+	}
+
 	spin_lock_irqsave(&pool->lock, flags);
 	if (!pool_block_err(pool, vaddr, dma)) {
 		pool_block_push(pool, block, dma);
-- 
2.34.1




* [PATCH v3 0/3] dmapool: Mitigate dev-controllable mem. corruption
  2024-11-22 21:11 [PATCH v3 3/3] dmapool: Use xarray for vaddr-to-block lookup Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 2/3] dmapool: Use pool_find_block() in pool_block_err() Brian Johannesmeyer
  2024-11-22 21:11 ` [PATCH v3 1/3] dmapool: Move pool metadata into non-DMA memory Brian Johannesmeyer
@ 2024-11-22 21:11 ` Brian Johannesmeyer
  2 siblings, 0 replies; 4+ messages in thread
From: Brian Johannesmeyer @ 2024-11-22 21:11 UTC (permalink / raw)
  To: Keith Busch, Christoph Hellwig, Andrew Morton, linux-mm,
	linux-kernel, linux-hardening
  Cc: Brian Johannesmeyer, Raphael Isemann, Cristiano Giuffrida,
	Herbert Bos, Greg KH

We discovered a security-related issue in the DMA pool allocator.

V1 of our RFC was submitted to the Linux kernel security team. They
recommended submitting it to the relevant subsystem maintainers and the
hardening mailing list instead, as they did not consider this an explicit
security issue. Their rationale was that Linux implicitly assumes hardware
can be trusted.

**Threat Model**: While Linux drivers typically trust their hardware, there
may be specific drivers that do not operate under this assumption. Hence,
this threat model assumes a malicious peripheral device capable of
corrupting DMA data to exploit the kernel. In this scenario, the device
manipulates kernel-initialized data (similar to the attack described in the
Thunderclap paper [0]) to achieve arbitrary kernel memory corruption. 

**DMA pool background**. A DMA pool aims to reduce the overhead of DMA
allocations by creating a large DMA buffer --- the "pool" --- from which
smaller buffers are allocated as needed. Fundamentally, a DMA pool
functions like a heap: it is a structure composed of linked memory
"blocks", which, in this context, are DMA buffers. When a driver employs a
DMA pool, it grants the device access not only to these blocks but also to
the pointers linking them.
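
For reference, a simplified sketch of the current (pre-series) layout: the
free-list header sits at the start of each block, i.e., inside
device-visible DMA memory (names follow mm/dmapool.c, trimmed for brevity):
```
struct dma_block {                    /* lives at the start of a free block, */
	struct dma_block *next_block; /* so the device can overwrite it      */
	dma_addr_t dma;
};

/* Allocation pops the head of this device-writable free list: */
static struct dma_block *pool_block_pop(struct dma_pool *pool)
{
	struct dma_block *block = pool->next_block;

	if (block)
		pool->next_block = block->next_block;
	return block;
}
```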

**Vulnerability**. Similar to traditional heap corruption vulnerabilities
--- where a malicious program corrupts heap metadata to, e.g., hijack
control flow --- a malicious device may corrupt DMA pool metadata. This
corruption can trivially lead to arbitrary kernel memory corruption from
any driver that uses the pool. Because the DMA pool API is used
extensively, the issue is not confined to a single driver: every user of
the API is potentially vulnerable. An exploit proceeds with the following
steps:

1. The DMA `pool` initializes its list of blocks, then points to the first
block.
2. The malicious device overwrites the first 8 bytes of the first block ---
which contain its `next_block` pointer --- to an arbitrary kernel address,
`kernel_addr`.
3. The driver makes its first call to `dma_pool_alloc()`, after which the
pool should point to the second block. However, it instead points to
`kernel_addr`.
4. The driver again calls `dma_pool_alloc()`, which incorrectly returns
`kernel_addr`. Therefore, anytime the driver writes to this "block", it may
corrupt sensitive kernel data.

I have a PDF document that illustrates how these steps work. Please let me
know if you would like me to share it with you.
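
For illustration, a hypothetical driver-side sketch of steps 3 and 4,
assuming the device has already performed the overwrite in step 2 (only
`dma_pool_alloc()` is real; the surrounding function is made up):
```
static void driver_normal_operation(struct dma_pool *pool, size_t size)
{
	dma_addr_t handle;
	void *buf;

	/* Step 3: returns the first block; the pool's free-list head now
	 * points at the attacker-supplied kernel_addr, not the second block. */
	buf = dma_pool_alloc(pool, GFP_KERNEL, &handle);

	/* Step 4: returns kernel_addr instead of a real block. */
	buf = dma_pool_alloc(pool, GFP_KERNEL, &handle);

	/* An ordinary driver write to its "DMA buffer" now corrupts
	 * arbitrary kernel memory. */
	memset(buf, 0, size);
}
```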

**Proposed mitigation**. To mitigate the corruption of DMA pool metadata
(i.e., the pointers linking the blocks), the metadata should be moved into
non-DMA memory, ensuring it cannot be altered by a device. I have included
a patch series that implements this change. I have tested the patches with
the `DMAPOOL_TEST` test and my own basic unit tests that ensure the DMA
pool allocator is not vulnerable.
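
In short (mirroring patch 1 of this series), the block headers move into
kmalloc'ed kernel memory, and only a `vaddr` pointer refers into the
device-visible DMA page:
```
struct dma_block {
	struct dma_block *next_block;	/* no longer reachable by the device */
	dma_addr_t dma;
	void *vaddr;			/* the actual DMA-accessible buffer */
};

struct dma_page {
	struct list_head page_list;
	void *vaddr;			/* the dma_alloc_coherent() buffer */
	dma_addr_t dma;
	struct dma_block *blocks;	/* out-of-band per-block metadata */
	size_t blocks_per_page;
};
```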

**Performance**. I evaluated the patch set's performance by running the
`DMAPOOL_TEST` test with/without the patches applied. Here is its output
*without* the patches applied:
```
dmapool test: size:16   align:16   blocks:8192 time:11860
dmapool test: size:64   align:64   blocks:8192 time:11951
dmapool test: size:256  align:256  blocks:8192 time:12287
dmapool test: size:1024 align:1024 blocks:2048 time:3134
dmapool test: size:4096 align:4096 blocks:1024 time:1686
dmapool test: size:68   align:32   blocks:8192 time:12050
```

And here is its output *with* the patches applied:
```
dmapool test: size:16   align:16   blocks:8192 time:37954
dmapool test: size:64   align:64   blocks:8192 time:40036
dmapool test: size:256  align:256  blocks:8192 time:41942
dmapool test: size:1024 align:1024 blocks:2048 time:10964
dmapool test: size:4096 align:4096 blocks:1024 time:6101
dmapool test: size:68   align:32   blocks:8192 time:41307
```

The patch set results in a 2.2x--2.6x runtime overhead, as demonstrated in
the performance results. AFAICT, most of this overhead originates from the
change in dma_pool_free()'s vaddr-to-block translation. Previously, the
translation was a simple typecast, but with the patches applied, it now
requires a lookup in an xarray. As Keith noted [1], achieving baseline
performance would likely require changing the API.
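
To make the comparison concrete, a rough sketch of the three translation
strategies (the wrapper name is made up; the bodies mirror the diffs
above):
```
static struct dma_block *vaddr_to_block(struct dma_pool *pool, void *vaddr)
{
#if 0	/* pre-series: a plain cast -- O(1), but the header is device-writable */
	return (struct dma_block *)vaddr;
#elif 0	/* patch 1: linear walk over pool->page_list via pool_find_block()     */
	return pool_find_block(pool, vaddr);
#else	/* patch 3: one xarray lookup keyed by the block's kernel vaddr        */
	return xa_load(&pool->block_map, (unsigned long)vaddr);
#endif
}
```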

**Changes**
- V2 -> V3: (i) Use an xarray for vaddr-to-block translations, which
  improves the performance of free operations. (ii) Remove the minimum
  DMA block size constraint, as it is no longer necessary.
- V1 -> V2: Submit to public mailing lists.

Thanks,

Brian Johannesmeyer

[0] Link: https://www.csl.sri.com/~neumann/ndss-iommu.pdf
[1] Link:
https://patchwork.kernel.org/project/linux-mm/cover/20241119205529.3871048-1-bjohannesmeyer@gmail.com/#26130533

Brian Johannesmeyer (3):
  dmapool: Move pool metadata into non-DMA memory
  dmapool: Use pool_find_block() in pool_block_err()
  dmapool: Use xarray for vaddr-to-block lookup

 mm/dmapool.c | 92 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 57 insertions(+), 35 deletions(-)

-- 
2.34.1



