* [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting
@ 2026-02-18 17:14 Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 1/3] cma: Register dmem region for each cma region Eric Chanudet
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Eric Chanudet @ 2026-02-18 17:14 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Cc: linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, Albert Esteve, linux-mm, Eric Chanudet,
Maxime Ripard
An earlier series[1] from Maxime introduced dmem to the cma allocator in
an attempt to use it generally for dma-buf. Restart from there and apply
the charge in the narrower context of the CMA dma-buf heap instead.
In line with the introduction of cgroup accounting for the system heap[2],
this behavior is gated by the dma_heap.mem_accounting module parameter and
is disabled by default.
dmem is chosen for CMA heaps as it allows limits to be set for each
region backing each heap. The charge is only applied in the dma-buf heap
for now, as that guarantees it can be accounted against the userspace
process that requested the allocation.
[1] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
[2] https://lore.kernel.org/all/20260116-dmabuf-heap-system-memcg-v3-0-ecc6b62cc446@redhat.com/
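As an illustration of the intended usage (not part of this cover letter:
the region name "cma/linux,cma", the cgroup path, and the dmem.max write
format below are assumptions based on patch 1's "cma/%s" naming and the
dmem controller interface):

```shell
# Opt in to heap accounting; the dma_heap.mem_accounting parameter is
# off by default. For a built-in dma_heap, set it on the kernel
# command line: dma_heap.mem_accounting=1

# Patch 1 registers each CMA region as a dmem region named "cma/<name>";
# "linux,cma" below is a hypothetical devicetree default region.
cat /sys/fs/cgroup/user.slice/dmem.current

# Cap CMA heap allocations from user.slice at 64 MiB (assuming the
# dmem controller's "<region> <value>" write format).
echo "cma/linux,cma 67108864" > /sys/fs/cgroup/user.slice/dmem.max
```

With such a limit in place, a DMA_HEAP_IOCTL_ALLOC that would push the
slice past dmem.max is expected to fail rather than exhaust the region.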
Signed-off-by: Eric Chanudet <echanude@redhat.com>
---
Changes in v2:
- Rebase on Maxime's introduction of dmem to the cma allocator:
https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
- Remove the dmem region registration from the cma dma-buf heap
- Remove the misplaced logic for the default region.
- Link to v1: https://lore.kernel.org/r/20260130-dmabuf-heap-cma-dmem-v1-1-3647ea993e99@redhat.com
---
Eric Chanudet (1):
dma-buf: heaps: cma: charge each cma heap's dmem
Maxime Ripard (2):
cma: Register dmem region for each cma region
cma: Provide accessor to cma dmem region
drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
include/linux/cma.h | 9 +++++++++
mm/cma.c | 20 +++++++++++++++++++-
mm/cma.h | 3 +++
4 files changed, 45 insertions(+), 2 deletions(-)
---
base-commit: 948e195dfaa56e48eabda591f97630502ff7e27e
change-id: 20260128-dmabuf-heap-cma-dmem-f4120a2df4a8
Best regards,
--
Eric Chanudet <echanude@redhat.com>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 1/3] cma: Register dmem region for each cma region
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
@ 2026-02-18 17:14 ` Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 2/3] cma: Provide accessor to cma dmem region Eric Chanudet
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Eric Chanudet @ 2026-02-18 17:14 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Cc: linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, Albert Esteve, linux-mm, Eric Chanudet,
Maxime Ripard
From: Maxime Ripard <mripard@kernel.org>
Now that the dmem cgroup has been merged, we need to create memory
regions for each allocator that devices might allocate DMA memory from.
Since CMA is one of these allocators, we need to create such a region.
CMA can deal with multiple regions though, so we'll need to create a
dmem region per CMA region.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Eric Chanudet <echanude@redhat.com>
---
mm/cma.c | 13 ++++++++++++-
mm/cma.h | 3 +++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index 813e6dc7b0954864c9ef8cf7adc6a2293241de47..78016647d512868cd87bc2c1a52dd2295acaaf01 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -25,6 +25,7 @@
#include <linux/string_choices.h>
#include <linux/log2.h>
#include <linux/cma.h>
+#include <linux/cgroup_dmem.h>
#include <linux/highmem.h>
#include <linux/io.h>
#include <linux/kmemleak.h>
@@ -142,6 +143,15 @@ static void __init cma_activate_area(struct cma *cma)
int allocrange, r;
struct cma_memrange *cmr;
unsigned long bitmap_count, count;
+ struct dmem_cgroup_region *region;
+
+ region = dmem_cgroup_register_region(cma_get_size(cma), "cma/%s", cma->name);
+ if (IS_ERR(region))
+ goto out;
+
+#ifdef CONFIG_CGROUP_DMEM
+ cma->dmem_cgrp_region = region;
+#endif
for (allocrange = 0; allocrange < cma->nranges; allocrange++) {
cmr = &cma->ranges[allocrange];
@@ -183,7 +193,8 @@ static void __init cma_activate_area(struct cma *cma)
cleanup:
for (r = 0; r < allocrange; r++)
bitmap_free(cma->ranges[r].bitmap);
-
+ dmem_cgroup_unregister_region(region);
+out:
/* Expose all pages to the buddy, they are useless for CMA. */
if (!test_bit(CMA_RESERVE_PAGES_ON_ERROR, &cma->flags)) {
for (r = 0; r < allocrange; r++) {
diff --git a/mm/cma.h b/mm/cma.h
index c70180c36559c295d837725e26596cf546cd8b7e..e91bedcb17be8c9e0d31aea1b67c0db36315536d 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -62,6 +62,9 @@ struct cma {
unsigned long flags;
/* NUMA node (NUMA_NO_NODE if unspecified) */
int nid;
+#ifdef CONFIG_CGROUP_DMEM
+ struct dmem_cgroup_region *dmem_cgrp_region;
+#endif
};
enum cma_flags {
--
2.52.0
* [PATCH v2 2/3] cma: Provide accessor to cma dmem region
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 1/3] cma: Register dmem region for each cma region Eric Chanudet
@ 2026-02-18 17:14 ` Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem Eric Chanudet
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Eric Chanudet @ 2026-02-18 17:14 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Cc: linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, Albert Esteve, linux-mm, Eric Chanudet,
Maxime Ripard
From: Maxime Ripard <mripard@kernel.org>
Consumers of the CMA API will have to know which CMA region their device
allocates from in order to charge the memory allocation to the right
one.
Let's provide an accessor for that region.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Eric Chanudet <echanude@redhat.com>
---
include/linux/cma.h | 9 +++++++++
mm/cma.c | 7 +++++++
2 files changed, 16 insertions(+)
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 62d9c1cf632652489ccd9e01bf1370f2b1f3c249..8ece66c35e9e640b98db4b24a9bd118ad07ec082 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -77,4 +77,13 @@ static inline bool cma_validate_zones(struct cma *cma)
}
#endif
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+struct dmem_cgroup_region *cma_get_dmem_cgroup_region(struct cma *cma);
+#else /* CONFIG_CGROUP_DMEM */
+static inline struct dmem_cgroup_region *cma_get_dmem_cgroup_region(struct cma *cma)
+{
+ return NULL;
+}
+#endif /* CONFIG_CGROUP_DMEM */
+
#endif
diff --git a/mm/cma.c b/mm/cma.c
index 78016647d512868cd87bc2c1a52dd2295acaaf01..c8b0de1da3e71bd6b8ab749ab58eb27446a1657e 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -53,6 +53,13 @@ const char *cma_get_name(const struct cma *cma)
return cma->name;
}
+#if IS_ENABLED(CONFIG_CGROUP_DMEM)
+struct dmem_cgroup_region *cma_get_dmem_cgroup_region(struct cma *cma)
+{
+ return cma->dmem_cgrp_region;
+}
+#endif /* CONFIG_CGROUP_DMEM */
+
static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
unsigned int align_order)
{
--
2.52.0
* [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 1/3] cma: Register dmem region for each cma region Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 2/3] cma: Provide accessor to cma dmem region Eric Chanudet
@ 2026-02-18 17:14 ` Eric Chanudet
2026-02-19 7:17 ` Christian König
2026-02-19 9:16 ` Maxime Ripard
2026-02-19 9:45 ` [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Albert Esteve
2026-02-20 1:14 ` T.J. Mercier
4 siblings, 2 replies; 12+ messages in thread
From: Eric Chanudet @ 2026-02-18 17:14 UTC (permalink / raw)
To: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
Cc: linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, Albert Esteve, linux-mm, Eric Chanudet
The cma dma-buf heaps let userspace allocate buffers in CMA regions
without enforcing limits. Since each cma region registers in dmem,
charge against it when allocating a buffer in a cma heap.
Signed-off-by: Eric Chanudet <echanude@redhat.com>
---
drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -27,6 +27,7 @@
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
+#include <linux/cgroup_dmem.h>
#define DEFAULT_CMA_NAME "default_cma_region"
@@ -58,6 +59,7 @@ struct cma_heap_buffer {
pgoff_t pagecount;
int vmap_cnt;
void *vaddr;
+ struct dmem_cgroup_pool_state *pool;
};
struct dma_heap_attachment {
@@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
kfree(buffer->pages);
/* release memory */
cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
+ dmem_cgroup_uncharge(buffer->pool, buffer->len);
kfree(buffer);
}
@@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
if (align > CONFIG_CMA_ALIGNMENT)
align = CONFIG_CMA_ALIGNMENT;
+ if (mem_accounting) {
+ ret = dmem_cgroup_try_charge(
+ cma_get_dmem_cgroup_region(cma_heap->cma), size,
+ &buffer->pool, NULL);
+ if (ret)
+ goto free_buffer;
+ }
+
cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
if (!cma_pages)
- goto free_buffer;
+ goto uncharge_cgroup;
/* Clear the cma pages */
if (PageHighMem(cma_pages)) {
@@ -376,6 +387,8 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
kfree(buffer->pages);
free_cma:
cma_release(cma_heap->cma, cma_pages, pagecount);
+uncharge_cgroup:
+ dmem_cgroup_uncharge(buffer->pool, size);
free_buffer:
kfree(buffer);
--
2.52.0
* Re: [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-18 17:14 ` [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem Eric Chanudet
@ 2026-02-19 7:17 ` Christian König
2026-02-19 17:10 ` Eric Chanudet
2026-02-19 9:16 ` Maxime Ripard
1 sibling, 1 reply; 12+ messages in thread
From: Christian König @ 2026-02-19 7:17 UTC (permalink / raw)
To: Eric Chanudet, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
John Stultz, T.J. Mercier, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko
Cc: linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, Albert Esteve, linux-mm
On 2/18/26 18:14, Eric Chanudet wrote:
> The cma dma-buf heaps let userspace allocate buffers in CMA regions
> without enforcing limits. Since each cma region registers in dmem,
> charge against it when allocating a buffer in a cma heap.
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
> ---
> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -27,6 +27,7 @@
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> +#include <linux/cgroup_dmem.h>
>
> #define DEFAULT_CMA_NAME "default_cma_region"
>
> @@ -58,6 +59,7 @@ struct cma_heap_buffer {
> pgoff_t pagecount;
> int vmap_cnt;
> void *vaddr;
> + struct dmem_cgroup_pool_state *pool;
> };
>
> struct dma_heap_attachment {
> @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
> kfree(buffer->pages);
> /* release memory */
> cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
> + dmem_cgroup_uncharge(buffer->pool, buffer->len);
> kfree(buffer);
> }
>
> @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> if (align > CONFIG_CMA_ALIGNMENT)
> align = CONFIG_CMA_ALIGNMENT;
>
> + if (mem_accounting) {
Since mem_accounting is a module parameter it is possible to make it changeable during runtime.
IIRC it currently is read only, but maybe add a one line comment that the cma heap now depends on that.
Apart from that the series looks totally sane to me.
Regards,
Christian.
> + ret = dmem_cgroup_try_charge(
> + cma_get_dmem_cgroup_region(cma_heap->cma), size,
> + &buffer->pool, NULL);
> + if (ret)
> + goto free_buffer;
> + }
> +
> cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
> if (!cma_pages)
> - goto free_buffer;
> + goto uncharge_cgroup;
>
> /* Clear the cma pages */
> if (PageHighMem(cma_pages)) {
> @@ -376,6 +387,8 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> kfree(buffer->pages);
> free_cma:
> cma_release(cma_heap->cma, cma_pages, pagecount);
> +uncharge_cgroup:
> + dmem_cgroup_uncharge(buffer->pool, size);
> free_buffer:
> kfree(buffer);
>
>
* Re: [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-18 17:14 ` [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem Eric Chanudet
2026-02-19 7:17 ` Christian König
@ 2026-02-19 9:16 ` Maxime Ripard
2026-02-19 17:21 ` Eric Chanudet
1 sibling, 1 reply; 12+ messages in thread
From: Maxime Ripard @ 2026-02-19 9:16 UTC (permalink / raw)
To: Eric Chanudet
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Albert Esteve, linux-mm
Hi,
On Wed, Feb 18, 2026 at 12:14:12PM -0500, Eric Chanudet wrote:
> The cma dma-buf heaps let userspace allocate buffers in CMA regions
> without enforcing limits. Since each cma region registers in dmem,
> charge against it when allocating a buffer in a cma heap.
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
> ---
> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -27,6 +27,7 @@
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> +#include <linux/cgroup_dmem.h>
>
> #define DEFAULT_CMA_NAME "default_cma_region"
>
> @@ -58,6 +59,7 @@ struct cma_heap_buffer {
> pgoff_t pagecount;
> int vmap_cnt;
> void *vaddr;
> + struct dmem_cgroup_pool_state *pool;
I guess we should add an #if IS_ENABLED #endif guard for dmem?
> };
>
> struct dma_heap_attachment {
> @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
> kfree(buffer->pages);
> /* release memory */
> cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
> + dmem_cgroup_uncharge(buffer->pool, buffer->len);
> kfree(buffer);
> }
>
> @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> if (align > CONFIG_CMA_ALIGNMENT)
> align = CONFIG_CMA_ALIGNMENT;
>
> + if (mem_accounting) {
> + ret = dmem_cgroup_try_charge(
> + cma_get_dmem_cgroup_region(cma_heap->cma), size,
> + &buffer->pool, NULL);
This alone doesn't call for a new version, but adhering to the kernel
coding style would look like this:
+ ret = dmem_cgroup_try_charge(cma_get_dmem_cgroup_region(cma_heap->cma),
+ size, &buffer->pool, NULL);
It looks good to me otherwise,
Acked-by: Maxime Ripard <mripard@kernel.org>
Maxime
* Re: [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
` (2 preceding siblings ...)
2026-02-18 17:14 ` [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem Eric Chanudet
@ 2026-02-19 9:45 ` Albert Esteve
2026-02-20 1:14 ` T.J. Mercier
4 siblings, 0 replies; 12+ messages in thread
From: Albert Esteve @ 2026-02-19 9:45 UTC (permalink / raw)
To: Eric Chanudet
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Maxime Ripard, linux-mm, Maxime Ripard
On Wed, Feb 18, 2026 at 6:15 PM Eric Chanudet <echanude@redhat.com> wrote:
>
> An earlier series[1] from Maxime introduced dmem to the cma allocator in
> an attempt to use it generally for dma-buf. Restart from there and apply
> the charge in the narrower context of the CMA dma-buf heap instead.
>
> In line with introducing cgroup to the system heap[2], this behavior is
> enabled based on dma_heap.mem_accounting, disabled by default.
>
> dmem is chosen for CMA heaps as it allows limits to be set for each
> region backing each heap. The charge is only put in the dma-buf heap for
> now as it guarantees it can be accounted against a userspace process
> that requested the allocation.
>
> [1] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
> [2] https://lore.kernel.org/all/20260116-dmabuf-heap-system-memcg-v3-0-ecc6b62cc446@redhat.com/
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
Tested-by: Albert Esteve <aesteve@redhat.com>
I tested the series with a Fedora VM, setting the global user.slice
dmem.max value and then trying to allocate buffers of different sizes
with DMA_HEAP_IOCTL_ALLOC. Exceeding the max limit results in
'Resource temporarily unavailable' and the allocation fails.
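A rough sketch of that test flow, with hypothetical paths and a
hypothetical 'dmabuf-alloc' helper standing in for the test program that
issues DMA_HEAP_IOCTL_ALLOC:

```shell
# Limit user.slice to 32 MiB of the CMA region (region name assumed;
# this series names regions "cma/<name>").
echo "cma/linux,cma 33554432" > /sys/fs/cgroup/user.slice/dmem.max

# 'dmabuf-alloc' is a stand-in for a small program issuing
# DMA_HEAP_IOCTL_ALLOC on the heap's character device.
dmabuf-alloc /dev/dma_heap/linux,cma 16M  # within the limit: succeeds
dmabuf-alloc /dev/dma_heap/linux,cma 64M  # over the limit: fails with
                                          # EAGAIN ("Resource
                                          # temporarily unavailable")
```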
BR,
Albert
> ---
> Changes in v2:
> - Rebase on Maxime's introduction of dmem to the cma allocator:
> https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
> - Remove the dmem region registration from the cma dma-buf heap
> - Remove the misplaced logic for the default region.
> - Link to v1: https://lore.kernel.org/r/20260130-dmabuf-heap-cma-dmem-v1-1-3647ea993e99@redhat.com
>
> ---
> Eric Chanudet (1):
> dma-buf: heaps: cma: charge each cma heap's dmem
>
> Maxime Ripard (2):
> cma: Register dmem region for each cma region
> cma: Provide accessor to cma dmem region
>
> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> include/linux/cma.h | 9 +++++++++
> mm/cma.c | 20 +++++++++++++++++++-
> mm/cma.h | 3 +++
> 4 files changed, 45 insertions(+), 2 deletions(-)
> ---
> base-commit: 948e195dfaa56e48eabda591f97630502ff7e27e
> change-id: 20260128-dmabuf-heap-cma-dmem-f4120a2df4a8
>
> Best regards,
> --
> Eric Chanudet <echanude@redhat.com>
>
* Re: [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-19 7:17 ` Christian König
@ 2026-02-19 17:10 ` Eric Chanudet
2026-02-20 8:16 ` Christian König
0 siblings, 1 reply; 12+ messages in thread
From: Eric Chanudet @ 2026-02-19 17:10 UTC (permalink / raw)
To: Christian König
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, linux-media, dri-devel,
linaro-mm-sig, linux-kernel, Maxime Ripard, Albert Esteve,
linux-mm
On Thu, Feb 19, 2026 at 08:17:28AM +0100, Christian König wrote:
>
>
> On 2/18/26 18:14, Eric Chanudet wrote:
> > The cma dma-buf heaps let userspace allocate buffers in CMA regions
> > without enforcing limits. Since each cma region registers in dmem,
> > charge against it when allocating a buffer in a cma heap.
> >
> > Signed-off-by: Eric Chanudet <echanude@redhat.com>
> > ---
> > drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> > 1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> > index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
> > --- a/drivers/dma-buf/heaps/cma_heap.c
> > +++ b/drivers/dma-buf/heaps/cma_heap.c
> > @@ -27,6 +27,7 @@
> > #include <linux/scatterlist.h>
> > #include <linux/slab.h>
> > #include <linux/vmalloc.h>
> > +#include <linux/cgroup_dmem.h>
> >
> > #define DEFAULT_CMA_NAME "default_cma_region"
> >
> > @@ -58,6 +59,7 @@ struct cma_heap_buffer {
> > pgoff_t pagecount;
> > int vmap_cnt;
> > void *vaddr;
> > + struct dmem_cgroup_pool_state *pool;
> > };
> >
> > struct dma_heap_attachment {
> > @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
> > kfree(buffer->pages);
> > /* release memory */
> > cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
> > + dmem_cgroup_uncharge(buffer->pool, buffer->len);
> > kfree(buffer);
> > }
> >
> > @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> > if (align > CONFIG_CMA_ALIGNMENT)
> > align = CONFIG_CMA_ALIGNMENT;
> >
> > + if (mem_accounting) {
>
> Since mem_accounting is a module parameter it is possible to make it changeable during runtime.
>
> IIRC it currently is read only, but maybe add a one line comment that the cma heap now depends on that.
>
Agreed, while read-only it is easily missed without at least a comment.
Alternatively, should that value be captured in the init callback to
guarantee it is set once and make this requirement clearer?
Thanks,
> Apart from that the series looks totally sane to me.
>
> Regards,
> Christian.
>
> > + ret = dmem_cgroup_try_charge(
> > + cma_get_dmem_cgroup_region(cma_heap->cma), size,
> > + &buffer->pool, NULL);
> > + if (ret)
> > + goto free_buffer;
> > + }
> > +
> > cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
> > if (!cma_pages)
> > - goto free_buffer;
> > + goto uncharge_cgroup;
> >
> > /* Clear the cma pages */
> > if (PageHighMem(cma_pages)) {
> > @@ -376,6 +387,8 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> > kfree(buffer->pages);
> > free_cma:
> > cma_release(cma_heap->cma, cma_pages, pagecount);
> > +uncharge_cgroup:
> > + dmem_cgroup_uncharge(buffer->pool, size);
> > free_buffer:
> > kfree(buffer);
> >
> >
>
--
Eric Chanudet
* Re: [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-19 9:16 ` Maxime Ripard
@ 2026-02-19 17:21 ` Eric Chanudet
0 siblings, 0 replies; 12+ messages in thread
From: Eric Chanudet @ 2026-02-19 17:21 UTC (permalink / raw)
To: Maxime Ripard
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Christian König, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
linux-media, dri-devel, linaro-mm-sig, linux-kernel,
Albert Esteve, linux-mm
On Thu, Feb 19, 2026 at 10:16:37AM +0100, Maxime Ripard wrote:
> Hi,
>
> On Wed, Feb 18, 2026 at 12:14:12PM -0500, Eric Chanudet wrote:
> > The cma dma-buf heaps let userspace allocate buffers in CMA regions
> > without enforcing limits. Since each cma region registers in dmem,
> > charge against it when allocating a buffer in a cma heap.
> >
> > Signed-off-by: Eric Chanudet <echanude@redhat.com>
> > ---
> > drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> > 1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> > index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
> > --- a/drivers/dma-buf/heaps/cma_heap.c
> > +++ b/drivers/dma-buf/heaps/cma_heap.c
> > @@ -27,6 +27,7 @@
> > #include <linux/scatterlist.h>
> > #include <linux/slab.h>
> > #include <linux/vmalloc.h>
> > +#include <linux/cgroup_dmem.h>
> >
> > #define DEFAULT_CMA_NAME "default_cma_region"
> >
> > @@ -58,6 +59,7 @@ struct cma_heap_buffer {
> > pgoff_t pagecount;
> > int vmap_cnt;
> > void *vaddr;
> > + struct dmem_cgroup_pool_state *pool;
>
> I guess we should add an #if IS_ENABLED #endif guard for dmem?
>
Sure, I saw the other user (ttm) didn't, but that makes sense as the
field is useless if dmem is not enabled.
> > };
> >
> > struct dma_heap_attachment {
> > @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
> > kfree(buffer->pages);
> > /* release memory */
> > cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
> > + dmem_cgroup_uncharge(buffer->pool, buffer->len);
> > kfree(buffer);
> > }
> >
> > @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
> > if (align > CONFIG_CMA_ALIGNMENT)
> > align = CONFIG_CMA_ALIGNMENT;
> >
> > + if (mem_accounting) {
> > + ret = dmem_cgroup_try_charge(
> > + cma_get_dmem_cgroup_region(cma_heap->cma), size,
> > + &buffer->pool, NULL);
>
> This alone doesn't call for a new version, but adhering to the kernel
> coding style would look like this:
>
> + ret = dmem_cgroup_try_charge(cma_get_dmem_cgroup_region(cma_heap->cma),
> + size, &buffer->pool, NULL);
Will add to v3 with the other changes.
Thanks,
>
> It looks good to me otherwise,
> Acked-by: Maxime Ripard <mripard@kernel.org>
>
> Maxime
--
Eric Chanudet
* Re: [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
` (3 preceding siblings ...)
2026-02-19 9:45 ` [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Albert Esteve
@ 2026-02-20 1:14 ` T.J. Mercier
2026-02-20 9:45 ` Christian König
4 siblings, 1 reply; 12+ messages in thread
From: T.J. Mercier @ 2026-02-20 1:14 UTC (permalink / raw)
To: Eric Chanudet
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
Christian König, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, linux-media, dri-devel,
linaro-mm-sig, linux-kernel, Maxime Ripard, Albert Esteve,
linux-mm, Maxime Ripard, Yosry Ahmed, Shakeel Butt
On Wed, Feb 18, 2026 at 9:15 AM Eric Chanudet <echanude@redhat.com> wrote:
Hi Eric,
> An earlier series[1] from Maxime introduced dmem to the cma allocator in
> an attempt to use it generally for dma-buf. Restart from there and apply
> the charge in the narrower context of the CMA dma-buf heap instead.
>
> In line with introducing cgroup to the system heap[2], this behavior is
> enabled based on dma_heap.mem_accounting, disabled by default.
>
> dmem is chosen for CMA heaps as it allows limits to be set for each
> region backing each heap. The charge is only put in the dma-buf heap for
> now as it guarantees it can be accounted against a userspace process
> that requested the allocation.
But CMA memory is system memory, and regular (non-CMA) movable
allocations can occur out of these CMA areas. So this splits system
memory accounting between memcg (from [2]) and dmem. If I want to put
a limit on system memory use I have to adjust multiple limits (memcg +
dmems) and know how to divide the total between them all.
How do you envision using this combination of different controllers?
Thanks,
T.J.
> [1] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
> [2] https://lore.kernel.org/all/20260116-dmabuf-heap-system-memcg-v3-0-ecc6b62cc446@redhat.com/
>
> Signed-off-by: Eric Chanudet <echanude@redhat.com>
> ---
> Changes in v2:
> - Rebase on Maxime's introduction of dmem to the cma allocator:
> https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
> - Remove the dmem region registration from the cma dma-buf heap
> - Remove the misplaced logic for the default region.
> - Link to v1: https://lore.kernel.org/r/20260130-dmabuf-heap-cma-dmem-v1-1-3647ea993e99@redhat.com
>
> ---
> Eric Chanudet (1):
> dma-buf: heaps: cma: charge each cma heap's dmem
>
> Maxime Ripard (2):
> cma: Register dmem region for each cma region
> cma: Provide accessor to cma dmem region
>
> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
> include/linux/cma.h | 9 +++++++++
> mm/cma.c | 20 +++++++++++++++++++-
> mm/cma.h | 3 +++
> 4 files changed, 45 insertions(+), 2 deletions(-)
> ---
> base-commit: 948e195dfaa56e48eabda591f97630502ff7e27e
> change-id: 20260128-dmabuf-heap-cma-dmem-f4120a2df4a8
>
> Best regards,
> --
> Eric Chanudet <echanude@redhat.com>
>
* Re: [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem
2026-02-19 17:10 ` Eric Chanudet
@ 2026-02-20 8:16 ` Christian König
0 siblings, 0 replies; 12+ messages in thread
From: Christian König @ 2026-02-20 8:16 UTC (permalink / raw)
To: Eric Chanudet
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
T.J. Mercier, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, linux-media, dri-devel,
linaro-mm-sig, linux-kernel, Maxime Ripard, Albert Esteve,
linux-mm
On 2/19/26 18:10, Eric Chanudet wrote:
> On Thu, Feb 19, 2026 at 08:17:28AM +0100, Christian König wrote:
>>
>>
>> On 2/18/26 18:14, Eric Chanudet wrote:
>>> The cma dma-buf heaps let userspace allocate buffers in CMA regions
>>> without enforcing limits. Since each cma region registers in dmem,
>>> charge against it when allocating a buffer in a cma heap.
>>>
>>> Signed-off-by: Eric Chanudet <echanude@redhat.com>
>>> ---
>>> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
>>> 1 file changed, 14 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
>>> index 49cc45fb42dd7200c3c14384bcfdbe85323454b1..bbd4f9495808da19256d97bd6a4dca3e1b0a30a0 100644
>>> --- a/drivers/dma-buf/heaps/cma_heap.c
>>> +++ b/drivers/dma-buf/heaps/cma_heap.c
>>> @@ -27,6 +27,7 @@
>>> #include <linux/scatterlist.h>
>>> #include <linux/slab.h>
>>> #include <linux/vmalloc.h>
>>> +#include <linux/cgroup_dmem.h>
>>>
>>> #define DEFAULT_CMA_NAME "default_cma_region"
>>>
>>> @@ -58,6 +59,7 @@ struct cma_heap_buffer {
>>> pgoff_t pagecount;
>>> int vmap_cnt;
>>> void *vaddr;
>>> + struct dmem_cgroup_pool_state *pool;
>>> };
>>>
>>> struct dma_heap_attachment {
>>> @@ -276,6 +278,7 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
>>> kfree(buffer->pages);
>>> /* release memory */
>>> cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
>>> + dmem_cgroup_uncharge(buffer->pool, buffer->len);
>>> kfree(buffer);
>>> }
>>>
>>> @@ -319,9 +322,17 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
>>> if (align > CONFIG_CMA_ALIGNMENT)
>>> align = CONFIG_CMA_ALIGNMENT;
>>>
>>> + if (mem_accounting) {
>>
>> Since mem_accounting is a module parameter it is possible to make it changeable during runtime.
>>
>> IIRC it currently is read only, but maybe add a one line comment that the cma heap now depends on that.
>>
>
> Agreed, while read-only it is easily missed without at least a comment.
> Alternatively, should that value be captured in the init callback to
> guarantee it is set once and make this requirement clearer?
It probably makes more sense to do this properly and make it runtime configurable.
I'm not sure how exactly dmem_cgroup_try_charge()/dmem_cgroup_uncharge() works; it could be that it works correctly out of the box and you just need to initialize buffer->pool to NULL when mem_accounting is not enabled.
Regards,
Christian.
>
> Thanks,
>
>> Apart from that the series looks totally sane to me.
>>
>> Regards,
>> Christian.
>>
>>> + ret = dmem_cgroup_try_charge(
>>> + cma_get_dmem_cgroup_region(cma_heap->cma), size,
>>> + &buffer->pool, NULL);
>>> + if (ret)
>>> + goto free_buffer;
>>> + }
>>> +
>>> cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
>>> if (!cma_pages)
>>> - goto free_buffer;
>>> + goto uncharge_cgroup;
>>>
>>> /* Clear the cma pages */
>>> if (PageHighMem(cma_pages)) {
>>> @@ -376,6 +387,8 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
>>> kfree(buffer->pages);
>>> free_cma:
>>> cma_release(cma_heap->cma, cma_pages, pagecount);
>>> +uncharge_cgroup:
>>> + dmem_cgroup_uncharge(buffer->pool, size);
>>> free_buffer:
>>> kfree(buffer);
>>>
>>>
>>
>
* Re: [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting
2026-02-20 1:14 ` T.J. Mercier
@ 2026-02-20 9:45 ` Christian König
0 siblings, 0 replies; 12+ messages in thread
From: Christian König @ 2026-02-20 9:45 UTC (permalink / raw)
To: T.J. Mercier, Eric Chanudet
Cc: Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, linux-media, dri-devel,
linaro-mm-sig, linux-kernel, Maxime Ripard, Albert Esteve,
linux-mm, Maxime Ripard, Yosry Ahmed, Shakeel Butt
On 2/20/26 02:14, T.J. Mercier wrote:
> On Wed, Feb 18, 2026 at 9:15 AM Eric Chanudet <echanude@redhat.com> wrote:
>
> Hi Eric,
>
>> An earlier series[1] from Maxime introduced dmem to the cma allocator in
>> an attempt to use it generally for dma-buf. Restart from there and apply
>> the charge in the narrower context of the CMA dma-buf heap instead.
>>
>> In line with introducing cgroup to the system heap[2], this behavior is
>> enabled based on dma_heap.mem_accounting, disabled by default.
>>
>> dmem is chosen for CMA heaps as it allows limits to be set for each
>> region backing each heap. The charge is only put in the dma-buf heap for
>> now as it guarantees it can be accounted against a userspace process
>> that requested the allocation.
>
> But CMA memory is system memory, and regular (non-CMA) movable
> allocations can occur out of these CMA areas. So this splits system
> memory accounting between memcg (from [2]) and dmem. If I want to put
> a limit on system memory use I have to adjust multiple limits (memcg +
> dmems) and know how to divide the total between them all.
>
> How do you envision using this combination of different controllers?
Yeah we have this problem pretty much everywhere.
There are use cases where you want to account device allocations to memcg and others where you don't.
From what I know at the moment, it would be best if the administrator could decide for each dmem region whether it should additionally be accounted to memcg or not.
Using module parameters to enable/disable it globally is just a workaround as far as I can see.
Regards,
Christian.
>
> Thanks,
> T.J.
>
>> [1] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
>> [2] https://lore.kernel.org/all/20260116-dmabuf-heap-system-memcg-v3-0-ecc6b62cc446@redhat.com/
>>
>> Signed-off-by: Eric Chanudet <echanude@redhat.com>
>> ---
>> Changes in v2:
>> - Rebase on Maxime's introduction of dmem to the cma allocator:
>> https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.org/
>> - Remove the dmem region registration from the cma dma-buf heap
>> - Remove the misplaced logic for the default region.
>> - Link to v1: https://lore.kernel.org/r/20260130-dmabuf-heap-cma-dmem-v1-1-3647ea993e99@redhat.com
>>
>> ---
>> Eric Chanudet (1):
>> dma-buf: heaps: cma: charge each cma heap's dmem
>>
>> Maxime Ripard (2):
>> cma: Register dmem region for each cma region
>> cma: Provide accessor to cma dmem region
>>
>> drivers/dma-buf/heaps/cma_heap.c | 15 ++++++++++++++-
>> include/linux/cma.h | 9 +++++++++
>> mm/cma.c | 20 +++++++++++++++++++-
>> mm/cma.h | 3 +++
>> 4 files changed, 45 insertions(+), 2 deletions(-)
>> ---
>> base-commit: 948e195dfaa56e48eabda591f97630502ff7e27e
>> change-id: 20260128-dmabuf-heap-cma-dmem-f4120a2df4a8
>>
>> Best regards,
>> --
>> Eric Chanudet <echanude@redhat.com>
>>
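For context on the per-region limits the cover letter refers to, setting a dmem limit might look roughly like the sketch below. The cgroup name, the exact region name (how the CMA series formats it), and the dmem.max value format are assumptions here and may differ from the merged controller; treat this purely as an illustration of per-region limit setting.

```shell
# Hypothetical sketch; paths, region names, and file formats are
# assumptions, not confirmed by this thread.
mkdir /sys/fs/cgroup/media

# Cap an example "cma/linux,cma" region at 64 MiB for this cgroup:
echo "cma/linux,cma 67108864" > /sys/fs/cgroup/media/dmem.max

# Inspect current per-region usage:
cat /sys/fs/cgroup/media/dmem.current
```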
end of thread, other threads:[~2026-02-20 9:45 UTC | newest]
Thread overview: 12+ messages
2026-02-18 17:14 [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 1/3] cma: Register dmem region for each cma region Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 2/3] cma: Provide accessor to cma dmem region Eric Chanudet
2026-02-18 17:14 ` [PATCH v2 3/3] dma-buf: heaps: cma: charge each cma heap's dmem Eric Chanudet
2026-02-19 7:17 ` Christian König
2026-02-19 17:10 ` Eric Chanudet
2026-02-20 8:16 ` Christian König
2026-02-19 9:16 ` Maxime Ripard
2026-02-19 17:21 ` Eric Chanudet
2026-02-19 9:45 ` [PATCH v2 0/3] dma-buf: heaps: cma: enable dmem cgroup accounting Albert Esteve
2026-02-20 1:14 ` T.J. Mercier
2026-02-20 9:45 ` Christian König