* [RFC PATCH 0/5] GEM buffer memory tracking
@ 2022-09-09 11:16 Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 1/5] mm: add MM_DRIVERPAGES Lucas Stach
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

Hi MM and DRM people,

during the discussions about per-file OOM badness [1] it repeatedly came up
that it should be possible to simply track the DRM GEM memory usage by some
new MM counters.

The basic problem statement is as follows: in the DRM subsystem, drivers can
allocate buffers, aka GEM objects, on behalf of a userspace process. In many
cases those buffers behave just like anonymous memory, but they may be used
only by the devices driven by the DRM drivers. As the buffers can be quite
large (multi-MB is the norm rather than the exception), userspace will not
map/fault them into the process address space when it doesn't need access to
the content of the buffers. Thus the memory used by those buffers is not
accounted to any process and evades visibility in the usual userspace tools
and the OOM handling.

This series tries to remedy this situation by making such memory visible
by accounting it exclusively to the process that created the GEM object.
For now it only hooks up the tracking to the CMA helpers and the etnaviv
driver, which was enough for me to prove the concept and see it actually
working; other drivers could follow if the proposal sounds sane.

Known shortcomings of this very simplistic implementation:

1. GEM objects can be shared between processes by exporting/importing them
as dma-bufs. When they are shared between multiple processes, killing the
process to which the memory is accounted will not actually free the memory,
as the object is kept alive by the sharing process.

2. It currently only accounts the full size of the GEM object; more advanced
devices/drivers may only sparsely populate the backing storage of the object
as needed. This could be solved by more granular accounting.

I would like to invite everyone to poke holes into this proposal to see if
this might get us on the right trajectory to finally track GEM memory usage
or if it (again) falls short and doesn't satisfy the requirements we have
for graphics memory tracking.

Regards,
Lucas

[1] https://lore.kernel.org/linux-mm/20220531100007.174649-1-christian.koenig@amd.com/

Lucas Stach (5):
  mm: add MM_DRIVERPAGES
  drm/gem: track mm struct of allocating process in gem object
  drm/gem: add functions to account GEM object memory usage
  drm/cma-helper: account memory used by CMA GEM objects
  drm/etnaviv: account memory used by GEM buffers

 drivers/gpu/drm/drm_gem.c             | 42 +++++++++++++++++++++++++++
 drivers/gpu/drm/drm_gem_cma_helper.c  |  4 +++
 drivers/gpu/drm/etnaviv/etnaviv_gem.c |  3 ++
 fs/proc/task_mmu.c                    |  6 ++--
 include/drm/drm_gem.h                 | 15 ++++++++++
 include/linux/mm.h                    |  3 +-
 include/linux/mm_types_task.h         |  1 +
 kernel/fork.c                         |  1 +
 8 files changed, 72 insertions(+), 3 deletions(-)

-- 
2.30.2




* [RFC PATCH 1/5] mm: add MM_DRIVERPAGES
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
@ 2022-09-09 11:16 ` Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 2/5] drm/gem: track mm struct of allocating process in gem object Lucas Stach
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

This adds a mm counter for pages allocated by a driver on behalf of
a userspace task.

Especially with DRM drivers there can be large numbers of pages that
are never mapped into userspace and thus are not tracked by the usual
MM_ANONPAGES mmap accounting, as those pages are only ever touched by
the device. They can make up a significant portion of the task's
resident memory size, but are currently not reflected in any of the
memory statistics exposed to userspace or considered by the OOM handling.

Add the counter to allow tracking such memory, which makes it possible
to take more sensible decisions in the OOM handling and gives userspace
better insight into the real system memory usage.
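
To illustrate how the counter is meant to be used (the real callers are
the DRM helpers added later in this series), a driver that pins or
unpins pages on behalf of a task could update it roughly as sketched
below; the helper names are made up for the example:

  /*
   * Illustrative sketch only: charge/uncharge driver-pinned pages to
   * the owning mm via the new counter.
   */
  #include <linux/mm.h>

  static void example_charge_driver_pages(struct mm_struct *mm, size_t size)
  {
          add_mm_counter(mm, MM_DRIVERPAGES, (long)(size / PAGE_SIZE));
  }

  static void example_uncharge_driver_pages(struct mm_struct *mm, size_t size)
  {
          add_mm_counter(mm, MM_DRIVERPAGES, -(long)(size / PAGE_SIZE));
  }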

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 fs/proc/task_mmu.c            | 6 ++++--
 include/linux/mm.h            | 3 ++-
 include/linux/mm_types_task.h | 1 +
 kernel/fork.c                 | 1 +
 4 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index a3398d0f1927..80b095a233bf 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -33,7 +33,8 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
 	unsigned long text, lib, swap, anon, file, shmem;
 	unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss;
 
-	anon = get_mm_counter(mm, MM_ANONPAGES);
+	anon = get_mm_counter(mm, MM_ANONPAGES) +
+	       get_mm_counter(mm, MM_DRIVERPAGES);
 	file = get_mm_counter(mm, MM_FILEPAGES);
 	shmem = get_mm_counter(mm, MM_SHMEMPAGES);
 
@@ -94,7 +95,8 @@ unsigned long task_statm(struct mm_struct *mm,
 	*text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK))
 								>> PAGE_SHIFT;
 	*data = mm->data_vm + mm->stack_vm;
-	*resident = *shared + get_mm_counter(mm, MM_ANONPAGES);
+	*resident = *shared + get_mm_counter(mm, MM_ANONPAGES) +
+		    get_mm_counter(mm, MM_DRIVERPAGES);
 	return mm->total_vm;
 }
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3bedc449c14d..2cc014d1ea27 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2003,7 +2003,8 @@ static inline unsigned long get_mm_rss(struct mm_struct *mm)
 {
 	return get_mm_counter(mm, MM_FILEPAGES) +
 		get_mm_counter(mm, MM_ANONPAGES) +
-		get_mm_counter(mm, MM_SHMEMPAGES);
+		get_mm_counter(mm, MM_SHMEMPAGES) +
+		get_mm_counter(mm, MM_DRIVERPAGES);
 }
 
 static inline unsigned long get_mm_hiwater_rss(struct mm_struct *mm)
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index c1bc6731125c..420d88e79906 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -45,6 +45,7 @@ enum {
 	MM_ANONPAGES,	/* Resident anonymous pages */
 	MM_SWAPENTS,	/* Anonymous swap entries */
 	MM_SHMEMPAGES,	/* Resident shared memory pages */
+	MM_DRIVERPAGES,	/* Pages allocated by a driver on behalf of a task */
 	NR_MM_COUNTERS
 };
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 90c85b17bf69..74a07a2288ba 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -135,6 +135,7 @@ static const char * const resident_page_types[] = {
 	NAMED_ARRAY_INDEX(MM_ANONPAGES),
 	NAMED_ARRAY_INDEX(MM_SWAPENTS),
 	NAMED_ARRAY_INDEX(MM_SHMEMPAGES),
+	NAMED_ARRAY_INDEX(MM_DRIVERPAGES),
 };
 
 DEFINE_PER_CPU(unsigned long, process_counts) = 0;
-- 
2.30.2




* [RFC PATCH 2/5] drm/gem: track mm struct of allocating process in gem object
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 1/5] mm: add MM_DRIVERPAGES Lucas Stach
@ 2022-09-09 11:16 ` Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 3/5] drm/gem: add functions to account GEM object memory usage Lucas Stach
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

This keeps around a weak reference to the struct mm of the process
allocating the GEM object. This allows us to charge/uncharge the
allocated backing store memory to that process, even if the accounting
happens from another context.
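
As a rough sketch of the intended lifecycle (the struct and function
names below are stand-ins for illustration, not the real GEM code; the
counter update mirrors what the accounting helpers added in the next
patch do):

  #include <linux/mm.h>
  #include <linux/sched/mm.h>

  struct example_obj {
          struct mm_struct *mm;
  };

  /* Object creation: the weak reference keeps the mm_struct allocation
   * alive, but not the address space behind it. */
  static void example_obj_init(struct example_obj *obj)
  {
          mmgrab(current->mm);
          obj->mm = current->mm;
  }

  /* Counter update: temporarily promote to a full reference, skipping
   * the update if the address space is already gone. */
  static void example_obj_account(struct example_obj *obj, long pages)
  {
          if (!mmget_not_zero(obj->mm))
                  return;
          add_mm_counter(obj->mm, MM_DRIVERPAGES, pages);
          mmput(obj->mm);
  }

  /* Object release: drop the weak reference again. */
  static void example_obj_release(struct example_obj *obj)
  {
          mmdrop(obj->mm);
  }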

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/drm_gem.c |  5 +++++
 include/drm/drm_gem.h     | 12 ++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 86d670c71286..b882f935cd4b 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
 #include <linux/pagemap.h>
 #include <linux/pagevec.h>
 #include <linux/shmem_fs.h>
+#include <linux/sched/mm.h>
 #include <linux/slab.h>
 #include <linux/string_helpers.h>
 #include <linux/types.h>
@@ -157,6 +158,9 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	obj->dev = dev;
 	obj->filp = NULL;
 
+	mmgrab(current->mm);
+	obj->mm = current->mm;
+
 	kref_init(&obj->refcount);
 	obj->handle_count = 0;
 	obj->size = size;
@@ -949,6 +953,7 @@ drm_gem_object_release(struct drm_gem_object *obj)
 	if (obj->filp)
 		fput(obj->filp);
 
+	mmdrop(obj->mm);
 	dma_resv_fini(&obj->_resv);
 	drm_gem_free_mmap_offset(obj);
 }
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 87cffc9efa85..d021a083c282 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -234,6 +234,18 @@ struct drm_gem_object {
 	 */
 	struct drm_vma_offset_node vma_node;
 
+	/**
+	 * @mm:
+	 *
+	 * mm struct of the process creating the object. Used to account the
+	 * allocated backing store memory.
+	 *
+	 * Note that this is a weak reference created by mmgrab(), so any
+	 * manipulation needs to make sure the address space is still around by
+	 * calling mmget_not_zero().
+	 */
+	struct mm_struct *mm;
+
 	/**
 	 * @size:
 	 *
-- 
2.30.2




* [RFC PATCH 3/5] drm/gem: add functions to account GEM object memory usage
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 1/5] mm: add MM_DRIVERPAGES Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 2/5] drm/gem: track mm struct of allocating process in gem object Lucas Stach
@ 2022-09-09 11:16 ` Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 4/5] drm/cma-helper: account memory used by CMA GEM objects Lucas Stach
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

This adds functions which drivers can call to make the MM aware of
the resident memory used by a GEM object. As drivers have different
points where memory is made resident/pinned into system memory, this
just adds the helper functions; drivers need to make sure to call
them at the right points.
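
For illustration, a driver with an explicit pin/unpin path might end up
calling the helpers roughly as below; everything except
drm_gem_add_resident()/drm_gem_dec_resident() is a made-up stand-in for
driver-specific code:

  static int example_pin(struct example_bo *bo)
  {
          int ret;

          ret = example_populate_pages(bo);       /* driver specific */
          if (ret)
                  return ret;

          /* Pages are now resident, account them. */
          drm_gem_add_resident(&bo->base);
          return 0;
  }

  static void example_unpin(struct example_bo *bo)
  {
          /* Pages are about to go away, drop the accounting. */
          drm_gem_dec_resident(&bo->base);
          example_release_pages(bo);              /* driver specific */
  }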

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/drm_gem.c | 37 +++++++++++++++++++++++++++++++++++++
 include/drm/drm_gem.h     |  3 +++
 2 files changed, 40 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index b882f935cd4b..efccd0a1dde7 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1279,3 +1279,40 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
 	ww_acquire_fini(acquire_ctx);
 }
 EXPORT_SYMBOL(drm_gem_unlock_reservations);
+
+/**
+ * drm_gem_add_resident - account GEM object memory to the allocating task
+ * @obj: GEM buffer object
+ *
+ * Account the memory used by @obj to the task which called drm_gem_object_init().
+ * Call this when the pages are made resident in system memory, i.e. pinned for GPU usage.
+ */
+void drm_gem_add_resident(struct drm_gem_object *obj)
+{
+	if (!mmget_not_zero(obj->mm))
+		return;
+
+	add_mm_counter(obj->mm, MM_DRIVERPAGES, obj->size / PAGE_SIZE);
+
+	mmput(obj->mm);
+}
+EXPORT_SYMBOL(drm_gem_add_resident);
+
+/**
+ * drm_gem_dec_resident - remove accounted GEM object memory from the task
+ * @obj: GEM buffer object
+ *
+ * Remove the memory accounted to the task which called drm_gem_object_init().
+ * Call this when the pages are no longer resident, i.e. when freeing or unpinning them.
+ */
+ */
+void drm_gem_dec_resident(struct drm_gem_object *obj)
+{
+	if (!mmget_not_zero(obj->mm))
+		return;
+
+	add_mm_counter(obj->mm, MM_DRIVERPAGES, -(long)(obj->size / PAGE_SIZE));
+
+	mmput(obj->mm);
+}
+EXPORT_SYMBOL(drm_gem_dec_resident);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index d021a083c282..5951963a2f1a 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -374,6 +374,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 		     struct vm_area_struct *vma);
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
+void drm_gem_add_resident(struct drm_gem_object *obj);
+void drm_gem_dec_resident(struct drm_gem_object *obj);
+
 /**
  * drm_gem_object_get - acquire a GEM buffer object reference
  * @obj: GEM buffer object
-- 
2.30.2




* [RFC PATCH 4/5] drm/cma-helper: account memory used by CMA GEM objects
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
                   ` (2 preceding siblings ...)
  2022-09-09 11:16 ` [RFC PATCH 3/5] drm/gem: add functions to account GEM object memory usage Lucas Stach
@ 2022-09-09 11:16 ` Lucas Stach
  2022-09-09 11:16 ` [RFC PATCH 5/5] drm/etnaviv: account memory used by GEM buffers Lucas Stach
  2022-09-09 11:32 ` [RFC PATCH 0/5] GEM buffer memory tracking Christian König
  5 siblings, 0 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

CMA buffers are pinned into system memory as soon as they are allocated
and only disappear when they are freed.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 42abee9a0f4f..f0c4e7e6cc33 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -162,6 +162,8 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
 		goto error;
 	}
 
+	drm_gem_add_resident(&cma_obj->base);
+
 	return cma_obj;
 
 error:
@@ -230,6 +232,8 @@ void drm_gem_cma_free(struct drm_gem_cma_object *cma_obj)
 	struct drm_gem_object *gem_obj = &cma_obj->base;
 	struct iosys_map map = IOSYS_MAP_INIT_VADDR(cma_obj->vaddr);
 
+	drm_gem_dec_resident(gem_obj);
+
 	if (gem_obj->import_attach) {
 		if (cma_obj->vaddr)
 			dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map);
-- 
2.30.2




* [RFC PATCH 5/5] drm/etnaviv: account memory used by GEM buffers
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
                   ` (3 preceding siblings ...)
  2022-09-09 11:16 ` [RFC PATCH 4/5] drm/cma-helper: account memory used by CMA GEM objects Lucas Stach
@ 2022-09-09 11:16 ` Lucas Stach
  2022-09-09 11:32 ` [RFC PATCH 0/5] GEM buffer memory tracking Christian König
  5 siblings, 0 replies; 7+ messages in thread
From: Lucas Stach @ 2022-09-09 11:16 UTC (permalink / raw)
  To: linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	Christian König, linux-fsdevel, kernel

Etnaviv GEM buffers are pinned into system memory as soon as we allocate
the pages backing the object and only disappear when the GEM object is
freed, as there is no shrinker hooked up for unused buffers.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index cc386f8a7116..bf3d75b8e154 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -67,6 +67,8 @@ static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj)
 
 	etnaviv_obj->pages = p;
 
+	drm_gem_add_resident(&etnaviv_obj->base);
+
 	return 0;
 }
 
@@ -79,6 +81,7 @@ static void put_pages(struct etnaviv_gem_object *etnaviv_obj)
 		etnaviv_obj->sgt = NULL;
 	}
 	if (etnaviv_obj->pages) {
+		drm_gem_dec_resident(&etnaviv_obj->base);
 		drm_gem_put_pages(&etnaviv_obj->base, etnaviv_obj->pages,
 				  true, false);
 
-- 
2.30.2




* Re: [RFC PATCH 0/5] GEM buffer memory tracking
  2022-09-09 11:16 [RFC PATCH 0/5] GEM buffer memory tracking Lucas Stach
                   ` (4 preceding siblings ...)
  2022-09-09 11:16 ` [RFC PATCH 5/5] drm/etnaviv: account memory used by GEM buffers Lucas Stach
@ 2022-09-09 11:32 ` Christian König
  5 siblings, 0 replies; 7+ messages in thread
From: Christian König @ 2022-09-09 11:32 UTC (permalink / raw)
  To: Lucas Stach, linux-mm, dri-devel
  Cc: Daniel Vetter, David Airlie, Andrew Morton, Michal Hocko,
	linux-fsdevel, kernel

On 09.09.22 at 13:16, Lucas Stach wrote:
> Hi MM and DRM people,
>
> during the discussions about per-file OOM badness [1] it repeatedly came up
> that it should be possible to simply track the DRM GEM memory usage by some
> new MM counters.
>
> The basic problem statement is as follows: in the DRM subsystem, drivers can
> allocate buffers, aka GEM objects, on behalf of a userspace process. In many
> cases those buffers behave just like anonymous memory, but they may be used
> only by the devices driven by the DRM drivers. As the buffers can be quite
> large (multi-MB is the norm rather than the exception), userspace will not
> map/fault them into the process address space when it doesn't need access to
> the content of the buffers. Thus the memory used by those buffers is not
> accounted to any process and evades visibility in the usual userspace tools
> and the OOM handling.
>
> This series tries to remedy this situation by making such memory visible
> by accounting it exclusively to the process that created the GEM object.
> For now it only hooks up the tracking to the CMA helpers and the etnaviv
> driver, which was enough for me to prove the concept and see it actually
> working; other drivers could follow if the proposal sounds sane.
>
> Known shortcomings of this very simplistic implementation:
>
> 1. GEM objects can be shared between processes by exporting/importing them
> as dma-bufs. When they are shared between multiple processes, killing the
> process to which the memory is accounted will not actually free the memory,
> as the object is kept alive by the sharing process.
>
> 2. It currently only accounts the full size of the GEM object; more advanced
> devices/drivers may only sparsely populate the backing storage of the object
> as needed. This could be solved by more granular accounting.
>
> I would like to invite everyone to poke holes into this proposal to see if
> this might get us on the right trajectory to finally track GEM memory usage
> or if it (again) falls short and doesn't satisfy the requirements we have
> for graphics memory tracking.

Good to see others looking into this problem as well, since I haven't had
time for it recently.

I've tried this approach as well, but was quickly shot down by the 
forking behavior of the core kernel.

The problem is that the MM counters get copied over to child processes
and because of that become imbalanced when such a child process then
terminates.

What you could do is to change the forking behavior for MM_DRIVERPAGES 
so that it always stays with the process which has initially allocated 
the memory and never leaks to children.

Apart from that, I suggest renaming it, since shmemfd and a few other
implementations have pretty much the same problem.

Regards,
Christian.

