* [PATCH v2 0/4] drm/gem-shmem: Track page accessed/dirty status
@ 2026-02-04 11:39 Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-04 11:39 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Track page accessed/dirty status in gem-shmem for better integration
with the overall memory management. Gem-shmem has long had two flag
bits in struct drm_gem_shmem_object, named pages_mark_accessed_on_put
and pages_mark_dirty_on_put, but rarely used them, except in a few odd
cases in drivers. Pages in gem-shmem were therefore never marked
correctly. (Other DRM memory managers at least do some coarse-grained
tracking.)
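
For reference, these are the two flags in struct drm_gem_shmem_object
(abridged from include/drm/drm_gem_shmem_helper.h, with the kerneldoc
condensed into plain comments):

	/* Mark pages as dirty when they are put. */
	bool pages_mark_dirty_on_put : 1;

	/* Mark pages as accessed when they are put. */
	bool pages_mark_accessed_on_put : 1;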

Patches 1 and 2 switch from PFN-based mapping to page mapping. The pages
are already available; only the mmap handling needs to be adapted.

Patch 3 adds tracking of access and dirty status in mmap.

Patch 4 adds tracking of access and dirty status in vmap. Because
there's no fault handling here, we rely on the existing status bits in
struct drm_gem_shmem_object. Each page's status will be updated on
page release in drm_gem_put_pages(). The imagination driver requires a
small fix to make it work correctly.
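
To illustrate, the release path applies both flags per folio. A
condensed sketch of drm_gem_put_pages() from drm_gem.c (the real
function also handles unevictable mappings and batches the folio
puts):

	void drm_gem_put_pages(struct drm_gem_object *obj,
			       struct page **pages,
			       bool dirty, bool accessed)
	{
		int i, npages = obj->size >> PAGE_SHIFT;

		for (i = 0; i < npages; i++) {
			struct folio *folio;

			if (!pages[i])
				continue;
			folio = page_folio(pages[i]);

			if (dirty)
				folio_mark_dirty(folio);
			if (accessed)
				folio_mark_accessed(folio);

			/* drop the reference from drm_gem_get_pages() */
			folio_put(folio);

			/* skip the remaining pages of this folio */
			i += folio_nr_pages(folio) - 1;
		}

		kvfree(pages);
	}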

Tested with CONFIG_DEBUG_VM=y by running animations on DRM's bochs
driver for several hours. This exercises gem-shmem's mmap and vmap
code paths extensively.

v2:
- fix possible OOB access into page array (Matthew)
- simplify fault-handler error handling (Boris)
- simplify internal interfaces (Matthew)

Thomas Zimmermann (4):
  drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  drm/gem-shmem: Map pages in mmap fault handler
  drm/gem-shmem: Track folio accessed/dirty status in mmap
  drm/gem-shmem: Track folio accessed/dirty status in vmap

 drivers/gpu/drm/drm_gem_shmem_helper.c | 77 ++++++++++++++++++--------
 drivers/gpu/drm/imagination/pvr_gem.c  |  6 +-
 2 files changed, 57 insertions(+), 26 deletions(-)


base-commit: 6e53f6296065672f8a0c7f98b4b6c409dac382b4
-- 
2.52.0




* [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  2026-02-04 11:39 [PATCH v2 0/4] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
@ 2026-02-04 11:39 ` Thomas Zimmermann
  2026-02-04 13:58   ` Boris Brezillon
  2026-02-04 11:39 ` [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler Thomas Zimmermann
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-04 11:39 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Return the exact VM_FAULT_ mask from drm_gem_shmem_try_map_pmd(). This
gives the caller better insight into the result. Return 0 if nothing
was done.

If the caller sees VM_FAULT_NOPAGE, drm_gem_shmem_try_map_pmd() added a
PMD entry to the page table. As before, return early from the page-fault
handler in that case.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Suggested-by: Matthew Wilcox <willy@infradead.org>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3871a6d92f77..e7316dc7e921 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,8 +550,8 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
-static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
-				      struct page *page)
+static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
+					    struct page *page)
 {
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 	unsigned long pfn = page_to_pfn(page);
@@ -562,12 +562,11 @@ static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
 	    pmd_none(*vmf->pmd) &&
 	    folio_test_pmd_mappable(page_folio(page))) {
 		pfn &= PMD_MASK >> PAGE_SHIFT;
-		if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
-			return true;
+		return vmf_insert_pfn_pmd(vmf, pfn, false);
 	}
 #endif
 
-	return false;
+	return 0;
 }
 
 static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
@@ -593,10 +592,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		goto out;
 	}
 
-	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
-		ret = VM_FAULT_NOPAGE;
+	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset]);
+	if (ret == VM_FAULT_NOPAGE)
 		goto out;
-	}
 
 	pfn = page_to_pfn(pages[page_offset]);
 	ret = vmf_insert_pfn(vma, vmf->address, pfn);
-- 
2.52.0




* [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler
  2026-02-04 11:39 [PATCH v2 0/4] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-04 11:39 ` Thomas Zimmermann
  2026-02-04 16:03   ` Matthew Wilcox
  2026-02-04 11:39 ` [PATCH v2 3/4] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 4/4] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
  3 siblings, 1 reply; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-04 11:39 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Gem-shmem operates on pages instead of I/O memory ranges, so use them
for mmap. This will allow for tracking page dirty/accessed flags. If
hugepage support is available, insert the page's folio if possible.
Otherwise fall back to mapping individual pages.

As the PFN is no longer required for hugepage mappings, simplify the
related code and make it depend on CONFIG_TRANSPARENT_HUGEPAGE. Prepare
for tracking folio status.

v2:
- do not look up the page before testing page-array bounds (Matthew)
- simplify error handling in drm_gem_shmem_fault() (Boris)
- adapt to changes in drm_gem_shmem_try_map_pmd()

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 49 +++++++++++++++-----------
 1 file changed, 29 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index e7316dc7e921..24553dec070d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -553,16 +553,17 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
 					    struct page *page)
 {
-#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
-	unsigned long pfn = page_to_pfn(page);
-	unsigned long paddr = pfn << PAGE_SHIFT;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	phys_addr_t paddr = page_to_phys(page);
 	bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
 
-	if (aligned &&
-	    pmd_none(*vmf->pmd) &&
-	    folio_test_pmd_mappable(page_folio(page))) {
-		pfn &= PMD_MASK >> PAGE_SHIFT;
-		return vmf_insert_pfn_pmd(vmf, pfn, false);
+	if (aligned && pmd_none(*vmf->pmd)) {
+		struct folio *folio = page_folio(page);
+
+		if (folio_test_pmd_mappable(folio)) {
+			/* Read-only mapping; split upon write fault */
+			return vmf_insert_folio_pmd(vmf, folio, false);
+		}
 	}
 #endif
 
@@ -575,13 +576,10 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	loff_t num_pages = obj->size >> PAGE_SHIFT;
-	vm_fault_t ret;
 	struct page **pages = shmem->pages;
-	pgoff_t page_offset;
-	unsigned long pfn;
-
-	/* Offset to faulty address in the VMA. */
-	page_offset = vmf->pgoff - vma->vm_pgoff;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+	struct page *page;
+	vm_fault_t ret;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
@@ -592,14 +590,25 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		goto out;
 	}
 
-	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset]);
-	if (ret == VM_FAULT_NOPAGE)
+	page = pages[page_offset];
+	if (!page) {
+		ret = VM_FAULT_SIGBUS;
 		goto out;
+	}
+
+	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
+	if (ret != VM_FAULT_NOPAGE) {
+		struct folio *folio = page_folio(page);
+
+		get_page(page);
 
-	pfn = page_to_pfn(pages[page_offset]);
-	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+		folio_lock(folio);
+
+		vmf->page = page;
+		ret = VM_FAULT_LOCKED;
+	}
 
- out:
+out:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -689,7 +698,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 	if (ret)
 		return ret;
 
-	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
+	vm_flags_mod(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	if (shmem->map_wc)
 		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-- 
2.52.0




* [PATCH v2 3/4] drm/gem-shmem: Track folio accessed/dirty status in mmap
  2026-02-04 11:39 [PATCH v2 0/4] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler Thomas Zimmermann
@ 2026-02-04 11:39 ` Thomas Zimmermann
  2026-02-04 11:39 ` [PATCH v2 4/4] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
  3 siblings, 0 replies; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-04 11:39 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

Invoke folio_mark_accessed() in mmap page faults to add the folio to
the memory manager's LRU list. Userspace invokes mmap to get the
memory for software rendering. Compositors do the same when creating
the final on-screen image, so keeping the pages on the LRU makes
sense. This avoids paging out graphics buffers under memory pressure.

In page_mkwrite, also invoke folio_mark_dirty() to queue the folio for
writeback, should the underlying file be paged out from system memory.
This rarely happens in practice, but without dirty tracking it would
corrupt the buffer content.

This has little effect on hardware-accelerated rendering, which only
uses mmap for the initial setup of textures, meshes, shaders, etc.
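
From userspace, both paths trigger roughly as follows. This is a
hypothetical sketch; drm_fd, size and map_offset are assumed to be set
up beforehand (buffer creation via DRM_IOCTL_MODE_CREATE_DUMB and the
map offset from DRM_IOCTL_MODE_MAP_DUMB are not shown):

	uint32_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, drm_fd, map_offset);

	/* first access faults in the page:
	 * .fault runs and calls folio_mark_accessed()
	 */
	uint32_t px = fb[0];

	/* first write to the write-protected page:
	 * .page_mkwrite runs and calls folio_mark_dirty()
	 */
	fb[0] = px ^ 0xffffffffu;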

v2:
- adapt to changes in drm_gem_shmem_try_map_pmd()

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 24553dec070d..c2ee30967c41 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -597,12 +597,17 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	}
 
 	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
-	if (ret != VM_FAULT_NOPAGE) {
+	if (ret == VM_FAULT_NOPAGE) {
+		struct folio *folio = page_folio(page);
+
+		folio_mark_accessed(folio);
+	} else {
 		struct folio *folio = page_folio(page);
 
 		get_page(page);
 
 		folio_lock(folio);
+		folio_mark_accessed(folio);
 
 		vmf->page = page;
 		ret = VM_FAULT_LOCKED;
@@ -648,10 +653,23 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	drm_gem_vm_close(vma);
 }
 
+static vm_fault_t drm_gem_shmem_page_mkwrite(struct vm_fault *vmf)
+{
+	struct folio *folio = page_folio(vmf->page);
+
+	file_update_time(vmf->vma->vm_file);
+
+	folio_lock(folio);
+	folio_mark_dirty(folio);
+
+	return VM_FAULT_LOCKED;
+}
+
 const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 	.fault = drm_gem_shmem_fault,
 	.open = drm_gem_shmem_vm_open,
 	.close = drm_gem_shmem_vm_close,
+	.page_mkwrite = drm_gem_shmem_page_mkwrite,
 };
 EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
 
-- 
2.52.0




* [PATCH v2 4/4] drm/gem-shmem: Track folio accessed/dirty status in vmap
  2026-02-04 11:39 [PATCH v2 0/4] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2026-02-04 11:39 ` [PATCH v2 3/4] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
@ 2026-02-04 11:39 ` Thomas Zimmermann
  3 siblings, 0 replies; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-04 11:39 UTC (permalink / raw)
  To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona
  Cc: dri-devel, linux-mm, Thomas Zimmermann

On successful vmap, set the pages_mark_accessed_on_put and
_dirty_on_put flags in the gem-shmem object. This signals that the
contained pages require LRU and dirty tracking when they are released
back to SHMEM. Clear these flags on put, so that the buffer remains
quiet until the next call to vmap. There's no means of tracking dirty
status within vmap itself, as there's no write-only mapping available.
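
The resulting lifecycle, sketched under the assumption that the vmap
holds the only reference to the pages and that the caller holds the
required locks:

	struct iosys_map map;

	/* sets both *_on_put flags */
	int ret = drm_gem_shmem_vmap_locked(shmem, &map);

	/* ... kernel CPU access through map ... */

	drm_gem_shmem_vunmap_locked(shmem, &map);

	/*
	 * On the final put, drm_gem_shmem_put_pages_locked() passes
	 * both flags to drm_gem_put_pages() and clears them again.
	 */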

Both flags, _accessed_on_put and _dirty_on_put, have always been part
of the gem-shmem object, but were never used much, so most drivers did
not track page status correctly.

Only the v3d and imagination drivers make limited use of _dirty_on_put. In
the case of imagination, move the flag setting from init to cleanup. This
ensures writeback of modified pages but does not interfere with the
internal vmap/vunmap calls. V3d already implements this behaviour.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> # gem-shmem
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
 drivers/gpu/drm/imagination/pvr_gem.c  | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c2ee30967c41..a0b0912c1d42 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -265,6 +265,8 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 				  shmem->pages_mark_dirty_on_put,
 				  shmem->pages_mark_accessed_on_put);
 		shmem->pages = NULL;
+		shmem->pages_mark_accessed_on_put = false;
+		shmem->pages_mark_dirty_on_put = false;
 	}
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
@@ -397,6 +399,8 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 		} else {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			refcount_set(&shmem->vmap_use_count, 1);
+			shmem->pages_mark_accessed_on_put = true;
+			shmem->pages_mark_dirty_on_put = true;
 		}
 	}
 
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index c07c9a915190..307b02c916d4 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -25,7 +25,10 @@
 
 static void pvr_gem_object_free(struct drm_gem_object *obj)
 {
-	drm_gem_shmem_object_free(obj);
+	struct drm_gem_shmem_object *shmem_obj = to_drm_gem_shmem_obj(obj);
+
+	shmem_obj->pages_mark_dirty_on_put = true;
+	drm_gem_shmem_free(shmem_obj);
 }
 
 static struct dma_buf *pvr_gem_export(struct drm_gem_object *obj, int flags)
@@ -363,7 +366,6 @@ pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
 	if (IS_ERR(shmem_obj))
 		return ERR_CAST(shmem_obj);
 
-	shmem_obj->pages_mark_dirty_on_put = true;
 	shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
 	pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
 	pvr_obj->flags = flags;
-- 
2.52.0




* Re: [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
  2026-02-04 11:39 ` [PATCH v2 1/4] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-04 13:58   ` Boris Brezillon
  0 siblings, 0 replies; 8+ messages in thread
From: Boris Brezillon @ 2026-02-04 13:58 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: loic.molinari, willy, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona, dri-devel, linux-mm

On Wed,  4 Feb 2026 12:39:29 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:

> Return the exact VM_FAULT_ mask from drm_gem_shmem_try_map_pmd(). This
> gives the caller better insight into the result. Return 0 if nothing
> was done.
> 
> If the caller sees VM_FAULT_NOPAGE, drm_gem_shmem_try_map_pmd() added a
> PMD entry to the page table. As before, return early from the page-fault
> handler in that case.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Suggested-by: Matthew Wilcox <willy@infradead.org>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 3871a6d92f77..e7316dc7e921 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -550,8 +550,8 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
>  
> -static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
> -				      struct page *page)
> +static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
> +					    struct page *page)
>  {
>  #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
>  	unsigned long pfn = page_to_pfn(page);
> @@ -562,12 +562,11 @@ static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
>  	    pmd_none(*vmf->pmd) &&
>  	    folio_test_pmd_mappable(page_folio(page))) {
>  		pfn &= PMD_MASK >> PAGE_SHIFT;
> -		if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
> -			return true;
> +		return vmf_insert_pfn_pmd(vmf, pfn, false);
>  	}
>  #endif
>  
> -	return false;
> +	return 0;
>  }
>  
>  static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> @@ -593,10 +592,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  		goto out;
>  	}
>  
> -	if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
> -		ret = VM_FAULT_NOPAGE;
> +	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset]);
> +	if (ret == VM_FAULT_NOPAGE)
>  		goto out;
> -	}
>  
>  	pfn = page_to_pfn(pages[page_offset]);
>  	ret = vmf_insert_pfn(vma, vmf->address, pfn);




* Re: [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler
  2026-02-04 11:39 ` [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler Thomas Zimmermann
@ 2026-02-04 16:03   ` Matthew Wilcox
  2026-02-09  8:46     ` Thomas Zimmermann
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2026-02-04 16:03 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: boris.brezillon, loic.molinari, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona, dri-devel, linux-mm

On Wed, Feb 04, 2026 at 12:39:30PM +0100, Thomas Zimmermann wrote:
> +	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
> +	if (ret != VM_FAULT_NOPAGE) {
> +		struct folio *folio = page_folio(page);
> +
> +		get_page(page);

folio_get(folio);

> -	pfn = page_to_pfn(pages[page_offset]);
> -	ret = vmf_insert_pfn(vma, vmf->address, pfn);
> +		folio_lock(folio);
> +
> +		vmf->page = page;
> +		ret = VM_FAULT_LOCKED;
> +	}
>  
> - out:
> +out:
>  	dma_resv_unlock(shmem->base.resv);
>  
>  	return ret;
> @@ -689,7 +698,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>  	if (ret)
>  		return ret;
>  
> -	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
> +	vm_flags_mod(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);

Do you need to explicitly clear VM_PFNMAP here?  I'm not familiar with
the DRM stack; maybe that's set for you higher in the stack.




* Re: [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler
  2026-02-04 16:03   ` Matthew Wilcox
@ 2026-02-09  8:46     ` Thomas Zimmermann
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Zimmermann @ 2026-02-09  8:46 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: boris.brezillon, loic.molinari, frank.binns, matt.coster,
	maarten.lankhorst, mripard, airlied, simona, dri-devel, linux-mm

Hi,

I came across commit 8b93d1d7dbd5 ("drm/shmem-helper: Switch to
vmf_insert_pfn") from 2021, which makes it very clear that PFNMAP is
strongly preferred over pages. I had totally forgotten about that
change. The next iteration of this series will therefore not contain
this patch.

Best regards
Thomas

Am 04.02.26 um 17:03 schrieb Matthew Wilcox:
> On Wed, Feb 04, 2026 at 12:39:30PM +0100, Thomas Zimmermann wrote:
>> +	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
>> +	if (ret != VM_FAULT_NOPAGE) {
>> +		struct folio *folio = page_folio(page);
>> +
>> +		get_page(page);
> folio_get(folio);
>
>> -	pfn = page_to_pfn(pages[page_offset]);
>> -	ret = vmf_insert_pfn(vma, vmf->address, pfn);
>> +		folio_lock(folio);
>> +
>> +		vmf->page = page;
>> +		ret = VM_FAULT_LOCKED;
>> +	}
>>   
>> - out:
>> +out:
>>   	dma_resv_unlock(shmem->base.resv);
>>   
>>   	return ret;
>> @@ -689,7 +698,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>>   	if (ret)
>>   		return ret;
>>   
>> -	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>> +	vm_flags_mod(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
> Do you need to explicitly clear VM_PFNMAP here?  I'm not familiar with
> the DRM stack; maybe that's set for you higher in the stack.
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)




