From: Thomas Zimmermann <tzimmermann@suse.de>
To: Matthew Wilcox <willy@infradead.org>
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
frank.binns@imgtec.com, matt.coster@imgtec.com,
maarten.lankhorst@linux.intel.com, mripard@kernel.org,
airlied@gmail.com, simona@ffwll.ch,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 2/4] drm/gem-shmem: Map pages in mmap fault handler
Date: Mon, 9 Feb 2026 09:46:08 +0100
Message-ID: <1a5c21d2-d552-4dc0-847d-42077fed6bda@suse.de>
In-Reply-To: <aYNt5m8rffUYK1al@casper.infradead.org>

Hi,

I came across commit 8b93d1d7dbd5 ("drm/shmem-helper: Switch to
vmf_insert_pfn") from 2021, which makes it very clear that PFNMAP
mappings are strongly preferred over inserting pages here. I had
totally forgotten about that change. The next iteration of this series
will therefore no longer contain this patch.
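
For reference, the PFNMAP-style fault path that the commit established
looks roughly like the sketch below. It is modelled on
drm_gem_shmem_fault(), but simplified: the function name, the bounds
check and the error handling here are illustrative rather than the
helper's exact code.

/*
 * Sketch only; the real code lives in
 * drivers/gpu/drm/drm_gem_shmem_helper.c.
 */
static vm_fault_t sketch_shmem_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
	vm_fault_t ret;

	dma_resv_lock(shmem->base.resv, NULL);

	if (page_offset >= (obj->size >> PAGE_SHIFT) || !shmem->pages) {
		ret = VM_FAULT_SIGBUS;
	} else {
		/*
		 * Insert the raw PFN instead of handing a locked
		 * struct page back to the fault core. The CPU mapping
		 * then never holds page references, which is the point
		 * of keeping these VMAs VM_PFNMAP.
		 */
		ret = vmf_insert_pfn(vma, vmf->address,
				     page_to_pfn(shmem->pages[page_offset]));
	}

	dma_resv_unlock(shmem->base.resv);

	return ret;
}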
Best regards
Thomas

On 04.02.26 at 17:03, Matthew Wilcox wrote:
> On Wed, Feb 04, 2026 at 12:39:30PM +0100, Thomas Zimmermann wrote:
>> +	ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
>> +	if (ret != VM_FAULT_NOPAGE) {
>> +		struct folio *folio = page_folio(page);
>> +
>> +		get_page(page);
>
> folio_get(folio);
>
>> -	pfn = page_to_pfn(pages[page_offset]);
>> -	ret = vmf_insert_pfn(vma, vmf->address, pfn);
>> +		folio_lock(folio);
>> +
>> +		vmf->page = page;
>> +		ret = VM_FAULT_LOCKED;
>> +	}
>>
>> - out:
>> +out:
>>  	dma_resv_unlock(shmem->base.resv);
>>
>>  	return ret;
>> @@ -689,7 +698,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
>>  	if (ret)
>>  		return ret;
>>
>> -	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
>> +	vm_flags_mod(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);
> Do you need to explicitly clear VM_PFNMAP here? I'm not familiar with
> the DRM stack; maybe that's set for you higher in the stack.
>
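
For readers following the hunk above: vm_flags_set() only ORs flags
into vma->vm_flags, whereas vm_flags_mod() takes a set mask and a
clear mask. Going by the helpers in include/linux/mm.h, the two calls
behave roughly like this sketch (comments are mine):

	/* Old call: OR in the flags; nothing is ever cleared. */
	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);

	/*
	 * New call: vm_flags_mod(vma, set, clear) sets the second mask
	 * and clears the third, so VM_PFNMAP is now explicitly cleared
	 * here, which is what prompts the question of whether anything
	 * higher in the stack sets it in the first place.
	 */
	vm_flags_mod(vma, VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP);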
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich (HRB 36809, AG Nürnberg)