From: Jordan Niethe <jniethe@nvidia.com>
To: linux-mm@kvack.org
Cc: balbirs@nvidia.com, matthew.brost@intel.com,
akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
dri-devel@lists.freedesktop.org, david@redhat.com,
ziy@nvidia.com, apopple@nvidia.com, lorenzo.stoakes@oracle.com,
lyude@redhat.com, dakr@kernel.org, airlied@gmail.com,
simona@ffwll.ch, rcampbell@nvidia.com, mpenttil@redhat.com,
jgg@nvidia.com, willy@infradead.org,
linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org,
jgg@ziepe.ca, Felix.Kuehling@amd.com, jhubbard@nvidia.com
Subject: Re: [PATCH v3 13/13] mm: Remove device private pages from the physical address space
Date: Tue, 27 Jan 2026 11:29:30 +1100
Message-ID: <413d265f-9ffc-499b-8dbc-26f92bdff6d8@nvidia.com>
In-Reply-To: <20260123062309.23090-14-jniethe@nvidia.com>
Hi,
On 23/1/26 17:23, Jordan Niethe wrote:
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index a8aad9e0b1fb..ac3da24b2aac 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -452,7 +452,7 @@ static u64 xe_page_to_dpa(struct page *page)
> struct xe_pagemap *xpagemap = xe_page_to_pagemap(page);
> struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
> u64 hpa_base = xpagemap->hpa_base;
> - u64 pfn = page_to_pfn(page);
> + u64 pfn = device_private_page_to_offset(page);
> u64 offset;
> u64 dpa;
>
> @@ -1700,8 +1700,6 @@ static void xe_pagemap_destroy_work(struct work_struct *work)
> */
> if (drm_dev_enter(drm, &idx)) {
> devm_memunmap_pages(drm->dev, pagemap);
> - devm_release_mem_region(drm->dev, pagemap->range.start,
> - pagemap->range.end - pagemap->range.start + 1);
> drm_dev_exit(idx);
> }
There are some new failures on the intel-xe CI run: https://patchwork.freedesktop.org/series/159738/#rev5
It looks like the issue is a missing update in xe_pagemap_destroy_work(): it still calls devm_memunmap_pages() on a pagemap that is now created with devm_memremap_device_private_pagemap(), so the teardown needs the matching helper:
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index ac3da24b2aac..aadc73b6f951 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1699,7 +1699,7 @@ static void xe_pagemap_destroy_work(struct work_struct *work)
* will do shortly.
*/
if (drm_dev_enter(drm, &idx)) {
- devm_memunmap_pages(drm->dev, pagemap);
+ devm_memunmap_device_private_pagemap(drm->dev, pagemap);
drm_dev_exit(idx);
}
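For anyone hitting the same thing, the create/destroy pairing after this
series looks roughly like the following (a minimal sketch using only the
names visible in the hunks in this thread; error handling elided):

        /* create: no devm_request_free_mem_region() any more, since the
         * device private pages no longer occupy the physical address
         * space; the pagemap is sized in pages instead of by range */
        pagemap->type = MEMORY_DEVICE_PRIVATE;
        pagemap->nr_range = 1;
        pagemap->nr_pages = vr->usable_size / PAGE_SIZE;
        pagemap->owner = xpagemap->peer.owner;
        pagemap->ops = drm_pagemap_pagemap_ops_get();
        err = devm_memremap_device_private_pagemap(dev, pagemap);

        /* destroy: must use the matching helper; the plain
         * devm_memunmap_pages() call is what tripped up CI */
        devm_memunmap_device_private_pagemap(drm->dev, pagemap);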
Thanks,
Jordan.
>
> @@ -1745,8 +1743,6 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
> struct xe_pagemap *xpagemap;
> struct dev_pagemap *pagemap;
> struct drm_pagemap *dpagemap;
> - struct resource *res;
> - void *addr;
> int err;
>
> xpagemap = kzalloc(sizeof(*xpagemap), GFP_KERNEL);
> @@ -1763,36 +1759,24 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
> if (err)
> goto out_no_dpagemap;
>
> - res = devm_request_free_mem_region(dev, &iomem_resource,
> - vr->usable_size);
> - if (IS_ERR(res)) {
> - err = PTR_ERR(res);
> - goto out_err;
> - }
> -
> err = drm_pagemap_acquire_owner(&xpagemap->peer, &xe_owner_list,
> xe_has_interconnect);
> if (err)
> - goto out_no_owner;
> + goto out_err;
>
> pagemap->type = MEMORY_DEVICE_PRIVATE;
> - pagemap->range.start = res->start;
> - pagemap->range.end = res->end;
> pagemap->nr_range = 1;
> + pagemap->nr_pages = vr->usable_size / PAGE_SIZE;
> pagemap->owner = xpagemap->peer.owner;
> pagemap->ops = drm_pagemap_pagemap_ops_get();
> - addr = devm_memremap_pages(dev, pagemap);
> - if (IS_ERR(addr)) {
> - err = PTR_ERR(addr);
> + err = devm_memremap_device_private_pagemap(dev, pagemap);
> + if (err)
> goto out_no_pages;
> - }
> - xpagemap->hpa_base = res->start;
> + xpagemap->hpa_base = pagemap->range.start;
> return xpagemap;
>
> out_no_pages:
> drm_pagemap_release_owner(&xpagemap->peer);
> -out_no_owner:
> - devm_release_mem_region(dev, res->start, res->end - res->start + 1);
> out_err:
> drm_pagemap_put(dpagemap);
> return ERR_PTR(err);
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index d8756c341620..25bb4df298f7 100644