From: "Kuehling, Felix" <felix.kuehling@amd.com>
To: Jordan Niethe <jniethe@nvidia.com>, linux-mm@kvack.org
Cc: balbirs@nvidia.com, matthew.brost@intel.com,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, david@redhat.com,
	ziy@nvidia.com, apopple@nvidia.com, lorenzo.stoakes@oracle.com,
	lyude@redhat.com, dakr@kernel.org, airlied@gmail.com,
	simona@ffwll.ch, rcampbell@nvidia.com, mpenttil@redhat.com,
	jgg@nvidia.com, willy@infradead.org,
	linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org,
	jgg@ziepe.ca, jhubbard@nvidia.com
Subject: Re: [PATCH v3 02/13] drm/amdkfd: Use migrate pfns internally
Date: Wed, 28 Jan 2026 00:08:42 -0500
Message-ID: <20b283c6-d75d-400c-8955-851534b2f4f9@amd.com>
In-Reply-To: <20260123062309.23090-3-jniethe@nvidia.com>

On 2026-01-23 01:22, Jordan Niethe wrote:
> A future change will remove device private pages from the physical
> address space. This will mean that device private pages no longer have a
> pfn.
>
> A MIGRATE_PFN flag will be introduced that distinguishes mpfns that
> contain a pfn from mpfns that contain an offset into device private
> memory.
>
> Replace uses of pfns and page_to_pfn() with mpfns and
> migrate_pfn_to_page() to prepare for handling this distinction. This
> keeps the same code paths usable for both MEMORY_DEVICE_PRIVATE and
> MEMORY_DEVICE_COHERENT devices.
>
> Signed-off-by: Jordan Niethe <jniethe@nvidia.com>

Reviewed-by: Felix Kuehling <felix.kuehling@amd.com>
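
For readers following the thread, the mpfn encoding the commit message
relies on packs the pfn above a few flag bits in a single unsigned long.
A minimal sketch of the helpers involved, paraphrased from
include/linux/migrate.h (flag values are as in recent kernels; the
migrate_pfn_from_page() body is only an assumption about what patch
01/13 adds, shown for illustration):

  /* Low bits of an mpfn carry flags; the pfn sits above them. */
  #define MIGRATE_PFN_VALID	(1UL << 0)
  #define MIGRATE_PFN_MIGRATE	(1UL << 1)
  #define MIGRATE_PFN_SHIFT	6

  /* Encode: shift the pfn above the flag bits and mark the entry valid. */
  static inline unsigned long migrate_pfn(unsigned long pfn)
  {
  	return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
  }

  /* Decode: return the struct page an mpfn refers to, or NULL if the
   * entry does not hold a valid pfn. */
  static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
  {
  	if (!(mpfn & MIGRATE_PFN_VALID))
  		return NULL;
  	return pfn_to_page(mpfn >> MIGRATE_PFN_SHIFT);
  }

  /* Hypothetical shape of the helper added in patch 01/13: build an
   * mpfn directly from a struct page. */
  static inline unsigned long migrate_pfn_from_page(struct page *page)
  {
  	return migrate_pfn(page_to_pfn(page));
  }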


> ---
> v2:
>    - New to series
> v3:
>    - No change
> ---
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 15 +++++++--------
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.h |  2 +-
>   2 files changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 1f03cf7342a5..3dd7a35d19f7 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -210,17 +210,17 @@ svm_migrate_copy_done(struct amdgpu_device *adev, struct dma_fence *mfence)
>   }
>   
>   unsigned long
> -svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr)
> +svm_migrate_addr_to_mpfn(struct amdgpu_device *adev, unsigned long addr)
>   {
> -	return (addr + adev->kfd.pgmap.range.start) >> PAGE_SHIFT;
> +	return migrate_pfn((addr + adev->kfd.pgmap.range.start) >> PAGE_SHIFT);
>   }
>   
>   static void
> -svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
> +svm_migrate_get_vram_page(struct svm_range *prange, unsigned long mpfn)
>   {
>   	struct page *page;
>   
> -	page = pfn_to_page(pfn);
> +	page = migrate_pfn_to_page(mpfn);
>   	svm_range_bo_ref(prange->svm_bo);
>   	page->zone_device_data = prange->svm_bo;
>   	zone_device_page_init(page, 0);
> @@ -231,7 +231,7 @@ svm_migrate_put_vram_page(struct amdgpu_device *adev, unsigned long addr)
>   {
>   	struct page *page;
>   
> -	page = pfn_to_page(svm_migrate_addr_to_pfn(adev, addr));
> +	page = migrate_pfn_to_page(svm_migrate_addr_to_mpfn(adev, addr));
>   	unlock_page(page);
>   	put_page(page);
>   }
> @@ -241,7 +241,7 @@ svm_migrate_addr(struct amdgpu_device *adev, struct page *page)
>   {
>   	unsigned long addr;
>   
> -	addr = page_to_pfn(page) << PAGE_SHIFT;
> +	addr = (migrate_pfn_from_page(page) >> MIGRATE_PFN_SHIFT) << PAGE_SHIFT;
>   	return (addr - adev->kfd.pgmap.range.start);
>   }
>   
> @@ -307,9 +307,8 @@ svm_migrate_copy_to_vram(struct kfd_node *node, struct svm_range *prange,
>   
>   		if (migrate->src[i] & MIGRATE_PFN_MIGRATE) {
>   			dst[i] = cursor.start + (j << PAGE_SHIFT);
> -			migrate->dst[i] = svm_migrate_addr_to_pfn(adev, dst[i]);
> +			migrate->dst[i] = svm_migrate_addr_to_mpfn(adev, dst[i]);
>   			svm_migrate_get_vram_page(prange, migrate->dst[i]);
> -			migrate->dst[i] = migrate_pfn(migrate->dst[i]);
>   			mpages++;
>   		}
>   		spage = migrate_pfn_to_page(migrate->src[i]);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
> index 2b7fd442d29c..a80b72abe1e0 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
> @@ -48,7 +48,7 @@ int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
>   			    uint32_t trigger, struct page *fault_page);
>   
>   unsigned long
> -svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr);
> +svm_migrate_addr_to_mpfn(struct amdgpu_device *adev, unsigned long addr);
>   
>   #endif /* IS_ENABLED(CONFIG_HSA_AMD_SVM) */
>   
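
As a usage sketch of the converted helpers in the hunks above
(illustrative only; adev and a page-aligned VRAM offset addr are
assumed, and the arithmetic simply mirrors the quoted diff):

  /* Encode a VRAM offset into an mpfn and resolve it to a page. */
  unsigned long mpfn = svm_migrate_addr_to_mpfn(adev, addr);
  struct page *page = migrate_pfn_to_page(mpfn);

  /* svm_migrate_addr() inverts the encoding: strip the flag bits,
   * turn the pfn back into a physical address, then subtract the
   * start of the device's pgmap range. */
  unsigned long back = ((migrate_pfn_from_page(page) >> MIGRATE_PFN_SHIFT)
  			<< PAGE_SHIFT) - adev->kfd.pgmap.range.start;
  /* back == addr for a page-aligned addr */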



Thread overview: 26+ messages
2026-01-23  6:22 [PATCH v3 00/13] Remove device private pages from physical address space Jordan Niethe
2026-01-23  6:22 ` [PATCH v3 01/13] mm/migrate_device: Introduce migrate_pfn_from_page() helper Jordan Niethe
2026-01-28  5:07   ` Kuehling, Felix
2026-01-29  1:06   ` Jordan Niethe
2026-01-23  6:22 ` [PATCH v3 02/13] drm/amdkfd: Use migrate pfns internally Jordan Niethe
2026-01-27 23:15   ` Balbir Singh
2026-01-28  5:08   ` Kuehling, Felix [this message]
2026-01-23  6:22 ` [PATCH v3 03/13] mm/migrate_device: Make migrate_device_{pfns,range}() take mpfns Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 04/13] mm/migrate_device: Add migrate PFN flag to track device private pages Jordan Niethe
2026-01-28  5:09   ` Kuehling, Felix
2026-01-23  6:23 ` [PATCH v3 05/13] mm/page_vma_mapped: Add flag to page_vma_mapped_walk::flags " Jordan Niethe
2026-01-27 21:01   ` Zi Yan
2026-01-23  6:23 ` [PATCH v3 06/13] mm: Add helpers to create migration entries from struct pages Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 07/13] mm: Add a new swap type for migration entries of device private pages Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 08/13] mm: Add softleaf support for device private migration entries Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 09/13] mm: Begin creating " Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 10/13] mm: Add helpers to create device private entries from struct pages Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 11/13] mm/util: Add flag to track device private pages in page snapshots Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 12/13] mm/hmm: Add flag to track device private pages Jordan Niethe
2026-01-23  6:23 ` [PATCH v3 13/13] mm: Remove device private pages from the physical address space Jordan Niethe
2026-01-27  0:29   ` Jordan Niethe
2026-01-27 21:12   ` Zi Yan
2026-01-27 23:26     ` Jordan Niethe
2026-01-28  5:10   ` Kuehling, Felix
2026-01-29 13:49 ` [PATCH v3 00/13] Remove device private pages from " Huang, Ying
2026-01-29 23:26   ` Alistair Popple
