From: Balbir Singh <balbirs@nvidia.com>
To: Francois Dugast <francois.dugast@intel.com>,
intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org,
Matthew Brost <matthew.brost@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Zi Yan <ziy@nvidia.com>,
Alistair Popple <apopple@nvidia.com>,
linux-mm@kvack.org
Subject: Re: [PATCH v5 4/5] drm/pagemap: Correct cpages calculation for migrate_vma_setup
Date: Thu, 15 Jan 2026 18:48:03 +1100
Message-ID: <1075ad94-3c8c-4e1f-be77-4bea9657d796@nvidia.com>
In-Reply-To: <20260114192111.1267147-5-francois.dugast@intel.com>
On 1/15/26 06:19, Francois Dugast wrote:
> From: Matthew Brost <matthew.brost@intel.com>
>
> The cpages value returned from migrate_vma_setup() counts each collected
> page once regardless of its size, whereas the npages math in
> drm_pagemap_migrate_to_devmem() is based on the number of 4K pages in the
> range. The cpages != npages check can therefore reject a migration even
> when migrate_vma_setup() found the entire memory range (e.g., a single 2M
> page yields cpages == 1 but npages == 512). Add drm_pagemap_cpages(),
> which converts the collected entries into the number of 4K pages found.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Balbir Singh <balbirs@nvidia.com>
> Cc: linux-mm@kvack.org
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
> drivers/gpu/drm/drm_pagemap.c | 38 ++++++++++++++++++++++++++++++++++-
> 1 file changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index f613b4d48499..3fc466f04b13 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -452,6 +452,41 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
> return ret;
> }
>
> +/**
> + * drm_pagemap_cpages() - Count collected pages
> + * @migrate_pfn: Array of migrate_pfn entries to account
> + * @npages: Number of entries in @migrate_pfn
> + *
> + * Compute the total number of minimum-sized pages represented by the
> + * collected entries in @migrate_pfn. The total is derived from the
> + * order encoded in each entry.
> + *
> + * Return: Total number of minimum-sized pages.
> + */
> +static int drm_pagemap_cpages(unsigned long *migrate_pfn, unsigned long npages)
> +{
> + unsigned long i, cpages = 0;
> +
> + for (i = 0; i < npages;) {
> + struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> + struct folio *folio;
> + unsigned int order = 0;
> +
> + if (page) {
> + folio = page_folio(page);
> + order = folio_order(folio);
> + cpages += NR_PAGES(order);
> + } else if (migrate_pfn[i] & MIGRATE_PFN_COMPOUND) {
> + order = HPAGE_PMD_ORDER;
> + cpages += NR_PAGES(order);
> + }
> +
> + i += NR_PAGES(order);
> + }
> +
> + return cpages;
> +}
> +
> /**
> * drm_pagemap_migrate_to_devmem() - Migrate a struct mm_struct range to device memory
> * @devmem_allocation: The device memory allocation to migrate to.
> @@ -564,7 +599,8 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> goto err_free;
> }
>
> - if (migrate.cpages != npages) {
> + if (migrate.cpages != npages &&
> + drm_pagemap_cpages(migrate.src, npages) != npages) {
> /*
> * Some pages to migrate. But we want to migrate all or
> * nothing. Raced or unknown device pages.
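
Just for reference, a quick standalone sketch of the accounting above
(userspace C, not from this patch; NR_PAGES() and HPAGE_PMD_ORDER are
redefined here as stand-ins for the kernel definitions, assuming a 4K
base page size):

  #include <stdio.h>

  #define NR_PAGES(order)  (1UL << (order))
  #define HPAGE_PMD_ORDER  9                /* 2M with 4K base pages */

  int main(void)
  {
          unsigned long npages    = (2UL << 20) >> 12;           /* 512 */
          unsigned long cpages    = 1;  /* migrate_vma_setup(): one 2M THP */
          unsigned long cpages_4k = NR_PAGES(HPAGE_PMD_ORDER);   /* 512 */

          /* Old check: aborts even though the whole range was collected. */
          printf("cpages    != npages -> %s\n", cpages    != npages ? "abort" : "ok");
          /* New check: the converted count matches npages, migration proceeds. */
          printf("cpages_4k != npages -> %s\n", cpages_4k != npages ? "abort" : "ok");
          return 0;
  }
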
Reviewed-by: Balbir Singh <balbirs@nvidia.com>