From: "Mika Penttilä" <mpenttil@redhat.com>
To: Balbir Singh <balbirs@nvidia.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>, Zi Yan <ziy@nvidia.com>,
Joshua Hahn <joshua.hahnjy@gmail.com>,
Rakie Kim <rakie.kim@sk.com>, Byungchul Park <byungchul@sk.com>,
Gregory Price <gourry@gourry.net>,
Ying Huang <ying.huang@linux.alibaba.com>,
Alistair Popple <apopple@nvidia.com>,
Oscar Salvador <osalvador@suse.de>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lyude Paul <lyude@redhat.com>,
Danilo Krummrich <dakr@kernel.org>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Ralph Campbell <rcampbell@nvidia.com>,
Matthew Brost <matthew.brost@intel.com>,
Francois Dugast <francois.dugast@intel.com>
Subject: Re: [v5 05/15] mm/migrate_device: handle partially mapped folios during collection
Date: Mon, 8 Sep 2025 07:14:02 +0300
Message-ID: <e6b795de-f522-4952-9ec3-00a2359c43a9@redhat.com>
In-Reply-To: <20250908000448.180088-6-balbirs@nvidia.com>
Hi,
On 9/8/25 03:04, Balbir Singh wrote:
> Extend migrate_vma_collect_pmd() to handle partially mapped large
> folios that require splitting before migration can proceed.
>
> During PTE walk in the collection phase, if a large folio is only
> partially mapped in the migration range, it must be split to ensure
> the folio is correctly migrated.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> ---
> mm/migrate_device.c | 94 +++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 94 insertions(+)
>
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index abd9f6850db6..f45ef182287d 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
> return 0;
> }
>
> +/**
> + * migrate_vma_split_folio() - Helper function to split a THP folio
> + * @folio: the folio to split
> + * @fault_page: struct page associated with the fault if any
> + *
> + * Returns 0 on success
> + */
> +static int migrate_vma_split_folio(struct folio *folio,
> + struct page *fault_page)
> +{
> + int ret;
> + struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
> + struct folio *new_fault_folio = NULL;
> +
> + if (folio != fault_folio) {
> + folio_get(folio);
> + folio_lock(folio);
> + }
> +
> + ret = split_folio(folio);
> + if (ret) {
> + if (folio != fault_folio) {
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> + return ret;
> + }
> +
> + new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
> +
> + /*
> + * Ensure the lock is held on the correct
> + * folio after the split
> + */
> + if (!new_fault_folio) {
> + folio_unlock(folio);
> + folio_put(folio);
> + } else if (folio != new_fault_folio) {
> + folio_get(new_fault_folio);
> + folio_lock(new_fault_folio);
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return 0;
> +}
> +
> static int migrate_vma_collect_pmd(pmd_t *pmdp,
> unsigned long start,
> unsigned long end,
> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> * page table entry. Other special swap entries are not
> * migratable, and we ignore regular swapped page.
> */
> + struct folio *folio;
> +
> entry = pte_to_swp_entry(pte);
> if (!is_device_private_entry(entry))
> goto next;
> @@ -147,6 +196,29 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> pgmap->owner != migrate->pgmap_owner)
> goto next;
>
> + folio = page_folio(page);
> + if (folio_test_large(folio)) {
> + int ret;
> +
> + /*
> + * The reason for finding pmd present with a
> + * large folio for the pte is partial unmaps.
> + * Split the folio now for the migration to be
> + * handled correctly
> + */
> + pte_unmap_unlock(ptep, ptl);
> + ret = migrate_vma_split_folio(folio,
> + migrate->fault_page);
> +
> + if (ret) {
> + ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> + goto next;
> + }
> +
> + addr = start;
> + goto again;
> + }
> +
> mpfn = migrate_pfn(page_to_pfn(page)) |
> MIGRATE_PFN_MIGRATE;
> if (is_writable_device_private_entry(entry))
> @@ -171,6 +243,28 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> pgmap->owner != migrate->pgmap_owner)
> goto next;
> }
> + folio = page_folio(page);
> + if (folio_test_large(folio)) {
> + int ret;
> +
> + /*
> + * The reason for finding pmd present with a
> + * large folio for the pte is partial unmaps.
> + * Split the folio now for the migration to be
> + * handled correctly
> + */
This comment still hasn't been updated; partial unmap is not the only reason
for finding a pte-mapped large folio (mTHPs of order < PMD_ORDER, for example,
are always pte-mapped).
Also, all mTHPs are now split, which is a change of behavior (they are
currently ignored) for order < PMD_ORDER.
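
Something like the following (an untested sketch, assuming the intent is to
keep skipping sub-PMD folios rather than splitting them) would preserve the
current behavior while still handling the partial-unmap case for PMD-order
folios:

	folio = page_folio(page);
	if (folio_test_large(folio)) {
		int ret;

		/* Keep current behavior: ignore pte-mapped mTHPs. */
		if (folio_order(folio) < HPAGE_PMD_ORDER)
			goto next;

		/* PMD-order folio found pte-mapped, split before migrating. */
		pte_unmap_unlock(ptep, ptl);
		ret = migrate_vma_split_folio(folio, migrate->fault_page);
		...
	}
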
> + pte_unmap_unlock(ptep, ptl);
> + ret = migrate_vma_split_folio(folio,
> + migrate->fault_page);
> +
> + if (ret) {
> + ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> + goto next;
> + }
> +
> + addr = start;
> + goto again;
> + }
> mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
> mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
> }
--Mika