* [PATCH v2 0/3] Migrate on fault for device pages
@ 2026-01-19 11:24 mpenttil
2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: mpenttil @ 2026-01-19 11:24 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä,
David Hildenbrand, Jason Gunthorpe, Leon Romanovsky,
Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
Currently, the way device page faulting and migration work
is not optimal if you want to do both fault handling and
migration at once.
Being able to migrate non-present pages (or pages mapped with incorrect
permissions, e.g. COW) to the GPU requires one of the
following sequences:
1. hmm_range_fault() - fault in non-present pages with correct permissions, etc.
2. migrate_vma_*() - migrate the pages
Or:
1. migrate_vma_*() - migrate present pages
2. If non-present pages detected by migrate_vma_*():
a) call hmm_range_fault() to fault pages in
b) call migrate_vma_*() again to migrate now present pages
The problem with the first sequence is that you always have to do two
page walks, even though most of the time the pages are already present or
zero-page mappings, so the common case takes a performance hit.
The second sequence is better for the common case, but far worse if
pages aren't present, because now you have to walk the page tables three
times (once to find that the page is not present, once so hmm_range_fault()
can find the non-present page to fault in, and once again to set up the
migration). It is also tricky to code correctly. A single page table walk
can cost over 1000 CPU cycles on x86-64, which is a significant hit.
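For reference, the first sequence looks roughly like this from a driver's
point of view with the current APIs (a minimal sketch; struct my_dev,
MY_NPAGES and my_alloc_and_copy() are made-up names, and the
mmu_interval_read_retry() loop plus most error handling are left out):

#include <linux/hmm.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

#define MY_NPAGES 16                    /* illustrative range size */

struct my_dev;                          /* hypothetical driver context */

/* Hypothetical helper: allocate dst device pages and copy the data. */
static void my_alloc_and_copy(struct my_dev *mydev, struct migrate_vma *args)
{
        /* Device-specific: fill args->dst[] with migrate_pfn()s and copy. */
}

static int my_fault_then_migrate(struct my_dev *mydev, struct mm_struct *mm,
                                 struct mmu_interval_notifier *notifier,
                                 unsigned long start, unsigned long end)
{
        unsigned long hmm_pfns[MY_NPAGES], src_pfns[MY_NPAGES], dst_pfns[MY_NPAGES];
        struct hmm_range range = {
                .notifier          = notifier,
                .start             = start,
                .end               = end,
                .hmm_pfns          = hmm_pfns,
                .default_flags     = HMM_PFN_REQ_FAULT,
                .dev_private_owner = mydev,
        };
        struct migrate_vma args = {
                .start       = start,
                .end         = end,
                .src         = src_pfns,
                .dst         = dst_pfns,
                .pgmap_owner = mydev,
                .flags       = MIGRATE_VMA_SELECT_SYSTEM,
        };
        int ret;

        mmap_read_lock(mm);

        /* Walk #1: fault pages in with the required permissions. */
        range.notifier_seq = mmu_interval_read_begin(notifier);
        ret = hmm_range_fault(&range);
        if (ret)
                goto out_unlock;

        /* Walk #2: look up the VMA ourselves and install migration entries. */
        args.vma = vma_lookup(mm, start);
        ret = migrate_vma_setup(&args);
        if (ret)
                goto out_unlock;

        /* Allocate device pages, copy the data, then finish the migration. */
        my_alloc_and_copy(mydev, &args);
        migrate_vma_pages(&args);
        migrate_vma_finalize(&args);

out_unlock:
        mmap_read_unlock(mm);
        return ret;
}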
We should be able to walk the page table once, faulting
pages in as required and replacing them with migration entries if
requested.
Add a new flag to the HMM API, HMM_PFN_REQ_MIGRATE,
which requests that migration entries also be prepared during fault handling.
For the migrate_vma_setup() call paths, a flag, MIGRATE_VMA_FAULT,
is added to request fault handling as part of the migration.
One extra benefit of migrating via the hmm_range_fault() path
is that migrate_vma.vma gets populated, so there is no need to
retrieve it separately.
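With this series the same thing is a single walk. A minimal sketch of the
new flow, following the kernel-doc comment added to hmm_range_fault() in
patch 1/3 (same made-up names as in the sketch above; src_pfns doubles as
range.hmm_pfns and is converted in place by migrate_hmm_range_setup(), like
the reworked migrate_vma_setup() does internally):

static int my_fault_and_migrate(struct my_dev *mydev, struct mm_struct *mm,
                                struct mmu_interval_notifier *notifier,
                                unsigned long start, unsigned long end)
{
        unsigned long src_pfns[MY_NPAGES], dst_pfns[MY_NPAGES];
        struct migrate_vma migrate = {
                .src         = src_pfns,
                .dst         = dst_pfns,
                .pgmap_owner = mydev,
                .flags       = MIGRATE_VMA_SELECT_SYSTEM,
        };
        struct hmm_range range = {
                .notifier          = notifier,
                .start             = start,
                .end               = end,
                .hmm_pfns          = src_pfns,  /* converted in place later */
                .default_flags     = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE,
                .dev_private_owner = mydev,
                .migrate           = &migrate,
        };
        int ret;

        mmap_read_lock(mm);

        /*
         * One walk: fault pages in and install migration entries.
         * migrate.vma, .start and .end are filled in by the walk, so no
         * separate vma_lookup() is needed.
         */
        range.notifier_seq = mmu_interval_read_begin(notifier);
        ret = hmm_range_fault(&range);
        if (ret)
                goto out_unlock;

        /* Convert hmm_pfns into migrate src entries and unmap the pages. */
        migrate_hmm_range_setup(&range);

        /* Usual tail: allocate device pages, copy, complete the migration. */
        my_alloc_and_copy(mydev, &migrate);     /* hypothetical helper */
        migrate_vma_pages(&migrate);
        migrate_vma_finalize(&migrate);

out_unlock:
        mmap_read_unlock(mm);
        return ret;
}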
Tested in an x86-64 VM with the HMM test device; the selftests pass.
Also tested rebased on the
"Remove device private pages from physical address space" series:
https://lore.kernel.org/linux-mm/20260107091823.68974-1-jniethe@nvidia.com/
plus a small adjustment patch, with no problems.
Changes v1-v2:
- rebase on 6.19-rc6
- fixed issues found by kernel test robot
- fixed locking (pmd/ptl) to cover the handle_* and prepare_* parts
when migrating
- fixed other issues discovered by Matthew, thanks!
Changes RFC-v1:
- rebase on 6.19-rc5
- adjust for the device THP
- changes from feedback
Revisions:
- RFC: https://lore.kernel.org/linux-mm/20250814072045.3637192-1-mpenttil@redhat.com/
- v1: https://lore.kernel.org/all/20260114091923.3950465-1-mpenttil@redhat.com/
Cc: David Hildenbrand <david@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Suggested-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
Mika Penttilä (3):
mm: unified hmm fault and migrate device pagewalk paths
mm: add new testcase for the migrate on fault case
mm:/migrate_device.c: remove migrate_vma_collect_*() functions
include/linux/hmm.h | 19 +-
include/linux/migrate.h | 27 +-
lib/test_hmm.c | 100 +++-
lib/test_hmm_uapi.h | 19 +-
mm/hmm.c | 770 +++++++++++++++++++++++--
mm/migrate_device.c | 594 +++----------------
tools/testing/selftests/mm/hmm-tests.c | 54 ++
7 files changed, 1002 insertions(+), 581 deletions(-)
base-commit: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
--
2.50.0
^ permalink raw reply [flat|nested] 8+ messages in thread* [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths 2026-01-19 11:24 [PATCH v2 0/3] Migrate on fault for device pages mpenttil @ 2026-01-19 11:25 ` mpenttil 2026-01-19 20:31 ` kernel test robot ` (2 more replies) 2026-01-19 11:25 ` [PATCH v2 2/3] mm: add new testcase for the migrate on fault case mpenttil 2026-01-19 11:25 ` [PATCH v2 3/3] mm:/migrate_device.c: remove migrate_vma_collect_*() functions mpenttil 2 siblings, 3 replies; 8+ messages in thread From: mpenttil @ 2026-01-19 11:25 UTC (permalink / raw) To: linux-mm Cc: linux-kernel, Mika Penttilä, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost From: Mika Penttilä <mpenttil@redhat.com> Currently, the way device page faulting and migration works is not optimal, if you want to do both fault handling and migration at once. Being able to migrate not present pages (or pages mapped with incorrect permissions, eg. COW) to the GPU requires doing either of the following sequences: 1. hmm_range_fault() - fault in non-present pages with correct permissions, etc. 2. migrate_vma_*() - migrate the pages Or: 1. migrate_vma_*() - migrate present pages 2. If non-present pages detected by migrate_vma_*(): a) call hmm_range_fault() to fault pages in b) call migrate_vma_*() again to migrate now present pages The problem with the first sequence is that you always have to do two page walks even when most of the time the pages are present or zero page mappings so the common case takes a performance hit. The second sequence is better for the common case, but far worse if pages aren't present because now you have to walk the page tables three times (once to find the page is not present, once so hmm_range_fault() can find a non-present page to fault in and once again to setup the migration). It is also tricky to code correctly. We should be able to walk the page table once, faulting pages in as required and replacing them with migration entries if requested. Add a new flag to HMM APIs, HMM_PFN_REQ_MIGRATE, which tells to prepare for migration also during fault handling. Also, for the migrate_vma_setup() call paths, a flag, MIGRATE_VMA_FAULT, is added to tell to add fault handling to migrate. Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Leon Romanovsky <leonro@nvidia.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Balbir Singh <balbirs@nvidia.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Matthew Brost <matthew.brost@intel.com> Suggested-by: Alistair Popple <apopple@nvidia.com> Signed-off-by: Mika Penttilä <mpenttil@redhat.com> --- include/linux/hmm.h | 19 +- include/linux/migrate.h | 27 +- mm/hmm.c | 770 +++++++++++++++++++++++++++++++++++++--- mm/migrate_device.c | 86 ++++- 4 files changed, 839 insertions(+), 63 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index db75ffc949a7..e2f53e155af2 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -12,7 +12,7 @@ #include <linux/mm.h> struct mmu_interval_notifier; - +struct migrate_vma; /* * On output: * 0 - The page is faultable and a future call with @@ -27,6 +27,7 @@ struct mmu_interval_notifier; * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation * to mark that page is already DMA mapped + * HMM_PFN_MIGRATE - Migrate PTE installed * * On input: * 0 - Return the current state of the page, do not fault it. 
@@ -34,6 +35,7 @@ struct mmu_interval_notifier; * will fail * HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault() * will fail. Must be combined with HMM_PFN_REQ_FAULT. + * HMM_PFN_REQ_MIGRATE - For default_flags, request to migrate to device */ enum hmm_pfn_flags { /* Output fields and flags */ @@ -48,15 +50,25 @@ enum hmm_pfn_flags { HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5), HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6), - HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 11), + /* Migrate request */ + HMM_PFN_MIGRATE = 1UL << (BITS_PER_LONG - 7), + HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 8), + HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 13), /* Input flags */ HMM_PFN_REQ_FAULT = HMM_PFN_VALID, HMM_PFN_REQ_WRITE = HMM_PFN_WRITE, + HMM_PFN_REQ_MIGRATE = HMM_PFN_MIGRATE, HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1), }; +enum { + /* These flags are carried from input-to-output */ + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | + HMM_PFN_P2PDMA_BUS, +}; + /* * hmm_pfn_to_page() - return struct page pointed to by a device entry * @@ -107,6 +119,7 @@ static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn) * @default_flags: default flags for the range (write, read, ... see hmm doc) * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter * @dev_private_owner: owner of device private pages + * @migrate: structure for migrating the associated vma */ struct hmm_range { struct mmu_interval_notifier *notifier; @@ -117,12 +130,14 @@ struct hmm_range { unsigned long default_flags; unsigned long pfn_flags_mask; void *dev_private_owner; + struct migrate_vma *migrate; }; /* * Please see Documentation/mm/hmm.rst for how to use the range API. */ int hmm_range_fault(struct hmm_range *range); +int hmm_range_migrate_prepare(struct hmm_range *range, struct migrate_vma **pargs); /* * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 26ca00c325d9..104eda2dd881 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -3,6 +3,7 @@ #define _LINUX_MIGRATE_H #include <linux/mm.h> +#include <linux/hmm.h> #include <linux/mempolicy.h> #include <linux/migrate_mode.h> #include <linux/hugetlb.h> @@ -97,6 +98,16 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag return -ENOSYS; } +enum migrate_vma_info { + MIGRATE_VMA_SELECT_NONE = 0, + MIGRATE_VMA_SELECT_COMPOUND = MIGRATE_VMA_SELECT_NONE, +}; + +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) +{ + return MIGRATE_VMA_SELECT_NONE; +} + #endif /* CONFIG_MIGRATION */ #ifdef CONFIG_NUMA_BALANCING @@ -140,11 +151,12 @@ static inline unsigned long migrate_pfn(unsigned long pfn) return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID; } -enum migrate_vma_direction { +enum migrate_vma_info { MIGRATE_VMA_SELECT_SYSTEM = 1 << 0, MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1, MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2, MIGRATE_VMA_SELECT_COMPOUND = 1 << 3, + MIGRATE_VMA_FAULT = 1 << 4, }; struct migrate_vma { @@ -182,6 +194,17 @@ struct migrate_vma { struct page *fault_page; }; +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) +{ + enum migrate_vma_info minfo; + + minfo = range->migrate ? range->migrate->flags : 0; + minfo |= (range->default_flags & HMM_PFN_REQ_MIGRATE) ? 
+ MIGRATE_VMA_SELECT_SYSTEM : 0; + + return minfo; +} + int migrate_vma_setup(struct migrate_vma *args); void migrate_vma_pages(struct migrate_vma *migrate); void migrate_vma_finalize(struct migrate_vma *migrate); @@ -192,7 +215,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns, unsigned long npages); void migrate_device_finalize(unsigned long *src_pfns, unsigned long *dst_pfns, unsigned long npages); - +void migrate_hmm_range_setup(struct hmm_range *range); #endif /* CONFIG_MIGRATION */ #endif /* _LINUX_MIGRATE_H */ diff --git a/mm/hmm.c b/mm/hmm.c index 4ec74c18bef6..1fdb8665eeec 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -20,6 +20,7 @@ #include <linux/pagemap.h> #include <linux/leafops.h> #include <linux/hugetlb.h> +#include <linux/migrate.h> #include <linux/memremap.h> #include <linux/sched/mm.h> #include <linux/jump_label.h> @@ -27,12 +28,20 @@ #include <linux/pci-p2pdma.h> #include <linux/mmu_notifier.h> #include <linux/memory_hotplug.h> +#include <asm/tlbflush.h> #include "internal.h" struct hmm_vma_walk { - struct hmm_range *range; - unsigned long last; + struct mmu_notifier_range mmu_range; + struct vm_area_struct *vma; + struct hmm_range *range; + unsigned long start; + unsigned long end; + unsigned long last; + bool locked; + bool pmdlocked; + spinlock_t *ptl; }; enum { @@ -41,21 +50,38 @@ enum { HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT, }; -enum { - /* These flags are carried from input-to-output */ - HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | - HMM_PFN_P2PDMA_BUS, -}; - static int hmm_pfns_fill(unsigned long addr, unsigned long end, - struct hmm_range *range, unsigned long cpu_flags) + struct hmm_vma_walk *hmm_vma_walk, unsigned long cpu_flags) { + struct hmm_range *range = hmm_vma_walk->range; unsigned long i = (addr - range->start) >> PAGE_SHIFT; + enum migrate_vma_info minfo; + bool migrate = false; + + minfo = hmm_select_migrate(range); + if (cpu_flags != HMM_PFN_ERROR) { + if (minfo && (vma_is_anonymous(hmm_vma_walk->vma))) { + cpu_flags |= (HMM_PFN_VALID | HMM_PFN_MIGRATE); + migrate = true; + } + } + + if (migrate && thp_migration_supported() && + (minfo & MIGRATE_VMA_SELECT_COMPOUND) && + IS_ALIGNED(addr, HPAGE_PMD_SIZE) && + IS_ALIGNED(end, HPAGE_PMD_SIZE)) { + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + range->hmm_pfns[i] |= cpu_flags | HMM_PFN_COMPOUND; + addr += PAGE_SIZE; + i++; + cpu_flags = 0; + } for (; addr < end; addr += PAGE_SIZE, i++) { range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; range->hmm_pfns[i] |= cpu_flags; } + return 0; } @@ -171,11 +197,11 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end, if (!walk->vma) { if (required_fault) return -EFAULT; - return hmm_pfns_fill(addr, end, range, HMM_PFN_ERROR); + return hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); } if (required_fault) return hmm_vma_fault(addr, end, required_fault, walk); - return hmm_pfns_fill(addr, end, range, 0); + return hmm_pfns_fill(addr, end, hmm_vma_walk, 0); } static inline unsigned long hmm_pfn_flags_order(unsigned long order) @@ -208,8 +234,13 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, cpu_flags = pmd_to_hmm_pfn_flags(range, pmd); required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, cpu_flags); - if (required_fault) + if (required_fault) { + if (hmm_vma_walk->pmdlocked) { + spin_unlock(hmm_vma_walk->ptl); + hmm_vma_walk->pmdlocked = false; + } return hmm_vma_fault(addr, end, required_fault, walk); + } pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> 
PAGE_SHIFT); for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { @@ -289,14 +320,28 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, goto fault; if (softleaf_is_migration(entry)) { - pte_unmap(ptep); - hmm_vma_walk->last = addr; - migration_entry_wait(walk->mm, pmdp, addr); - return -EBUSY; + if (!hmm_select_migrate(range)) { + if (hmm_vma_walk->locked) { + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + } else + pte_unmap(ptep); + + hmm_vma_walk->last = addr; + migration_entry_wait(walk->mm, pmdp, addr); + return -EBUSY; + } else + goto out; } /* Report error for everything else */ - pte_unmap(ptep); + + if (hmm_vma_walk->locked) { + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + } else + pte_unmap(ptep); + return -EFAULT; } @@ -313,7 +358,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, if (!vm_normal_page(walk->vma, addr, pte) && !is_zero_pfn(pte_pfn(pte))) { if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) { - pte_unmap(ptep); + if (hmm_vma_walk->locked) { + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + } else + pte_unmap(ptep); + return -EFAULT; } new_pfn_flags = HMM_PFN_ERROR; @@ -326,7 +376,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, return 0; fault: - pte_unmap(ptep); + if (hmm_vma_walk->locked) { + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + } else + pte_unmap(ptep); /* Fault any virtual address we were asked to fault */ return hmm_vma_fault(addr, end, required_fault, walk); } @@ -370,13 +424,18 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0); if (required_fault) { - if (softleaf_is_device_private(entry)) + if (softleaf_is_device_private(entry)) { + if (hmm_vma_walk->pmdlocked) { + spin_unlock(hmm_vma_walk->ptl); + hmm_vma_walk->pmdlocked = false; + } return hmm_vma_fault(addr, end, required_fault, walk); + } else return -EFAULT; } - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); } #else static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, @@ -384,15 +443,486 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, pmd_t pmd) { struct hmm_vma_walk *hmm_vma_walk = walk->private; - struct hmm_range *range = hmm_vma_walk->range; unsigned long npages = (end - start) >> PAGE_SHIFT; if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) return -EFAULT; - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); } #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */ +#ifdef CONFIG_DEVICE_MIGRATION +/** + * migrate_vma_split_folio() - Helper function to split a THP folio + * @folio: the folio to split + * @fault_page: struct page associated with the fault if any + * + * Returns 0 on success + */ +static int migrate_vma_split_folio(struct folio *folio, + struct page *fault_page) +{ + int ret; + struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL; + struct folio *new_fault_folio = NULL; + + if (folio != fault_folio) { + folio_get(folio); + folio_lock(folio); + } + + ret = split_folio(folio); + if (ret) { + if (folio != fault_folio) { + folio_unlock(folio); + folio_put(folio); + } + return ret; + } + + new_fault_folio = fault_page ? 
page_folio(fault_page) : NULL; + + /* + * Ensure the lock is held on the correct + * folio after the split + */ + if (!new_fault_folio) { + folio_unlock(folio); + folio_put(folio); + } else if (folio != new_fault_folio) { + if (new_fault_folio != fault_folio) { + folio_get(new_fault_folio); + folio_lock(new_fault_folio); + } + folio_unlock(folio); + folio_put(folio); + } + + return 0; +} + +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, + pmd_t *pmdp, + unsigned long start, + unsigned long end, + unsigned long *hmm_pfn) +{ + struct hmm_vma_walk *hmm_vma_walk = walk->private; + struct hmm_range *range = hmm_vma_walk->range; + struct migrate_vma *migrate = range->migrate; + struct folio *fault_folio = NULL; + struct folio *folio; + enum migrate_vma_info minfo; + unsigned long i; + int r = 0; + + minfo = hmm_select_migrate(range); + if (!minfo) + return r; + + fault_folio = (migrate && migrate->fault_page) ? + page_folio(migrate->fault_page) : NULL; + + if (pmd_none(*pmdp)) + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); + + if (!(hmm_pfn[0] & HMM_PFN_VALID)) + goto out; + + if (pmd_trans_huge(*pmdp)) { + if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM)) + goto out; + + folio = pmd_folio(*pmdp); + if (is_huge_zero_folio(folio)) + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); + + } else if (!pmd_present(*pmdp)) { + const softleaf_t entry = softleaf_from_pmd(*pmdp); + + folio = softleaf_to_folio(entry); + + if (!softleaf_is_device_private(entry)) + goto out; + + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) + goto out; + if (folio->pgmap->owner != migrate->pgmap_owner) + goto out; + + } else { + hmm_vma_walk->last = start; + return -EBUSY; + } + + folio_get(folio); + + if (folio != fault_folio && unlikely(!folio_trylock(folio))) { + folio_put(folio); + hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); + return 0; + } + + if (thp_migration_supported() && + (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) && + (IS_ALIGNED(start, HPAGE_PMD_SIZE) && + IS_ALIGNED(end, HPAGE_PMD_SIZE))) { + + struct page_vma_mapped_walk pvmw = { + .ptl = hmm_vma_walk->ptl, + .address = start, + .pmd = pmdp, + .vma = walk->vma, + }; + + hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND; + + r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0)); + if (r) { + hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND); + r = -ENOENT; // fallback + goto unlock_out; + } + for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++) + hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS; + + } else { + r = -ENOENT; // fallback + goto unlock_out; + } + + +out: + return r; + +unlock_out: + if (folio != fault_folio) + folio_unlock(folio); + folio_put(folio); + goto out; + +} + +/* + * Install migration entries if migration requested, either from fault + * or migrate paths. + * + */ +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, + pmd_t *pmdp, + pte_t *ptep, + unsigned long addr, + unsigned long *hmm_pfn) +{ + struct hmm_vma_walk *hmm_vma_walk = walk->private; + struct hmm_range *range = hmm_vma_walk->range; + struct migrate_vma *migrate = range->migrate; + struct mm_struct *mm = walk->vma->vm_mm; + struct folio *fault_folio = NULL; + enum migrate_vma_info minfo; + struct dev_pagemap *pgmap; + bool anon_exclusive; + struct folio *folio; + unsigned long pfn; + struct page *page; + softleaf_t entry; + pte_t pte, swp_pte; + bool writable = false; + + // Do we want to migrate at all? 
+ minfo = hmm_select_migrate(range); + if (!minfo) + return 0; + + fault_folio = (migrate && migrate->fault_page) ? + page_folio(migrate->fault_page) : NULL; + + if (!hmm_vma_walk->locked) { + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); + hmm_vma_walk->locked = true; + } + pte = ptep_get(ptep); + + if (pte_none(pte)) { + // migrate without faulting case + if (vma_is_anonymous(walk->vma)) { + *hmm_pfn &= HMM_PFN_INOUT_FLAGS; + *hmm_pfn |= HMM_PFN_MIGRATE | HMM_PFN_VALID; + goto out; + } + } + + if (!(hmm_pfn[0] & HMM_PFN_VALID)) + goto out; + + if (!pte_present(pte)) { + /* + * Only care about unaddressable device page special + * page table entry. Other special swap entries are not + * migratable, and we ignore regular swapped page. + */ + entry = softleaf_from_pte(pte); + if (!softleaf_is_device_private(entry)) + goto out; + + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) + goto out; + + page = softleaf_to_page(entry); + folio = page_folio(page); + if (folio->pgmap->owner != migrate->pgmap_owner) + goto out; + + if (folio_test_large(folio)) { + int ret; + + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + ret = migrate_vma_split_folio(folio, + migrate->fault_page); + if (ret) + goto out_error; + return -EAGAIN; + } + + pfn = page_to_pfn(page); + if (softleaf_is_device_private_write(entry)) + writable = true; + } else { + pfn = pte_pfn(pte); + if (is_zero_pfn(pfn) && + (minfo & MIGRATE_VMA_SELECT_SYSTEM)) { + *hmm_pfn = HMM_PFN_MIGRATE|HMM_PFN_VALID; + goto out; + } + page = vm_normal_page(walk->vma, addr, pte); + if (page && !is_zone_device_page(page) && + !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) { + goto out; + } else if (page && is_device_coherent_page(page)) { + pgmap = page_pgmap(page); + + if (!(minfo & + MIGRATE_VMA_SELECT_DEVICE_COHERENT) || + pgmap->owner != migrate->pgmap_owner) + goto out; + } + + folio = page ? page_folio(page) : NULL; + if (folio && folio_test_large(folio)) { + int ret; + + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); + hmm_vma_walk->locked = false; + + ret = migrate_vma_split_folio(folio, + migrate->fault_page); + if (ret) + goto out_error; + return -EAGAIN; + } + + writable = pte_write(pte); + } + + if (!page || !page->mapping) + goto out; + + /* + * By getting a reference on the folio we pin it and that blocks + * any kind of migration. Side effect is that it "freezes" the + * pte. + * + * We drop this reference after isolating the folio from the lru + * for non device folio (device folio are not on the lru and thus + * can't be dropped from it). + */ + folio = page_folio(page); + folio_get(folio); + + /* + * We rely on folio_trylock() to avoid deadlock between + * concurrent migrations where each is waiting on the others + * folio lock. If we can't immediately lock the folio we fail this + * migration as it is only best effort anyway. + * + * If we can lock the folio it's safe to set up a migration entry + * now. In the common case where the folio is mapped once in a + * single process setting up the migration entry now is an + * optimisation to avoid walking the rmap later with + * try_to_migrate(). 
+ */ + + if (fault_folio == folio || folio_trylock(folio)) { + anon_exclusive = folio_test_anon(folio) && + PageAnonExclusive(page); + + flush_cache_page(walk->vma, addr, pfn); + + if (anon_exclusive) { + pte = ptep_clear_flush(walk->vma, addr, ptep); + + if (folio_try_share_anon_rmap_pte(folio, page)) { + set_pte_at(mm, addr, ptep, pte); + folio_unlock(folio); + folio_put(folio); + goto out; + } + } else { + pte = ptep_get_and_clear(mm, addr, ptep); + } + + if (pte_dirty(pte)) + folio_mark_dirty(folio); + + /* Setup special migration page table entry */ + if (writable) + entry = make_writable_migration_entry(pfn); + else if (anon_exclusive) + entry = make_readable_exclusive_migration_entry(pfn); + else + entry = make_readable_migration_entry(pfn); + + if (pte_present(pte)) { + if (pte_young(pte)) + entry = make_migration_entry_young(entry); + if (pte_dirty(pte)) + entry = make_migration_entry_dirty(entry); + } + + swp_pte = swp_entry_to_pte(entry); + if (pte_present(pte)) { + if (pte_soft_dirty(pte)) + swp_pte = pte_swp_mksoft_dirty(swp_pte); + if (pte_uffd_wp(pte)) + swp_pte = pte_swp_mkuffd_wp(swp_pte); + } else { + if (pte_swp_soft_dirty(pte)) + swp_pte = pte_swp_mksoft_dirty(swp_pte); + if (pte_swp_uffd_wp(pte)) + swp_pte = pte_swp_mkuffd_wp(swp_pte); + } + + set_pte_at(mm, addr, ptep, swp_pte); + folio_remove_rmap_pte(folio, page, walk->vma); + folio_put(folio); + *hmm_pfn |= HMM_PFN_MIGRATE; + + if (pte_present(pte)) + flush_tlb_range(walk->vma, addr, addr + PAGE_SIZE); + } else + folio_put(folio); +out: + return 0; +out_error: + return -EFAULT; + +} + +static int hmm_vma_walk_split(pmd_t *pmdp, + unsigned long addr, + struct mm_walk *walk) +{ + struct hmm_vma_walk *hmm_vma_walk = walk->private; + struct hmm_range *range = hmm_vma_walk->range; + struct migrate_vma *migrate = range->migrate; + struct folio *folio, *fault_folio; + spinlock_t *ptl; + int ret = 0; + + fault_folio = (migrate && migrate->fault_page) ? 
+ page_folio(migrate->fault_page) : NULL; + + ptl = pmd_lock(walk->mm, pmdp); + if (unlikely(!pmd_trans_huge(*pmdp))) { + spin_unlock(ptl); + goto out; + } + + folio = pmd_folio(*pmdp); + if (is_huge_zero_folio(folio)) { + spin_unlock(ptl); + split_huge_pmd(walk->vma, pmdp, addr); + } else { + folio_get(folio); + spin_unlock(ptl); + + if (folio != fault_folio) { + if (unlikely(!folio_trylock(folio))) { + folio_put(folio); + ret = -EBUSY; + goto out; + } + } else + folio_put(folio); + + ret = split_folio(folio); + if (fault_folio != folio) { + folio_unlock(folio); + folio_put(folio); + } + + } +out: + return ret; +} +#else +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, + pmd_t *pmdp, + unsigned long start, + unsigned long end, + unsigned long *hmm_pfn) +{ + return 0; +} + +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, + pmd_t *pmdp, + pte_t *pte, + unsigned long addr, + unsigned long *hmm_pfn) +{ + return 0; +} + +static int hmm_vma_walk_split(pmd_t *pmdp, + unsigned long addr, + struct mm_walk *walk) +{ + return 0; +} +#endif + +static int hmm_vma_capture_migrate_range(unsigned long start, + unsigned long end, + struct mm_walk *walk) +{ + struct hmm_vma_walk *hmm_vma_walk = walk->private; + struct hmm_range *range = hmm_vma_walk->range; + + if (!hmm_select_migrate(range)) + return 0; + + if (hmm_vma_walk->vma && (hmm_vma_walk->vma != walk->vma)) + return -ERANGE; + + hmm_vma_walk->vma = walk->vma; + hmm_vma_walk->start = start; + hmm_vma_walk->end = end; + + if (end - start > range->end - range->start) + return -ERANGE; + + if (!hmm_vma_walk->mmu_range.owner) { + mmu_notifier_range_init_owner(&hmm_vma_walk->mmu_range, MMU_NOTIFY_MIGRATE, 0, + walk->vma->vm_mm, start, end, + range->dev_private_owner); + mmu_notifier_invalidate_range_start(&hmm_vma_walk->mmu_range); + } + + return 0; +} + static int hmm_vma_walk_pmd(pmd_t *pmdp, unsigned long start, unsigned long end, @@ -403,43 +933,112 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, unsigned long *hmm_pfns = &range->hmm_pfns[(start - range->start) >> PAGE_SHIFT]; unsigned long npages = (end - start) >> PAGE_SHIFT; + struct mm_struct *mm = walk->vma->vm_mm; unsigned long addr = start; + enum migrate_vma_info minfo; + unsigned long i; + spinlock_t *ptl; pte_t *ptep; pmd_t pmd; + int r; + + minfo = hmm_select_migrate(range); again: + hmm_vma_walk->locked = false; + hmm_vma_walk->pmdlocked = false; pmd = pmdp_get_lockless(pmdp); - if (pmd_none(pmd)) - return hmm_vma_walk_hole(start, end, -1, walk); + if (pmd_none(pmd)) { + r = hmm_vma_walk_hole(start, end, -1, walk); + if (r || !minfo) + return r; + + ptl = pmd_lock(walk->mm, pmdp); + if (pmd_none(*pmdp)) { + // hmm_vma_walk_hole() filled migration needs + spin_unlock(ptl); + return r; + } + spin_unlock(ptl); + } if (thp_migration_supported() && pmd_is_migration_entry(pmd)) { - if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { + if (!minfo) { + if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { + hmm_vma_walk->last = addr; + pmd_migration_entry_wait(walk->mm, pmdp); + return -EBUSY; + } + } + for (i = 0; addr < end; addr += PAGE_SIZE, i++) + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; + + return 0; + } + + if (minfo) { + hmm_vma_walk->ptl = pmd_lock(mm, pmdp); + hmm_vma_walk->pmdlocked = true; + pmd = pmdp_get(pmdp); + } else + pmd = pmdp_get_lockless(pmdp); + + if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { + + if (!pmd_present(pmd)) { + r = hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, + pmd); + if (r || !minfo) + 
return r; + } else { + + /* + * No need to take pmd_lock here if not migrating, + * even if some other thread is splitting the huge + * pmd we will get that event through mmu_notifier callback. + * + * So just read pmd value and check again it's a transparent + * huge or device mapping one and compute corresponding pfn + * values. + */ + + if (!pmd_trans_huge(pmd)) { + // must be lockless + goto again; + } + + r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); + + if (r || !minfo) + return r; + } + + r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns); + + if (hmm_vma_walk->pmdlocked) { + spin_unlock(hmm_vma_walk->ptl); + hmm_vma_walk->pmdlocked = false; + } + + if (r == -ENOENT) { + r = hmm_vma_walk_split(pmdp, addr, walk); + if (r) { + /* Split not successful, skip */ + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); + } + + /* Split successful or "again", reloop */ hmm_vma_walk->last = addr; - pmd_migration_entry_wait(walk->mm, pmdp); return -EBUSY; } - return hmm_pfns_fill(start, end, range, 0); - } - if (!pmd_present(pmd)) - return hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, - pmd); + return r; - if (pmd_trans_huge(pmd)) { - /* - * No need to take pmd_lock here, even if some other thread - * is splitting the huge pmd we will get that event through - * mmu_notifier callback. - * - * So just read pmd value and check again it's a transparent - * huge or device mapping one and compute corresponding pfn - * values. - */ - pmd = pmdp_get_lockless(pmdp); - if (!pmd_trans_huge(pmd)) - goto again; + } - return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); + if (hmm_vma_walk->pmdlocked) { + spin_unlock(hmm_vma_walk->ptl); + hmm_vma_walk->pmdlocked = false; } /* @@ -451,22 +1050,41 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, if (pmd_bad(pmd)) { if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) return -EFAULT; - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); } - ptep = pte_offset_map(pmdp, addr); + if (minfo) { + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); + if (ptep) + hmm_vma_walk->locked = true; + } else + ptep = pte_offset_map(pmdp, addr); if (!ptep) goto again; + for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) { - int r; r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns); if (r) { /* hmm_vma_handle_pte() did pte_unmap() */ return r; } + + r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns); + if (r == -EAGAIN) { + goto again; + } + if (r) { + hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); + break; + } } - pte_unmap(ptep - 1); + + if (hmm_vma_walk->locked) + pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl); + else + pte_unmap(ptep - 1); + return 0; } @@ -600,6 +1218,11 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end, struct hmm_vma_walk *hmm_vma_walk = walk->private; struct hmm_range *range = hmm_vma_walk->range; struct vm_area_struct *vma = walk->vma; + int r; + + r = hmm_vma_capture_migrate_range(start, end, walk); + if (r) + return r; if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) && vma->vm_flags & VM_READ) @@ -622,7 +1245,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end, (end - start) >> PAGE_SHIFT, 0)) return -EFAULT; - hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); + hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); /* Skip this vma and continue processing the next vma. 
*/ return 1; @@ -652,9 +1275,17 @@ static const struct mm_walk_ops hmm_walk_ops = { * the invalidation to finish. * -EFAULT: A page was requested to be valid and could not be made valid * ie it has no backing VMA or it is illegal to access + * -ERANGE: The range crosses multiple VMAs, or space for hmm_pfns array + * is too low. * * This is similar to get_user_pages(), except that it can read the page tables * without mutating them (ie causing faults). + * + * If want to do migrate after faulting, call hmm_range_fault() with + * HMM_PFN_REQ_MIGRATE and initialize range.migrate field. + * After hmm_range_fault() call migrate_hmm_range_setup() instead of + * migrate_vma_setup() and after that follow normal migrate calls path. + * */ int hmm_range_fault(struct hmm_range *range) { @@ -662,16 +1293,32 @@ int hmm_range_fault(struct hmm_range *range) .range = range, .last = range->start, }; - struct mm_struct *mm = range->notifier->mm; + bool is_fault_path = !!range->notifier; + struct mm_struct *mm; int ret; + /* + * + * Could be serving a device fault or come from migrate + * entry point. For the former we have not resolved the vma + * yet, and the latter we don't have a notifier (but have a vma). + * + */ +#ifdef CONFIG_DEVICE_MIGRATION + mm = is_fault_path ? range->notifier->mm : range->migrate->vma->vm_mm; +#else + mm = range->notifier->mm; +#endif mmap_assert_locked(mm); do { /* If range is no longer valid force retry. */ - if (mmu_interval_check_retry(range->notifier, - range->notifier_seq)) - return -EBUSY; + if (is_fault_path && mmu_interval_check_retry(range->notifier, + range->notifier_seq)) { + ret = -EBUSY; + break; + } + ret = walk_page_range(mm, hmm_vma_walk.last, range->end, &hmm_walk_ops, &hmm_vma_walk); /* @@ -681,6 +1328,19 @@ int hmm_range_fault(struct hmm_range *range) * output, and all >= are still at their input values. */ } while (ret == -EBUSY); + +#ifdef CONFIG_DEVICE_MIGRATION + if (hmm_select_migrate(range) && range->migrate && + hmm_vma_walk.mmu_range.owner) { + // The migrate_vma path has the following initialized + if (is_fault_path) { + range->migrate->vma = hmm_vma_walk.vma; + range->migrate->start = range->start; + range->migrate->end = hmm_vma_walk.end; + } + mmu_notifier_invalidate_range_end(&hmm_vma_walk.mmu_range); + } +#endif return ret; } EXPORT_SYMBOL(hmm_range_fault); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index 23379663b1e1..bda6320f6242 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -734,7 +734,16 @@ static void migrate_vma_unmap(struct migrate_vma *migrate) */ int migrate_vma_setup(struct migrate_vma *args) { + int ret; long nr_pages = (args->end - args->start) >> PAGE_SHIFT; + struct hmm_range range = { + .notifier = NULL, + .start = args->start, + .end = args->end, + .hmm_pfns = args->src, + .dev_private_owner = args->pgmap_owner, + .migrate = args + }; args->start &= PAGE_MASK; args->end &= PAGE_MASK; @@ -759,17 +768,25 @@ int migrate_vma_setup(struct migrate_vma *args) args->cpages = 0; args->npages = 0; - migrate_vma_collect(args); + if (args->flags & MIGRATE_VMA_FAULT) + range.default_flags |= HMM_PFN_REQ_FAULT; + + ret = hmm_range_fault(&range); + + migrate_hmm_range_setup(&range); - if (args->cpages) - migrate_vma_unmap(args); + /* Remove migration PTEs */ + if (ret) { + migrate_vma_pages(args); + migrate_vma_finalize(args); + } /* * At this point pages are locked and unmapped, and thus they have * stable content and can safely be copied to destination memory that * is allocated by the drivers. 
*/ - return 0; + return ret; } EXPORT_SYMBOL(migrate_vma_setup); @@ -1489,3 +1506,64 @@ int migrate_device_coherent_folio(struct folio *folio) return 0; return -EBUSY; } + +void migrate_hmm_range_setup(struct hmm_range *range) +{ + + struct migrate_vma *migrate = range->migrate; + + if (!migrate) + return; + + migrate->npages = (migrate->end - migrate->start) >> PAGE_SHIFT; + migrate->cpages = 0; + + for (unsigned long i = 0; i < migrate->npages; i++) { + + unsigned long pfn = range->hmm_pfns[i]; + + pfn &= ~HMM_PFN_INOUT_FLAGS; + + /* + * + * Don't do migration if valid and migrate flags are not both set. + * + */ + if ((pfn & (HMM_PFN_VALID | HMM_PFN_MIGRATE)) != + (HMM_PFN_VALID | HMM_PFN_MIGRATE)) { + migrate->src[i] = 0; + migrate->dst[i] = 0; + continue; + } + + migrate->cpages++; + + /* + * + * The zero page is encoded in a special way, valid and migrate is + * set, and pfn part is zero. Encode specially for migrate also. + * + */ + if (pfn == (HMM_PFN_VALID|HMM_PFN_MIGRATE)) { + migrate->src[i] = MIGRATE_PFN_MIGRATE; + migrate->dst[i] = 0; + continue; + } + if (pfn == (HMM_PFN_VALID|HMM_PFN_MIGRATE|HMM_PFN_COMPOUND)) { + migrate->src[i] = MIGRATE_PFN_MIGRATE|MIGRATE_PFN_COMPOUND; + migrate->dst[i] = 0; + continue; + } + + migrate->src[i] = migrate_pfn(page_to_pfn(hmm_pfn_to_page(pfn))) + | MIGRATE_PFN_MIGRATE; + migrate->src[i] |= (pfn & HMM_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0; + migrate->src[i] |= (pfn & HMM_PFN_COMPOUND) ? MIGRATE_PFN_COMPOUND : 0; + migrate->dst[i] = 0; + } + + if (migrate->cpages) + migrate_vma_unmap(migrate); + +} +EXPORT_SYMBOL(migrate_hmm_range_setup); -- 2.50.0 ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths 2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil @ 2026-01-19 20:31 ` kernel test robot 2026-01-20 6:29 ` Dan Carpenter 2026-01-20 16:03 ` Matthew Brost 2 siblings, 0 replies; 8+ messages in thread From: kernel test robot @ 2026-01-19 20:31 UTC (permalink / raw) To: mpenttil, linux-mm Cc: llvm, oe-kbuild-all, linux-kernel, Mika Penttilä, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost Hi, kernel test robot noticed the following build errors: [auto build test ERROR on 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7] url: https://github.com/intel-lab-lkp/linux/commits/mpenttil-redhat-com/mm-unified-hmm-fault-and-migrate-device-pagewalk-paths/20260119-193100 base: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7 patch link: https://lore.kernel.org/r/20260119112502.645059-2-mpenttil%40redhat.com patch subject: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths config: x86_64-kexec (https://download.01.org/0day-ci/archive/20260120/202601200418.y3IajnfX-lkp@intel.com/config) compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260120/202601200418.y3IajnfX-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202601200418.y3IajnfX-lkp@intel.com/ All errors (new ones prefixed by >>): >> ld.lld: error: undefined symbol: hmm_range_fault >>> referenced by migrate_device.c:774 (mm/migrate_device.c:774) >>> vmlinux.o:(migrate_vma_setup) -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths 2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil 2026-01-19 20:31 ` kernel test robot @ 2026-01-20 6:29 ` Dan Carpenter 2026-01-20 16:03 ` Matthew Brost 2 siblings, 0 replies; 8+ messages in thread From: Dan Carpenter @ 2026-01-20 6:29 UTC (permalink / raw) To: oe-kbuild, mpenttil, linux-mm Cc: lkp, oe-kbuild-all, linux-kernel, Mika Penttilä, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost Hi, kernel test robot noticed the following build warnings: url: https://github.com/intel-lab-lkp/linux/commits/mpenttil-redhat-com/mm-unified-hmm-fault-and-migrate-device-pagewalk-paths/20260119-193100 base: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7 patch link: https://lore.kernel.org/r/20260119112502.645059-2-mpenttil%40redhat.com patch subject: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths config: sparc-randconfig-r071-20260119 (https://download.01.org/0day-ci/archive/20260120/202601200251.uRdWeQPq-lkp@intel.com/config) compiler: sparc-linux-gcc (GCC) 15.2.0 smatch version: v0.5.0-8985-g2614ff1a If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Reported-by: Dan Carpenter <dan.carpenter@linaro.org> | Closes: https://lore.kernel.org/r/202601200251.uRdWeQPq-lkp@intel.com/ smatch warnings: mm/hmm.c:960 hmm_vma_walk_pmd() warn: missing error code? 'r' mm/hmm.c:1310 hmm_range_fault() error: we previously assumed 'range->notifier' could be null (see line 1296) vim +/r +960 mm/hmm.c 53f5c3f489ecdd Jérôme Glisse 2018-04-10 926 static int hmm_vma_walk_pmd(pmd_t *pmdp, 53f5c3f489ecdd Jérôme Glisse 2018-04-10 927 unsigned long start, 53f5c3f489ecdd Jérôme Glisse 2018-04-10 928 unsigned long end, 53f5c3f489ecdd Jérôme Glisse 2018-04-10 929 struct mm_walk *walk) 53f5c3f489ecdd Jérôme Glisse 2018-04-10 930 { 53f5c3f489ecdd Jérôme Glisse 2018-04-10 931 struct hmm_vma_walk *hmm_vma_walk = walk->private; 53f5c3f489ecdd Jérôme Glisse 2018-04-10 932 struct hmm_range *range = hmm_vma_walk->range; 2733ea144dcce7 Jason Gunthorpe 2020-05-01 933 unsigned long *hmm_pfns = 2733ea144dcce7 Jason Gunthorpe 2020-05-01 934 &range->hmm_pfns[(start - range->start) >> PAGE_SHIFT]; 2288a9a68175ce Jason Gunthorpe 2020-03-05 935 unsigned long npages = (end - start) >> PAGE_SHIFT; adc5de78797562 Mika Penttilä 2026-01-19 936 struct mm_struct *mm = walk->vma->vm_mm; 2288a9a68175ce Jason Gunthorpe 2020-03-05 937 unsigned long addr = start; adc5de78797562 Mika Penttilä 2026-01-19 938 enum migrate_vma_info minfo; adc5de78797562 Mika Penttilä 2026-01-19 939 unsigned long i; adc5de78797562 Mika Penttilä 2026-01-19 940 spinlock_t *ptl; 53f5c3f489ecdd Jérôme Glisse 2018-04-10 941 pte_t *ptep; d08faca018c461 Jérôme Glisse 2018-10-30 942 pmd_t pmd; adc5de78797562 Mika Penttilä 2026-01-19 943 int r; adc5de78797562 Mika Penttilä 2026-01-19 944 adc5de78797562 Mika Penttilä 2026-01-19 945 minfo = hmm_select_migrate(range); 53f5c3f489ecdd Jérôme Glisse 2018-04-10 946 53f5c3f489ecdd Jérôme Glisse 2018-04-10 947 again: adc5de78797562 Mika Penttilä 2026-01-19 948 hmm_vma_walk->locked = false; adc5de78797562 Mika Penttilä 2026-01-19 949 hmm_vma_walk->pmdlocked = false; 26e1a0c3277d7f Hugh Dickins 2023-06-08 950 pmd = pmdp_get_lockless(pmdp); adc5de78797562 Mika Penttilä 2026-01-19 951 if (pmd_none(pmd)) { adc5de78797562 Mika 
Penttilä 2026-01-19 952 r = hmm_vma_walk_hole(start, end, -1, walk); adc5de78797562 Mika Penttilä 2026-01-19 953 if (r || !minfo) adc5de78797562 Mika Penttilä 2026-01-19 954 return r; if minfo is NULL do we return success? adc5de78797562 Mika Penttilä 2026-01-19 955 adc5de78797562 Mika Penttilä 2026-01-19 956 ptl = pmd_lock(walk->mm, pmdp); adc5de78797562 Mika Penttilä 2026-01-19 957 if (pmd_none(*pmdp)) { adc5de78797562 Mika Penttilä 2026-01-19 958 // hmm_vma_walk_hole() filled migration needs adc5de78797562 Mika Penttilä 2026-01-19 959 spin_unlock(ptl); adc5de78797562 Mika Penttilä 2026-01-19 @960 return r; And here? adc5de78797562 Mika Penttilä 2026-01-19 961 } adc5de78797562 Mika Penttilä 2026-01-19 962 spin_unlock(ptl); adc5de78797562 Mika Penttilä 2026-01-19 963 } 53f5c3f489ecdd Jérôme Glisse 2018-04-10 964 0ac881efe16468 Lorenzo Stoakes 2025-11-10 965 if (thp_migration_supported() && pmd_is_migration_entry(pmd)) { adc5de78797562 Mika Penttilä 2026-01-19 966 if (!minfo) { 2733ea144dcce7 Jason Gunthorpe 2020-05-01 967 if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { d08faca018c461 Jérôme Glisse 2018-10-30 968 hmm_vma_walk->last = addr; d2e8d551165ccb Ralph Campbell 2019-07-25 969 pmd_migration_entry_wait(walk->mm, pmdp); 73231612dc7c90 Jérôme Glisse 2019-05-13 970 return -EBUSY; d08faca018c461 Jérôme Glisse 2018-10-30 971 } 2288a9a68175ce Jason Gunthorpe 2020-03-05 972 } adc5de78797562 Mika Penttilä 2026-01-19 973 for (i = 0; addr < end; addr += PAGE_SIZE, i++) adc5de78797562 Mika Penttilä 2026-01-19 974 hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; 2288a9a68175ce Jason Gunthorpe 2020-03-05 975 adc5de78797562 Mika Penttilä 2026-01-19 976 return 0; adc5de78797562 Mika Penttilä 2026-01-19 977 } adc5de78797562 Mika Penttilä 2026-01-19 978 adc5de78797562 Mika Penttilä 2026-01-19 979 if (minfo) { adc5de78797562 Mika Penttilä 2026-01-19 980 hmm_vma_walk->ptl = pmd_lock(mm, pmdp); adc5de78797562 Mika Penttilä 2026-01-19 981 hmm_vma_walk->pmdlocked = true; adc5de78797562 Mika Penttilä 2026-01-19 982 pmd = pmdp_get(pmdp); adc5de78797562 Mika Penttilä 2026-01-19 983 } else adc5de78797562 Mika Penttilä 2026-01-19 984 pmd = pmdp_get_lockless(pmdp); adc5de78797562 Mika Penttilä 2026-01-19 985 adc5de78797562 Mika Penttilä 2026-01-19 986 if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { adc5de78797562 Mika Penttilä 2026-01-19 987 adc5de78797562 Mika Penttilä 2026-01-19 988 if (!pmd_present(pmd)) { adc5de78797562 Mika Penttilä 2026-01-19 989 r = hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, 10b9feee2d0dc8 Francois Dugast 2025-09-08 990 pmd); adc5de78797562 Mika Penttilä 2026-01-19 991 if (r || !minfo) adc5de78797562 Mika Penttilä 2026-01-19 992 return r; Same adc5de78797562 Mika Penttilä 2026-01-19 993 } else { d08faca018c461 Jérôme Glisse 2018-10-30 994 53f5c3f489ecdd Jérôme Glisse 2018-04-10 995 /* adc5de78797562 Mika Penttilä 2026-01-19 996 * No need to take pmd_lock here if not migrating, adc5de78797562 Mika Penttilä 2026-01-19 997 * even if some other thread is splitting the huge adc5de78797562 Mika Penttilä 2026-01-19 998 * pmd we will get that event through mmu_notifier callback. 53f5c3f489ecdd Jérôme Glisse 2018-04-10 999 * d2e8d551165ccb Ralph Campbell 2019-07-25 1000 * So just read pmd value and check again it's a transparent 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1001 * huge or device mapping one and compute corresponding pfn 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1002 * values. 
53f5c3f489ecdd Jérôme Glisse 2018-04-10 1003 */ adc5de78797562 Mika Penttilä 2026-01-19 1004 adc5de78797562 Mika Penttilä 2026-01-19 1005 if (!pmd_trans_huge(pmd)) { adc5de78797562 Mika Penttilä 2026-01-19 1006 // must be lockless 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1007 goto again; adc5de78797562 Mika Penttilä 2026-01-19 1008 } adc5de78797562 Mika Penttilä 2026-01-19 1009 adc5de78797562 Mika Penttilä 2026-01-19 1010 r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); adc5de78797562 Mika Penttilä 2026-01-19 1011 adc5de78797562 Mika Penttilä 2026-01-19 1012 if (r || !minfo) adc5de78797562 Mika Penttilä 2026-01-19 1013 return r; Same? adc5de78797562 Mika Penttilä 2026-01-19 1014 } adc5de78797562 Mika Penttilä 2026-01-19 1015 adc5de78797562 Mika Penttilä 2026-01-19 1016 r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns); adc5de78797562 Mika Penttilä 2026-01-19 1017 adc5de78797562 Mika Penttilä 2026-01-19 1018 if (hmm_vma_walk->pmdlocked) { adc5de78797562 Mika Penttilä 2026-01-19 1019 spin_unlock(hmm_vma_walk->ptl); adc5de78797562 Mika Penttilä 2026-01-19 1020 hmm_vma_walk->pmdlocked = false; adc5de78797562 Mika Penttilä 2026-01-19 1021 } adc5de78797562 Mika Penttilä 2026-01-19 1022 adc5de78797562 Mika Penttilä 2026-01-19 1023 if (r == -ENOENT) { adc5de78797562 Mika Penttilä 2026-01-19 1024 r = hmm_vma_walk_split(pmdp, addr, walk); adc5de78797562 Mika Penttilä 2026-01-19 1025 if (r) { adc5de78797562 Mika Penttilä 2026-01-19 1026 /* Split not successful, skip */ adc5de78797562 Mika Penttilä 2026-01-19 1027 return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); adc5de78797562 Mika Penttilä 2026-01-19 1028 } adc5de78797562 Mika Penttilä 2026-01-19 1029 adc5de78797562 Mika Penttilä 2026-01-19 1030 /* Split successful or "again", reloop */ adc5de78797562 Mika Penttilä 2026-01-19 1031 hmm_vma_walk->last = addr; adc5de78797562 Mika Penttilä 2026-01-19 1032 return -EBUSY; adc5de78797562 Mika Penttilä 2026-01-19 1033 } adc5de78797562 Mika Penttilä 2026-01-19 1034 adc5de78797562 Mika Penttilä 2026-01-19 1035 return r; 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1036 adc5de78797562 Mika Penttilä 2026-01-19 1037 } adc5de78797562 Mika Penttilä 2026-01-19 1038 adc5de78797562 Mika Penttilä 2026-01-19 1039 if (hmm_vma_walk->pmdlocked) { adc5de78797562 Mika Penttilä 2026-01-19 1040 spin_unlock(hmm_vma_walk->ptl); adc5de78797562 Mika Penttilä 2026-01-19 1041 hmm_vma_walk->pmdlocked = false; 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1042 } 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1043 d08faca018c461 Jérôme Glisse 2018-10-30 1044 /* d2e8d551165ccb Ralph Campbell 2019-07-25 1045 * We have handled all the valid cases above ie either none, migration, d08faca018c461 Jérôme Glisse 2018-10-30 1046 * huge or transparent huge. At this point either it is a valid pmd d08faca018c461 Jérôme Glisse 2018-10-30 1047 * entry pointing to pte directory or it is a bad pmd that will not d08faca018c461 Jérôme Glisse 2018-10-30 1048 * recover. 
d08faca018c461 Jérôme Glisse 2018-10-30 1049 */ 2288a9a68175ce Jason Gunthorpe 2020-03-05 1050 if (pmd_bad(pmd)) { 2733ea144dcce7 Jason Gunthorpe 2020-05-01 1051 if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) 2288a9a68175ce Jason Gunthorpe 2020-03-05 1052 return -EFAULT; adc5de78797562 Mika Penttilä 2026-01-19 1053 return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); 2288a9a68175ce Jason Gunthorpe 2020-03-05 1054 } 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1055 adc5de78797562 Mika Penttilä 2026-01-19 1056 if (minfo) { adc5de78797562 Mika Penttilä 2026-01-19 1057 ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); adc5de78797562 Mika Penttilä 2026-01-19 1058 if (ptep) adc5de78797562 Mika Penttilä 2026-01-19 1059 hmm_vma_walk->locked = true; adc5de78797562 Mika Penttilä 2026-01-19 1060 } else 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1061 ptep = pte_offset_map(pmdp, addr); 6ec1905f6ec7f9 Hugh Dickins 2023-06-08 1062 if (!ptep) 6ec1905f6ec7f9 Hugh Dickins 2023-06-08 1063 goto again; adc5de78797562 Mika Penttilä 2026-01-19 1064 2733ea144dcce7 Jason Gunthorpe 2020-05-01 1065 for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) { 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1066 2733ea144dcce7 Jason Gunthorpe 2020-05-01 1067 r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns); 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1068 if (r) { dfdc22078f3f06 Jason Gunthorpe 2020-02-28 1069 /* hmm_vma_handle_pte() did pte_unmap() */ 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1070 return r; 53f5c3f489ecdd Jérôme Glisse 2018-04-10 1071 } adc5de78797562 Mika Penttilä 2026-01-19 1072 adc5de78797562 Mika Penttilä 2026-01-19 1073 r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns); adc5de78797562 Mika Penttilä 2026-01-19 1074 if (r == -EAGAIN) { adc5de78797562 Mika Penttilä 2026-01-19 1075 goto again; adc5de78797562 Mika Penttilä 2026-01-19 1076 } adc5de78797562 Mika Penttilä 2026-01-19 1077 if (r) { adc5de78797562 Mika Penttilä 2026-01-19 1078 hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); adc5de78797562 Mika Penttilä 2026-01-19 1079 break; da4c3c735ea4dc Jérôme Glisse 2017-09-08 1080 } adc5de78797562 Mika Penttilä 2026-01-19 1081 } adc5de78797562 Mika Penttilä 2026-01-19 1082 adc5de78797562 Mika Penttilä 2026-01-19 1083 if (hmm_vma_walk->locked) adc5de78797562 Mika Penttilä 2026-01-19 1084 pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl); adc5de78797562 Mika Penttilä 2026-01-19 1085 else da4c3c735ea4dc Jérôme Glisse 2017-09-08 1086 pte_unmap(ptep - 1); adc5de78797562 Mika Penttilä 2026-01-19 1087 da4c3c735ea4dc Jérôme Glisse 2017-09-08 1088 return 0; da4c3c735ea4dc Jérôme Glisse 2017-09-08 1089 } -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths 2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil 2026-01-19 20:31 ` kernel test robot 2026-01-20 6:29 ` Dan Carpenter @ 2026-01-20 16:03 ` Matthew Brost 2026-01-21 8:05 ` Mika Penttilä 2 siblings, 1 reply; 8+ messages in thread From: Matthew Brost @ 2026-01-20 16:03 UTC (permalink / raw) To: mpenttil Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan On Mon, Jan 19, 2026 at 01:25:00PM +0200, mpenttil@redhat.com wrote: > From: Mika Penttilä <mpenttil@redhat.com> > > Currently, the way device page faulting and migration works > is not optimal, if you want to do both fault handling and > migration at once. > > Being able to migrate not present pages (or pages mapped with incorrect > permissions, eg. COW) to the GPU requires doing either of the > following sequences: > > 1. hmm_range_fault() - fault in non-present pages with correct permissions, etc. > 2. migrate_vma_*() - migrate the pages > > Or: > > 1. migrate_vma_*() - migrate present pages > 2. If non-present pages detected by migrate_vma_*(): > a) call hmm_range_fault() to fault pages in > b) call migrate_vma_*() again to migrate now present pages > > The problem with the first sequence is that you always have to do two > page walks even when most of the time the pages are present or zero page > mappings so the common case takes a performance hit. > > The second sequence is better for the common case, but far worse if > pages aren't present because now you have to walk the page tables three > times (once to find the page is not present, once so hmm_range_fault() > can find a non-present page to fault in and once again to setup the > migration). It is also tricky to code correctly. > > We should be able to walk the page table once, faulting > pages in as required and replacing them with migration entries if > requested. > > Add a new flag to HMM APIs, HMM_PFN_REQ_MIGRATE, > which tells to prepare for migration also during fault handling. > Also, for the migrate_vma_setup() call paths, a flag, MIGRATE_VMA_FAULT, > is added to tell to add fault handling to migrate. > > Cc: David Hildenbrand <david@redhat.com> > Cc: Jason Gunthorpe <jgg@nvidia.com> > Cc: Leon Romanovsky <leonro@nvidia.com> > Cc: Alistair Popple <apopple@nvidia.com> > Cc: Balbir Singh <balbirs@nvidia.com> > Cc: Zi Yan <ziy@nvidia.com> > Cc: Matthew Brost <matthew.brost@intel.com> A couple of comments/questions around the locking. Personally, I like the approach, but its of the maintianers if they like it. I also haven't pulled or tested this yet and likely won't have time for at least a few days, so all comments are based on inspection. 
> Suggested-by: Alistair Popple <apopple@nvidia.com> > Signed-off-by: Mika Penttilä <mpenttil@redhat.com> > --- > include/linux/hmm.h | 19 +- > include/linux/migrate.h | 27 +- > mm/hmm.c | 770 +++++++++++++++++++++++++++++++++++++--- > mm/migrate_device.c | 86 ++++- > 4 files changed, 839 insertions(+), 63 deletions(-) > > diff --git a/include/linux/hmm.h b/include/linux/hmm.h > index db75ffc949a7..e2f53e155af2 100644 > --- a/include/linux/hmm.h > +++ b/include/linux/hmm.h > @@ -12,7 +12,7 @@ > #include <linux/mm.h> > > struct mmu_interval_notifier; > - > +struct migrate_vma; > /* > * On output: > * 0 - The page is faultable and a future call with > @@ -27,6 +27,7 @@ struct mmu_interval_notifier; > * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer > * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation > * to mark that page is already DMA mapped > + * HMM_PFN_MIGRATE - Migrate PTE installed > * > * On input: > * 0 - Return the current state of the page, do not fault it. > @@ -34,6 +35,7 @@ struct mmu_interval_notifier; > * will fail > * HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault() > * will fail. Must be combined with HMM_PFN_REQ_FAULT. > + * HMM_PFN_REQ_MIGRATE - For default_flags, request to migrate to device > */ > enum hmm_pfn_flags { > /* Output fields and flags */ > @@ -48,15 +50,25 @@ enum hmm_pfn_flags { > HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5), > HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6), > > - HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 11), > + /* Migrate request */ > + HMM_PFN_MIGRATE = 1UL << (BITS_PER_LONG - 7), > + HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 8), > + HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 13), > > /* Input flags */ > HMM_PFN_REQ_FAULT = HMM_PFN_VALID, > HMM_PFN_REQ_WRITE = HMM_PFN_WRITE, > + HMM_PFN_REQ_MIGRATE = HMM_PFN_MIGRATE, > > HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1), > }; > > +enum { > + /* These flags are carried from input-to-output */ > + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | > + HMM_PFN_P2PDMA_BUS, > +}; > + > /* > * hmm_pfn_to_page() - return struct page pointed to by a device entry > * > @@ -107,6 +119,7 @@ static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn) > * @default_flags: default flags for the range (write, read, ... see hmm doc) > * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter > * @dev_private_owner: owner of device private pages > + * @migrate: structure for migrating the associated vma > */ > struct hmm_range { > struct mmu_interval_notifier *notifier; > @@ -117,12 +130,14 @@ struct hmm_range { > unsigned long default_flags; > unsigned long pfn_flags_mask; > void *dev_private_owner; > + struct migrate_vma *migrate; > }; > > /* > * Please see Documentation/mm/hmm.rst for how to use the range API. 
> */ > int hmm_range_fault(struct hmm_range *range); > +int hmm_range_migrate_prepare(struct hmm_range *range, struct migrate_vma **pargs); > > /* > * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range > diff --git a/include/linux/migrate.h b/include/linux/migrate.h > index 26ca00c325d9..104eda2dd881 100644 > --- a/include/linux/migrate.h > +++ b/include/linux/migrate.h > @@ -3,6 +3,7 @@ > #define _LINUX_MIGRATE_H > > #include <linux/mm.h> > +#include <linux/hmm.h> > #include <linux/mempolicy.h> > #include <linux/migrate_mode.h> > #include <linux/hugetlb.h> > @@ -97,6 +98,16 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag > return -ENOSYS; > } > > +enum migrate_vma_info { > + MIGRATE_VMA_SELECT_NONE = 0, > + MIGRATE_VMA_SELECT_COMPOUND = MIGRATE_VMA_SELECT_NONE, > +}; > + > +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) > +{ > + return MIGRATE_VMA_SELECT_NONE; > +} > + > #endif /* CONFIG_MIGRATION */ > > #ifdef CONFIG_NUMA_BALANCING > @@ -140,11 +151,12 @@ static inline unsigned long migrate_pfn(unsigned long pfn) > return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID; > } > > -enum migrate_vma_direction { > +enum migrate_vma_info { > MIGRATE_VMA_SELECT_SYSTEM = 1 << 0, > MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1, > MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2, > MIGRATE_VMA_SELECT_COMPOUND = 1 << 3, > + MIGRATE_VMA_FAULT = 1 << 4, > }; > > struct migrate_vma { > @@ -182,6 +194,17 @@ struct migrate_vma { > struct page *fault_page; > }; > > +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) > +{ > + enum migrate_vma_info minfo; > + > + minfo = range->migrate ? range->migrate->flags : 0; > + minfo |= (range->default_flags & HMM_PFN_REQ_MIGRATE) ? 
> + MIGRATE_VMA_SELECT_SYSTEM : 0; > + > + return minfo; > +} > + > int migrate_vma_setup(struct migrate_vma *args); > void migrate_vma_pages(struct migrate_vma *migrate); > void migrate_vma_finalize(struct migrate_vma *migrate); > @@ -192,7 +215,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns, > unsigned long npages); > void migrate_device_finalize(unsigned long *src_pfns, > unsigned long *dst_pfns, unsigned long npages); > - > +void migrate_hmm_range_setup(struct hmm_range *range); > #endif /* CONFIG_MIGRATION */ > > #endif /* _LINUX_MIGRATE_H */ > diff --git a/mm/hmm.c b/mm/hmm.c > index 4ec74c18bef6..1fdb8665eeec 100644 > --- a/mm/hmm.c > +++ b/mm/hmm.c > @@ -20,6 +20,7 @@ > #include <linux/pagemap.h> > #include <linux/leafops.h> > #include <linux/hugetlb.h> > +#include <linux/migrate.h> > #include <linux/memremap.h> > #include <linux/sched/mm.h> > #include <linux/jump_label.h> > @@ -27,12 +28,20 @@ > #include <linux/pci-p2pdma.h> > #include <linux/mmu_notifier.h> > #include <linux/memory_hotplug.h> > +#include <asm/tlbflush.h> > > #include "internal.h" > > struct hmm_vma_walk { > - struct hmm_range *range; > - unsigned long last; > + struct mmu_notifier_range mmu_range; > + struct vm_area_struct *vma; > + struct hmm_range *range; > + unsigned long start; > + unsigned long end; > + unsigned long last; > + bool locked; > + bool pmdlocked; > + spinlock_t *ptl; > }; > > enum { > @@ -41,21 +50,38 @@ enum { > HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT, > }; > > -enum { > - /* These flags are carried from input-to-output */ > - HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | > - HMM_PFN_P2PDMA_BUS, > -}; > - > static int hmm_pfns_fill(unsigned long addr, unsigned long end, > - struct hmm_range *range, unsigned long cpu_flags) > + struct hmm_vma_walk *hmm_vma_walk, unsigned long cpu_flags) > { > + struct hmm_range *range = hmm_vma_walk->range; > unsigned long i = (addr - range->start) >> PAGE_SHIFT; > + enum migrate_vma_info minfo; > + bool migrate = false; > + > + minfo = hmm_select_migrate(range); > + if (cpu_flags != HMM_PFN_ERROR) { > + if (minfo && (vma_is_anonymous(hmm_vma_walk->vma))) { > + cpu_flags |= (HMM_PFN_VALID | HMM_PFN_MIGRATE); > + migrate = true; > + } > + } > + > + if (migrate && thp_migration_supported() && > + (minfo & MIGRATE_VMA_SELECT_COMPOUND) && > + IS_ALIGNED(addr, HPAGE_PMD_SIZE) && > + IS_ALIGNED(end, HPAGE_PMD_SIZE)) { > + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; > + range->hmm_pfns[i] |= cpu_flags | HMM_PFN_COMPOUND; > + addr += PAGE_SIZE; > + i++; > + cpu_flags = 0; > + } > > for (; addr < end; addr += PAGE_SIZE, i++) { > range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; > range->hmm_pfns[i] |= cpu_flags; > } > + > return 0; > } > > @@ -171,11 +197,11 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end, > if (!walk->vma) { > if (required_fault) > return -EFAULT; > - return hmm_pfns_fill(addr, end, range, HMM_PFN_ERROR); > + return hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); > } > if (required_fault) > return hmm_vma_fault(addr, end, required_fault, walk); Can we assert in hmm_vma_fault that neither the PMD nor the PTE lock is held? That would be a quick sanity check to ensure we haven't screwed anything up in the state-machine walk. We could add a macro like HMM_ASSERT or HMM_WARN, etc...
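
For what it's worth, such a check could be as small as the sketch below; hmm_assert_unlocked() is a hypothetical helper, not something in this patch, and it relies only on the locked/pmdlocked bookkeeping the patch already adds to struct hmm_vma_walk:

/*
 * Hypothetical debug helper: hmm_vma_fault() ends up in handle_mm_fault()
 * and can sleep, so neither the PTE nor the PMD spinlock may still be
 * held when it is called.
 */
static inline void hmm_assert_unlocked(struct hmm_vma_walk *hmm_vma_walk)
{
	WARN_ON_ONCE(hmm_vma_walk->locked || hmm_vma_walk->pmdlocked);
}

Each call site that drops the lock before calling hmm_vma_fault() could then call this first.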
> - return hmm_pfns_fill(addr, end, range, 0); > + return hmm_pfns_fill(addr, end, hmm_vma_walk, 0); > } > > static inline unsigned long hmm_pfn_flags_order(unsigned long order) > @@ -208,8 +234,13 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, > cpu_flags = pmd_to_hmm_pfn_flags(range, pmd); > required_fault = > hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, cpu_flags); > - if (required_fault) > + if (required_fault) { > + if (hmm_vma_walk->pmdlocked) { > + spin_unlock(hmm_vma_walk->ptl); > + hmm_vma_walk->pmdlocked = false; > + } > return hmm_vma_fault(addr, end, required_fault, walk); > + } > > pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); > for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { > @@ -289,14 +320,28 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, > goto fault; > > if (softleaf_is_migration(entry)) { > - pte_unmap(ptep); > - hmm_vma_walk->last = addr; > - migration_entry_wait(walk->mm, pmdp, addr); > - return -EBUSY; > + if (!hmm_select_migrate(range)) { > + if (hmm_vma_walk->locked) { > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; I don’t think it should be possible for the lock to be held here, given that we only take it when selecting migration. So maybe we should assert that it is not locked. > + } else > + pte_unmap(ptep); > + > + hmm_vma_walk->last = addr; > + migration_entry_wait(walk->mm, pmdp, addr); > + return -EBUSY; > + } else > + goto out; > } > > /* Report error for everything else */ > - pte_unmap(ptep); > + > + if (hmm_vma_walk->locked) { > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; > + } else > + pte_unmap(ptep); > + > return -EFAULT; > } > > @@ -313,7 +358,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, > if (!vm_normal_page(walk->vma, addr, pte) && > !is_zero_pfn(pte_pfn(pte))) { > if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) { > - pte_unmap(ptep); > + if (hmm_vma_walk->locked) { > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; > + } else > + pte_unmap(ptep); > + > return -EFAULT; > } > new_pfn_flags = HMM_PFN_ERROR; > @@ -326,7 +376,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, > return 0; > > fault: > - pte_unmap(ptep); > + if (hmm_vma_walk->locked) { > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; > + } else > + pte_unmap(ptep); > /* Fault any virtual address we were asked to fault */ > return hmm_vma_fault(addr, end, required_fault, walk); > } > @@ -370,13 +424,18 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, > required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns, > npages, 0); > if (required_fault) { > - if (softleaf_is_device_private(entry)) > + if (softleaf_is_device_private(entry)) { > + if (hmm_vma_walk->pmdlocked) { > + spin_unlock(hmm_vma_walk->ptl); > + hmm_vma_walk->pmdlocked = false; > + } > return hmm_vma_fault(addr, end, required_fault, walk); > + } > else > return -EFAULT; > } > > - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); > + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > } > #else > static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, > @@ -384,15 +443,486 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, > pmd_t pmd) > { > struct hmm_vma_walk *hmm_vma_walk = walk->private; > - struct hmm_range *range = 
hmm_vma_walk->range; > unsigned long npages = (end - start) >> PAGE_SHIFT; > > if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) > return -EFAULT; > - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); > + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > } > #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */ > > +#ifdef CONFIG_DEVICE_MIGRATION > +/** > + * migrate_vma_split_folio() - Helper function to split a THP folio > + * @folio: the folio to split > + * @fault_page: struct page associated with the fault if any > + * > + * Returns 0 on success > + */ > +static int migrate_vma_split_folio(struct folio *folio, > + struct page *fault_page) > +{ > + int ret; > + struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL; > + struct folio *new_fault_folio = NULL; > + > + if (folio != fault_folio) { > + folio_get(folio); > + folio_lock(folio); > + } > + > + ret = split_folio(folio); > + if (ret) { > + if (folio != fault_folio) { > + folio_unlock(folio); > + folio_put(folio); > + } > + return ret; > + } > + > + new_fault_folio = fault_page ? page_folio(fault_page) : NULL; > + > + /* > + * Ensure the lock is held on the correct > + * folio after the split > + */ > + if (!new_fault_folio) { > + folio_unlock(folio); > + folio_put(folio); > + } else if (folio != new_fault_folio) { > + if (new_fault_folio != fault_folio) { > + folio_get(new_fault_folio); > + folio_lock(new_fault_folio); > + } > + folio_unlock(folio); > + folio_put(folio); > + } > + > + return 0; > +} > + > +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, > + pmd_t *pmdp, > + unsigned long start, > + unsigned long end, > + unsigned long *hmm_pfn) > +{ > + struct hmm_vma_walk *hmm_vma_walk = walk->private; > + struct hmm_range *range = hmm_vma_walk->range; > + struct migrate_vma *migrate = range->migrate; > + struct folio *fault_folio = NULL; > + struct folio *folio; > + enum migrate_vma_info minfo; > + unsigned long i; > + int r = 0; > + > + minfo = hmm_select_migrate(range); > + if (!minfo) > + return r; > + Can we assert the PMD is locked here? > + fault_folio = (migrate && migrate->fault_page) ? 
> + page_folio(migrate->fault_page) : NULL; > + > + if (pmd_none(*pmdp)) > + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); > + > + if (!(hmm_pfn[0] & HMM_PFN_VALID)) > + goto out; > + > + if (pmd_trans_huge(*pmdp)) { > + if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM)) > + goto out; > + > + folio = pmd_folio(*pmdp); > + if (is_huge_zero_folio(folio)) > + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); > + > + } else if (!pmd_present(*pmdp)) { > + const softleaf_t entry = softleaf_from_pmd(*pmdp); > + > + folio = softleaf_to_folio(entry); > + > + if (!softleaf_is_device_private(entry)) > + goto out; > + > + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) > + goto out; > + if (folio->pgmap->owner != migrate->pgmap_owner) > + goto out; > + > + } else { > + hmm_vma_walk->last = start; > + return -EBUSY; > + } > + > + folio_get(folio); > + > + if (folio != fault_folio && unlikely(!folio_trylock(folio))) { > + folio_put(folio); > + hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > + return 0; > + } > + > + if (thp_migration_supported() && > + (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) && > + (IS_ALIGNED(start, HPAGE_PMD_SIZE) && > + IS_ALIGNED(end, HPAGE_PMD_SIZE))) { > + > + struct page_vma_mapped_walk pvmw = { > + .ptl = hmm_vma_walk->ptl, > + .address = start, > + .pmd = pmdp, > + .vma = walk->vma, > + }; > + > + hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND; > + > + r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0)); > + if (r) { > + hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND); > + r = -ENOENT; // fallback > + goto unlock_out; > + } > + for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++) > + hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS; > + > + } else { > + r = -ENOENT; // fallback > + goto unlock_out; > + } > + > + > +out: > + return r; > + > +unlock_out: > + if (folio != fault_folio) > + folio_unlock(folio); > + folio_put(folio); > + goto out; > + > +} > + > +/* > + * Install migration entries if migration requested, either from fault > + * or migrate paths. > + * > + */ > +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, > + pmd_t *pmdp, > + pte_t *ptep, > + unsigned long addr, > + unsigned long *hmm_pfn) > +{ > + struct hmm_vma_walk *hmm_vma_walk = walk->private; > + struct hmm_range *range = hmm_vma_walk->range; > + struct migrate_vma *migrate = range->migrate; > + struct mm_struct *mm = walk->vma->vm_mm; > + struct folio *fault_folio = NULL; > + enum migrate_vma_info minfo; > + struct dev_pagemap *pgmap; > + bool anon_exclusive; > + struct folio *folio; > + unsigned long pfn; > + struct page *page; > + softleaf_t entry; > + pte_t pte, swp_pte; > + bool writable = false; > + > + // Do we want to migrate at all? > + minfo = hmm_select_migrate(range); > + if (!minfo) > + return 0; > + Can we assert the PTE lock is held here? > + fault_folio = (migrate && migrate->fault_page) ? > + page_folio(migrate->fault_page) : NULL; > + > + if (!hmm_vma_walk->locked) { > + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); > + hmm_vma_walk->locked = true; > + } > + pte = ptep_get(ptep); > + How would we get without PTE lock being held? Shouldn't the caller take the lock? 
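
If the conditional lock-taking is removed so that the caller always holds the PTE lock, as agreed further down in the thread, the helper could simply assert its precondition instead; a rough sketch of the first lines, assuming the caller's pte_offset_map_lock() in hmm_vma_walk_pmd():

	/* Sketch: hmm_vma_walk_pmd() already did pte_offset_map_lock(), so only check. */
	WARN_ON_ONCE(!hmm_vma_walk->locked);
	lockdep_assert_held(hmm_vma_walk->ptl);
	pte = ptep_get(ptep);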
> + if (pte_none(pte)) { > + // migrate without faulting case > + if (vma_is_anonymous(walk->vma)) { > + *hmm_pfn &= HMM_PFN_INOUT_FLAGS; > + *hmm_pfn |= HMM_PFN_MIGRATE | HMM_PFN_VALID; > + goto out; > + } > + } > + > + if (!(hmm_pfn[0] & HMM_PFN_VALID)) > + goto out; > + > + if (!pte_present(pte)) { > + /* > + * Only care about unaddressable device page special > + * page table entry. Other special swap entries are not > + * migratable, and we ignore regular swapped page. > + */ > + entry = softleaf_from_pte(pte); > + if (!softleaf_is_device_private(entry)) > + goto out; > + > + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) > + goto out; > + > + page = softleaf_to_page(entry); > + folio = page_folio(page); > + if (folio->pgmap->owner != migrate->pgmap_owner) > + goto out; > + > + if (folio_test_large(folio)) { > + int ret; > + > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; > + ret = migrate_vma_split_folio(folio, > + migrate->fault_page); > + if (ret) > + goto out_error; > + return -EAGAIN; > + } > + > + pfn = page_to_pfn(page); > + if (softleaf_is_device_private_write(entry)) > + writable = true; > + } else { > + pfn = pte_pfn(pte); > + if (is_zero_pfn(pfn) && > + (minfo & MIGRATE_VMA_SELECT_SYSTEM)) { > + *hmm_pfn = HMM_PFN_MIGRATE|HMM_PFN_VALID; > + goto out; > + } > + page = vm_normal_page(walk->vma, addr, pte); > + if (page && !is_zone_device_page(page) && > + !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) { > + goto out; > + } else if (page && is_device_coherent_page(page)) { > + pgmap = page_pgmap(page); > + > + if (!(minfo & > + MIGRATE_VMA_SELECT_DEVICE_COHERENT) || > + pgmap->owner != migrate->pgmap_owner) > + goto out; > + } > + > + folio = page ? page_folio(page) : NULL; > + if (folio && folio_test_large(folio)) { > + int ret; > + > + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); > + hmm_vma_walk->locked = false; > + > + ret = migrate_vma_split_folio(folio, > + migrate->fault_page); > + if (ret) > + goto out_error; > + return -EAGAIN; > + } > + > + writable = pte_write(pte); > + } > + > + if (!page || !page->mapping) > + goto out; > + > + /* > + * By getting a reference on the folio we pin it and that blocks > + * any kind of migration. Side effect is that it "freezes" the > + * pte. > + * > + * We drop this reference after isolating the folio from the lru > + * for non device folio (device folio are not on the lru and thus > + * can't be dropped from it). > + */ > + folio = page_folio(page); > + folio_get(folio); > + > + /* > + * We rely on folio_trylock() to avoid deadlock between > + * concurrent migrations where each is waiting on the others > + * folio lock. If we can't immediately lock the folio we fail this > + * migration as it is only best effort anyway. > + * > + * If we can lock the folio it's safe to set up a migration entry > + * now. In the common case where the folio is mapped once in a > + * single process setting up the migration entry now is an > + * optimisation to avoid walking the rmap later with > + * try_to_migrate(). 
> + */ > + > + if (fault_folio == folio || folio_trylock(folio)) { > + anon_exclusive = folio_test_anon(folio) && > + PageAnonExclusive(page); > + > + flush_cache_page(walk->vma, addr, pfn); > + > + if (anon_exclusive) { > + pte = ptep_clear_flush(walk->vma, addr, ptep); > + > + if (folio_try_share_anon_rmap_pte(folio, page)) { > + set_pte_at(mm, addr, ptep, pte); > + folio_unlock(folio); > + folio_put(folio); > + goto out; > + } > + } else { > + pte = ptep_get_and_clear(mm, addr, ptep); > + } > + > + if (pte_dirty(pte)) > + folio_mark_dirty(folio); > + > + /* Setup special migration page table entry */ > + if (writable) > + entry = make_writable_migration_entry(pfn); > + else if (anon_exclusive) > + entry = make_readable_exclusive_migration_entry(pfn); > + else > + entry = make_readable_migration_entry(pfn); > + > + if (pte_present(pte)) { > + if (pte_young(pte)) > + entry = make_migration_entry_young(entry); > + if (pte_dirty(pte)) > + entry = make_migration_entry_dirty(entry); > + } > + > + swp_pte = swp_entry_to_pte(entry); > + if (pte_present(pte)) { > + if (pte_soft_dirty(pte)) > + swp_pte = pte_swp_mksoft_dirty(swp_pte); > + if (pte_uffd_wp(pte)) > + swp_pte = pte_swp_mkuffd_wp(swp_pte); > + } else { > + if (pte_swp_soft_dirty(pte)) > + swp_pte = pte_swp_mksoft_dirty(swp_pte); > + if (pte_swp_uffd_wp(pte)) > + swp_pte = pte_swp_mkuffd_wp(swp_pte); > + } > + > + set_pte_at(mm, addr, ptep, swp_pte); > + folio_remove_rmap_pte(folio, page, walk->vma); > + folio_put(folio); > + *hmm_pfn |= HMM_PFN_MIGRATE; > + > + if (pte_present(pte)) > + flush_tlb_range(walk->vma, addr, addr + PAGE_SIZE); > + } else > + folio_put(folio); > +out: > + return 0; > +out_error: > + return -EFAULT; > + > +} > + > +static int hmm_vma_walk_split(pmd_t *pmdp, > + unsigned long addr, > + struct mm_walk *walk) > +{ > + struct hmm_vma_walk *hmm_vma_walk = walk->private; > + struct hmm_range *range = hmm_vma_walk->range; > + struct migrate_vma *migrate = range->migrate; > + struct folio *folio, *fault_folio; > + spinlock_t *ptl; > + int ret = 0; > + > + fault_folio = (migrate && migrate->fault_page) ? > + page_folio(migrate->fault_page) : NULL; > + Assert the PMD lock isn't held? 
> + ptl = pmd_lock(walk->mm, pmdp); > + if (unlikely(!pmd_trans_huge(*pmdp))) { > + spin_unlock(ptl); > + goto out; > + } > + > + folio = pmd_folio(*pmdp); > + if (is_huge_zero_folio(folio)) { > + spin_unlock(ptl); > + split_huge_pmd(walk->vma, pmdp, addr); > + } else { > + folio_get(folio); > + spin_unlock(ptl); > + > + if (folio != fault_folio) { > + if (unlikely(!folio_trylock(folio))) { > + folio_put(folio); > + ret = -EBUSY; > + goto out; > + } > + } else > + folio_put(folio); > + > + ret = split_folio(folio); > + if (fault_folio != folio) { > + folio_unlock(folio); > + folio_put(folio); > + } > + > + } > +out: > + return ret; > +} > +#else > +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, > + pmd_t *pmdp, > + unsigned long start, > + unsigned long end, > + unsigned long *hmm_pfn) > +{ > + return 0; > +} > + > +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, > + pmd_t *pmdp, > + pte_t *pte, > + unsigned long addr, > + unsigned long *hmm_pfn) > +{ > + return 0; > +} > + > +static int hmm_vma_walk_split(pmd_t *pmdp, > + unsigned long addr, > + struct mm_walk *walk) > +{ > + return 0; > +} > +#endif > + > +static int hmm_vma_capture_migrate_range(unsigned long start, > + unsigned long end, > + struct mm_walk *walk) > +{ > + struct hmm_vma_walk *hmm_vma_walk = walk->private; > + struct hmm_range *range = hmm_vma_walk->range; > + > + if (!hmm_select_migrate(range)) > + return 0; > + > + if (hmm_vma_walk->vma && (hmm_vma_walk->vma != walk->vma)) > + return -ERANGE; > + > + hmm_vma_walk->vma = walk->vma; > + hmm_vma_walk->start = start; > + hmm_vma_walk->end = end; > + > + if (end - start > range->end - range->start) > + return -ERANGE; > + > + if (!hmm_vma_walk->mmu_range.owner) { > + mmu_notifier_range_init_owner(&hmm_vma_walk->mmu_range, MMU_NOTIFY_MIGRATE, 0, > + walk->vma->vm_mm, start, end, > + range->dev_private_owner); > + mmu_notifier_invalidate_range_start(&hmm_vma_walk->mmu_range); > + } > + > + return 0; > +} > + > static int hmm_vma_walk_pmd(pmd_t *pmdp, > unsigned long start, > unsigned long end, > @@ -403,43 +933,112 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, > unsigned long *hmm_pfns = > &range->hmm_pfns[(start - range->start) >> PAGE_SHIFT]; > unsigned long npages = (end - start) >> PAGE_SHIFT; > + struct mm_struct *mm = walk->vma->vm_mm; > unsigned long addr = start; > + enum migrate_vma_info minfo; > + unsigned long i; > + spinlock_t *ptl; > pte_t *ptep; > pmd_t pmd; > + int r; > + > + minfo = hmm_select_migrate(range); > > again: > + hmm_vma_walk->locked = false; > + hmm_vma_walk->pmdlocked = false; > pmd = pmdp_get_lockless(pmdp); > - if (pmd_none(pmd)) > - return hmm_vma_walk_hole(start, end, -1, walk); > + if (pmd_none(pmd)) { > + r = hmm_vma_walk_hole(start, end, -1, walk); > + if (r || !minfo) > + return r; > + > + ptl = pmd_lock(walk->mm, pmdp); > + if (pmd_none(*pmdp)) { > + // hmm_vma_walk_hole() filled migration needs > + spin_unlock(ptl); > + return r; > + } > + spin_unlock(ptl); > + } > > if (thp_migration_supported() && pmd_is_migration_entry(pmd)) { > - if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { > + if (!minfo) { > + if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { > + hmm_vma_walk->last = addr; > + pmd_migration_entry_wait(walk->mm, pmdp); > + return -EBUSY; > + } > + } > + for (i = 0; addr < end; addr += PAGE_SIZE, i++) > + hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; > + > + return 0; > + } > + > + if (minfo) { > + hmm_vma_walk->ptl = pmd_lock(mm, pmdp); > + hmm_vma_walk->pmdlocked 
= true; > + pmd = pmdp_get(pmdp); > + } else > + pmd = pmdp_get_lockless(pmdp); > + > + if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { > + > + if (!pmd_present(pmd)) { > + r = hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, > + pmd); > + if (r || !minfo) Do we need to drop the PMD lock here upon error? > + return r; > + } else { > + > + /* > + * No need to take pmd_lock here if not migrating, > + * even if some other thread is splitting the huge > + * pmd we will get that event through mmu_notifier callback. > + * > + * So just read pmd value and check again it's a transparent > + * huge or device mapping one and compute corresponding pfn > + * values. > + */ > + > + if (!pmd_trans_huge(pmd)) { > + // must be lockless > + goto again; How can '!pmd_trans_huge' be true here? Seems impossible based on outer if statement. > + } > + > + r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); > + > + if (r || !minfo) Do we need to drop the PMD lock here upon error? > + return r; > + } > + > + r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns); > + > + if (hmm_vma_walk->pmdlocked) { > + spin_unlock(hmm_vma_walk->ptl); > + hmm_vma_walk->pmdlocked = false; > + } > + > + if (r == -ENOENT) { > + r = hmm_vma_walk_split(pmdp, addr, walk); > + if (r) { > + /* Split not successful, skip */ > + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > + } > + > + /* Split successful or "again", reloop */ > hmm_vma_walk->last = addr; > - pmd_migration_entry_wait(walk->mm, pmdp); > return -EBUSY; > } > - return hmm_pfns_fill(start, end, range, 0); > - } > > - if (!pmd_present(pmd)) > - return hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, > - pmd); > + return r; > > - if (pmd_trans_huge(pmd)) { > - /* > - * No need to take pmd_lock here, even if some other thread > - * is splitting the huge pmd we will get that event through > - * mmu_notifier callback. > - * > - * So just read pmd value and check again it's a transparent > - * huge or device mapping one and compute corresponding pfn > - * values. > - */ > - pmd = pmdp_get_lockless(pmdp); > - if (!pmd_trans_huge(pmd)) > - goto again; > + } > > - return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); > + if (hmm_vma_walk->pmdlocked) { > + spin_unlock(hmm_vma_walk->ptl); > + hmm_vma_walk->pmdlocked = false; > } > > /* > @@ -451,22 +1050,41 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, > if (pmd_bad(pmd)) { > if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) > return -EFAULT; > - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); > + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > } > > - ptep = pte_offset_map(pmdp, addr); > + if (minfo) { > + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); > + if (ptep) > + hmm_vma_walk->locked = true; > + } else > + ptep = pte_offset_map(pmdp, addr); > if (!ptep) > goto again; > + > for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) { > - int r; > > r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns); > if (r) { > /* hmm_vma_handle_pte() did pte_unmap() */ Drop the PTE lock if held? > return r; > } > + > + r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns); > + if (r == -EAGAIN) { Assert the callee dropped the PTE lock? 
Matt > + goto again; > + } > + if (r) { > + hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); > + break; > + } > } > - pte_unmap(ptep - 1); > + > + if (hmm_vma_walk->locked) > + pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl); > + else > + pte_unmap(ptep - 1); > + > return 0; > } > > @@ -600,6 +1218,11 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end, > struct hmm_vma_walk *hmm_vma_walk = walk->private; > struct hmm_range *range = hmm_vma_walk->range; > struct vm_area_struct *vma = walk->vma; > + int r; > + > + r = hmm_vma_capture_migrate_range(start, end, walk); > + if (r) > + return r; > > if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) && > vma->vm_flags & VM_READ) > @@ -622,7 +1245,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end, > (end - start) >> PAGE_SHIFT, 0)) > return -EFAULT; > > - hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); > + hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); > > /* Skip this vma and continue processing the next vma. */ > return 1; > @@ -652,9 +1275,17 @@ static const struct mm_walk_ops hmm_walk_ops = { > * the invalidation to finish. > * -EFAULT: A page was requested to be valid and could not be made valid > * ie it has no backing VMA or it is illegal to access > + * -ERANGE: The range crosses multiple VMAs, or space for hmm_pfns array > + * is too low. > * > * This is similar to get_user_pages(), except that it can read the page tables > * without mutating them (ie causing faults). > + * > + * If want to do migrate after faulting, call hmm_range_fault() with > + * HMM_PFN_REQ_MIGRATE and initialize range.migrate field. > + * After hmm_range_fault() call migrate_hmm_range_setup() instead of > + * migrate_vma_setup() and after that follow normal migrate calls path. > + * > */ > int hmm_range_fault(struct hmm_range *range) > { > @@ -662,16 +1293,32 @@ int hmm_range_fault(struct hmm_range *range) > .range = range, > .last = range->start, > }; > - struct mm_struct *mm = range->notifier->mm; > + bool is_fault_path = !!range->notifier; > + struct mm_struct *mm; > int ret; > > + /* > + * > + * Could be serving a device fault or come from migrate > + * entry point. For the former we have not resolved the vma > + * yet, and the latter we don't have a notifier (but have a vma). > + * > + */ > +#ifdef CONFIG_DEVICE_MIGRATION > + mm = is_fault_path ? range->notifier->mm : range->migrate->vma->vm_mm; > +#else > + mm = range->notifier->mm; > +#endif > mmap_assert_locked(mm); > > do { > /* If range is no longer valid force retry. */ > - if (mmu_interval_check_retry(range->notifier, > - range->notifier_seq)) > - return -EBUSY; > + if (is_fault_path && mmu_interval_check_retry(range->notifier, > + range->notifier_seq)) { > + ret = -EBUSY; > + break; > + } > + > ret = walk_page_range(mm, hmm_vma_walk.last, range->end, > &hmm_walk_ops, &hmm_vma_walk); > /* > @@ -681,6 +1328,19 @@ int hmm_range_fault(struct hmm_range *range) > * output, and all >= are still at their input values. 
> */ > } while (ret == -EBUSY); > + > +#ifdef CONFIG_DEVICE_MIGRATION > + if (hmm_select_migrate(range) && range->migrate && > + hmm_vma_walk.mmu_range.owner) { > + // The migrate_vma path has the following initialized > + if (is_fault_path) { > + range->migrate->vma = hmm_vma_walk.vma; > + range->migrate->start = range->start; > + range->migrate->end = hmm_vma_walk.end; > + } > + mmu_notifier_invalidate_range_end(&hmm_vma_walk.mmu_range); > + } > +#endif > return ret; > } > EXPORT_SYMBOL(hmm_range_fault); > diff --git a/mm/migrate_device.c b/mm/migrate_device.c > index 23379663b1e1..bda6320f6242 100644 > --- a/mm/migrate_device.c > +++ b/mm/migrate_device.c > @@ -734,7 +734,16 @@ static void migrate_vma_unmap(struct migrate_vma *migrate) > */ > int migrate_vma_setup(struct migrate_vma *args) > { > + int ret; > long nr_pages = (args->end - args->start) >> PAGE_SHIFT; > + struct hmm_range range = { > + .notifier = NULL, > + .start = args->start, > + .end = args->end, > + .hmm_pfns = args->src, > + .dev_private_owner = args->pgmap_owner, > + .migrate = args > + }; > > args->start &= PAGE_MASK; > args->end &= PAGE_MASK; > @@ -759,17 +768,25 @@ int migrate_vma_setup(struct migrate_vma *args) > args->cpages = 0; > args->npages = 0; > > - migrate_vma_collect(args); > + if (args->flags & MIGRATE_VMA_FAULT) > + range.default_flags |= HMM_PFN_REQ_FAULT; > + > + ret = hmm_range_fault(&range); > + > + migrate_hmm_range_setup(&range); > > - if (args->cpages) > - migrate_vma_unmap(args); > + /* Remove migration PTEs */ > + if (ret) { > + migrate_vma_pages(args); > + migrate_vma_finalize(args); > + } > > /* > * At this point pages are locked and unmapped, and thus they have > * stable content and can safely be copied to destination memory that > * is allocated by the drivers. > */ > - return 0; > + return ret; > > } > EXPORT_SYMBOL(migrate_vma_setup); > @@ -1489,3 +1506,64 @@ int migrate_device_coherent_folio(struct folio *folio) > return 0; > return -EBUSY; > } > + > +void migrate_hmm_range_setup(struct hmm_range *range) > +{ > + > + struct migrate_vma *migrate = range->migrate; > + > + if (!migrate) > + return; > + > + migrate->npages = (migrate->end - migrate->start) >> PAGE_SHIFT; > + migrate->cpages = 0; > + > + for (unsigned long i = 0; i < migrate->npages; i++) { > + > + unsigned long pfn = range->hmm_pfns[i]; > + > + pfn &= ~HMM_PFN_INOUT_FLAGS; > + > + /* > + * > + * Don't do migration if valid and migrate flags are not both set. > + * > + */ > + if ((pfn & (HMM_PFN_VALID | HMM_PFN_MIGRATE)) != > + (HMM_PFN_VALID | HMM_PFN_MIGRATE)) { > + migrate->src[i] = 0; > + migrate->dst[i] = 0; > + continue; > + } > + > + migrate->cpages++; > + > + /* > + * > + * The zero page is encoded in a special way, valid and migrate is > + * set, and pfn part is zero. Encode specially for migrate also. > + * > + */ > + if (pfn == (HMM_PFN_VALID|HMM_PFN_MIGRATE)) { > + migrate->src[i] = MIGRATE_PFN_MIGRATE; > + migrate->dst[i] = 0; > + continue; > + } > + if (pfn == (HMM_PFN_VALID|HMM_PFN_MIGRATE|HMM_PFN_COMPOUND)) { > + migrate->src[i] = MIGRATE_PFN_MIGRATE|MIGRATE_PFN_COMPOUND; > + migrate->dst[i] = 0; > + continue; > + } > + > + migrate->src[i] = migrate_pfn(page_to_pfn(hmm_pfn_to_page(pfn))) > + | MIGRATE_PFN_MIGRATE; > + migrate->src[i] |= (pfn & HMM_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0; > + migrate->src[i] |= (pfn & HMM_PFN_COMPOUND) ? 
MIGRATE_PFN_COMPOUND : 0; > + migrate->dst[i] = 0; > + } > + > + if (migrate->cpages) > + migrate_vma_unmap(migrate); > + > +} > +EXPORT_SYMBOL(migrate_hmm_range_setup); > -- > 2.50.0 > ^ permalink raw reply [flat|nested] 8+ messages in thread
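
Seen from the migrate_vma_setup() entry point quoted above, the driver-facing change amounts to one extra flag. A sketch follows, with every field other than MIGRATE_VMA_FAULT used exactly as in the existing migrate_vma API; demo_migrate_with_fault() and the owner cookie are illustrative names only:

#include <linux/migrate.h>

static int demo_migrate_with_fault(struct vm_area_struct *vma,
				   unsigned long start, unsigned long end,
				   unsigned long *src, unsigned long *dst,
				   void *owner)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.pgmap_owner	= owner,
		/* MIGRATE_VMA_FAULT additionally faults in non-present pages */
		.flags		= MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_FAULT,
	};
	int ret;

	/* Internally: hmm_range_fault() + migrate_hmm_range_setup(). */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/* Allocate and copy into dst pages, then finish as before. */
	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}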
* Re: [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths 2026-01-20 16:03 ` Matthew Brost @ 2026-01-21 8:05 ` Mika Penttilä 0 siblings, 0 replies; 8+ messages in thread From: Mika Penttilä @ 2026-01-21 8:05 UTC (permalink / raw) To: Matthew Brost Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan On 1/20/26 18:03, Matthew Brost wrote: > On Mon, Jan 19, 2026 at 01:25:00PM +0200, mpenttil@redhat.com wrote: >> From: Mika Penttilä <mpenttil@redhat.com> >> >> Currently, the way device page faulting and migration works >> is not optimal, if you want to do both fault handling and >> migration at once. >> >> Being able to migrate not present pages (or pages mapped with incorrect >> permissions, eg. COW) to the GPU requires doing either of the >> following sequences: >> >> 1. hmm_range_fault() - fault in non-present pages with correct permissions, etc. >> 2. migrate_vma_*() - migrate the pages >> >> Or: >> >> 1. migrate_vma_*() - migrate present pages >> 2. If non-present pages detected by migrate_vma_*(): >> a) call hmm_range_fault() to fault pages in >> b) call migrate_vma_*() again to migrate now present pages >> >> The problem with the first sequence is that you always have to do two >> page walks even when most of the time the pages are present or zero page >> mappings so the common case takes a performance hit. >> >> The second sequence is better for the common case, but far worse if >> pages aren't present because now you have to walk the page tables three >> times (once to find the page is not present, once so hmm_range_fault() >> can find a non-present page to fault in and once again to setup the >> migration). It is also tricky to code correctly. >> >> We should be able to walk the page table once, faulting >> pages in as required and replacing them with migration entries if >> requested. >> >> Add a new flag to HMM APIs, HMM_PFN_REQ_MIGRATE, >> which tells to prepare for migration also during fault handling. >> Also, for the migrate_vma_setup() call paths, a flag, MIGRATE_VMA_FAULT, >> is added to tell to add fault handling to migrate. >> >> Cc: David Hildenbrand <david@redhat.com> >> Cc: Jason Gunthorpe <jgg@nvidia.com> >> Cc: Leon Romanovsky <leonro@nvidia.com> >> Cc: Alistair Popple <apopple@nvidia.com> >> Cc: Balbir Singh <balbirs@nvidia.com> >> Cc: Zi Yan <ziy@nvidia.com> >> Cc: Matthew Brost <matthew.brost@intel.com> > A couple of comments/questions around the locking. Personally, I like > the approach, but its of the maintianers if they like it. I also haven't > pulled or tested this yet and likely won't have time for at least a few > days, so all comments are based on inspection. Thanks, your comments are valid. Will prepare a v3 with those addressed and fixed. 
>> Suggested-by: Alistair Popple <apopple@nvidia.com> >> Signed-off-by: Mika Penttilä <mpenttil@redhat.com> >> --- >> include/linux/hmm.h | 19 +- >> include/linux/migrate.h | 27 +- >> mm/hmm.c | 770 +++++++++++++++++++++++++++++++++++++--- >> mm/migrate_device.c | 86 ++++- >> 4 files changed, 839 insertions(+), 63 deletions(-) >> >> diff --git a/include/linux/hmm.h b/include/linux/hmm.h >> index db75ffc949a7..e2f53e155af2 100644 >> --- a/include/linux/hmm.h >> +++ b/include/linux/hmm.h >> @@ -12,7 +12,7 @@ >> #include <linux/mm.h> >> >> struct mmu_interval_notifier; >> - >> +struct migrate_vma; >> /* >> * On output: >> * 0 - The page is faultable and a future call with >> @@ -27,6 +27,7 @@ struct mmu_interval_notifier; >> * HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer >> * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation >> * to mark that page is already DMA mapped >> + * HMM_PFN_MIGRATE - Migrate PTE installed >> * >> * On input: >> * 0 - Return the current state of the page, do not fault it. >> @@ -34,6 +35,7 @@ struct mmu_interval_notifier; >> * will fail >> * HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault() >> * will fail. Must be combined with HMM_PFN_REQ_FAULT. >> + * HMM_PFN_REQ_MIGRATE - For default_flags, request to migrate to device >> */ >> enum hmm_pfn_flags { >> /* Output fields and flags */ >> @@ -48,15 +50,25 @@ enum hmm_pfn_flags { >> HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5), >> HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6), >> >> - HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 11), >> + /* Migrate request */ >> + HMM_PFN_MIGRATE = 1UL << (BITS_PER_LONG - 7), >> + HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 8), >> + HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 13), >> >> /* Input flags */ >> HMM_PFN_REQ_FAULT = HMM_PFN_VALID, >> HMM_PFN_REQ_WRITE = HMM_PFN_WRITE, >> + HMM_PFN_REQ_MIGRATE = HMM_PFN_MIGRATE, >> >> HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1), >> }; >> >> +enum { >> + /* These flags are carried from input-to-output */ >> + HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | >> + HMM_PFN_P2PDMA_BUS, >> +}; >> + >> /* >> * hmm_pfn_to_page() - return struct page pointed to by a device entry >> * >> @@ -107,6 +119,7 @@ static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn) >> * @default_flags: default flags for the range (write, read, ... see hmm doc) >> * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter >> * @dev_private_owner: owner of device private pages >> + * @migrate: structure for migrating the associated vma >> */ >> struct hmm_range { >> struct mmu_interval_notifier *notifier; >> @@ -117,12 +130,14 @@ struct hmm_range { >> unsigned long default_flags; >> unsigned long pfn_flags_mask; >> void *dev_private_owner; >> + struct migrate_vma *migrate; >> }; >> >> /* >> * Please see Documentation/mm/hmm.rst for how to use the range API. 
>> */ >> int hmm_range_fault(struct hmm_range *range); >> +int hmm_range_migrate_prepare(struct hmm_range *range, struct migrate_vma **pargs); >> >> /* >> * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range >> diff --git a/include/linux/migrate.h b/include/linux/migrate.h >> index 26ca00c325d9..104eda2dd881 100644 >> --- a/include/linux/migrate.h >> +++ b/include/linux/migrate.h >> @@ -3,6 +3,7 @@ >> #define _LINUX_MIGRATE_H >> >> #include <linux/mm.h> >> +#include <linux/hmm.h> >> #include <linux/mempolicy.h> >> #include <linux/migrate_mode.h> >> #include <linux/hugetlb.h> >> @@ -97,6 +98,16 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag >> return -ENOSYS; >> } >> >> +enum migrate_vma_info { >> + MIGRATE_VMA_SELECT_NONE = 0, >> + MIGRATE_VMA_SELECT_COMPOUND = MIGRATE_VMA_SELECT_NONE, >> +}; >> + >> +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) >> +{ >> + return MIGRATE_VMA_SELECT_NONE; >> +} >> + >> #endif /* CONFIG_MIGRATION */ >> >> #ifdef CONFIG_NUMA_BALANCING >> @@ -140,11 +151,12 @@ static inline unsigned long migrate_pfn(unsigned long pfn) >> return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID; >> } >> >> -enum migrate_vma_direction { >> +enum migrate_vma_info { >> MIGRATE_VMA_SELECT_SYSTEM = 1 << 0, >> MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1, >> MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2, >> MIGRATE_VMA_SELECT_COMPOUND = 1 << 3, >> + MIGRATE_VMA_FAULT = 1 << 4, >> }; >> >> struct migrate_vma { >> @@ -182,6 +194,17 @@ struct migrate_vma { >> struct page *fault_page; >> }; >> >> +static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range) >> +{ >> + enum migrate_vma_info minfo; >> + >> + minfo = range->migrate ? range->migrate->flags : 0; >> + minfo |= (range->default_flags & HMM_PFN_REQ_MIGRATE) ? 
>> + MIGRATE_VMA_SELECT_SYSTEM : 0; >> + >> + return minfo; >> +} >> + >> int migrate_vma_setup(struct migrate_vma *args); >> void migrate_vma_pages(struct migrate_vma *migrate); >> void migrate_vma_finalize(struct migrate_vma *migrate); >> @@ -192,7 +215,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns, >> unsigned long npages); >> void migrate_device_finalize(unsigned long *src_pfns, >> unsigned long *dst_pfns, unsigned long npages); >> - >> +void migrate_hmm_range_setup(struct hmm_range *range); >> #endif /* CONFIG_MIGRATION */ >> >> #endif /* _LINUX_MIGRATE_H */ >> diff --git a/mm/hmm.c b/mm/hmm.c >> index 4ec74c18bef6..1fdb8665eeec 100644 >> --- a/mm/hmm.c >> +++ b/mm/hmm.c >> @@ -20,6 +20,7 @@ >> #include <linux/pagemap.h> >> #include <linux/leafops.h> >> #include <linux/hugetlb.h> >> +#include <linux/migrate.h> >> #include <linux/memremap.h> >> #include <linux/sched/mm.h> >> #include <linux/jump_label.h> >> @@ -27,12 +28,20 @@ >> #include <linux/pci-p2pdma.h> >> #include <linux/mmu_notifier.h> >> #include <linux/memory_hotplug.h> >> +#include <asm/tlbflush.h> >> >> #include "internal.h" >> >> struct hmm_vma_walk { >> - struct hmm_range *range; >> - unsigned long last; >> + struct mmu_notifier_range mmu_range; >> + struct vm_area_struct *vma; >> + struct hmm_range *range; >> + unsigned long start; >> + unsigned long end; >> + unsigned long last; >> + bool locked; >> + bool pmdlocked; >> + spinlock_t *ptl; >> }; >> >> enum { >> @@ -41,21 +50,38 @@ enum { >> HMM_NEED_ALL_BITS = HMM_NEED_FAULT | HMM_NEED_WRITE_FAULT, >> }; >> >> -enum { >> - /* These flags are carried from input-to-output */ >> - HMM_PFN_INOUT_FLAGS = HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | >> - HMM_PFN_P2PDMA_BUS, >> -}; >> - >> static int hmm_pfns_fill(unsigned long addr, unsigned long end, >> - struct hmm_range *range, unsigned long cpu_flags) >> + struct hmm_vma_walk *hmm_vma_walk, unsigned long cpu_flags) >> { >> + struct hmm_range *range = hmm_vma_walk->range; >> unsigned long i = (addr - range->start) >> PAGE_SHIFT; >> + enum migrate_vma_info minfo; >> + bool migrate = false; >> + >> + minfo = hmm_select_migrate(range); >> + if (cpu_flags != HMM_PFN_ERROR) { >> + if (minfo && (vma_is_anonymous(hmm_vma_walk->vma))) { >> + cpu_flags |= (HMM_PFN_VALID | HMM_PFN_MIGRATE); >> + migrate = true; >> + } >> + } >> + >> + if (migrate && thp_migration_supported() && >> + (minfo & MIGRATE_VMA_SELECT_COMPOUND) && >> + IS_ALIGNED(addr, HPAGE_PMD_SIZE) && >> + IS_ALIGNED(end, HPAGE_PMD_SIZE)) { >> + range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; >> + range->hmm_pfns[i] |= cpu_flags | HMM_PFN_COMPOUND; >> + addr += PAGE_SIZE; >> + i++; >> + cpu_flags = 0; >> + } >> >> for (; addr < end; addr += PAGE_SIZE, i++) { >> range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; >> range->hmm_pfns[i] |= cpu_flags; >> } >> + >> return 0; >> } >> >> @@ -171,11 +197,11 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end, >> if (!walk->vma) { >> if (required_fault) >> return -EFAULT; >> - return hmm_pfns_fill(addr, end, range, HMM_PFN_ERROR); >> + return hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR); >> } >> if (required_fault) >> return hmm_vma_fault(addr, end, required_fault, walk); > Can we assert in hmm_vma_fault that neither the PMD nor the PTE lock is > held? That would be a quick sanity check to ensure we haven’t screwed > anything up in the state-machine walk. > > We could add marco like HMM_ASSERT or HMM_WARN, etc... Good idea, will add. 
> >> - return hmm_pfns_fill(addr, end, range, 0); >> + return hmm_pfns_fill(addr, end, hmm_vma_walk, 0); >> } >> >> static inline unsigned long hmm_pfn_flags_order(unsigned long order) >> @@ -208,8 +234,13 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, >> cpu_flags = pmd_to_hmm_pfn_flags(range, pmd); >> required_fault = >> hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, cpu_flags); >> - if (required_fault) >> + if (required_fault) { >> + if (hmm_vma_walk->pmdlocked) { >> + spin_unlock(hmm_vma_walk->ptl); >> + hmm_vma_walk->pmdlocked = false; >> + } >> return hmm_vma_fault(addr, end, required_fault, walk); >> + } >> >> pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); >> for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) { >> @@ -289,14 +320,28 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, >> goto fault; >> >> if (softleaf_is_migration(entry)) { >> - pte_unmap(ptep); >> - hmm_vma_walk->last = addr; >> - migration_entry_wait(walk->mm, pmdp, addr); >> - return -EBUSY; >> + if (!hmm_select_migrate(range)) { >> + if (hmm_vma_walk->locked) { >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; > I don’t think it should be possible for the lock to be held here, given > that we only take it when selecting migration. So maybe we should assert > that it is not locked. Correct. Will fix. > >> + } else >> + pte_unmap(ptep); >> + >> + hmm_vma_walk->last = addr; >> + migration_entry_wait(walk->mm, pmdp, addr); >> + return -EBUSY; >> + } else >> + goto out; >> } >> >> /* Report error for everything else */ >> - pte_unmap(ptep); >> + >> + if (hmm_vma_walk->locked) { >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; >> + } else >> + pte_unmap(ptep); >> + >> return -EFAULT; >> } >> >> @@ -313,7 +358,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, >> if (!vm_normal_page(walk->vma, addr, pte) && >> !is_zero_pfn(pte_pfn(pte))) { >> if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) { >> - pte_unmap(ptep); >> + if (hmm_vma_walk->locked) { >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; >> + } else >> + pte_unmap(ptep); >> + >> return -EFAULT; >> } >> new_pfn_flags = HMM_PFN_ERROR; >> @@ -326,7 +376,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr, >> return 0; >> >> fault: >> - pte_unmap(ptep); >> + if (hmm_vma_walk->locked) { >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; >> + } else >> + pte_unmap(ptep); >> /* Fault any virtual address we were asked to fault */ >> return hmm_vma_fault(addr, end, required_fault, walk); >> } >> @@ -370,13 +424,18 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, >> required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns, >> npages, 0); >> if (required_fault) { >> - if (softleaf_is_device_private(entry)) >> + if (softleaf_is_device_private(entry)) { >> + if (hmm_vma_walk->pmdlocked) { >> + spin_unlock(hmm_vma_walk->ptl); >> + hmm_vma_walk->pmdlocked = false; >> + } >> return hmm_vma_fault(addr, end, required_fault, walk); >> + } >> else >> return -EFAULT; >> } >> >> - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); >> + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); >> } >> #else >> static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start, >> @@ -384,15 +443,486 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned 
long start, >> pmd_t pmd) >> { >> struct hmm_vma_walk *hmm_vma_walk = walk->private; >> - struct hmm_range *range = hmm_vma_walk->range; >> unsigned long npages = (end - start) >> PAGE_SHIFT; >> >> if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) >> return -EFAULT; >> - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); >> + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); >> } >> #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */ >> >> +#ifdef CONFIG_DEVICE_MIGRATION >> +/** >> + * migrate_vma_split_folio() - Helper function to split a THP folio >> + * @folio: the folio to split >> + * @fault_page: struct page associated with the fault if any >> + * >> + * Returns 0 on success >> + */ >> +static int migrate_vma_split_folio(struct folio *folio, >> + struct page *fault_page) >> +{ >> + int ret; >> + struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL; >> + struct folio *new_fault_folio = NULL; >> + >> + if (folio != fault_folio) { >> + folio_get(folio); >> + folio_lock(folio); >> + } >> + >> + ret = split_folio(folio); >> + if (ret) { >> + if (folio != fault_folio) { >> + folio_unlock(folio); >> + folio_put(folio); >> + } >> + return ret; >> + } >> + >> + new_fault_folio = fault_page ? page_folio(fault_page) : NULL; >> + >> + /* >> + * Ensure the lock is held on the correct >> + * folio after the split >> + */ >> + if (!new_fault_folio) { >> + folio_unlock(folio); >> + folio_put(folio); >> + } else if (folio != new_fault_folio) { >> + if (new_fault_folio != fault_folio) { >> + folio_get(new_fault_folio); >> + folio_lock(new_fault_folio); >> + } >> + folio_unlock(folio); >> + folio_put(folio); >> + } >> + >> + return 0; >> +} >> + >> +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, >> + pmd_t *pmdp, >> + unsigned long start, >> + unsigned long end, >> + unsigned long *hmm_pfn) >> +{ >> + struct hmm_vma_walk *hmm_vma_walk = walk->private; >> + struct hmm_range *range = hmm_vma_walk->range; >> + struct migrate_vma *migrate = range->migrate; >> + struct folio *fault_folio = NULL; >> + struct folio *folio; >> + enum migrate_vma_info minfo; >> + unsigned long i; >> + int r = 0; >> + >> + minfo = hmm_select_migrate(range); >> + if (!minfo) >> + return r; >> + > Can we assert the PMD is locked here? ack > >> + fault_folio = (migrate && migrate->fault_page) ? 
>> + page_folio(migrate->fault_page) : NULL; >> + >> + if (pmd_none(*pmdp)) >> + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); >> + >> + if (!(hmm_pfn[0] & HMM_PFN_VALID)) >> + goto out; >> + >> + if (pmd_trans_huge(*pmdp)) { >> + if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM)) >> + goto out; >> + >> + folio = pmd_folio(*pmdp); >> + if (is_huge_zero_folio(folio)) >> + return hmm_pfns_fill(start, end, hmm_vma_walk, 0); >> + >> + } else if (!pmd_present(*pmdp)) { >> + const softleaf_t entry = softleaf_from_pmd(*pmdp); >> + >> + folio = softleaf_to_folio(entry); >> + >> + if (!softleaf_is_device_private(entry)) >> + goto out; >> + >> + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) >> + goto out; >> + if (folio->pgmap->owner != migrate->pgmap_owner) >> + goto out; >> + >> + } else { >> + hmm_vma_walk->last = start; >> + return -EBUSY; >> + } >> + >> + folio_get(folio); >> + >> + if (folio != fault_folio && unlikely(!folio_trylock(folio))) { >> + folio_put(folio); >> + hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); >> + return 0; >> + } >> + >> + if (thp_migration_supported() && >> + (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) && >> + (IS_ALIGNED(start, HPAGE_PMD_SIZE) && >> + IS_ALIGNED(end, HPAGE_PMD_SIZE))) { >> + >> + struct page_vma_mapped_walk pvmw = { >> + .ptl = hmm_vma_walk->ptl, >> + .address = start, >> + .pmd = pmdp, >> + .vma = walk->vma, >> + }; >> + >> + hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND; >> + >> + r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0)); >> + if (r) { >> + hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND); >> + r = -ENOENT; // fallback >> + goto unlock_out; >> + } >> + for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++) >> + hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS; >> + >> + } else { >> + r = -ENOENT; // fallback >> + goto unlock_out; >> + } >> + >> + >> +out: >> + return r; >> + >> +unlock_out: >> + if (folio != fault_folio) >> + folio_unlock(folio); >> + folio_put(folio); >> + goto out; >> + >> +} >> + >> +/* >> + * Install migration entries if migration requested, either from fault >> + * or migrate paths. >> + * >> + */ >> +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, >> + pmd_t *pmdp, >> + pte_t *ptep, >> + unsigned long addr, >> + unsigned long *hmm_pfn) >> +{ >> + struct hmm_vma_walk *hmm_vma_walk = walk->private; >> + struct hmm_range *range = hmm_vma_walk->range; >> + struct migrate_vma *migrate = range->migrate; >> + struct mm_struct *mm = walk->vma->vm_mm; >> + struct folio *fault_folio = NULL; >> + enum migrate_vma_info minfo; >> + struct dev_pagemap *pgmap; >> + bool anon_exclusive; >> + struct folio *folio; >> + unsigned long pfn; >> + struct page *page; >> + softleaf_t entry; >> + pte_t pte, swp_pte; >> + bool writable = false; >> + >> + // Do we want to migrate at all? >> + minfo = hmm_select_migrate(range); >> + if (!minfo) >> + return 0; >> + > Can we assert the PTE lock is held here? ack > >> + fault_folio = (migrate && migrate->fault_page) ? >> + page_folio(migrate->fault_page) : NULL; >> + >> + if (!hmm_vma_walk->locked) { >> + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = true; >> + } >> + pte = ptep_get(ptep); >> + > How would we get without PTE lock being held? Shouldn't the caller take > the lock? This was leftover from previous version when there was a jump to the beginning. Will remove. 
> >> + if (pte_none(pte)) { >> + // migrate without faulting case >> + if (vma_is_anonymous(walk->vma)) { >> + *hmm_pfn &= HMM_PFN_INOUT_FLAGS; >> + *hmm_pfn |= HMM_PFN_MIGRATE | HMM_PFN_VALID; >> + goto out; >> + } >> + } >> + >> + if (!(hmm_pfn[0] & HMM_PFN_VALID)) >> + goto out; >> + >> + if (!pte_present(pte)) { >> + /* >> + * Only care about unaddressable device page special >> + * page table entry. Other special swap entries are not >> + * migratable, and we ignore regular swapped page. >> + */ >> + entry = softleaf_from_pte(pte); >> + if (!softleaf_is_device_private(entry)) >> + goto out; >> + >> + if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE)) >> + goto out; >> + >> + page = softleaf_to_page(entry); >> + folio = page_folio(page); >> + if (folio->pgmap->owner != migrate->pgmap_owner) >> + goto out; >> + >> + if (folio_test_large(folio)) { >> + int ret; >> + >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; >> + ret = migrate_vma_split_folio(folio, >> + migrate->fault_page); >> + if (ret) >> + goto out_error; >> + return -EAGAIN; >> + } >> + >> + pfn = page_to_pfn(page); >> + if (softleaf_is_device_private_write(entry)) >> + writable = true; >> + } else { >> + pfn = pte_pfn(pte); >> + if (is_zero_pfn(pfn) && >> + (minfo & MIGRATE_VMA_SELECT_SYSTEM)) { >> + *hmm_pfn = HMM_PFN_MIGRATE|HMM_PFN_VALID; >> + goto out; >> + } >> + page = vm_normal_page(walk->vma, addr, pte); >> + if (page && !is_zone_device_page(page) && >> + !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) { >> + goto out; >> + } else if (page && is_device_coherent_page(page)) { >> + pgmap = page_pgmap(page); >> + >> + if (!(minfo & >> + MIGRATE_VMA_SELECT_DEVICE_COHERENT) || >> + pgmap->owner != migrate->pgmap_owner) >> + goto out; >> + } >> + >> + folio = page ? page_folio(page) : NULL; >> + if (folio && folio_test_large(folio)) { >> + int ret; >> + >> + pte_unmap_unlock(ptep, hmm_vma_walk->ptl); >> + hmm_vma_walk->locked = false; >> + >> + ret = migrate_vma_split_folio(folio, >> + migrate->fault_page); >> + if (ret) >> + goto out_error; >> + return -EAGAIN; >> + } >> + >> + writable = pte_write(pte); >> + } >> + >> + if (!page || !page->mapping) >> + goto out; >> + >> + /* >> + * By getting a reference on the folio we pin it and that blocks >> + * any kind of migration. Side effect is that it "freezes" the >> + * pte. >> + * >> + * We drop this reference after isolating the folio from the lru >> + * for non device folio (device folio are not on the lru and thus >> + * can't be dropped from it). >> + */ >> + folio = page_folio(page); >> + folio_get(folio); >> + >> + /* >> + * We rely on folio_trylock() to avoid deadlock between >> + * concurrent migrations where each is waiting on the others >> + * folio lock. If we can't immediately lock the folio we fail this >> + * migration as it is only best effort anyway. >> + * >> + * If we can lock the folio it's safe to set up a migration entry >> + * now. In the common case where the folio is mapped once in a >> + * single process setting up the migration entry now is an >> + * optimisation to avoid walking the rmap later with >> + * try_to_migrate(). 
>> + */ >> + >> + if (fault_folio == folio || folio_trylock(folio)) { >> + anon_exclusive = folio_test_anon(folio) && >> + PageAnonExclusive(page); >> + >> + flush_cache_page(walk->vma, addr, pfn); >> + >> + if (anon_exclusive) { >> + pte = ptep_clear_flush(walk->vma, addr, ptep); >> + >> + if (folio_try_share_anon_rmap_pte(folio, page)) { >> + set_pte_at(mm, addr, ptep, pte); >> + folio_unlock(folio); >> + folio_put(folio); >> + goto out; >> + } >> + } else { >> + pte = ptep_get_and_clear(mm, addr, ptep); >> + } >> + >> + if (pte_dirty(pte)) >> + folio_mark_dirty(folio); >> + >> + /* Setup special migration page table entry */ >> + if (writable) >> + entry = make_writable_migration_entry(pfn); >> + else if (anon_exclusive) >> + entry = make_readable_exclusive_migration_entry(pfn); >> + else >> + entry = make_readable_migration_entry(pfn); >> + >> + if (pte_present(pte)) { >> + if (pte_young(pte)) >> + entry = make_migration_entry_young(entry); >> + if (pte_dirty(pte)) >> + entry = make_migration_entry_dirty(entry); >> + } >> + >> + swp_pte = swp_entry_to_pte(entry); >> + if (pte_present(pte)) { >> + if (pte_soft_dirty(pte)) >> + swp_pte = pte_swp_mksoft_dirty(swp_pte); >> + if (pte_uffd_wp(pte)) >> + swp_pte = pte_swp_mkuffd_wp(swp_pte); >> + } else { >> + if (pte_swp_soft_dirty(pte)) >> + swp_pte = pte_swp_mksoft_dirty(swp_pte); >> + if (pte_swp_uffd_wp(pte)) >> + swp_pte = pte_swp_mkuffd_wp(swp_pte); >> + } >> + >> + set_pte_at(mm, addr, ptep, swp_pte); >> + folio_remove_rmap_pte(folio, page, walk->vma); >> + folio_put(folio); >> + *hmm_pfn |= HMM_PFN_MIGRATE; >> + >> + if (pte_present(pte)) >> + flush_tlb_range(walk->vma, addr, addr + PAGE_SIZE); >> + } else >> + folio_put(folio); >> +out: >> + return 0; >> +out_error: >> + return -EFAULT; >> + >> +} >> + >> +static int hmm_vma_walk_split(pmd_t *pmdp, >> + unsigned long addr, >> + struct mm_walk *walk) >> +{ >> + struct hmm_vma_walk *hmm_vma_walk = walk->private; >> + struct hmm_range *range = hmm_vma_walk->range; >> + struct migrate_vma *migrate = range->migrate; >> + struct folio *folio, *fault_folio; >> + spinlock_t *ptl; >> + int ret = 0; >> + >> + fault_folio = (migrate && migrate->fault_page) ? >> + page_folio(migrate->fault_page) : NULL; >> + > Assert the PMD lock isn't held? 
ack > >> + ptl = pmd_lock(walk->mm, pmdp); >> + if (unlikely(!pmd_trans_huge(*pmdp))) { >> + spin_unlock(ptl); >> + goto out; >> + } >> + >> + folio = pmd_folio(*pmdp); >> + if (is_huge_zero_folio(folio)) { >> + spin_unlock(ptl); >> + split_huge_pmd(walk->vma, pmdp, addr); >> + } else { >> + folio_get(folio); >> + spin_unlock(ptl); >> + >> + if (folio != fault_folio) { >> + if (unlikely(!folio_trylock(folio))) { >> + folio_put(folio); >> + ret = -EBUSY; >> + goto out; >> + } >> + } else >> + folio_put(folio); >> + >> + ret = split_folio(folio); >> + if (fault_folio != folio) { >> + folio_unlock(folio); >> + folio_put(folio); >> + } >> + >> + } >> +out: >> + return ret; >> +} >> +#else >> +static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk, >> + pmd_t *pmdp, >> + unsigned long start, >> + unsigned long end, >> + unsigned long *hmm_pfn) >> +{ >> + return 0; >> +} >> + >> +static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk, >> + pmd_t *pmdp, >> + pte_t *pte, >> + unsigned long addr, >> + unsigned long *hmm_pfn) >> +{ >> + return 0; >> +} >> + >> +static int hmm_vma_walk_split(pmd_t *pmdp, >> + unsigned long addr, >> + struct mm_walk *walk) >> +{ >> + return 0; >> +} >> +#endif >> + >> +static int hmm_vma_capture_migrate_range(unsigned long start, >> + unsigned long end, >> + struct mm_walk *walk) >> +{ >> + struct hmm_vma_walk *hmm_vma_walk = walk->private; >> + struct hmm_range *range = hmm_vma_walk->range; >> + >> + if (!hmm_select_migrate(range)) >> + return 0; >> + >> + if (hmm_vma_walk->vma && (hmm_vma_walk->vma != walk->vma)) >> + return -ERANGE; >> + >> + hmm_vma_walk->vma = walk->vma; >> + hmm_vma_walk->start = start; >> + hmm_vma_walk->end = end; >> + >> + if (end - start > range->end - range->start) >> + return -ERANGE; >> + >> + if (!hmm_vma_walk->mmu_range.owner) { >> + mmu_notifier_range_init_owner(&hmm_vma_walk->mmu_range, MMU_NOTIFY_MIGRATE, 0, >> + walk->vma->vm_mm, start, end, >> + range->dev_private_owner); >> + mmu_notifier_invalidate_range_start(&hmm_vma_walk->mmu_range); >> + } >> + >> + return 0; >> +} >> + >> static int hmm_vma_walk_pmd(pmd_t *pmdp, >> unsigned long start, >> unsigned long end, >> @@ -403,43 +933,112 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, >> unsigned long *hmm_pfns = >> &range->hmm_pfns[(start - range->start) >> PAGE_SHIFT]; >> unsigned long npages = (end - start) >> PAGE_SHIFT; >> + struct mm_struct *mm = walk->vma->vm_mm; >> unsigned long addr = start; >> + enum migrate_vma_info minfo; >> + unsigned long i; >> + spinlock_t *ptl; >> pte_t *ptep; >> pmd_t pmd; >> + int r; >> + >> + minfo = hmm_select_migrate(range); >> >> again: >> + hmm_vma_walk->locked = false; >> + hmm_vma_walk->pmdlocked = false; >> pmd = pmdp_get_lockless(pmdp); >> - if (pmd_none(pmd)) >> - return hmm_vma_walk_hole(start, end, -1, walk); >> + if (pmd_none(pmd)) { >> + r = hmm_vma_walk_hole(start, end, -1, walk); >> + if (r || !minfo) >> + return r; >> + >> + ptl = pmd_lock(walk->mm, pmdp); >> + if (pmd_none(*pmdp)) { >> + // hmm_vma_walk_hole() filled migration needs >> + spin_unlock(ptl); >> + return r; >> + } >> + spin_unlock(ptl); >> + } >> >> if (thp_migration_supported() && pmd_is_migration_entry(pmd)) { >> - if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { >> + if (!minfo) { >> + if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) { >> + hmm_vma_walk->last = addr; >> + pmd_migration_entry_wait(walk->mm, pmdp); >> + return -EBUSY; >> + } >> + } >> + for (i = 0; addr < end; addr += PAGE_SIZE, i++) >> + 
hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS; >> + >> + return 0; >> + } >> + >> + if (minfo) { >> + hmm_vma_walk->ptl = pmd_lock(mm, pmdp); >> + hmm_vma_walk->pmdlocked = true; >> + pmd = pmdp_get(pmdp); >> + } else >> + pmd = pmdp_get_lockless(pmdp); >> + >> + if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { >> + >> + if (!pmd_present(pmd)) { >> + r = hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, >> + pmd); >> + if (r || !minfo) > Do we need to drop the PMD lock here upon error? Yes, will fix. > >> + return r; >> + } else { >> + >> + /* >> + * No need to take pmd_lock here if not migrating, >> + * even if some other thread is splitting the huge >> + * pmd we will get that event through mmu_notifier callback. >> + * >> + * So just read pmd value and check again it's a transparent >> + * huge or device mapping one and compute corresponding pfn >> + * values. >> + */ >> + >> + if (!pmd_trans_huge(pmd)) { >> + // must be lockless >> + goto again; > How can '!pmd_trans_huge' be true here? Seems impossible based on outer if statement. Good point. Will fix and refactor this. > >> + } >> + >> + r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); >> + >> + if (r || !minfo) > Do we need to drop the PMD lock here upon error? Yes, will fix. > >> + return r; >> + } >> + >> + r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns); >> + >> + if (hmm_vma_walk->pmdlocked) { >> + spin_unlock(hmm_vma_walk->ptl); >> + hmm_vma_walk->pmdlocked = false; >> + } >> + >> + if (r == -ENOENT) { >> + r = hmm_vma_walk_split(pmdp, addr, walk); >> + if (r) { >> + /* Split not successful, skip */ >> + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); >> + } >> + >> + /* Split successful or "again", reloop */ >> hmm_vma_walk->last = addr; >> - pmd_migration_entry_wait(walk->mm, pmdp); >> return -EBUSY; >> } >> - return hmm_pfns_fill(start, end, range, 0); >> - } >> >> - if (!pmd_present(pmd)) >> - return hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns, >> - pmd); >> + return r; >> >> - if (pmd_trans_huge(pmd)) { >> - /* >> - * No need to take pmd_lock here, even if some other thread >> - * is splitting the huge pmd we will get that event through >> - * mmu_notifier callback. >> - * >> - * So just read pmd value and check again it's a transparent >> - * huge or device mapping one and compute corresponding pfn >> - * values. >> - */ >> - pmd = pmdp_get_lockless(pmdp); >> - if (!pmd_trans_huge(pmd)) >> - goto again; >> + } >> >> - return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd); >> + if (hmm_vma_walk->pmdlocked) { >> + spin_unlock(hmm_vma_walk->ptl); >> + hmm_vma_walk->pmdlocked = false; >> } >> >> /* >> @@ -451,22 +1050,41 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, >> if (pmd_bad(pmd)) { >> if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) >> return -EFAULT; >> - return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR); >> + return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR); >> } >> >> - ptep = pte_offset_map(pmdp, addr); >> + if (minfo) { >> + ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl); >> + if (ptep) >> + hmm_vma_walk->locked = true; >> + } else >> + ptep = pte_offset_map(pmdp, addr); >> if (!ptep) >> goto again; >> + >> for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) { >> - int r; >> >> r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns); >> if (r) { >> /* hmm_vma_handle_pte() did pte_unmap() */ > Drop the PTE lock if held? hmm_vma_handle_pte() does unmap/unlock on error. 
> >> return r; >> } >> + >> + r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns); >> + if (r == -EAGAIN) { > Assert the callee dropped the PTE lock? ack > > Matt --Mika ^ permalink raw reply [flat|nested] 8+ messages in thread
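(Aside for readers following the API rather than the page-walk internals: the
call sequence a driver goes through on the unified path, as exercised by the
dmirror test added in patch 2/3 below, is roughly the following. This is a
condensed sketch only, not the series' reference implementation - the function
name fault_and_migrate_to_dev, the notifier/pgmap_owner parameters, the chunk
size and the elided retry loop, device allocation and copy steps are all
placeholders for driver-specific code.)

static int fault_and_migrate_to_dev(struct mm_struct *mm,
				    struct mmu_interval_notifier *notifier,
				    void *pgmap_owner,
				    unsigned long start, unsigned long end)
{
	/* enough space for one chunk; a real driver loops over larger ranges */
	unsigned long src_pfns[64] = { 0 };
	unsigned long dst_pfns[64] = { 0 };
	struct migrate_vma migrate = { 0 };
	struct hmm_range range = { 0 };
	int ret;

	migrate.pgmap_owner = pgmap_owner;
	migrate.src = src_pfns;
	migrate.dst = dst_pfns;

	range.notifier = notifier;
	range.migrate = &migrate;	/* opt in to migrate-on-fault */
	range.hmm_pfns = src_pfns;
	range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE;
	range.pfn_flags_mask = 0;
	range.dev_private_owner = pgmap_owner;
	range.start = start;
	range.end = end;

	mmap_read_lock(mm);

	/*
	 * One walk: faults pages in and installs migration entries.
	 * The usual notifier_seq / -EBUSY retry loop is elided here.
	 */
	ret = hmm_range_fault(&range);
	if (ret)
		goto out_unlock;

	/* convert the hmm pfns into a populated migrate_vma */
	migrate_hmm_range_setup(&range);

	/* allocate device pages, fill migrate.dst[], copy data (driver specific) */

	migrate_vma_pages(&migrate);
	/* update device page tables for the migrated pages (driver specific) */
	migrate_vma_finalize(&migrate);

out_unlock:
	mmap_read_unlock(mm);
	return ret;
}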
* [PATCH v2 2/3] mm: add new testcase for the migrate on fault case 2026-01-19 11:24 [PATCH v2 0/3] Migrate on fault for device pages mpenttil 2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil @ 2026-01-19 11:25 ` mpenttil 2026-01-19 11:25 ` [PATCH v2 3/3] mm:/migrate_device.c: remove migrate_vma_collect_*() functions mpenttil 2 siblings, 0 replies; 8+ messages in thread From: mpenttil @ 2026-01-19 11:25 UTC (permalink / raw) To: linux-mm Cc: linux-kernel, Mika Penttilä, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost, Marco Pagani From: Mika Penttilä <mpenttil@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Leon Romanovsky <leonro@nvidia.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Balbir Singh <balbirs@nvidia.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Marco Pagani <marpagan@redhat.com> Signed-off-by: Mika Penttilä <mpenttil@redhat.com> --- lib/test_hmm.c | 100 ++++++++++++++++++++++++- lib/test_hmm_uapi.h | 19 ++--- tools/testing/selftests/mm/hmm-tests.c | 54 +++++++++++++ 3 files changed, 163 insertions(+), 10 deletions(-) diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 8af169d3873a..b82517cfd616 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -36,6 +36,7 @@ #define DMIRROR_RANGE_FAULT_TIMEOUT 1000 #define DEVMEM_CHUNK_SIZE (256 * 1024 * 1024U) #define DEVMEM_CHUNKS_RESERVE 16 +#define PFNS_ARRAY_SIZE 64 /* * For device_private pages, dpage is just a dummy struct page @@ -145,7 +146,7 @@ static bool dmirror_is_private_zone(struct dmirror_device *mdevice) HMM_DMIRROR_MEMORY_DEVICE_PRIVATE); } -static enum migrate_vma_direction +static enum migrate_vma_info dmirror_select_device(struct dmirror *dmirror) { return (dmirror->mdevice->zone_device_type == @@ -1194,6 +1195,99 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror, return ret; } +static int do_fault_and_migrate(struct dmirror *dmirror, struct hmm_range *range) +{ + struct migrate_vma *migrate = range->migrate; + int ret; + + mmap_read_lock(dmirror->notifier.mm); + + /* Fault-in pages for migration and update device page table */ + ret = dmirror_range_fault(dmirror, range); + + pr_debug("Migrating from sys mem to device mem\n"); + migrate_hmm_range_setup(range); + + dmirror_migrate_alloc_and_copy(migrate, dmirror); + migrate_vma_pages(migrate); + dmirror_migrate_finalize_and_map(migrate, dmirror); + migrate_vma_finalize(migrate); + + mmap_read_unlock(dmirror->notifier.mm); + return ret; +} + +static int dmirror_fault_and_migrate_to_device(struct dmirror *dmirror, + struct hmm_dmirror_cmd *cmd) +{ + unsigned long start, size, end, next; + unsigned long src_pfns[PFNS_ARRAY_SIZE] = { 0 }; + unsigned long dst_pfns[PFNS_ARRAY_SIZE] = { 0 }; + struct migrate_vma migrate = { 0 }; + struct hmm_range range = { 0 }; + struct dmirror_bounce bounce; + int ret = 0; + + /* Whole range */ + start = cmd->addr; + size = cmd->npages << PAGE_SHIFT; + end = start + size; + + if (!mmget_not_zero(dmirror->notifier.mm)) { + ret = -EFAULT; + goto out; + } + + migrate.pgmap_owner = dmirror->mdevice; + migrate.src = src_pfns; + migrate.dst = dst_pfns; + + range.migrate = &migrate; + range.hmm_pfns = src_pfns; + range.pfn_flags_mask = 0; + range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE; + range.dev_private_owner = dmirror->mdevice; + range.notifier = &dmirror->notifier; + + for (next = start; next < end; next = range.end) 
{ + range.start = next; + range.end = min(end, next + (PFNS_ARRAY_SIZE << PAGE_SHIFT)); + + pr_debug("Fault and migrate range start:%#lx end:%#lx\n", + range.start, range.end); + + ret = do_fault_and_migrate(dmirror, &range); + if (ret) + goto out_mmput; + } + + /* + * Return the migrated data for verification. + * Only for pages in device zone + */ + ret = dmirror_bounce_init(&bounce, start, size); + if (ret) + goto out_mmput; + + mutex_lock(&dmirror->mutex); + ret = dmirror_do_read(dmirror, start, end, &bounce); + mutex_unlock(&dmirror->mutex); + if (ret == 0) { + ret = copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr, bounce.size); + if (ret) + ret = -EFAULT; + } + + cmd->cpages = bounce.cpages; + dmirror_bounce_fini(&bounce); + + +out_mmput: + mmput(dmirror->notifier.mm); +out: + return ret; +} + static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range, unsigned char *perm, unsigned long entry) { @@ -1510,6 +1604,10 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp, ret = dmirror_migrate_to_device(dmirror, &cmd); break; + case HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV: + ret = dmirror_fault_and_migrate_to_device(dmirror, &cmd); + break; + case HMM_DMIRROR_MIGRATE_TO_SYS: ret = dmirror_migrate_to_system(dmirror, &cmd); break; diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index f94c6d457338..0b6e7a419e36 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -29,15 +29,16 @@ struct hmm_dmirror_cmd { }; /* Expose the address space of the calling process through hmm device file */ -#define HMM_DMIRROR_READ _IOWR('H', 0x00, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_MIGRATE_TO_DEV _IOWR('H', 0x02, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_MIGRATE_TO_SYS _IOWR('H', 0x03, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x04, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_EXCLUSIVE _IOWR('H', 0x05, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_CHECK_EXCLUSIVE _IOWR('H', 0x06, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_RELEASE _IOWR('H', 0x07, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_FLAGS _IOWR('H', 0x08, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_READ _IOWR('H', 0x00, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_MIGRATE_TO_DEV _IOWR('H', 0x02, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV _IOWR('H', 0x03, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_MIGRATE_TO_SYS _IOWR('H', 0x04, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x05, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_EXCLUSIVE _IOWR('H', 0x06, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_CHECK_EXCLUSIVE _IOWR('H', 0x07, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_RELEASE _IOWR('H', 0x08, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_FLAGS _IOWR('H', 0x09, struct hmm_dmirror_cmd) #define HMM_DMIRROR_FLAG_FAIL_ALLOC (1ULL << 0) diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c index e8328c89d855..c75616875c9e 100644 --- a/tools/testing/selftests/mm/hmm-tests.c +++ b/tools/testing/selftests/mm/hmm-tests.c @@ -277,6 +277,13 @@ static int hmm_migrate_sys_to_dev(int fd, return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_TO_DEV, buffer, npages); } +static int hmm_migrate_on_fault_sys_to_dev(int fd, + struct hmm_buffer *buffer, + unsigned long npages) +{ + return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV, buffer, npages); +} + static int 
hmm_migrate_dev_to_sys(int fd, struct hmm_buffer *buffer, unsigned long npages) @@ -1034,6 +1041,53 @@ TEST_F(hmm, migrate) hmm_buffer_free(buffer); } + +/* + * Fault and migrate anonymous memory to device private memory. + */ +TEST_F(hmm, migrate_on_fault) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + unsigned long i; + int *ptr; + int ret; + + npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size = npages << self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd = -1; + buffer->size = size; + buffer->mirror = malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr = mmap(NULL, size, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + /* Initialize buffer in system memory. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Fault and migrate memory to device. */ + ret = hmm_migrate_on_fault_sys_to_dev(self->fd, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device read. */ + for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + + hmm_buffer_free(buffer); +} + /* * Migrate anonymous memory to device private memory and fault some of it back * to system memory, then try migrating the resulting mix of system and device -- 2.50.0 ^ permalink raw reply [flat|nested] 8+ messages in thread
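(For reference, the userspace side of the new ioctl is thin enough to exercise
outside the selftest harness. A minimal sketch follows; it assumes the
/dev/hmm_dmirror0 node set up by the mm selftests' test_hmm.sh helper and the
addr/ptr/npages/cpages layout of struct hmm_dmirror_cmd from lib/test_hmm_uapi.h,
and it trims most error handling.)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include "test_hmm_uapi.h"	/* lib/test_hmm_uapi.h from this series */

int main(void)
{
	unsigned long npages = 16;
	size_t size = npages * sysconf(_SC_PAGESIZE);
	struct hmm_dmirror_cmd cmd = { 0 };
	unsigned char *buf, *mirror;
	int fd, ret;

	fd = open("/dev/hmm_dmirror0", O_RDWR);	/* device-private dmirror instance */
	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	mirror = malloc(size);			/* migrated data is read back here */
	if (fd < 0 || buf == MAP_FAILED || !mirror)
		return 1;

	/* pages are intentionally left untouched, so the single walk must fault them in */
	cmd.addr = (uintptr_t)buf;
	cmd.ptr = (uintptr_t)mirror;
	cmd.npages = npages;

	ret = ioctl(fd, HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV, &cmd);
	printf("ret %d, migrated %llu of %lu pages\n",
	       ret, (unsigned long long)cmd.cpages, npages);

	free(mirror);
	munmap(buf, size);
	close(fd);
	return ret ? 1 : 0;
}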
* [PATCH v2 3/3] mm:/migrate_device.c: remove migrate_vma_collect_*() functions 2026-01-19 11:24 [PATCH v2 0/3] Migrate on fault for device pages mpenttil 2026-01-19 11:25 ` [PATCH v2 1/3] mm: unified hmm fault and migrate device pagewalk paths mpenttil 2026-01-19 11:25 ` [PATCH v2 2/3] mm: add new testcase for the migrate on fault case mpenttil @ 2026-01-19 11:25 ` mpenttil 2 siblings, 0 replies; 8+ messages in thread From: mpenttil @ 2026-01-19 11:25 UTC (permalink / raw) To: linux-mm Cc: linux-kernel, Mika Penttilä, David Hildenbrand, Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost From: Mika Penttilä <mpenttil@redhat.com> With the unified fault handling and migrate path, the migrate_vma_collect_*() functions are unused, let's remove them. Cc: David Hildenbrand <david@redhat.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Leon Romanovsky <leonro@nvidia.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Balbir Singh <balbirs@nvidia.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Mika Penttilä <mpenttil@redhat.com> --- mm/migrate_device.c | 508 -------------------------------------------- 1 file changed, 508 deletions(-) diff --git a/mm/migrate_device.c b/mm/migrate_device.c index bda6320f6242..c896a4d8bca2 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -18,514 +18,6 @@ #include <asm/tlbflush.h> #include "internal.h" -static int migrate_vma_collect_skip(unsigned long start, - unsigned long end, - struct mm_walk *walk) -{ - struct migrate_vma *migrate = walk->private; - unsigned long addr; - - for (addr = start; addr < end; addr += PAGE_SIZE) { - migrate->dst[migrate->npages] = 0; - migrate->src[migrate->npages++] = 0; - } - - return 0; -} - -static int migrate_vma_collect_hole(unsigned long start, - unsigned long end, - __always_unused int depth, - struct mm_walk *walk) -{ - struct migrate_vma *migrate = walk->private; - unsigned long addr; - - /* Only allow populating anonymous memory. */ - if (!vma_is_anonymous(walk->vma)) - return migrate_vma_collect_skip(start, end, walk); - - if (thp_migration_supported() && - (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) && - (IS_ALIGNED(start, HPAGE_PMD_SIZE) && - IS_ALIGNED(end, HPAGE_PMD_SIZE))) { - migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE | - MIGRATE_PFN_COMPOUND; - migrate->dst[migrate->npages] = 0; - migrate->npages++; - migrate->cpages++; - - /* - * Collect the remaining entries as holes, in case we - * need to split later - */ - return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk); - } - - for (addr = start; addr < end; addr += PAGE_SIZE) { - migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE; - migrate->dst[migrate->npages] = 0; - migrate->npages++; - migrate->cpages++; - } - - return 0; -} - -/** - * migrate_vma_split_folio() - Helper function to split a THP folio - * @folio: the folio to split - * @fault_page: struct page associated with the fault if any - * - * Returns 0 on success - */ -static int migrate_vma_split_folio(struct folio *folio, - struct page *fault_page) -{ - int ret; - struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL; - struct folio *new_fault_folio = NULL; - - if (folio != fault_folio) { - folio_get(folio); - folio_lock(folio); - } - - ret = split_folio(folio); - if (ret) { - if (folio != fault_folio) { - folio_unlock(folio); - folio_put(folio); - } - return ret; - } - - new_fault_folio = fault_page ? 
page_folio(fault_page) : NULL; - - /* - * Ensure the lock is held on the correct - * folio after the split - */ - if (!new_fault_folio) { - folio_unlock(folio); - folio_put(folio); - } else if (folio != new_fault_folio) { - if (new_fault_folio != fault_folio) { - folio_get(new_fault_folio); - folio_lock(new_fault_folio); - } - folio_unlock(folio); - folio_put(folio); - } - - return 0; -} - -/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the - * folio for device private pages. - * @pmdp: pointer to pmd entry - * @start: start address of the range for migration - * @end: end address of the range for migration - * @walk: mm_walk callback structure - * @fault_folio: folio associated with the fault if any - * - * Collect the huge pmd entry at @pmdp for migration and set the - * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that - * migration will occur at HPAGE_PMD granularity - */ -static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start, - unsigned long end, struct mm_walk *walk, - struct folio *fault_folio) -{ - struct mm_struct *mm = walk->mm; - struct folio *folio; - struct migrate_vma *migrate = walk->private; - spinlock_t *ptl; - int ret; - unsigned long write = 0; - - ptl = pmd_lock(mm, pmdp); - if (pmd_none(*pmdp)) { - spin_unlock(ptl); - return migrate_vma_collect_hole(start, end, -1, walk); - } - - if (pmd_trans_huge(*pmdp)) { - if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) { - spin_unlock(ptl); - return migrate_vma_collect_skip(start, end, walk); - } - - folio = pmd_folio(*pmdp); - if (is_huge_zero_folio(folio)) { - spin_unlock(ptl); - return migrate_vma_collect_hole(start, end, -1, walk); - } - if (pmd_write(*pmdp)) - write = MIGRATE_PFN_WRITE; - } else if (!pmd_present(*pmdp)) { - const softleaf_t entry = softleaf_from_pmd(*pmdp); - - folio = softleaf_to_folio(entry); - - if (!softleaf_is_device_private(entry) || - !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) || - (folio->pgmap->owner != migrate->pgmap_owner)) { - spin_unlock(ptl); - return migrate_vma_collect_skip(start, end, walk); - } - - if (softleaf_is_migration(entry)) { - migration_entry_wait_on_locked(entry, ptl); - spin_unlock(ptl); - return -EAGAIN; - } - - if (softleaf_is_device_private_write(entry)) - write = MIGRATE_PFN_WRITE; - } else { - spin_unlock(ptl); - return -EAGAIN; - } - - folio_get(folio); - if (folio != fault_folio && unlikely(!folio_trylock(folio))) { - spin_unlock(ptl); - folio_put(folio); - return migrate_vma_collect_skip(start, end, walk); - } - - if (thp_migration_supported() && - (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) && - (IS_ALIGNED(start, HPAGE_PMD_SIZE) && - IS_ALIGNED(end, HPAGE_PMD_SIZE))) { - - struct page_vma_mapped_walk pvmw = { - .ptl = ptl, - .address = start, - .pmd = pmdp, - .vma = walk->vma, - }; - - unsigned long pfn = page_to_pfn(folio_page(folio, 0)); - - migrate->src[migrate->npages] = migrate_pfn(pfn) | write - | MIGRATE_PFN_MIGRATE - | MIGRATE_PFN_COMPOUND; - migrate->dst[migrate->npages++] = 0; - migrate->cpages++; - ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0)); - if (ret) { - migrate->npages--; - migrate->cpages--; - migrate->src[migrate->npages] = 0; - migrate->dst[migrate->npages] = 0; - goto fallback; - } - migrate_vma_collect_skip(start + PAGE_SIZE, end, walk); - spin_unlock(ptl); - return 0; - } - -fallback: - spin_unlock(ptl); - if (!folio_test_large(folio)) - goto done; - ret = split_folio(folio); - if (fault_folio != folio) - folio_unlock(folio); - folio_put(folio); - if (ret) - 
return migrate_vma_collect_skip(start, end, walk); - if (pmd_none(pmdp_get_lockless(pmdp))) - return migrate_vma_collect_hole(start, end, -1, walk); - -done: - return -ENOENT; -} - -static int migrate_vma_collect_pmd(pmd_t *pmdp, - unsigned long start, - unsigned long end, - struct mm_walk *walk) -{ - struct migrate_vma *migrate = walk->private; - struct vm_area_struct *vma = walk->vma; - struct mm_struct *mm = vma->vm_mm; - unsigned long addr = start, unmapped = 0; - spinlock_t *ptl; - struct folio *fault_folio = migrate->fault_page ? - page_folio(migrate->fault_page) : NULL; - pte_t *ptep; - -again: - if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) { - int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio); - - if (ret == -EAGAIN) - goto again; - if (ret == 0) - return 0; - } - - ptep = pte_offset_map_lock(mm, pmdp, start, &ptl); - if (!ptep) - goto again; - arch_enter_lazy_mmu_mode(); - ptep += (addr - start) / PAGE_SIZE; - - for (; addr < end; addr += PAGE_SIZE, ptep++) { - struct dev_pagemap *pgmap; - unsigned long mpfn = 0, pfn; - struct folio *folio; - struct page *page; - softleaf_t entry; - pte_t pte; - - pte = ptep_get(ptep); - - if (pte_none(pte)) { - if (vma_is_anonymous(vma)) { - mpfn = MIGRATE_PFN_MIGRATE; - migrate->cpages++; - } - goto next; - } - - if (!pte_present(pte)) { - /* - * Only care about unaddressable device page special - * page table entry. Other special swap entries are not - * migratable, and we ignore regular swapped page. - */ - entry = softleaf_from_pte(pte); - if (!softleaf_is_device_private(entry)) - goto next; - - page = softleaf_to_page(entry); - pgmap = page_pgmap(page); - if (!(migrate->flags & - MIGRATE_VMA_SELECT_DEVICE_PRIVATE) || - pgmap->owner != migrate->pgmap_owner) - goto next; - - folio = page_folio(page); - if (folio_test_large(folio)) { - int ret; - - arch_leave_lazy_mmu_mode(); - pte_unmap_unlock(ptep, ptl); - ret = migrate_vma_split_folio(folio, - migrate->fault_page); - - if (ret) { - if (unmapped) - flush_tlb_range(walk->vma, start, end); - - return migrate_vma_collect_skip(addr, end, walk); - } - - goto again; - } - - mpfn = migrate_pfn(page_to_pfn(page)) | - MIGRATE_PFN_MIGRATE; - if (softleaf_is_device_private_write(entry)) - mpfn |= MIGRATE_PFN_WRITE; - } else { - pfn = pte_pfn(pte); - if (is_zero_pfn(pfn) && - (migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) { - mpfn = MIGRATE_PFN_MIGRATE; - migrate->cpages++; - goto next; - } - page = vm_normal_page(migrate->vma, addr, pte); - if (page && !is_zone_device_page(page) && - !(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) { - goto next; - } else if (page && is_device_coherent_page(page)) { - pgmap = page_pgmap(page); - - if (!(migrate->flags & - MIGRATE_VMA_SELECT_DEVICE_COHERENT) || - pgmap->owner != migrate->pgmap_owner) - goto next; - } - folio = page ? page_folio(page) : NULL; - if (folio && folio_test_large(folio)) { - int ret; - - arch_leave_lazy_mmu_mode(); - pte_unmap_unlock(ptep, ptl); - ret = migrate_vma_split_folio(folio, - migrate->fault_page); - - if (ret) { - if (unmapped) - flush_tlb_range(walk->vma, start, end); - - return migrate_vma_collect_skip(addr, end, walk); - } - - goto again; - } - mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE; - mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0; - } - - if (!page || !page->mapping) { - mpfn = 0; - goto next; - } - - /* - * By getting a reference on the folio we pin it and that blocks - * any kind of migration. Side effect is that it "freezes" the - * pte. 
- * - * We drop this reference after isolating the folio from the lru - * for non device folio (device folio are not on the lru and thus - * can't be dropped from it). - */ - folio = page_folio(page); - folio_get(folio); - - /* - * We rely on folio_trylock() to avoid deadlock between - * concurrent migrations where each is waiting on the others - * folio lock. If we can't immediately lock the folio we fail this - * migration as it is only best effort anyway. - * - * If we can lock the folio it's safe to set up a migration entry - * now. In the common case where the folio is mapped once in a - * single process setting up the migration entry now is an - * optimisation to avoid walking the rmap later with - * try_to_migrate(). - */ - if (fault_folio == folio || folio_trylock(folio)) { - bool anon_exclusive; - pte_t swp_pte; - - flush_cache_page(vma, addr, pte_pfn(pte)); - anon_exclusive = folio_test_anon(folio) && - PageAnonExclusive(page); - if (anon_exclusive) { - pte = ptep_clear_flush(vma, addr, ptep); - - if (folio_try_share_anon_rmap_pte(folio, page)) { - set_pte_at(mm, addr, ptep, pte); - if (fault_folio != folio) - folio_unlock(folio); - folio_put(folio); - mpfn = 0; - goto next; - } - } else { - pte = ptep_get_and_clear(mm, addr, ptep); - } - - migrate->cpages++; - - /* Set the dirty flag on the folio now the pte is gone. */ - if (pte_dirty(pte)) - folio_mark_dirty(folio); - - /* Setup special migration page table entry */ - if (mpfn & MIGRATE_PFN_WRITE) - entry = make_writable_migration_entry( - page_to_pfn(page)); - else if (anon_exclusive) - entry = make_readable_exclusive_migration_entry( - page_to_pfn(page)); - else - entry = make_readable_migration_entry( - page_to_pfn(page)); - if (pte_present(pte)) { - if (pte_young(pte)) - entry = make_migration_entry_young(entry); - if (pte_dirty(pte)) - entry = make_migration_entry_dirty(entry); - } - swp_pte = swp_entry_to_pte(entry); - if (pte_present(pte)) { - if (pte_soft_dirty(pte)) - swp_pte = pte_swp_mksoft_dirty(swp_pte); - if (pte_uffd_wp(pte)) - swp_pte = pte_swp_mkuffd_wp(swp_pte); - } else { - if (pte_swp_soft_dirty(pte)) - swp_pte = pte_swp_mksoft_dirty(swp_pte); - if (pte_swp_uffd_wp(pte)) - swp_pte = pte_swp_mkuffd_wp(swp_pte); - } - set_pte_at(mm, addr, ptep, swp_pte); - - /* - * This is like regular unmap: we remove the rmap and - * drop the folio refcount. The folio won't be freed, as - * we took a reference just above. - */ - folio_remove_rmap_pte(folio, page, vma); - folio_put(folio); - - if (pte_present(pte)) - unmapped++; - } else { - folio_put(folio); - mpfn = 0; - } - -next: - migrate->dst[migrate->npages] = 0; - migrate->src[migrate->npages++] = mpfn; - } - - /* Only flush the TLB if we actually modified any entries */ - if (unmapped) - flush_tlb_range(walk->vma, start, end); - - arch_leave_lazy_mmu_mode(); - pte_unmap_unlock(ptep - 1, ptl); - - return 0; -} - -static const struct mm_walk_ops migrate_vma_walk_ops = { - .pmd_entry = migrate_vma_collect_pmd, - .pte_hole = migrate_vma_collect_hole, - .walk_lock = PGWALK_RDLOCK, -}; - -/* - * migrate_vma_collect() - collect pages over a range of virtual addresses - * @migrate: migrate struct containing all migration information - * - * This will walk the CPU page table. For each virtual address backed by a - * valid page, it updates the src array and takes a reference on the page, in - * order to pin the page until we lock it and unmap it. 
- */ -static void migrate_vma_collect(struct migrate_vma *migrate) -{ - struct mmu_notifier_range range; - - /* - * Note that the pgmap_owner is passed to the mmu notifier callback so - * that the registered device driver can skip invalidating device - * private page mappings that won't be migrated. - */ - mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0, - migrate->vma->vm_mm, migrate->start, migrate->end, - migrate->pgmap_owner); - mmu_notifier_invalidate_range_start(&range); - - walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end, - &migrate_vma_walk_ops, migrate); - - mmu_notifier_invalidate_range_end(&range); - migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT); -} - /* * migrate_vma_check_page() - check if page is pinned or not * @page: struct page to check -- 2.50.0 ^ permalink raw reply [flat|nested] 8+ messages in thread