* [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem
@ 2026-02-03 14:34 Thomas Hellström
2026-02-04 1:52 ` John Hubbard
2026-02-04 10:59 ` Alistair Popple
0 siblings, 2 replies; 4+ messages in thread
From: Thomas Hellström @ 2026-02-03 14:34 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Alistair Popple, Ralph Campbell,
Christoph Hellwig, Jason Gunthorpe, Jason Gunthorpe,
Leon Romanovsky, Andrew Morton, Matthew Brost, John Hubbard,
linux-mm, dri-devel, stable
If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
trying to acquire the lock of a device-private folio for migration
to RAM, the function will spin until it succeeds in grabbing the
lock.

However, if the process holding the lock depends on the completion
of a work item that is scheduled on the same CPU as the spinning
hmm_range_fault(), that work item may be starved and we end up in a
livelock / starvation situation that is never resolved.
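
The spin comes from hmm_range_fault() retrying the page walk
whenever faulting returns -EBUSY; schematically (a simplified
sketch of the retry loop in mm/hmm.c, not the exact code):

	do {
		/*
		 * The walk faults pages via handle_mm_fault(). When
		 * do_swap_page() fails the folio_trylock() and bails
		 * out without making progress, the walk reports
		 * -EBUSY and is retried immediately, burning the CPU.
		 */
		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
				      &hmm_walk_ops, &hmm_vma_walk);
	} while (ret == -EBUSY);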

This can happen, for example, if the process holding the
device-private folio lock is stuck in
migrate_device_unmap()->lru_add_drain_all().
The lru_add_drain_all() function requires a short work item to be
run on all online CPUs before it can complete.
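
For context, a heavily simplified sketch of that pattern (the names
follow mm/swap.c, but the real code only queues and flushes work on
CPUs that actually need draining):

	int cpu;

	/* Queue a short drain work item on every online CPU... */
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		INIT_WORK(work, lru_add_drain_per_cpu);
		queue_work_on(cpu, mm_percpu_wq, work);
	}

	/*
	 * ...and wait for all of them. If one target CPU is busy
	 * spinning in hmm_range_fault() and is never preempted, this
	 * flush_work() never returns.
	 */
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(lru_add_drain_work, cpu));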

The prerequisites for this to happen are:
a) Both zone device and system memory folios are considered in
   migrate_device_unmap(), so that there is a reason to call
   lru_add_drain_all() for a system memory folio while a folio
   lock is held on a zone device folio.
b) The zone device folio has an initial mapcount > 1, which causes
   at least one migration PTE entry insertion to be deferred to
   try_to_migrate(), which can happen after the call to
   lru_add_drain_all().
c) No preemption, or voluntary preemption only.

This all seems pretty unlikely to happen, but it is indeed hit by
the "xe_exec_system_allocator" IGT test.

Resolve this by waiting for the folio to be unlocked if the
folio_trylock() fails in the do_swap_page() function.

Future code improvements might consider moving the
lru_add_drain_all() call in migrate_device_unmap() so that it is
called *after* all pages have had their migration entries inserted.
That would also eliminate b) above.
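
Schematically, that follow-up could look something like the sketch
below (hypothetical only; the two helpers are placeholders for the
existing loop bodies, not real functions):

	/* Pass 1: lock folios and install all migration entries. */
	for (i = 0; i < npages; i++)
		insert_migration_entries(i);	/* try_to_migrate() etc. */

	/* Drain once, while no folio lock is being waited on. */
	lru_add_drain_all();

	/* Pass 2: isolate from the LRU and finish the unmap checks. */
	for (i = 0; i < npages; i++)
		isolate_and_finish_unmap(i);	/* folio_isolate_lru() etc. */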

v2:
- Instead of a cond_resched() in the hmm_range_fault() function,
  eliminate the problem by waiting for the folio to be unlocked
  in do_swap_page() (Alistair Popple, Andrew Morton)
v3:
- Add a stub migration_entry_wait_on_locked() for the
  !CONFIG_MIGRATION case. (Kernel Test Robot)
Suggested-by: Alistair Popple <apopple@nvidia.com>
Fixes: 1afaeb8293c9 ("mm/migrate: Trylock device page in do_swap_page")
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: <stable@vger.kernel.org> # v6.15+
---
include/linux/migrate.h | 6 ++++++
mm/memory.c | 3 ++-
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 26ca00c325d9..800ec174b601 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -97,6 +97,12 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag
return -ENOSYS;
}
+static inline void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
+ __releases(ptl)
+{
+ spin_unlock(ptl);
+}
+
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_NUMA_BALANCING
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..ed20da5570d5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4684,7 +4684,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
unlock_page(vmf->page);
put_page(vmf->page);
} else {
- pte_unmap_unlock(vmf->pte, vmf->ptl);
+ pte_unmap(vmf->pte);
+ migration_entry_wait_on_locked(entry, vmf->ptl);
}
} else if (softleaf_is_hwpoison(entry)) {
ret = VM_FAULT_HWPOISON;
--
2.52.0
* Re: [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem
2026-02-03 14:34 [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem Thomas Hellström
@ 2026-02-04 1:52 ` John Hubbard
2026-02-04 10:59 ` Alistair Popple
1 sibling, 0 replies; 4+ messages in thread
From: John Hubbard @ 2026-02-04 1:52 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: Alistair Popple, Ralph Campbell, Christoph Hellwig,
Jason Gunthorpe, Jason Gunthorpe, Leon Romanovsky, Andrew Morton,
Matthew Brost, linux-mm, dri-devel, stable
On 2/3/26 6:34 AM, Thomas Hellström wrote:
> If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
> trying to acquire the lock of a device-private folio for migration
> to RAM, the function will spin until it succeeds in grabbing the
> lock.

[snip]

> @@ -4684,7 +4684,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> unlock_page(vmf->page);
> put_page(vmf->page);
> } else {
> - pte_unmap_unlock(vmf->pte, vmf->ptl);
> + pte_unmap(vmf->pte);
> + migration_entry_wait_on_locked(entry, vmf->ptl);
This is neatly done.
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
thanks,
--
John Hubbard
* Re: [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem
2026-02-03 14:34 [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem Thomas Hellström
2026-02-04 1:52 ` John Hubbard
@ 2026-02-04 10:59 ` Alistair Popple
2026-02-04 11:47 ` Thomas Hellström
1 sibling, 1 reply; 4+ messages in thread
From: Alistair Popple @ 2026-02-04 10:59 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, Ralph Campbell, Christoph Hellwig, Jason Gunthorpe,
Jason Gunthorpe, Leon Romanovsky, Andrew Morton, Matthew Brost,
John Hubbard, linux-mm, dri-devel, stable
On 2026-02-04 at 01:34 +1100, Thomas Hellström <thomas.hellstrom@linux.intel.com> wrote...
> If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
> trying to acquire the lock of a device-private folio for migration
> to RAM, the function will spin until it succeeds in grabbing the
> lock.

[snip]

> @@ -4684,7 +4684,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> unlock_page(vmf->page);
> put_page(vmf->page);
> } else {
> - pte_unmap_unlock(vmf->pte, vmf->ptl);
> + pte_unmap(vmf->pte);
> + migration_entry_wait_on_locked(entry, vmf->ptl);
Code-wise this looks fine to me, although it's confusing to see
migration_entry_wait_on_locked() being called on a non-migration
entry; ideally this would be renamed to something like
softleaf_entry_wait_on_locked().

Regardless, the documentation for migration_entry_wait_on_locked()
needs updating to justify why calling it on device-private entries
is valid (because it's also just waiting for the page to be
unlocked), along with some equivalent justification for how we know
there is a reference on the device-private page:
* If a migration entry exists for the page the migration path must hold
* a valid reference to the page, and it must take the ptl to remove the
* migration entry. So the page is valid until the ptl is dropped.
Which is basically just: the page is mapped in the page table,
therefore a reference must have been taken for the mapping, and the
mapping can't be removed while we hold the PTL.
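
Something along these lines, for example (suggested wording only):

 * For device-private entries the same reasoning applies: the folio
 * is kept alive by its page table mapping, which in turn cannot be
 * removed without taking the ptl, so the folio remains valid here
 * and waiting for its lock is safe.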
Thanks.
- Alistair
* Re: [PATCH v3] mm: Fix a hmm_range_fault() livelock / starvation problem
2026-02-04 10:59 ` Alistair Popple
@ 2026-02-04 11:47 ` Thomas Hellström
0 siblings, 0 replies; 4+ messages in thread
From: Thomas Hellström @ 2026-02-04 11:47 UTC (permalink / raw)
To: Alistair Popple
Cc: intel-xe, Ralph Campbell, Christoph Hellwig, Jason Gunthorpe,
Jason Gunthorpe, Leon Romanovsky, Andrew Morton, Matthew Brost,
John Hubbard, linux-mm, dri-devel, stable
On Wed, 2026-02-04 at 21:59 +1100, Alistair Popple wrote:
> On 2026-02-04 at 01:34 +1100, Thomas Hellström
> <thomas.hellstrom@linux.intel.com> wrote...
> > If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
> > trying to acquire the lock of a device-private folio for migration
> > to RAM, the function will spin until it succeeds in grabbing the
> > lock.

[snip]

>
> Code-wise this looks fine to me, although it's confusing to see
> migration_entry_wait_on_locked() being called on a non-migration
> entry; ideally this would be renamed to something like
> softleaf_entry_wait_on_locked().
>
> Regardless, the documentation for migration_entry_wait_on_locked()
> needs updating to justify why calling it on device-private entries
> is valid (because it's also just waiting for the page to be
> unlocked), along with some equivalent justification for how we know
> there is a reference on the device-private page:
>
>  * If a migration entry exists for the page the migration path must hold
>  * a valid reference to the page, and it must take the ptl to remove the
>  * migration entry. So the page is valid until the ptl is dropped.
>
> Which is basically just: the page is mapped in the page table,
> therefore a reference must have been taken for the mapping, and the
> mapping can't be removed while we hold the PTL.
>
> Thanks.
>
> - Alistair
Thanks for reviewing. Let me respin this for a v4 addressing the above.
/Thomas