From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: intel-xe@lists.freedesktop.org,
Ralph Campbell <rcampbell@nvidia.com>,
Christoph Hellwig <hch@lst.de>,
Jason Gunthorpe <jgg@mellanox.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Leon Romanovsky <leon@kernel.org>,
Matthew Brost <matthew.brost@intel.com>,
linux-mm@kvack.org, stable@vger.kernel.org,
dri-devel@lists.freedesktop.org
Subject: Re: [PATCH] mm/hmm: Fix a hmm_range_fault() livelock / starvation problem
Date: Fri, 30 Jan 2026 20:56:31 +0100
Message-ID: <b9dd97e7d9e62ebc33c4dfef53a9fd3f51352d3a.camel@linux.intel.com>
In-Reply-To: <20260130100013.fb1ce1cd5bd7a440087c7b37@linux-foundation.org>
On Fri, 2026-01-30 at 10:00 -0800, Andrew Morton wrote:
> On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
> <thomas.hellstrom@linux.intel.com> wrote:
>
> > If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
> > trying to acquire the lock of a device-private folio for migration
> > to RAM, the function will spin until it succeeds in grabbing the lock.
> >
> > However, if the process holding the lock is depending on a work
> > item to be completed, which is scheduled on the same CPU as the
> > spinning hmm_range_fault(), that work item might be starved and
> > we end up in a livelock / starvation situation which is never
> > resolved.
> >
> > This can happen, for example, if the process holding the
> > device-private folio lock is stuck in
> > migrate_device_unmap()->lru_add_drain_all().
> > The lru_add_drain_all() function requires a short work item
> > to run on all online CPUs before it can complete.
>
> This is pretty bad behavior from lru_add_drain_all().
>
> > A prerequisite for this to happen is:
> > a) Both zone device and system memory folios are considered in
> > migrate_device_unmap(), so that there is a reason to call
> > lru_add_drain_all() for a system memory folio while a
> > folio lock is held on a zone device folio.
> > b) The zone device folio has an initial mapcount > 1 which causes
> > at least one migration PTE entry insertion to be deferred to
> > try_to_migrate(), which can happen after the call to
> > lru_add_drain_all().
> > c) No preemption, or voluntary preemption only.
> >
> > This all seems pretty unlikely to happen, but it is indeed hit by
> > the "xe_exec_system_allocator" igt test.
> >
> > Resolve this using a cond_resched() after each iteration in
> > hmm_range_fault(). Future code improvements might consider moving
> > the lru_add_drain_all() call in migrate_device_unmap() out of the
> > folio locked region.
> >
> > Also, hmm_range_fault() can be a very long-running function,
> > so a cond_resched() at the end of each iteration is justified
> > even in the absence of an -EBUSY.
> >
> > Fixes: d28c2c9a4877 ("mm/hmm: make full use of walk_page_range()")
>
> Six years ago.
Yeah, although it was unlikely to be hit before now; our multi-device
migration code might be the first instance where all of those
prerequisites are fulfilled.
>
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -674,6 +674,13 @@ int hmm_range_fault(struct hmm_range *range)
> >  			return -EBUSY;
> >  		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
> >  				      &hmm_walk_ops, &hmm_vma_walk);
> > +		/*
> > +		 * Conditionally reschedule to let other work items get
> > +		 * a chance to unlock device-private pages whose locks
> > +		 * we're spinning on.
> > +		 */
> > +		cond_resched();
> > +
> >  		/*
> >  		 * When -EBUSY is returned the loop restarts with
> >  		 * hmm_vma_walk.last set to an address that has not been stored
>
> If the process which is running hmm_range_fault() has
> SCHED_FIFO/SCHED_RR then cond_resched() doesn't work. An explicit
> msleep() would be better?
Unfortunately, hmm_range_fault() is typically called from a GPU
pagefault handler, and it's crucial to get the GPU up and running again
as fast as possible.

Is there a way we could test for the cases where cond_resched() doesn't
work, and in that case call sched_yield() instead, at least on -EBUSY
errors?
Thanks,
Thomas