From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: John Hubbard <jhubbard@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: intel-xe@lists.freedesktop.org,
Ralph Campbell <rcampbell@nvidia.com>,
Christoph Hellwig <hch@lst.de>,
Jason Gunthorpe <jgg@mellanox.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Leon Romanovsky <leon@kernel.org>,
Matthew Brost <matthew.brost@intel.com>,
linux-mm@kvack.org, stable@vger.kernel.org,
dri-devel@lists.freedesktop.org
Subject: Re: [PATCH] mm/hmm: Fix a hmm_range_fault() livelock / starvation problem
Date: Sat, 31 Jan 2026 13:57:21 +0100 [thread overview]
Message-ID: <2d96c9318f2a5fc594dc6b4772b6ce7017a45ad9.camel@linux.intel.com> (raw)
In-Reply-To: <57fd7f99-fa21-41eb-b484-56778ded457a@nvidia.com>
On Fri, 2026-01-30 at 19:01 -0800, John Hubbard wrote:
> On 1/30/26 10:00 AM, Andrew Morton wrote:
> > On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
> > <thomas.hellstrom@linux.intel.com> wrote:
> ...
> > > This can happen, for example if the process holding the
> > > device-private folio lock is stuck in
> > > migrate_device_unmap()->lru_add_drain_all()
> > > The lru_add_drain_all() function requires a short work-item
> > > to be run on all online cpus to complete.
> >
> > This is pretty bad behavior from lru_add_drain_all().
>
> Yes. And also, by code inspection, it seems like other folio_batch
> items (I was going to say pagevecs, heh) can leak in after calling
> lru_add_drain_all(), making things even worse.
>
> Maybe we really should be calling lru_cache_disable/enable()
> pairs for migration, even though it looks heavier weight.
>
> This diff would address both points, and maybe fix Matthew's issue,
> although I haven't done much real testing on it other than a quick
> run of run_vmtests.sh:
It looks like lru_cache_disable() uses synchronize_rcu_expedited(),
which would be a huge performance killer?
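For reference, mm/swap.c currently does roughly the following on SMP
(trimmed; details may differ between kernel versions):

	void lru_cache_disable(void)
	{
		atomic_inc(&lru_disable_count);
		/*
		 * Wait until every CPU is guaranteed to observe
		 * lru_disable_count != 0 before draining, so no new
		 * pages can sneak into the per-cpu batches.
		 */
		synchronize_rcu_expedited();
		__lru_add_drain_all(true);	/* drain all online CPUs */
	}

So every migration attempt would pay for an expedited RCU grace period
on top of the all-CPU drain.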
From the migrate code it looks like lru_add_drain_all() is called only
once, because migration is still best effort: it accepts failures if
someone adds pages to the per-cpu lru_add batches rather than taking
the heavy performance hit of lru_cache_disable().
The problem at hand would also be solved if we moved the
lru_add_drain_all() out of the folio-locked region in
migrate_vma_setup(): if we hit a system folio that is not on the LRU,
we'd unlock all folios, call lru_add_drain_all() and retry from the
start.
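Something along these lines (hand-waving sketch only, not
compile-tested; hit_system_folio_off_lru() and undo_collected_folios()
are made-up names for the check and for whatever unlocks and restores
the already-collected folios):

	/* Sketch: retry with the drain outside the folio-locked region. */
	bool drained = false;

retry:
	migrate_vma_collect(args);		/* takes the folio locks */
	if (hit_system_folio_off_lru(args) && !drained) {
		undo_collected_folios(args);	/* drop all folio locks */
		lru_add_drain_all();		/* safe: no folio lock held */
		drained = true;
		goto retry;
	}

That keeps the drain best effort and, more importantly, out of any
region where another thread could be spinning on one of our folio
locks.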
But the root cause, even though lru_add_drain_all() behaves badly, is
IMHO the trylock spin in hmm_range_fault(). That spin was introduced
relatively recently to avoid another livelock problem, but there were
other fixes associated with that as well, so it might not be strictly
necessary anymore.
IIRC the original non-trylocking code in do_swap_page() first took a
reference on the folio, released the page-table lock and then performed
a sleeping folio lock. The problem was that if the folio was already
locked for migration, that additional folio refcount would block the
migration (which might not be a big problem, considering do_swap_page()
might want to migrate to system ram anyway).
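Schematically, and from memory, the old pattern looked something like
this:

	folio_get(folio);		/* extra ref keeps the folio alive... */
	pte_unmap_unlock(pte, ptl);	/* ...across dropping the PT lock */
	folio_lock(folio);		/* sleeping lock, no trylock spin */
	/* re-validate the PTE under the folio lock and proceed */

@Matt Brost, what's your take on this?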
I'm also not sure a folio refcount should block migration at all after
the introduction of pinned (as in pin_user_pages()) pages. Perhaps a
folio pin-count, rather than a refcount, should block migration; in
that case do_swap_page() could definitely do a sleeping folio lock and
the problem would be gone.
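In migrate_device_unmap() that could look something like this (sketch
only; folio_maybe_dma_pinned() already exists, although it is
approximate for folios with very large refcounts):

	/* Skip only *pinned* folios, not merely referenced ones. */
	if (folio_maybe_dma_pinned(folio)) {
		src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
		restore++;
		continue;
	}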
But it looks like an AR for us to check how bad lru_cache_disable()
really is, and perhaps to compare it with an unconditional
lru_add_drain_all() at migration start.
Does anybody know who would be able to tell whether a page refcount
should still block migration (like today), or whether that could
actually be relaxed to a page pincount?
Thanks,
Thomas
>
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 23379663b1e1..3c55a766dd33 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -570,7 +570,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>  	struct folio *fault_folio = fault_page ?
>  		page_folio(fault_page) : NULL;
>  	unsigned long i, restore = 0;
> -	bool allow_drain = true;
>  	unsigned long unmapped = 0;
> 
>  	lru_add_drain();
> @@ -595,12 +594,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
> 
>  		/* ZONE_DEVICE folios are not on LRU */
>  		if (!folio_is_zone_device(folio)) {
> -			if (!folio_test_lru(folio) && allow_drain) {
> -				/* Drain CPU's lru cache */
> -				lru_add_drain_all();
> -				allow_drain = false;
> -			}
> -
>  			if (!folio_isolate_lru(folio)) {
>  				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>  				restore++;
> @@ -759,11 +752,15 @@ int migrate_vma_setup(struct migrate_vma *args)
>  	args->cpages = 0;
>  	args->npages = 0;
> 
> +	lru_cache_disable();
> +
>  	migrate_vma_collect(args);
> 
>  	if (args->cpages)
>  		migrate_vma_unmap(args);
> 
> +	lru_cache_enable();
> +
>  	/*
>  	 * At this point pages are locked and unmapped, and thus they have
>  	 * stable content and can safely be copied to destination memory that
> @@ -1395,6 +1392,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
>  {
>  	unsigned long i, j, pfn;
> 
> +	lru_cache_disable();
> +
>  	for (pfn = start, i = 0; i < npages; pfn++, i++) {
>  		struct page *page = pfn_to_page(pfn);
>  		struct folio *folio = page_folio(page);
> @@ -1413,6 +1412,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
> 
>  	migrate_device_unmap(src_pfns, npages, NULL);
> 
> +	lru_cache_enable();
> +
>  	return 0;
>  }
>  EXPORT_SYMBOL(migrate_device_range);
> @@ -1429,6 +1430,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
>  {
>  	unsigned long i, j;
> 
> +	lru_cache_disable();
> +
>  	for (i = 0; i < npages; i++) {
>  		struct page *page = pfn_to_page(src_pfns[i]);
>  		struct folio *folio = page_folio(page);
> @@ -1446,6 +1449,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
> 
>  	migrate_device_unmap(src_pfns, npages, NULL);
> 
> +	lru_cache_enable();
> +
>  	return 0;
>  }
>  EXPORT_SYMBOL(migrate_device_pfns);
>
> thanks,