From: John Hubbard <jhubbard@nvidia.com>
To: "Matthew Brost" <matthew.brost@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	intel-xe@lists.freedesktop.org,
	Ralph Campbell <rcampbell@nvidia.com>,
	Christoph Hellwig <hch@lst.de>,
	Jason Gunthorpe <jgg@mellanox.com>,
	Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>,
	linux-mm@kvack.org, stable@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Subject: Re: [PATCH] mm/hmm: Fix a hmm_range_fault() livelock / starvation problem
Date: Sat, 31 Jan 2026 13:42:20 -0800
Message-ID: <0025ee21-2a6c-4c6e-a49a-2df525d3faa1@nvidia.com>
In-Reply-To: <aX5RQBxYB029/dkt@lstrano-desk.jf.intel.com>

On 1/31/26 11:00 AM, Matthew Brost wrote:
> On Sat, Jan 31, 2026 at 01:57:21PM +0100, Thomas Hellström wrote:
>> On Fri, 2026-01-30 at 19:01 -0800, John Hubbard wrote:
>>> On 1/30/26 10:00 AM, Andrew Morton wrote:
>>>> On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
>>>> <thomas.hellstrom@linux.intel.com> wrote:
>>> ...
>> It looks like lru_cache_disable() is using synchronize_rcu_expedited(),
>> which would be a huge performance killer?
>>
> 
> Yep. I’ve done some quick testing with John’s patch, and
> xe_exec_system_alloc slows down by what seems like orders of magnitude in

ouchie!

> certain sections. I haven’t done a deep dive yet, but the initial results
> don’t look good.
> 
> I also eventually hit a kernel deadlock. I have the stack trace saved.
> 
>> From the migrate code it looks like it's calling lru_add_drain_all()
>> once only, because migration is still best effort, so it's accepting
>> failures if someone adds pages to the per-cpu lru_add structures,
>> rather than wanting to take the heavy performance loss of
>> lru_cache_disable().

Yes, I'm clearly far too biased right now towards "make migration
succeed more often" (some notes below). lru_cache_disable() is sounding
awfully severe in terms of perf loss.
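
(For reference, the expensive part of lru_cache_disable(), from
mm/swap.c, lightly trimmed and with the SMP/UP split elided, so
double-check the details against the source:)

    void lru_cache_disable(void)
    {
    	atomic_inc(&lru_disable_count);
    	/*
    	 * Wait until every CPU is guaranteed to observe
    	 * lru_disable_count != 0, so nobody keeps batching onto the
    	 * per-cpu LRU-add caches behind our back.
    	 */
    	synchronize_rcu_expedited();
    	__lru_add_drain_all(true);
    }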

>>
>> The problem at hand is also solved if we move the lru_add_drain_all()
>> out of the page-locked region in migrate_vma_setup(): if we hit a
>> system folio not on the LRU, we'd unlock all folios, call
>> lru_add_drain_all() and retry from the start.
>>
> 
> That seems like something to try. It should actually be pretty easy to
> implement as well. It would be good to determine whether a backoff like this is

This does seem like a less drastic fix, and it keeps the same design.

> common, and whether the backoff causes a performance hit or leads to a
> large number of retries under the right race conditions.
> 
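
The unlock-drain-retry could be about this simple (pure sketch,
untested; migrate_vma_unlock_folios() and the need_lru_drain flag are
invented for illustration):

    static int migrate_vma_setup_drained(struct migrate_vma *args)
    {
    	bool drained = false;
    retry:
    	migrate_vma_collect(args);
    	if (args->need_lru_drain && !drained) {
    		/* back off: drop every folio lock/ref taken so far */
    		migrate_vma_unlock_folios(args);	/* hypothetical */
    		lru_add_drain_all();	/* now outside all folio locks */
    		drained = true;
    		goto retry;
    	}
    	return 0;
    }
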
>> But the root cause, even though lru_add_drain_all() is bad-behaving, is
>> IMHO the trylock spin in hmm_range_fault(). That was introduced fairly
>> recently to avoid another livelock problem, but there were other
>> fixes associated with that as well, so it might not be strictly necessary.
>>
>> IIRC the original non-trylocking code in do_swap_page() first took a
> 
> Here is the change for reference:
> 
> git format-patch -1 1afaeb8293c9a
> 
>> reference to the folio, released the page-table lock and then performed
>> a sleeping folio lock. Problem was that if the folio was already locked
> 
> So the original code never took the page lock.
> 
>> for migration, that additional folio refcount would block migration
> 
> The additional folio refcount could block migration, so if multiple
> threads fault the same page you could spin thousands of times before
> one of them actually wins the race and moves the page. Or, if
> migrate_to_ram contends on some common mutex or similar structure
> (Xe/GPU SVM doesn’t, but AMD and Nouveau do), you could get a stable
> livelock.
> 
>> (which might not be a big problem considering do_swap_page() might want
>> to migrate to system ram anyway). @Matt Brost what's your take on this?
>>
> 
> The primary reason I used a trylock in do_swap_page is because the
> migrate_vma_* functions also use trylocks. It seems reasonable to

Those are trylocks because that code is collecting multiple pages/folios,
so in order to avoid deadlocks (very easy to hit with that pattern), it
goes with trylock.
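
The classic two-folio failure with sleeping locks looks like:

    CPU 0: lock_page(A); lock_page(B);	<- sleeps forever on B
    CPU 1: lock_page(B); lock_page(A);	<- sleeps forever on A

trylock-and-bail sidesteps that ordering problem entirely.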

> simply convert the lock in do_swap_page to a sleeping lock. I believe
> that would solve this issue for both non-RT and RT threads. I don’t know
> enough about the MM to say whether using a sleeping lock here is
> acceptable, though. Perhaps Andrew can provide guidance.

This might actually be possible.

> 
>> I'm also not sure a folio refcount should block migration after the
>> introduction of pinned (like in pin_user_pages) pages. Rather perhaps a
>> folio pin-count should block migration and in that case do_swap_page()
>> can definitely do a sleeping folio lock and the problem is gone.

A problem for that specific point is that pincount and refcount both
mean "the page is pinned" (which in turn literally means "not allowed
to migrate/move").

(In fact, pincount is implemented in terms of refcount, in most
configurations still.)
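
For small folios, "pinned" is literally inferred from the refcount
(from include/linux/mm.h, lightly trimmed):

    #define GUP_PIN_COUNTING_BIAS	(1U << 10)

    static inline bool folio_maybe_dma_pinned(struct folio *folio)
    {
    	if (folio_test_large(folio))
    		return atomic_read(&folio->_pincount) > 0;

    	/* a pin is just "refcount += 1024" for small folios */
    	return folio_ref_count(folio) >= GUP_PIN_COUNTING_BIAS;
    }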

>>
> 
> I’m not convinced the folio refcount has any bearing if we can take a
> sleeping lock in do_swap_page, but perhaps I’m missing something.

So far, I have not been able to find a problem with your proposal.
Something like this, I believe, could actually work:

diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..af73430e7888 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4652,6 +4652,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
  			vmf->page = softleaf_to_page(entry);
  			ret = remove_device_exclusive_entry(vmf);
  		} else if (softleaf_is_device_private(entry)) {
+			struct dev_pagemap *pgmap;
+
  			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
  				/*
  				 * migrate_to_ram is not yet ready to operate
@@ -4674,18 +4676,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
  			 * Get a page reference while we know the page can't be
  			 * freed.
  			 */
-			if (trylock_page(vmf->page)) {
-				struct dev_pagemap *pgmap;
-
-				get_page(vmf->page);
-				pte_unmap_unlock(vmf->pte, vmf->ptl);
-				pgmap = page_pgmap(vmf->page);
-				ret = pgmap->ops->migrate_to_ram(vmf);
-				unlock_page(vmf->page);
-				put_page(vmf->page);
-			} else {
-				pte_unmap_unlock(vmf->pte, vmf->ptl);
-			}
+			get_page(vmf->page);
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			lock_page(vmf->page);
+			pgmap = page_pgmap(vmf->page);
+			ret = pgmap->ops->migrate_to_ram(vmf);
+			unlock_page(vmf->page);
+			put_page(vmf->page);
  		} else if (softleaf_is_hwpoison(entry)) {
  			ret = VM_FAULT_HWPOISON;
  		} else if (softleaf_is_marker(entry)) {
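
(The key change: the folio reference is still taken while holding the
page-table lock, as before, but the folio lock becomes a plain sleeping
lock_page() after pte_unmap_unlock(), instead of a trylock taken under
the PTL.)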

> 
>> But it looks like an AR (action item) for us to try to check how bad
>> lru_cache_disable() really is. And perhaps compare with an
>> unconditional lru_add_drain_all() at migration start.
>>
>> Does anybody know who would be able to tell whether a page refcount
>> still should block migration (like today) or whether that could
>> actually be relaxed to a page pincount?

Yes, it really should block migration, see my response above: both
pincount and refcount literally mean "do not move this page".

As an aside because it might help at some point, I'm just now testing a
tiny patchset that uses:

     wait_var_event_killable(&folio->_refcount,
                             folio_ref_count(folio) <= expected)

during migration, paired with a

     wake_up_var(&folio->_refcount);

in put_page().

This waits for the refcount to drop to the expected value, instead of
doing a blind, tight retry loop during migration attempts. It lets
migration succeed even when it has to wait a long time for another
caller to release a reference.
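
In put_page() terms, the wakeup side would look roughly like this
(sketch only, not the actual patch; the added call is exactly what
lands on the hot path):

    static inline void put_page(struct page *page)
    {
    	struct folio *folio = page_folio(page);

    	folio_put(folio);
    	/* new: wake anyone in migration waiting for the ref to drop */
    	wake_up_var(&folio->_refcount);
    }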

It works well, but of course it also has a potentially serious
performance cost (which I need to quantify), because it adds cycles to
the put_page() hot path. That is why I haven't posted it yet, even as
an RFC. It's still in the "is this even reasonable" stage; just food
for thought here.

thanks,
-- 
John Hubbard

