From: John Hubbard <jhubbard@nvidia.com>
To: "Andrew Morton" <akpm@linux-foundation.org>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: intel-xe@lists.freedesktop.org,
Ralph Campbell <rcampbell@nvidia.com>,
Christoph Hellwig <hch@lst.de>,
Jason Gunthorpe <jgg@mellanox.com>,
Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>,
Matthew Brost <matthew.brost@intel.com>,
linux-mm@kvack.org, stable@vger.kernel.org,
dri-devel@lists.freedesktop.org
Subject: Re: [PATCH] mm/hmm: Fix a hmm_range_fault() livelock / starvation problem
Date: Fri, 30 Jan 2026 19:01:00 -0800
Message-ID: <57fd7f99-fa21-41eb-b484-56778ded457a@nvidia.com>
In-Reply-To: <20260130100013.fb1ce1cd5bd7a440087c7b37@linux-foundation.org>

On 1/30/26 10:00 AM, Andrew Morton wrote:
> On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström <thomas.hellstrom@linux.intel.com> wrote:
...
>> This can happen, for example, if the process holding the
>> device-private folio lock is stuck in
>> migrate_device_unmap()->lru_add_drain_all().
>> The lru_add_drain_all() function requires a short work item
>> to be run on all online CPUs in order to complete.
>
> This is pretty bad behavior from lru_add_drain_all().
Yes. And also, by code inspection, it seems like other folio_batch
items (I was going to say pagevecs, heh) can leak in after calling
lru_add_drain_all(), making things even worse.
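
Roughly, the window I'm worried about looks like this (the interleaving
below is just my reading of the mm/swap.c code, so treat it as an
assumption rather than a demonstrated trace):

	/* CPU A: migrate_device_unmap() */
	lru_add_drain_all();			/* flush the batches that exist right now */

	/* CPU B: any other task, meanwhile */
	folio_add_lru(folio);			/* folio lands in a brand-new per-CPU batch */

	/* CPU A: back in the isolation loop */
	if (!folio_isolate_lru(folio)) {	/* fails: folio still isn't on the LRU */
		src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
		restore++;
	}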

Maybe we really should be using lru_cache_disable()/lru_cache_enable()
pairs for migration, even though that is a heavier-weight approach.
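
Something like this shape (the semantics here are what I understand
lru_cache_disable() to provide, namely: drain once, then keep per-CPU
batching off until the matching enable, so consider this a sketch):

	lru_cache_disable();	/* one drain; new folios then bypass the batches */

	/* ... collect, unmap and isolate the folios being migrated ... */

	lru_cache_enable();	/* per-CPU folio batches come back on */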
This diff would address both points, and maybe fix Matthew's issue,
although I haven't done much real testing on it other than a quick
run of run_vmtests.sh:

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 23379663b1e1..3c55a766dd33 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -570,7 +570,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 	struct folio *fault_folio = fault_page ?
 		page_folio(fault_page) : NULL;
 	unsigned long i, restore = 0;
-	bool allow_drain = true;
 	unsigned long unmapped = 0;
 
 	lru_add_drain();
@@ -595,12 +594,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 		/* ZONE_DEVICE folios are not on LRU */
 		if (!folio_is_zone_device(folio)) {
-			if (!folio_test_lru(folio) && allow_drain) {
-				/* Drain CPU's lru cache */
-				lru_add_drain_all();
-				allow_drain = false;
-			}
-
 			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
@@ -759,11 +752,15 @@ int migrate_vma_setup(struct migrate_vma *args)
 	args->cpages = 0;
 	args->npages = 0;
 
+	lru_cache_disable();
+
 	migrate_vma_collect(args);
 
 	if (args->cpages)
 		migrate_vma_unmap(args);
 
+	lru_cache_enable();
+
 	/*
 	 * At this point pages are locked and unmapped, and thus they have
 	 * stable content and can safely be copied to destination memory that
@@ -1395,6 +1392,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
 {
 	unsigned long i, j, pfn;
 
+	lru_cache_disable();
+
 	for (pfn = start, i = 0; i < npages; pfn++, i++) {
 		struct page *page = pfn_to_page(pfn);
 		struct folio *folio = page_folio(page);
@@ -1413,6 +1412,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
+	lru_cache_enable();
+
 	return 0;
 }
 EXPORT_SYMBOL(migrate_device_range);
@@ -1429,6 +1430,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
 {
 	unsigned long i, j;
 
+	lru_cache_disable();
+
 	for (i = 0; i < npages; i++) {
 		struct page *page = pfn_to_page(src_pfns[i]);
 		struct folio *folio = page_folio(page);
@@ -1446,6 +1449,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
+	lru_cache_enable();
+
 	return 0;
 }
 EXPORT_SYMBOL(migrate_device_pfns);
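
(If anyone wants to hammer on this harder than I did: I believe the
hmm-specific selftests can be run by themselves via
"./run_vmtests.sh -t hmm", assuming I'm remembering the -t syntax
correctly.)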
thanks,
--
John Hubbard