From: David Hildenbrand <david@redhat.com>
To: Will Deacon <will@kernel.org>, Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Keir Fraser <keirf@google.com>, Jason Gunthorpe <jgg@ziepe.ca>,
John Hubbard <jhubbard@nvidia.com>,
Frederick Mayle <fmayle@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Peter Xu <peterx@redhat.com>, Rik van Riel <riel@surriel.com>,
Vlastimil Babka <vbabka@suse.cz>, Ge Yang <yangge1116@126.com>
Subject: Re: [PATCH] mm/gup: Drain batched mlock folio processing before attempting migration
Date: Tue, 9 Sep 2025 13:50:51 +0200
Message-ID: <655d2d29-8fe6-4684-aba4-4803bed0d4d0@redhat.com>
In-Reply-To: <aMAR1A1CidQrIFEW@willie-the-truck>

On 09.09.25 13:39, Will Deacon wrote:
> On Fri, Aug 29, 2025 at 08:46:52AM -0700, Hugh Dickins wrote:
>> On Fri, 29 Aug 2025, Will Deacon wrote:
>>> On Thu, Aug 28, 2025 at 01:47:14AM -0700, Hugh Dickins wrote:
>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>> index adffe663594d..9f7c87f504a9 100644
>>>> --- a/mm/gup.c
>>>> +++ b/mm/gup.c
>>>> @@ -2291,6 +2291,8 @@ static unsigned long collect_longterm_unpinnable_folios(
>>>> struct folio *folio;
>>>> long i = 0;
>>>>
>>>> + lru_add_drain();
>>>> +
>>>> for (folio = pofs_get_folio(pofs, i); folio;
>>>> folio = pofs_next_folio(folio, pofs, &i)) {
>>>>
>>>> @@ -2307,7 +2309,8 @@ static unsigned long collect_longterm_unpinnable_folios(
>>>> continue;
>>>> }
>>>>
>>>> - if (!folio_test_lru(folio) && drain_allow) {
>>>> + if (drain_allow && folio_ref_count(folio) !=
>>>> + folio_expected_ref_count(folio) + 1) {
>>>> lru_add_drain_all();
>>>
>>> How does this synchronise with the folio being added to the mlock batch
>>> on another CPU?
>>>
>>> need_mlock_drain(), which is what I think lru_add_drain_all() ends up
>>> using to figure out which CPU batches to process, just looks at the
>>> 'nr' field in the batch and I can't see anything in mlock_folio() to
>>> ensure any ordering between adding the folio to the batch and
>>> incrementing its refcount.
>>>
>>> Then again, my hack to use folio_test_mlocked() would have a similar
>>> issue because the flag is set (albeit with barrier semantics) before
>>> adding the folio to the batch, meaning the drain could miss the folio.
>>>
>>> I guess there's some higher-level synchronisation making this all work,
>>> but it would be good to understand that as I can't see that
>>> collect_longterm_unpinnable_folios() can rely on much other than the pin.
>>
>> No such strict synchronization: you've been misled if people have told
>> you that this pinning migration stuff is deterministically successful:
>> it's best effort - or will others on the Cc disagree?
>>
>> Just as there's no synchronization between the calculation inside
>> folio_expected_ref_count() and the reading of folio's refcount.
>>
>> It wouldn't make sense for this unpinnable collection to anguish over
>> such synchronization, when a moment later the migration is liable to
>> fail (on occasion) for other transient reasons. All ending up reported
>> as -ENOMEM apparently? that looks unhelpful.
>
> I see this was tangentially discussed with David on the patches you sent
> and I agree that it's a distinct issue from what we're solving here,
> however, -ENOMEM is a particularly problematic way to report transient
> errors with migration due to a race. For KVM, the -ENOMEM will bubble
> back up to userspace and the VMM is likely to destroy the VM altogether
> whereas -EAGAIN would return back to the guest and retry the faulting
> instruction.

Migration code itself will retry multiple times, which usually takes
care of most of these races, though of course not all of them.
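
To make that concrete, the pattern is roughly like this (a simplified
sketch, not the actual mm/migrate.c code; try_one_folio() is a made-up
stand-in, but NR_MAX_MIGRATE_PAGES_RETRY is the retry bound
migrate_pages() really uses):

	/*
	 * Simplified sketch: a transient extra reference (e.g. a folio
	 * still sitting in a per-CPU batch) gets several chances to go
	 * away before migration gives up for good.
	 */
	#define MAX_PASSES	10	/* NR_MAX_MIGRATE_PAGES_RETRY in mm/migrate.c */

	static int migrate_with_retries(struct folio *folio)
	{
		int pass, ret = -EAGAIN;

		for (pass = 0; pass < MAX_PASSES && ret == -EAGAIN; pass++) {
			ret = try_one_folio(folio);	/* made-up helper */
			if (ret == -EAGAIN)
				cond_resched();	/* let the race window close */
		}
		return ret;
	}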

Now, I recall John was working on that at some point (there was an RFC
patch, unless I am daydreaming), and there were discussions at LSF/MM
around improving the handling when a flood of short-term GUP pins is
mixed with a single long-term GUP that wants to migrate these
(short-term pinned) pages.

Essentially, we would have to temporarily prevent new short-term GUP
pins in order for the long-term GUP pin to succeed in migrating the
folio.
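
Something along these lines, as a purely hypothetical sketch (none of
these helpers exist today, all of the names are made up):

	/*
	 * Hypothetical only: block new short-term pins, wait for the
	 * existing ones to go away, then migrate on behalf of the
	 * long-term pinner.
	 */
	static int migrate_for_longterm_pin(struct folio *folio)
	{
		int ret;

		folio_block_short_term_pins(folio);		/* made up */

		ret = wait_for_short_term_pins_to_drain(folio);	/* made up */
		if (!ret)
			ret = migrate_one_folio(folio);		/* made up */

		folio_allow_short_term_pins(folio);		/* made up */
		return ret;
	}

The hard part would of course be making GUP honour such a marker
cheaply; the sketch is only meant to illustrate the ordering.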
--
Cheers
David / dhildenb