From: John Hubbard <jhubbard@nvidia.com>
To: David Hildenbrand <david@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>,
linux-rdma@vger.kernel.org, linux-mm@kvack.org,
Mike Marciniszyn <mike.marciniszyn@intel.com>,
Leon Romanovsky <leon@kernel.org>,
Artemy Kovalyov <artemyko@nvidia.com>,
Michael Guralnik <michaelgur@nvidia.com>,
Pak Markthub <pmarkthub@nvidia.com>
Subject: Re: [RFC] RDMA/umem: pin_user_pages*() can temporarily fail due to migration glitches
Date: Thu, 2 May 2024 11:10:05 -0700 [thread overview]
Message-ID: <a2032a79-744d-4c00-a286-7d6fed3a1bdb@nvidia.com> (raw)
In-Reply-To: <92289167-5655-4c51-8dfc-df7ae53fdb7b@redhat.com>
On 5/1/24 11:56 PM, David Hildenbrand wrote:
> On 02.05.24 03:05, Alistair Popple wrote:
>> Jason Gunthorpe <jgg@nvidia.com> writes:
...
>>>> This doesn't make sense. IFF a blind retry is all that is needed it
>>>> should be done in the core functionality. I fear it's not that easy,
>>>> though.
>>>
>>> +1
>>>
>>> This migration retry weirdness is a GUP issue, it needs to be solved
>>> in the mm not exposed to every pin_user_pages caller.
>>>
>>> If it turns out ZONE_MOVEABLE pages can't actually be reliably moved
>>> then it is pretty broken..
>>
>> I wonder if we should remove the arbitrary retry limit in
>> migrate_pages() entirely for ZONE_MOVEABLE pages and just loop until
>> they migrate? By definition there should only be transient references on
>> these pages so why do we need to limit the number of retries in the
>> first place?
>
> There are some weird things that still needs fixing: vmsplice() is the
> example that doesn't use FOLL_LONGTERM.
>
Hi David!
Do you have any other call sites in mind? It sounds like one way forward
is to fix each call site...
This is an unhappy story right now: from a user's point of view, the
pin_user_pages*() APIs are significantly worse than they were before the
"migrate pages away automatically" upgrade. The APIs now fail
intermittently even for callers who follow the rules, because
pin_user_pages() is, basically, not fully working yet.
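For concreteness, here is a minimal user-space model of the two retry
policies being discussed (this is not kernel code; `try_migrate()` and
the transient-reference counter are hypothetical stand-ins). It contrasts
a bounded retry, in the spirit of migrate_pages()' fixed retry count,
which can report a spurious failure while a transient reference is still
held, with the loop-until-done approach suggested above for
ZONE_MOVABLE, where transient references must drop eventually:

```c
#include <stdbool.h>

/* Hypothetical stand-in for a migration attempt that fails while a
 * transient reference (e.g. an in-flight GUP-fast) is still held.
 * The counter drops by one per attempt to model the reference
 * eventually going away. */
static int transient_refs = 7;

static bool try_migrate(void)
{
	if (transient_refs > 0) {
		transient_refs--;
		return false;	/* -EAGAIN in the real code */
	}
	return true;
}

/* Bounded retry, modeled on migrate_pages()' fixed retry count:
 * can report failure even though the pins were only transient. */
static bool migrate_bounded(int max_retries)
{
	for (int i = 0; i < max_retries; i++)
		if (try_migrate())
			return true;
	return false;
}

/* Unbounded retry: by definition, references on ZONE_MOVABLE pages
 * should be transient, so keep trying until they drop. */
static bool migrate_until_done(void)
{
	while (!try_migrate())
		;	/* real code would cond_resched() here */
	return true;
}
```

With the counter set as above, `migrate_bounded(3)` gives up and reports
failure, while a subsequent `migrate_until_done()` rides out the
remaining transient references and succeeds. The open question in this
thread is where such a loop belongs: in the mm core, or duplicated in
every pin_user_pages*() caller.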
Other ideas, large or small, about how to approach a fix?
thanks,
--
John Hubbard
NVIDIA
Thread overview: 12+ messages
2024-05-01 0:31 John Hubbard
2024-05-01 5:10 ` Christoph Hellwig
2024-05-01 12:10 ` Jason Gunthorpe
2024-05-01 17:32 ` John Hubbard
2024-05-02 1:05 ` Alistair Popple
2024-05-02 6:49 ` John Hubbard
2024-05-02 6:56 ` David Hildenbrand
2024-05-02 18:10 ` John Hubbard [this message]
2024-05-02 18:34 ` Jason Gunthorpe
2024-05-02 18:44 ` Matthew Wilcox
2024-05-10 7:54 ` David Hildenbrand
2024-05-11 0:32 ` Kasireddy, Vivek