From: David Hildenbrand <david@redhat.com>
To: "linux-mm@kvack.org" <linux-mm@kvack.org>,
John Hubbard <jhubbard@nvidia.com>,
nouveau@lists.freedesktop.org, Jason Gunthorpe <jgg@nvidia.com>,
Alistair Popple <apopple@nvidia.com>,
DRI Development <dri-devel@lists.freedesktop.org>,
Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
Danilo Krummrich <dakr@kernel.org>
Subject: Re: [Question] Are "device exclusive non-swap entries" / "SVM atomics in Nouveau" still getting used in practice?
Date: Fri, 24 Jan 2025 11:44:28 +0100
Message-ID: <8c6f3838-f194-4a42-845d-10011192a234@redhat.com>
In-Reply-To: <Z5JbYC2-slPU0l3n@phenom.ffwll.local>
On 23.01.25 16:08, Simona Vetter wrote:
> On Thu, Jan 23, 2025 at 11:20:37AM +0100, David Hildenbrand wrote:
>> Hi,
>>
>> I keep finding issues in our implementation of "device exclusive non-swap
>> entries", and the way it messes with mapcounts is disgusting.
>>
>> As a reminder, what we do here is to replace a PTE pointing to an anonymous
>> page by a "device exclusive non-swap entry".
>>
>> As long as the original PTE is in place, only the CPU can access it; as soon
>> as the "device exclusive non-swap entry" is in place, only the device can
>> access it. Conversion back and forth is triggered by CPU / device faults.
>>
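
In code, that conversion conceptually looks like the sketch below (heavily
simplified, not the actual mm/rmap.c code: it ignores the page table walk,
locking, TLB flushing and the mapcount handling mentioned above, and the
helper name is made up; the swapops helpers are the ones the kernel
provides):

#include <linux/mm.h>
#include <linux/pgtable.h>
#include <linux/swapops.h>

/*
 * Conceptual sketch only: replace a present PTE mapping an anonymous
 * page by a device-exclusive non-swap entry encoding the same PFN.
 */
static void sketch_convert_pte_to_device_exclusive(struct vm_area_struct *vma,
						   unsigned long addr,
						   pte_t *ptep, struct page *page)
{
	pte_t pteval = ptep_get(ptep);
	swp_entry_t entry;

	/* Preserve whether the CPU mapping was writable. */
	if (pte_write(pteval))
		entry = make_writable_device_exclusive_entry(page_to_pfn(page));
	else
		entry = make_readable_device_exclusive_entry(page_to_pfn(page));

	/*
	 * From now on, CPU access faults and triggers conversion back to an
	 * ordinary PTE; until then, only the device may access the page.
	 */
	set_pte_at(vma->vm_mm, addr, ptep, swp_entry_to_pte(entry));
}
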
>> I have fixes/reworks/simplifications for most things, but as there is only a
>> single "real" in-tree user of make_device_exclusive():
>>
>> drivers/gpu/drm/nouveau/nouveau_svm.c
>>
>> to "support SVM atomics in Nouveau [1]"
>>
>> naturally I am wondering: is this still a thing on actual hardware, or is it
>> already stale on recent hardware and not really required anymore?
>>
>>
>> [1] https://lore.kernel.org/linux-kernel//6621654.gmDyfcmpjF@nvdebian/T/
>
Thanks for your answer!
Nvidia folks told me on a different channel that it's still getting used.
> As long as you don't have a coherent interconnect it's needed. On Intel
> discrete, device atomics require device memory, so they need full hmm
> migration (and hence won't use this function even once we land the Intel
> GPU SVM code upstream).
Makes sense.
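
For reference, such "full hmm migration" would go through the migrate_vma
API, roughly along the lines of the sketch below. This is an illustration
only, not taken from any driver: dpage_alloc() is a made-up placeholder for
the driver's device-page allocator, and error handling as well as the actual
data copy are omitted. Note that even there, migrate_vma_setup() may simply
not mark a page with MIGRATE_PFN_MIGRATE, which ties into the point below
about there being no guarantee.

#include <linux/migrate.h>
#include <linux/mm.h>

/* Made-up placeholder for however the driver allocates a device page. */
static struct page *dpage_alloc(void);

static int sketch_migrate_range_to_device(struct vm_area_struct *vma,
					  unsigned long start, unsigned long end,
					  void *pgmap_owner)
{
	unsigned long src[16] = {}, dst[16] = {};	/* assume <= 16 pages */
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.pgmap_owner	= pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
	};
	unsigned long i, npages = (end - start) >> PAGE_SHIFT;
	int ret;

	/* Isolate the pages and replace the PTEs by migration entries. */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		/* Not guaranteed: this page may not be migratable right now. */
		if (!(src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		dst[i] = migrate_pfn(page_to_pfn(dpage_alloc()));
		/* ... the driver would copy the data to the device page here ... */
	}

	migrate_vma_pages(&args);	/* install the new (device) pages */
	migrate_vma_finalize(&args);	/* drop migration entries, unlock/put pages */
	return 0;
}
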
> On integrated, the GPU is tied into the coherency
> fabric, so there it's not needed.
>
> I think the more fundamental issue with both this function here and
> with forced migration to device memory is that there's no guarantee it
> will work out.
Yes, in particular with device-exclusive, it doesn't really work with
THP and is limited to anonymous memory. I have patches to at least
make it work reliably with THP.
Then, we seem to give up too easily if we cannot lock the folio when
wanting to convert to device-exclusive, which also looks rather odd. But
well, maybe it just works well enough in the common case, or there is
some other retry logic that makes it fly.
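
To illustrate what I mean, a caller could hypothetically retry instead of
failing the device fault right away. The sketch below is illustration only:
the helper name and retry count are made up, and only
make_device_exclusive_range() is the real interface.

#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/sched.h>

#define SKETCH_MAX_RETRIES	3	/* arbitrary, for illustration */

static struct page *sketch_make_exclusive_with_retry(struct mm_struct *mm,
						     unsigned long addr,
						     void *owner)
{
	struct page *page;
	int i;

	for (i = 0; i < SKETCH_MAX_RETRIES; i++) {
		page = NULL;
		/* A NULL array entry means the page could not be made exclusive. */
		if (make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
						&page, owner) == 1 && page)
			return page;
		/* Possibly folio lock contention; back off and try again. */
		cond_resched();
	}
	return NULL;
}
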
> At least that's my understanding. And for this idea of GPU device
> atomics without a coherent interconnect to work, we'd need to be able
> to guarantee that we can make any page device-exclusive. So from my side I
> have some pretty big question marks on this entire thing overall.
I don't think other memory (shmem/file/...) is really feasible as soon
as other processes (not the current process) map/read/write the file
pages. We could really only handle the case where we convert a single
PTE and that same PTE gets converted back again.
There are other concerns I have (what if the page is pinned and accessed
outside of the user-space page tables?). Maybe there was no need to
handle these cases so far.
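
(If one wanted to be conservative there, something like the check below
could be used to skip such folios. folio_maybe_dma_pinned() is the real
helper; the surrounding function is made up, and as the name says it can
only tell that a folio is *maybe* pinned.)

#include <linux/mm.h>

/* Made-up helper: refuse to make a folio device-exclusive if it might be pinned. */
static bool sketch_folio_device_exclusive_allowed(struct folio *folio)
{
	/*
	 * A pinned folio may be read/written through the pin, completely
	 * bypassing the user-space page tables we are converting here.
	 */
	return !folio_maybe_dma_pinned(folio);
}
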
So the best I can do is make anonymous memory more reliable with
device-exclusive and fix up some of the problematic parts that I see
(e.g., broken page reclaim, page migration, ...).
But before starting to clean up and improve the existing handling of
anonymous memory, I was wondering whether this whole thing is getting
used at all.
--
Cheers,
David / dhildenb