From: Val Packett <val@invisiblethingslab.com>
To: Teddy Astie <teddy.astie@vates.tech>,
Demi Marie Obenour <demiobenour@gmail.com>,
Xen developer discussion <xen-devel@lists.xenproject.org>,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
Ariadne Conill <ariadne@ariadne.space>
Subject: Re: Why memory lending is needed for GPU acceleration
Date: Tue, 31 Mar 2026 08:23:14 -0300 [thread overview]
Message-ID: <36d831f6-f21a-4c0d-b442-e526d8c946b9@invisiblethingslab.com> (raw)
In-Reply-To: <1de15ce0-9f7e-4253-80a7-ecd94caa4325@vates.tech>
On 3/31/26 6:42 AM, Teddy Astie wrote:
> On 3/30/26 10:13 PM, Val Packett wrote:
>> [..]
>>
>> we have no need to replicate what KVM does. That's far from the only
>> thing that can be done with a dmabuf.
>>
>> The import-export machinery on the other hand actually does pin the
>> buffers on the driver level, importers are not obligated to support
>> movable buffers (move_notify in dma_buf_attach_ops is entirely optional).
>>
> dma-buf is by design non-movable while actively used (otherwise, it
> would break DMA). It's just a foreign buffer, and from the device's
> standpoint, just plain RAM that needs to be mapped.
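Right, and the kernel encodes this directly: an attachment only counts as dynamic (movable) if the importer supplies importer_ops, which is where move_notify lives; otherwise the exporter has to pin. Here's a compilable userspace sketch of just that decision. The `_sketch` names are hypothetical mirrors of the kernel types, not the real `<linux/dma-buf.h>` definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace mirrors of the kernel types; the real ones
 * live in the kernel's dma-buf headers and look a bit different. */
struct dma_buf_attach_ops_sketch {
	bool allow_peer2peer;
	/* Optional: only importers that can follow the exporter moving
	 * the buffer implement this. */
	void (*move_notify)(void *attach);
};

struct attachment_sketch {
	const struct dma_buf_attach_ops_sketch *importer_ops;
};

/* Mirrors dma_buf_attachment_is_dynamic(): an attachment without
 * importer_ops cannot be notified of moves. */
static bool attachment_is_dynamic(const struct attachment_sketch *a)
{
	return a->importer_ops != NULL;
}

/* A static attachment forces the exporter to pin the backing pages
 * for the attachment's whole lifetime. */
static bool exporter_must_pin(const struct attachment_sketch *a)
{
	return !attachment_is_dynamic(a);
}
```

So "movable if actively used" only holds for the dynamic-importer pairings; everyone else gets pinned pages.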
>
>> Interestingly, there is already XEN_GNTDEV_DMABUF…
>>
>> Wait, do we even have any reason at all to suspect
>> that XEN_GNTDEV_DMABUF doesn't already satisfy all of our buffer-sharing
>> requirements?
>>
> XEN_GNTDEV_DMABUF has been designed for GPU use-cases, and more
> precisely for paravirtualizing a display. The only issue I would have
> with it is that grants are not scalable for GPU 3D use cases (with
> hundreds of MB to share).
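To put a rough number on the scalability concern: grants are per 4 KiB page, so the ref count grows linearly with buffer size. Quick arithmetic sketch (XEN_PAGE_SIZE here is just the classic grant granularity written out, not a real header constant):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096u /* one grant reference covers one page */

/* Number of grant refs needed to share a buffer of `bytes` bytes,
 * rounding up to whole pages. */
static uint64_t grant_refs_needed(uint64_t bytes)
{
	return (bytes + XEN_PAGE_SIZE - 1) / XEN_PAGE_SIZE;
}
```

A 256 MiB working set already means 65536 refs that have to be granted, communicated, and mapped on the other side, which is where the scalability worry comes from.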
At least for the Qubes side, we aren't aiming to run Crysis on a
paravirtualized GPU just yet anyway :) First we just want desktop apps
to run well.
Keep in mind that with virtgpu paravirtualization, actual buffer sharing
between domains only happens for CPU access, which is mostly used for:
- initial resource uploads;
- the occasional readback (which is inherently slow and all graphics
devs try not to *ever* do);
- special cases like screen capture.
Most CPU mappings of GPU-driver-managed buffers live for the duration of
a single memcpy. Mapping sizes can indeed get large for games, but for
desktop applications they're rather small.
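Concretely, that one-memcpy lifetime is the usual bracket below. DMA_BUF_IOCTL_SYNC and struct dma_buf_sync are the real Linux uapi from <linux/dma-buf.h>; the sync calls are treated as best-effort here so the sketch also works on any mappable fd:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <linux/dma-buf.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map, bracket with CPU-access sync, copy once, unmap. On a real
 * dmabuf fd the DMA_BUF_IOCTL_SYNC calls do cache maintenance; on
 * any other mappable fd they just fail with ENOTTY, which we ignore
 * so the sketch stays self-contained. */
static int upload_once(int buf_fd, const void *src, size_t len)
{
	void *dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
	                 buf_fd, 0);
	if (dst == MAP_FAILED)
		return -1;

	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};
	(void)ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	memcpy(dst, src, len); /* the entire useful lifetime of the mapping */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	(void)ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	return munmap(dst, len);
}
```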
On the rendering hot path the guest virtgpu driver just submits jobs
that refer to abstract handles managed by virglrenderer on the host, and
buffer sharing is *not* happening.
> But we can still keep the concept of a structured guest-owned memory
> that is shared with Dom0 (but for larger quantities), I have some ideas
> regarding improving that area in Xen.
>
> The only issue with changing the memory sharing model is that you would
> need to adjust the virtio-gpu aspect, but the rest can stay the same.
>
> The biggest concern regarding driver compatibility is more about:
> - can dma-buf be used as general buffers: probably yes (even with
> OpenGL/Vulkan); an exception may be proprietary Nvidia drivers, which
> lack the feature; very old hardware may struggle more with it
Current Nvidia blob drivers do not lack the feature, btw.
> - can guest UMD work without access to VRAM: yes (apparently); AMDGPU
> has a special case where VRAM is not visible (e.g. too small a PCI BAR),
> there is vram size vs "vram visible size" (which could be 0); you could
> fall back from guest-visible VRAM to RAM mapped to the device
UMDs work at a higher level: they operate on buffers which are managed
by the KMD.
In any paravirtualization situation (whether "native
contexts"/vDRM which runs the full HW-specific UMD in the guest, or
API-forwarding solutions like Venus) the only guest KMD is virtio-gpu!
The guest kernel isn't really aware of what VRAM even is.
https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/amd/common/virtio/amdgpu_virtio_bo.c
^ this 300-ish-line file is everything amdgpu ever does with buffer
objects on the virtio backend.
All it can do is manage host handles, import guest dmabufs into virtgpu
to get handles for them, export handles to get guest dmabufs, and map
handles for guest CPU access via the VIRTGPU_MAP ioctl. There are no
special details to any of this; it's all very straightforward.
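For reference, the entire userspace side of VIRTGPU_MAP is: ask the kernel for a fake mmap offset for a GEM handle, then mmap the DRM fd at that offset. A sketch follows; the struct layout and ioctl number are mirrored locally from my reading of the drm/virtgpu_drm.h uapi rather than pulled from libdrm headers, so treat those as assumptions:

```c
#include <assert.h>
#include <linux/ioctl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Local mirror of struct drm_virtgpu_map (uapi drm/virtgpu_drm.h). */
struct virtgpu_map_sketch {
	uint64_t offset; /* out: fake mmap offset for this handle */
	uint32_t handle; /* in: GEM handle from virtio-gpu */
	uint32_t pad;
};

/* Mirrored ioctl encoding: DRM ioctls use magic 'd', driver-specific
 * commands start at DRM_COMMAND_BASE (0x40), VIRTGPU_MAP is cmd 0x01. */
#define DRM_COMMAND_BASE 0x40
#define DRM_VIRTGPU_MAP  0x01
#define DRM_IOCTL_VIRTGPU_MAP \
	_IOWR('d', DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct virtgpu_map_sketch)

/* Map a virtio-gpu GEM handle for CPU access; NULL on failure. */
static void *virtgpu_map_handle(int drm_fd, uint32_t handle, size_t size)
{
	struct virtgpu_map_sketch req = { .handle = handle };
	if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MAP, &req) != 0)
		return NULL;
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
	               drm_fd, (off_t)req.offset);
	return p == MAP_FAILED ? NULL : p;
}
```

Note how little surface there is: one ioctl plus mmap, which is exactly the spot where a grant-backed implementation could slot in.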
It seems to me that implementing VIRTGPU_MAP in terms of dmabuf grants
would be easy!
I'll need to get to that point first though, right now I'm still working
on making basic virtio itself work in our (x86) situation.
> - can it be defined in Vulkan terms (from driver): You can have
> device_local memory without having it host-visible (i.e. memory exists,
> but can't be added in the guest). You would probably just lose some
> zero-copy paths with VRAM. Though you still have RAM shared with the GPU
> (GTT in AMDGPU) if that matters.
What did you mean by "added" in the guest?
We shouldn't ever have to touch this level at all, anyhow…
> Worth noting that if you're on integrated graphics, you don't have VRAM
> and everything is RAM anyway.
Thanks,
~val
Thread overview: 14+ messages
2026-03-24 14:17 Mapping non-pinned memory from one Xen domain into another Demi Marie Obenour
2026-03-24 18:00 ` Teddy Astie
2026-03-26 17:18 ` Demi Marie Obenour
2026-03-26 18:26 ` Teddy Astie
2026-03-27 17:18 ` Demi Marie Obenour
2026-03-29 17:32 ` Why memory lending is needed for GPU acceleration Demi Marie Obenour
2026-03-30 10:15 ` Teddy Astie
2026-03-30 10:25 ` Jan Beulich
2026-03-30 12:24 ` Demi Marie Obenour
2026-03-30 20:07 ` Val Packett
2026-03-31 9:42 ` Teddy Astie
2026-03-31 11:23 ` Val Packett [this message]
2026-04-03 21:24 ` Marek Marczykowski-Górecki
2026-03-30 12:13 ` Mapping non-pinned memory from one Xen domain into another Teddy Astie