linux-mm.kvack.org archive mirror
From: Val Packett <val@invisiblethingslab.com>
To: Demi Marie Obenour <demiobenour@gmail.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Jan Beulich <jbeulich@suse.com>,
	Ariadne Conill <ariadne@ariadne.space>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Teddy Astie <teddy.astie@vates.tech>
Subject: Re: Why memory lending is needed for GPU acceleration
Date: Mon, 30 Mar 2026 17:07:42 -0300	[thread overview]
Message-ID: <0bbf0349-1006-485f-a2db-6c8b795b4242@invisiblethingslab.com> (raw)
In-Reply-To: <c38387fe-beef-4f50-b928-74f96b881b7a@gmail.com>

Hi,

On 3/29/26 2:32 PM, Demi Marie Obenour wrote:
> On 3/24/26 10:17, Demi Marie Obenour wrote:
>> Here is a proposed design document for supporting mapping GPU VRAM
>> and/or file-backed memory into other domains.  It's not in the form of
>> a patch because the leading + characters would just make it harder to
>> read for no particular gain, and because this is still RFC right now.
>> Once it is ready to merge, I'll send a proper patch.  Nevertheless,
>> you can consider this to be
>>
>> Signed-off-by: Demi Marie Obenour <demiobenour@gmail.com>
>>
>> This approach is very different from the "frontend-allocates"
>> approach used elsewhere in Xen.  It is very much Linux-centric,
>> rather than Xen-centric.  In fact, MMU notifiers were invented for
>> KVM, and this approach is exactly the same as the one KVM implements.
>> However, to the best of my understanding, the design described here is
>> the only viable one.  Linux MM and GPU drivers require it, and changes
>> to either to relax this requirement will not be accepted upstream.
> Teddy Astie (CCd) proposed a couple of alternatives on Matrix:
>
> 1. Create dma-bufs for guest pages and import them into the host.
>
>     This is a win not only for Xen, but also for KVM.  Right now, shared
>     (CPU) memory buffers must be copied from the guest to the host,
>     which is pointless.  So fixing that is a good thing!  That said,
>     I'm still concerned about triggering GPU driver code-paths that
>     are not tested on bare metal.

To expand on this: the reason cross-domain Wayland proxies have been 
doing this SHM copy dance is a deficiency in Linux UAPI. Basically, 
applications allocate shared memory using local mechanisms like memfd 
(and good old unlink-of-a-regular-file, ugh), which aren't compatible with 
cross-VM sharing. However, udmabuf should basically solve this, at least 
for memfds. (I haven't investigated what happens with "unlinked 
regular files", but I don't expect anything good there, welp.)

But I have landed a patch in Linux that removes a silly restriction 
tying dmabuf import in virtgpu to KMS-only mode:

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=df4dc947c46bb9f80038f52c6e38cb2d40c10e50

I have experimented with it and got a KVM-based VMM to successfully 
access and print guest memfd contents passed to the host via this 
mechanism. (Time to actually implement it properly in the full 
system...)

> 2. Use PASID and 2-stage translation so that the GPU can operate in
>     guest physical memory.
>     
>     This is also a win.  AMD XDNA absolutely requires PASID support,
>     and apparently AMD GPUs can also use PASID.  So being able to use
>     PASID is certainly helpful.
>
> However, I don't think either approach is sufficient for two reasons.
>
> First, discrete GPUs have dedicated VRAM, which Xen knows nothing about.
> Only dom0's GPU drivers can manage VRAM, and they will insist on being
> able to migrate it between the CPU and the GPU.  Furthermore, VRAM
> can only be allocated using GPU driver ioctls, which will allocate
> it from dom0-owned memory.
>
> Second, certain Wayland protocols, such as screencapture, require programs
> to be able to import dmabufs.  Both of the above solutions would
> require that the pages be pinned.  I don't think this is an option,
> as IIUC pin_user_pages() fails on mappings of these dmabufs.  It's why
> direct I/O to dmabufs doesn't work.
>
> To the best of my knowledge, these problems mean that lending memory
> is the only way to get robust GPU acceleration for both graphics and
> compute workloads under Xen.  Simpler approaches might work for pure
> compute workloads, for iGPUs, or for drivers that have Xen-specific
> changes.  None of them, however, support graphics workloads on dGPUs
> while using the GPU driver the same way bare metal workloads do.
> […]
To recap, how virtio-gpu Host3d memory currently works with KVM:

- the VMM/virtgpu receives a dmabuf over a socket 
(Wayland/D-Bus/whatever) and registers it internally with some resource 
ID that's passed to the guest;
- when the guest imports that resource, it calls 
VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB to get a PRIME buffer that can be 
turned into a dmabuf fd;
- the VMM's handler for VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB (referencing 
libkrun here) literally just calls mmap() on the host dmabuf, using the 
MAP_FIXED flag to place it correctly inside the VMM process's 
guest-exposed VA region (configured via KVM_SET_USER_MEMORY_REGION);
- so any resource imported by the guest, even before guest userspace 
calls mmap(), stays mapped (as VM_PFNMAP|VM_IO) until the guest 
releases it.

So the generic kernel MM is out of the way: these mappings can't be 
paged out to swap, etc. But accessing them may fault, as the comment 
for drm_gem_mmap_obj says:

  * Depending on their requirements, GEM objects can either
  * provide a fault handler in their vm_ops (in which case any accesses to
  * the object will be trapped, to perform migration, GTT binding, surface
  * register allocation, or performance monitoring), or mmap the buffer
  * memory synchronously after calling drm_gem_mmap_obj

It all "just works" in KVM because KVM's resolution of the guest's 
memory accesses tries to be literally equivalent to what's mapped into 
the userspace VMM process: hva_to_pfn_remapped explicitly calls 
fixup_user_fault and eventually gets to the GPU driver's fault handler.

Now, for Xen this would be… painful.

But we have no need to replicate what KVM does. That's far from the 
only thing that can be done with a dmabuf.

The import-export machinery, on the other hand, actually does pin the 
buffers at the driver level: importers are not obligated to support 
movable buffers (move_notify in dma_buf_attach_ops is entirely optional).
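For reference, the kernel-side split looks roughly like this (a non-runnable sketch against include/linux/dma-buf.h; my_attach_ops and my_move_notify are made-up names):

```c
/* A dynamic importer opts in to movable buffers by supplying move_notify: */
static void my_move_notify(struct dma_buf_attachment *attach)
{
	/* exporter is about to move the backing storage: drop the cached
	 * sg_table and re-map on next use */
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,
	.move_notify     = my_move_notify,
};

attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, priv);

/* A plain dma_buf_attach(dmabuf, dev) supplies no move_notify, so the
 * core keeps the buffer pinned for the attachment's lifetime instead. */
```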

Interestingly, there is already XEN_GNTDEV_DMABUF…

Wait, do we even have any reason at all to suspect 
that XEN_GNTDEV_DMABUF doesn't already satisfy all of our buffer-sharing 
requirements?


Thanks,
~val

P.S. while I have everyone's attention, can I get some eyes on:
https://lore.kernel.org/all/20251126062124.117425-1-val@invisiblethingslab.com/ 
?


