From: Xie Yongji <xieyongji@bytedance.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: jasowang@redhat.com, akpm@linux-foundation.org,
linux-mm@kvack.org, virtualization@lists.linux-foundation.org
Subject: Re: [External] Re: [RFC 0/4] Introduce VDUSE - vDPA Device in Userspace
Date: Tue, 20 Oct 2020 10:18:18 +0800 [thread overview]
Message-ID: <CACycT3vzpm_+v-DbqeVRMg8BRny_GoL2JxpbzYC3JYTMKGn_vg@mail.gmail.com> (raw)
In-Reply-To: <20201019130815-mutt-send-email-mst@kernel.org>
On Tue, Oct 20, 2020 at 1:16 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> On Mon, Oct 19, 2020 at 10:56:19PM +0800, Xie Yongji wrote:
> > This series introduces a framework that can be used to implement
> > vDPA devices in a userspace program. The work consists of two
> > parts: control path emulation and data path offloading.
> >
> > In the control path, the VDUSE driver uses a message mechanism
> > to forward actions (get/set features, get/set status, get/set
> > config space and set virtqueue state) from the virtio-vdpa
> > driver to userspace. Userspace can use read()/write() to
> > receive and reply to those control messages.
> >
> > In the data path, the VDUSE driver implements an MMU-based
> > on-chip IOMMU driver which supports both direct mapping and
> > indirect mapping with a bounce buffer. Userspace can then access
> > that IOVA space via mmap(). In addition, the eventfd mechanism is
> > used to trigger interrupts and forward virtqueue kicks.
> >
> > The details and our use case are shown below:
> >
> >  ------------------------      ---------------------------------------------------------
> > |          APP           |    |                          QEMU                           |
> > |       ---------        |    |  --------------------  -------------------+<-->+------ |
> > |      |dev/vdx |        |    | | device emulation |  | virtio dataplane |    | BDS |  |
> >  ------------+-----------      ----------+----------------------+------------------+---
> >              |                           |                      |                  |
> >              |                           | emulating            | offloading       |
> >  ------------+---------------------------+----------------------+------------------+----
> > |     -------+--------          --------+--------        ------+-------        ---+---- |
> > |    |  block device  |        |  vduse driver  |       | vdpa device |      | TCP/IP | |
> > |     -------+--------          --------+--------        ------+-------        ---+---- |
> > |            |                          |                      |                  |     |
> > |            |                          |                      |                  |     |
> > |  ----------+----------     ----------+-----------           |                  |     |
> > | |  virtio-blk driver  |   | virtio-vdpa driver  |           |                  |     |
> > |  ----------+----------     ----------+-----------           |                  |     |
> > |            |                          |                     |                  |     |
> > |            |                           ---------------------                   |     |
> > |             -----------------------------------------------                 ---+---  |
> >  ------------------------------------------------------------------------------| NIC |---
> >                                                                                 ---+---
> >                                                                                    |
> >                                                                           ---------+---------
> >                                                                          | Remote Storages  |
> >                                                                           -------------------
> > We use it to implement a block device connected to our
> > distributed storage, which can be used in containers and
> > on bare metal.
>
> What is not exactly clear is what is the APP above doing.
>
> Taking virtio blk requests and sending them over the network
> in some proprietary way?
>
>
No, the APP doesn't need to know any virtio-blk details. Replacing "APP"
with "Container" here might be clearer. Our purpose is to make virtio
devices available to containers and bare metal, so that we can reuse the
VM technology stack to provide services, e.g. SPDK's remote bdev, ovs-dpdk
and so on.
> > Compared with the qemu-nbd solution, this solution has
> > higher performance, and we can have a unified technology stack
> > for remote storage in VMs and containers.
> >
> > To test it with a host disk (e.g. /dev/sdx):
> >
> > $ qemu-storage-daemon \
> >     --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
> >     --monitor chardev=charmonitor \
> >     --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/sdx,node-name=disk0 \
> >     --export vduse-blk,id=test,node-name=disk0,writable=on,vduse-id=1,num-queues=16,queue-size=128
> >
> > The qemu-storage-daemon can be found at https://github.com/bytedance/qemu/tree/vduse
> >
> > Future work:
> > - Improve performance (e.g. zero-copy implementation in the datapath)
> > - Config interrupt support
> > - Userspace library (find a way to reuse device emulation code in qemu/rust-vmm)
>
>
> How does this driver compare with vhost-user-blk (which doesn't need
> kernel support)?
>
>
We want to implement a block device rather than a virtio-blk dataplane.
And with this driver's help, a vhost-user-blk process could provide
storage services to all APPs on the host.
Thanks,
Yongji
Thread overview: 28+ messages
2020-10-19 14:56 Xie Yongji
2020-10-19 14:56 ` [RFC 1/4] mm: export zap_page_range() for driver use Xie Yongji
2020-10-19 15:14 ` Matthew Wilcox
2020-10-19 15:36 ` [External] " Xie Yongji
2020-10-19 14:56 ` [RFC 2/4] vduse: Introduce VDUSE - vDPA Device in Userspace Xie Yongji
2020-10-19 15:08 ` Michael S. Tsirkin
2020-10-19 15:24 ` Randy Dunlap
2020-10-19 15:46 ` [External] " Xie Yongji
2020-10-19 15:48 ` Xie Yongji
2020-10-19 14:56 ` [RFC 3/4] vduse: grab the module's references until there is no vduse device Xie Yongji
2020-10-19 15:05 ` Michael S. Tsirkin
2020-10-19 15:44 ` [External] " Xie Yongji
2020-10-19 15:47 ` Michael S. Tsirkin
2020-10-19 15:56 ` Xie Yongji
2020-10-19 16:41 ` Michael S. Tsirkin
2020-10-20 7:42 ` Yongji Xie
2020-10-19 14:56 ` [RFC 4/4] vduse: Add memory shrinker to reclaim bounce pages Xie Yongji
2020-10-19 17:16 ` [RFC 0/4] Introduce VDUSE - vDPA Device in Userspace Michael S. Tsirkin
2020-10-20 2:18 ` Xie Yongji [this message]
2020-10-20 2:20 ` [External] " Jason Wang
2020-10-20 2:28 ` Xie Yongji
2020-10-20 3:20 ` Jason Wang
2020-10-20 7:39 ` [External] " Yongji Xie
2020-10-20 8:01 ` Jason Wang
2020-10-20 8:35 ` Yongji Xie
2020-10-20 9:12 ` Jason Wang
2020-10-23 2:55 ` Yongji Xie
2020-10-23 8:44 ` Jason Wang