Re: [RFC 05/10] vhost: Add vhost_dev_from_virtio


From: Jason Wang
Subject: Re: [RFC 05/10] vhost: Add vhost_dev_from_virtio
Date: Fri, 5 Feb 2021 11:51:48 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2021/2/4 5:25 PM, Eugenio Perez Martin wrote:
On Thu, Feb 4, 2021 at 4:14 AM Jason Wang <jasowang@redhat.com> wrote:

On 2021/2/2 6:17 PM, Eugenio Perez Martin wrote:
On Tue, Feb 2, 2021 at 4:31 AM Jason Wang <jasowang@redhat.com> wrote:
On 2021/2/1 4:28 PM, Eugenio Perez Martin wrote:
On Mon, Feb 1, 2021 at 7:13 AM Jason Wang <jasowang@redhat.com> wrote:
On 2021/1/30 4:54 AM, Eugenio Pérez wrote:
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
     include/hw/virtio/vhost.h |  1 +
     hw/virtio/vhost.c         | 17 +++++++++++++++++
     2 files changed, 18 insertions(+)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 4a8bc75415..fca076e3f0 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -123,6 +123,7 @@ uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
     void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
                             uint64_t features);
     bool vhost_has_free_slot(void);
+struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev);

     int vhost_net_set_backend(struct vhost_dev *hdev,
                               struct vhost_vring_file *file);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 28c7d78172..8683d507f5 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -61,6 +61,23 @@ bool vhost_has_free_slot(void)
         return slots_limit > used_memslots;
     }

+/*
+ * Get the vhost device associated to a VirtIO device.
+ */
+struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev)
+{
+    struct vhost_dev *hdev;
+
+    QLIST_FOREACH(hdev, &vhost_devices, entry) {
+        if (hdev->vdev == vdev) {
+            return hdev;
+        }
+    }
+
+    assert(hdev);
+    return NULL;
+}
I'm not sure this can work in the case of multiqueue. E.g. vhost-net
multiqueue is an N:1 mapping between vhost devices and virtio devices.

Thanks

Right. We could add a "vdev vq index" parameter to the function in
this case, but I guess the most reliable way to do this is to add a
vhost_opaque value to VirtQueue, as Stefan proposed in a previous RFC.
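For illustration, a minimal sketch of the index-parameter variant (the
parameter name and range check are my assumptions, not part of the posted
patch; note also that in the posted version the assert(hdev) after
QLIST_FOREACH can never pass, since the iterator is NULL once the loop
falls through):

struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev,
                                        int vq_idx)
{
    struct vhost_dev *hdev;

    QLIST_FOREACH(hdev, &vhost_devices, entry) {
        /*
         * With vhost-net multiqueue, each vhost_dev covers the range
         * [vq_index, vq_index + nvqs) of the device's virtqueues, so
         * matching on vdev alone is ambiguous; the index disambiguates.
         */
        if (hdev->vdev == vdev &&
            vq_idx >= hdev->vq_index &&
            vq_idx < hdev->vq_index + (int)hdev->nvqs) {
            return hdev;
        }
    }

    return NULL; /* let the caller decide whether a miss is fatal */
}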
So the question still stands: it looks like it's easier to hide the shadow
virtqueue stuff at the vhost layer instead of exposing it to the virtio layer:

1) the vhost protocol is a stable ABI
2) no need to deal with virtio stuff, which is more complex than vhost

Or are there any advantages if we do it at the virtio layer?

As far as I can tell, we will need the virtio layer the moment we
start copying/translating buffers.

In this series, the virtio dependency can be reduced if qemu does not
check the used ring _F_NO_NOTIFY flag before writing to the irqfd. That
would enable packed queues and IOMMU support immediately, and I think the
cost should not be too high. In the previous RFC this check was deleted
later anyway, so I think it was a bad idea to include it from the start.
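For context, the check in question looks roughly like this (a sketch only:
the function name is made up, and endianness handling and memory barriers
are elided; dropping the check costs at most a spurious notification):

#include "qemu/osdep.h"
#include "qemu/event_notifier.h"
#include "standard-headers/linux/virtio_ring.h"

/*
 * Only signal the notifier if the peer has not suppressed
 * notifications via the used ring's flags field.  Reading that field
 * is what creates the dependency on parsing the vring.
 */
static void svq_notify(EventNotifier *notifier, const struct vring *vring)
{
    if (!(vring->used->flags & VRING_USED_F_NO_NOTIFY)) {
        event_notifier_set(notifier);
    }
}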

I am not sure I understand here. For vhost, we can still do anything we
want, e.g. accessing guest memory etc. Is there any blocker that prevents
us from copying/translating buffers? (Note that qemu will propagate memory
mappings to vhost.)

There is nothing that forbids us from accessing it directly, but if we
don't reuse the virtio layer functionality we would have to duplicate
every access function. "Need" was maybe too strong a word :).

In other words: for the shadow vq vring exposed to the device, qemu
acts as a driver, and that functionality needs to be added to qemu. But
for accessing the guest's vring, not reusing virtio.c would be a bad
idea in my opinion.


The problem is, virtio.c is not a library, and it has so many dependencies on other qemu modules that it is basically impossible to reuse at the vhost level.

We can solve this by:

1) splitting the core functions out as a library, or
2) switching to contrib/libvhost-user, but that needs the UNIX socket transport decoupled

None of the above looks trivial, and they only cover the device side. For the shadow virtqueue we need driver code as well, where no existing code can be reused.

As we discussed, we probably need IOVAs allocated when forwarding descriptors between the two virtqueues. So my feeling is we can start with our own code, and then consider whether we can reuse some of the existing virtio.c or libvhost-user.
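As a strawman for the IOVA side, a trivial bump allocator shows the shape
of the problem (purely illustrative: the real thing would need alignment,
deallocation, and range tracking):

typedef struct IOVAAllocator {
    uint64_t next;   /* next unused IOVA */
    uint64_t limit;  /* end of the usable IOVA window */
} IOVAAllocator;

/* Reserve @size bytes of IOVA space for a forwarded descriptor, or
 * return (uint64_t)-1 when the window is exhausted.  No reuse of
 * freed ranges: completed descriptors would need proper tracking. */
static uint64_t iova_alloc(IOVAAllocator *a, uint64_t size)
{
    uint64_t iova = a->next;

    if (size > a->limit - a->next) {
        return (uint64_t)-1;
    }
    a->next += size;
    return iova;
}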

Thanks



I need to take this into account in qmp_x_vhost_enable_shadow_vq too.

+
     static void vhost_dev_sync_region(struct vhost_dev *dev,
                                       MemoryRegionSection *section,
                                       uint64_t mfirst, uint64_t mlast,




