Re: [RFC 02/10] vhost: add 3 commands for vhost-vdpa


From: Jason Wang
Subject: Re: [RFC 02/10] vhost: add 3 commands for vhost-vdpa
Date: Fri, 7 Jan 2022 10:53:22 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.4.1


On 2022/1/6 10:09 PM, Michael S. Tsirkin wrote:
On Thu, Jan 06, 2022 at 10:34:20AM +0800, Jason Wang wrote:
On Wed, Jan 5, 2022 at 8:26 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Wed, Jan 05, 2022 at 05:09:07PM +0800, Jason Wang wrote:
On Wed, Jan 5, 2022 at 4:37 PM Longpeng (Mike, Cloud Infrastructure
Service Product Dept.) <longpeng2@huawei.com> wrote:


-----Original Message-----
From: Jason Wang [mailto:jasowang@redhat.com]
Sent: Wednesday, January 5, 2022 3:54 PM
To: Michael S. Tsirkin <mst@redhat.com>
Cc: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
<longpeng2@huawei.com>; Stefan Hajnoczi <stefanha@redhat.com>; Stefano
Garzarella <sgarzare@redhat.com>; Cornelia Huck <cohuck@redhat.com>; pbonzini
<pbonzini@redhat.com>; Gonglei (Arei) <arei.gonglei@huawei.com>; Yechuan
<yechuan@huawei.com>; Huangzhichao <huangzhichao@huawei.com>; qemu-devel
<qemu-devel@nongnu.org>
Subject: Re: [RFC 02/10] vhost: add 3 commands for vhost-vdpa

On Wed, Jan 5, 2022 at 3:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Wed, Jan 05, 2022 at 12:35:53PM +0800, Jason Wang wrote:
On Wed, Jan 5, 2022 at 8:59 AM Longpeng(Mike) <longpeng2@huawei.com> wrote:
From: Longpeng <longpeng2@huawei.com>

To support the generic vdpa device, we need to add the following ioctls:
- GET_VECTORS_NUM: the number of vectors supported
Does this mean MSI vectors? If yes, it looks like a layer violation:
vhost is transport independent.
Well, the *guest* needs to know how many vectors the device supports.
I don't think there's a way around that. Do you?
We have VHOST_SET_VRING/CONFIG_CALL which is per vq. I think we can
simply assume #vqs + 1?

Otherwise guests will at best be suboptimal.

  And it reveals device implementation
details which block (cross vendor) migration.

Thanks
Not necessarily, userspace can hide this from guest if it
wants to, just validate.
If we can hide it at vhost/uAPI level, it would be even better?

Not only MSI vectors, but also queue-size, #vqs, etc.
MSI is PCI-specific; we have non-PCI vDPA parents, e.g. VDUSE/simulator/mlx5.

And it's something that is not guaranteed to stay unchanged. E.g. some
drivers may choose to allocate MSI during set_status(), which can fail
for various reasons.

Maybe the vhost level could expose the hardware's real capabilities
and let the userspace (QEMU) do the hiding? The userspace knows how
to process them.
The #MSI vectors is much easier to mediate than queue-size and #vqs.

For interrupts, we already have VHOST_SET_X_KICK; we can keep
allocating eventfds based on the #MSI vectors to make it work with any
number of MSI vectors that the virtual device has.
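
A minimal userspace sketch of that existing per-vq eventfd setup, using the
VHOST_SET_VRING_CALL uAPI from linux/vhost.h; the device path and the #vqs
value below are assumptions for illustration only:

#include <fcntl.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    /* example device node, not taken from the patch */
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int nvqs = 3; /* assumed #vqs, e.g. rx + tx + cvq */
    for (int i = 0; i < nvqs; i++) {
        struct vhost_vring_file call = {
            .index = i,
            .fd = eventfd(0, EFD_CLOEXEC), /* one call eventfd per vq */
        };
        if (ioctl(fd, VHOST_SET_VRING_CALL, &call) < 0)
            perror("VHOST_SET_VRING_CALL");
    }
    return 0;
}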
Right but if hardware does not support so many then what?
Just fail?
Or just trigger the callback of vqs that share the vector.

Right but we want userspace to be able to report this to guest accurately
if it wants to. Guest can then configure itself correctly.


Having a query API would make things somewhat cleaner imho.
I may have missed something: even if we know #vectors, we still don't
know the virtqueues associated with a given vector?
This is up to the guest.


Just to clarify the possible issue: this only works if the vDPA parent uses the same irq binding policy as virtio-pci does in the guest.

Consider vDPA has 3 vectors allocated:

host vector 0: tx/rx
host vector 1: cvq
host vector 2: config

So we return 3 for get_vectors, and the virtual device will have 3 vectors in this case.

But a guest driver may do:

guest vector 0: tx (eventfd0)
guest vector 1: rx (eventfd1)
guest vector 2: cvq/config (eventfd2)

The irq handler of host vector 0 will notify both eventfd0 (guest vector 0) and eventfd1 (guest vector 1) in this case.
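
A hypothetical sketch of that fan-out (names are illustrative only, not QEMU
code): when the call eventfd bound to host vector 0 fires, userspace signals
every guest MSI-X vector eventfd whose virtqueues were mapped onto that host
vector.

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* e.g. guest eventfds sharing host vector 0: tx -> eventfd0, rx -> eventfd1 */
static void relay_host_vector0(int host_call_fd, int guest_fds[], int nguest)
{
    uint64_t cnt;

    /* consume the host-side notification */
    if (read(host_call_fd, &cnt, sizeof(cnt)) != sizeof(cnt))
        return;

    /* fan out to every guest vector mapped onto this host vector */
    for (int i = 0; i < nguest; i++)
        eventfd_write(guest_fds[i], 1);
}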

And using such "vector passthrough" may block migration between vDPA devices where the #vectors is the only difference.

Thanks



For queue-size, it's OK to have a new uAPI but it's not a must; QEMU
can simply fail if SET_VRING_NUM fails.

For #vqs, it's OK to have a new uAPI since the emulated virtio-pci
device requires knowledge of the #vqs in the config space. (Still not a
must; we can enumerate #vqs per device type.)

For the config size, it's OK but not a must; technically we can simply
relay what the guest writes to vhost-vdpa. It's just that current QEMU
requires it during virtio device initialization.
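
A sketch of the "just fail" fallback for queue size, using the existing
VHOST_SET_VRING_NUM uAPI; the function name and error handling are
illustrative assumptions, not QEMU code:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int set_queue_size(int vdpa_fd, unsigned int vq_index, unsigned int num)
{
    struct vhost_vring_state s = { .index = vq_index, .num = num };

    if (ioctl(vdpa_fd, VHOST_SET_VRING_NUM, &s) < 0) {
        /* the parent cannot provide this ring size: fail device setup */
        perror("VHOST_SET_VRING_NUM");
        return -1;
    }
    return 0;
}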

Thanks

I agree, but these OK things make for a cleaner API, I think.
Right.

Thanks

Thanks


- GET_CONFIG_SIZE: the size of the virtio config space
- GET_VQS_NUM: the number of virtqueues exported

Signed-off-by: Longpeng <longpeng2@huawei.com>
---
  linux-headers/linux/vhost.h | 10 ++++++++++
  1 file changed, 10 insertions(+)

diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index c998860d7b..c5edd75d15 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -150,4 +150,14 @@
  /* Get the valid iova range */
  #define VHOST_VDPA_GET_IOVA_RANGE      _IOR(VHOST_VIRTIO, 0x78, \
                                              struct vhost_vdpa_iova_range)
+
+/* Get the number of vectors */
+#define VHOST_VDPA_GET_VECTORS_NUM     _IOR(VHOST_VIRTIO, 0x79, int)
+
+/* Get the virtio config size */
+#define VHOST_VDPA_GET_CONFIG_SIZE     _IOR(VHOST_VIRTIO, 0x80, int)
+
+/* Get the number of virtqueues */
+#define VHOST_VDPA_GET_VQS_NUM         _IOR(VHOST_VIRTIO, 0x81, int)
+
  #endif
--
2.23.0
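
A sketch of how a userspace VMM could query the three ioctls proposed above
once this header change is applied; the device path is an example, and the
names/numbers are what this RFC proposes, not merged uAPI:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR); /* example device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int vectors = 0, config_size = 0, nvqs = 0;
    if (ioctl(fd, VHOST_VDPA_GET_VECTORS_NUM, &vectors) < 0)
        perror("VHOST_VDPA_GET_VECTORS_NUM");
    if (ioctl(fd, VHOST_VDPA_GET_CONFIG_SIZE, &config_size) < 0)
        perror("VHOST_VDPA_GET_CONFIG_SIZE");
    if (ioctl(fd, VHOST_VDPA_GET_VQS_NUM, &nvqs) < 0)
        perror("VHOST_VDPA_GET_VQS_NUM");

    printf("vectors=%d config_size=%d vqs=%d\n", vectors, config_size, nvqs);
    return 0;
}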




