From: Si-Wei Liu
Subject: Re: [PATCH 4/5] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends
Date: Thu, 25 Aug 2022 20:49:32 -0700
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.12.0

Hi Jason,

On 8/24/2022 7:53 PM, Jason Wang wrote:
On Thu, Aug 25, 2022 at 8:38 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:


On 8/23/2022 9:27 PM, Jason Wang wrote:
On 2022/8/20 01:13, Eugenio Pérez wrote:
It was returned as error before. Instead of it, simply update the
corresponding field so qemu can send it in the migration data.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---

Looks correct.

Adding Si Wei for double check.
Hmmm, I understand why this change is needed for live migration, but
this would easily leave userspace out of sync with the kernel in other
use cases, such as link down or a userspace fallback due to a vdpa ioctl
error. Yes, these are edge cases.
Considering the 7.2 cycle is starting, maybe it's time to fix the root
cause instead of adding a workaround like this?
The fix for the immediate cause is not hard, though what is missing from my WIP series for a full-blown fix is something similar to Shadow CVQ that covers all general cases rather than just live migration: QEMU will have to apply curr_queue_pairs to the kernel once it switches back from the userspace virtqueues. I think Shadow CVQ won't work if ASID support is missing from the kernel. Do you think it is worth building another similar facility, or should we reuse the Shadow CVQ code to make it work without ASID support?
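
Just to illustrate the reapply step I have in mind, here is a rough sketch only (not code from the series); virtio_net_reapply_mq() and send_ctrl_cmd() are hypothetical names, the latter standing in for whatever CVQ facility we end up with, be it Shadow CVQ or a new helper:

/*
 * Hypothetical sketch: once the device switches back from the userspace
 * virtqueues, re-push the device model's curr_queue_pairs to the vhost-vdpa
 * backend so the kernel view stays in sync with the device model.
 * Uses struct virtio_net_ctrl_mq from standard-headers/linux/virtio_net.h.
 */
static int virtio_net_reapply_mq(VirtIONet *n)
{
    struct virtio_net_ctrl_mq mq = {
        .virtqueue_pairs = cpu_to_le16(n->curr_queue_pairs),
    };

    if (n->curr_queue_pairs <= 1) {
        /* nothing beyond the default single queue pair to reapply */
        return 0;
    }

    /* VIRTIO_NET_CTRL_MQ / VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET per the virtio spec */
    return send_ctrl_cmd(n, VIRTIO_NET_CTRL_MQ, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
                         &mq, sizeof(mq));
}

The open question is where send_ctrl_cmd() can land when the kernel lacks ASID support, which is exactly the Shadow CVQ question above.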

I have been a bit busy with an internal project at the moment, but I hope I can post my series next week. Here's what I have for the relevant patches from the WIP series.

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index dd0d056..16edfa3 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -361,16 +361,13 @@ static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
     }
 }

-static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
+static void virtio_net_queue_status(struct VirtIONet *n, uint8_t status)
 {
-    VirtIONet *n = VIRTIO_NET(vdev);
+    VirtIODevice *vdev = VIRTIO_DEVICE(n);
     VirtIONetQueue *q;
     int i;
     uint8_t queue_status;

-    virtio_net_vnet_endian_status(n, status);
-    virtio_net_vhost_status(n, status);
-
     for (i = 0; i < n->max_queue_pairs; i++) {
         NetClientState *ncs = qemu_get_subqueue(n->nic, i);
         bool queue_started;
@@ -418,6 +415,15 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
     }
 }

+static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
+{
+    VirtIONet *n = VIRTIO_NET(vdev);
+
+    virtio_net_vnet_endian_status(n, status);
+    virtio_net_vhost_status(n, status);
+    virtio_net_queue_status(n, status);
+}
+
 static void virtio_net_set_link_status(NetClientState *nc)
 {
     VirtIONet *n = qemu_get_nic_opaque(nc);
@@ -1380,7 +1386,6 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     uint16_t queue_pairs;
-    NetClientState *nc = qemu_get_queue(n->nic);

     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
@@ -1412,22 +1417,10 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }

-    /* Avoid changing the number of queue_pairs for vdpa device in
-     * userspace handler. A future fix is needed to handle the mq
-     * change in userspace handler with vhost-vdpa. Let's disable
-     * the mq handling from userspace for now and only allow get
-     * done through the kernel. Ripples may be seen when falling
-     * back to userspace, but without doing it qemu process would
-     * crash on a recursive entry to virtio_net_set_status().
-     */
-    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
-        return VIRTIO_NET_ERR;
-    }
-
     n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
-    virtio_net_set_status(vdev, vdev->status);
+    virtio_net_queue_status(n, vdev->status);
     virtio_net_set_queue_pairs(n);

     return VIRTIO_NET_OK;

----
Regards,
-Siwei

Thanks

Not completely against it, but I wonder if there's a way we can limit
the change scope to the live migration case only?

-Siwei

Thanks


   hw/net/virtio-net.c | 17 ++++++-----------
   1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index dd0d056fde..63a8332cd0 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1412,19 +1412,14 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
-    /* Avoid changing the number of queue_pairs for vdpa device in
-     * userspace handler. A future fix is needed to handle the mq
-     * change in userspace handler with vhost-vdpa. Let's disable
-     * the mq handling from userspace for now and only allow get
-     * done through the kernel. Ripples may be seen when falling
-     * back to userspace, but without doing it qemu process would
-     * crash on a recursive entry to virtio_net_set_status().
-     */
+    n->curr_queue_pairs = queue_pairs;
     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
-        return VIRTIO_NET_ERR;
+        /*
+         * Avoid updating the backend for a vdpa device: We're only interested
+         * in updating the device model queues.
+         */
+        return VIRTIO_NET_OK;
     }
-
-    n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
     virtio_net_set_status(vdev, vdev->status);



