From: Jason Wang
Subject: Re: [PATCH v3 6/6] virtio-net: don't handle mq request in userspace handler for vhost-vdpa
Date: Fri, 6 May 2022 15:35:48 +0800

On Fri, May 6, 2022 at 12:55 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> virtio_queue_host_notifier_read() tends to read the pending event
> left behind on the ioeventfd in the vhost_net_stop() path, and
> attempts to handle outstanding kicks from the userspace vq handler.
> However, in the ctrl_vq handler, virtio_net_handle_mq() has a
> recursive call into virtio_net_set_status(), which may lead to a
> segmentation fault, as shown in the stack trace below:
>
> 0  0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
> 1  0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
> 2  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
> 3  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
> 4  0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
> 5  0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
> 6  0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
> 7  0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1) at ../hw/net/vhost_net.c:423
> 8  0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
> 9  0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
> 10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
> 11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
> 12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
> 13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
> 14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
> 15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590) at ../../../include/hw/virtio/virtio-bus.h:35
> 16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
> 17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1) at ../hw/net/vhost_net.c:423
> 18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
> 19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
> 20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
> 21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
> 22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
> 23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
> 24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
> 25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51
>
> For now, temporarily disable handling the MQ request from the ctrl_vq
> userspace handler to avoid the recursive virtio_net_set_status()
> call. Some rework is needed to allow changing the number of
> queues without going through a full virtio_net_set_status() cycle,
> particularly for the vhost-vdpa backend.
>
> This patch will need to be reverted as soon as future patches that
> handle the change of #queues in userspace are merged.
>
> Fixes: 402378407db ("vhost-vdpa: multiqueue support")
> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>

Acked-by: Jason Wang <jasowang@redhat.com>
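
The hazard is worth spelling out, since it is generic re-entrancy
during teardown rather than anything vdpa-specific. Below is a minimal
standalone sketch (hypothetical names, not QEMU code) of how draining a
pending kick from inside a stop path can re-enter that same stop path
on half-torn-down state, and how an early-return guard of the kind this
patch adds breaks the cycle:

/*
 * Standalone illustration of the re-entrancy hazard; hypothetical
 * names, not QEMU code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct backend {
    bool running;
    int *mapped;          /* stands in for vhost memory mappings */
    bool kick_pending;    /* stands in for a pending ioeventfd kick */
    bool guard;           /* the fix: refuse mq handling in userspace */
};

static void set_status(struct backend *b, int status);

/* Stands in for virtio_net_handle_mq(): may re-enter set_status(). */
static void handle_mq(struct backend *b)
{
    if (b->guard) {
        /* With the fix: report an error instead of recursing. */
        fprintf(stderr, "mq request refused in userspace handler\n");
        return;
    }
    set_status(b, 0xf);          /* re-enters the stop path */
}

/* Stands in for draining the host notifier on cleanup. */
static void drain_notifier(struct backend *b)
{
    if (b->kick_pending) {
        b->kick_pending = false;
        handle_mq(b);
    }
}

/* Stands in for virtio_net_set_status() -> vhost_net_stop(). */
static void set_status(struct backend *b, int status)
{
    (void)status;
    if (b->running) {
        b->running = false;
        drain_notifier(b);       /* may re-enter this function */
        free(b->mapped);
        b->mapped = NULL;
    } else if (b->mapped) {
        /* Re-entered mid-teardown with mappings half gone, like the
         * vhost_memory_unmap() frames in the trace above. */
        fprintf(stderr, "re-entered teardown, would crash here\n");
        abort();
    }
}

int main(void)
{
    struct backend b = { .running = true, .kick_pending = true,
                         .mapped = malloc(sizeof(int)), .guard = true };
    set_status(&b, 0);           /* aborts if .guard is false */
    printf("clean shutdown\n");
    return 0;
}

With .guard = false the sketch aborts in the re-entered teardown,
mirroring the double vhost_net_stop() in the trace; with the guard it
shuts down cleanly.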

> ---
>  hw/net/virtio-net.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index f0bb29c..e263116 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -1381,6 +1381,7 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      uint16_t queue_pairs;
> +    NetClientState *nc = qemu_get_queue(n->nic);
>
>      virtio_net_disable_rss(n);
>      if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
> @@ -1412,6 +1413,18 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>          return VIRTIO_NET_ERR;
>      }
>
> +    /* Avoid changing the number of queue_pairs for the vdpa device
> +     * in the userspace handler. A future fix is needed to handle the
> +     * mq change in the userspace handler with vhost-vdpa. Let's
> +     * disable the mq handling from userspace for now and only allow
> +     * it to be done through the kernel. Ripples may be seen when
> +     * falling back to userspace, but without this the qemu process
> +     * would crash on a recursive entry to virtio_net_set_status().
> +     */
> +    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        return VIRTIO_NET_ERR;
> +    }
> +
>      n->curr_queue_pairs = queue_pairs;
>      /* stop the backend before changing the number of queue_pairs to avoid handling a
>       * disabled queue */
> --
> 1.8.3.1
>
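
One note for testers: when the userspace fallback path is hit on a
vhost-vdpa peer, the guest's VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command
now completes with VIRTIO_NET_ERR and the driver keeps its current
queue-pair count, instead of QEMU crashing. A toy model of that
guest-visible behavior (hypothetical names, not QEMU or guest code):

/* Toy model of the guest-visible effect of the guard. */
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_OK  0
#define VIRTIO_NET_ERR 1

enum peer_type { PEER_TAP, PEER_VHOST_VDPA };

/* Stands in for the guarded virtio_net_handle_mq(). */
static uint8_t handle_mq(enum peer_type peer, uint16_t queue_pairs,
                         uint16_t *curr_queue_pairs)
{
    if (peer == PEER_VHOST_VDPA) {
        /* The patch's guard: refuse the change in userspace. */
        return VIRTIO_NET_ERR;
    }
    *curr_queue_pairs = queue_pairs;
    return VIRTIO_NET_OK;
}

int main(void)
{
    uint16_t curr = 1;
    uint8_t status;

    /* tap backend: the request is honoured as before */
    status = handle_mq(PEER_TAP, 4, &curr);
    printf("tap:  status=%d curr=%d\n", status, curr);

    /* vhost-vdpa backend: the command fails, curr stays at 4 */
    status = handle_mq(PEER_VHOST_VDPA, 2, &curr);
    printf("vdpa: status=%d curr=%d\n", status, curr);
    return 0;
}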



