From: Peter Maydell
Subject: Re: [PULL V3 08/15] vhost: Shadow virtqueue buffers forwarding
Date: Thu, 12 May 2022 16:47:55 +0100

On Tue, 15 Mar 2022 at 06:14, Jason Wang <jasowang@redhat.com> wrote:
>
> From: Eugenio Pérez <eperezma@redhat.com>
>
> Initial version of the shadow virtqueue that actually forwards buffers.
> There is no IOMMU support at the moment; that will be addressed in future
> patches of this series. Since all vhost-vdpa devices use a forced IOMMU,
> this means that SVQ is not usable at this point in the series on any
> device.
>
> For simplicity it only supports modern devices, which expect the vring
> in little endian, with a split ring and no event idx or indirect
> descriptors. Support for those will not be added in this series.
>
> It reuses the VirtQueue code for the device part. The driver part is
> based on Linux's virtio_ring driver, but with functionality and
> optimizations stripped out so it's easier to review.
>
> However, forwarding buffers has some particular wrinkles: one of the most
> unexpected is that a guest's buffer can span more than one descriptor in
> SVQ. While this is handled gracefully by qemu's emulated virtio devices,
> it may cause an unexpected SVQ queue-full condition. This patch also
> solves that by checking for the condition at both guest kicks and device
> calls. The code may be more elegant in the future if the SVQ code runs in
> its own iocontext.

Hi; Coverity thinks there's a memory leak in an error handling
path in this code (CID 1487559):

> +/**
> + * Forward available buffers.
> + *
> + * @svq: Shadow VirtQueue
> + *
> + * Note that this function does not guarantee that all guest's available
> + * buffers are available to the device in SVQ avail ring. The guest may have
> + * exposed a GPA / GIOVA contiguous buffer, but it may not be contiguous in
> + * qemu vaddr.
> + *
> + * If that happens, guest's kick notifications will be disabled until the
> + * device uses some buffers.
> + */
> +static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
> +{
> +    /* Clear event notifier */
> +    event_notifier_test_and_clear(&svq->svq_kick);
> +
> +    /* Forward to the device as many available buffers as possible */
> +    do {
> +        virtio_queue_set_notification(svq->vq, false);
> +
> +        while (true) {
> +            VirtQueueElement *elem;
> +            bool ok;
> +
> +            if (svq->next_guest_avail_elem) {
> +                elem = g_steal_pointer(&svq->next_guest_avail_elem);
> +            } else {
> +                elem = virtqueue_pop(svq->vq, sizeof(*elem));
> +            }

Here virtqueue_pop() returns allocated memory...

> +
> +            if (!elem) {
> +                break;
> +            }
> +
> +            if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
> +                /*
> +                 * This condition is possible since a contiguous buffer in GPA
> +                 * does not imply a contiguous buffer in qemu's VA
> +                 * scatter-gather segments. If that happens, the buffer exposed
> +                 * to the device needs to be a chain of descriptors at this
> +                 * moment.
> +                 *
> +                 * SVQ cannot hold more available buffers if we are here:
> +                 * queue the current guest descriptor and ignore further kicks
> +                 * until some elements are used.
> +                 */
> +                svq->next_guest_avail_elem = elem;
> +                return;
> +            }
> +
> +            ok = vhost_svq_add(svq, elem);
> +            if (unlikely(!ok)) {
> +                /* VQ is broken, just return and ignore any other kicks */
> +                return;

...but in this error return path we have neither put elem
anywhere nor freed it, so the memory is leaked (one possible
way to plug it is sketched after the quoted code).

> +            }
> +            vhost_svq_kick(svq);
> +        }
> +
> +        virtio_queue_set_notification(svq->vq, true);
> +    } while (!virtio_queue_empty(svq->vq));
> +}
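
A minimal sketch of one way to plug the leak, assuming the element only
needs to be released here and not returned to the guest (the authors may
prefer to also detach it from the virtqueue, or to restructure the error
handling entirely):

            ok = vhost_svq_add(svq, elem);
            if (unlikely(!ok)) {
                /*
                 * VQ is broken, just return and ignore any other kicks.
                 * Free the element popped above so it is not leaked;
                 * this sketch assumes no further virtqueue bookkeeping
                 * is needed for a broken VQ.
                 */
                g_free(elem);
                return;
            }

Elements returned by virtqueue_pop() are allocated with g_malloc(), so
g_free() is the usual way to drop one that was never pushed back.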

thanks
-- PMM


