From: Stefan Hajnoczi
Subject: Re: [PATCH RFC v2 14/16] vfio-user: dma read/write operations
Date: Wed, 8 Sep 2021 10:51:11 +0100

On Mon, Aug 16, 2021 at 09:42:47AM -0700, Elena Ufimtseva wrote:
> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> index 2c9fcb2fa9..29a874c066 100644
> --- a/hw/vfio/pci.c
> +++ b/hw/vfio/pci.c
> @@ -3406,11 +3406,72 @@ type_init(register_vfio_pci_dev_type)
>   * vfio-user routines.
>   */
>  
> -static int vfio_user_pci_process_req(void *opaque, char *buf, VFIOUserFDs *fds)
> +static int vfio_user_dma_read(VFIOPCIDevice *vdev, VFIOUserDMARW *msg)
>  {
> +    PCIDevice *pdev = &vdev->pdev;
> +    char *buf;
> +    int size = msg->count + sizeof(VFIOUserDMARW);

The caller has only checked that hdr->size is large enough for
VFIOUserHdr, not VFIOUserDMARW. We must not access VFIOUserDMARW fields
until this has been checked.

Size should be size_t to avoid signedness issues.

Even then, this can overflow on 32-bit hosts, so I suggest moving this
arithmetic expression below the msg->count > vfio_user_max_xfer() check.
That way it's clear that overflow cannot happen.
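
For example, the start of the function could look something like this
(untested sketch; whether the hdr.size check belongs here or in the caller
is an open question):

    static int vfio_user_dma_read(VFIOPCIDevice *vdev, VFIOUserDMARW *msg)
    {
        PCIDevice *pdev = &vdev->pdev;
        char *buf;
        size_t size;

        /* don't touch VFIOUserDMARW fields until hdr.size is validated */
        if (msg->hdr.size < sizeof(*msg)) {
            return -EINVAL;
        }
        if (msg->hdr.flags & VFIO_USER_NO_REPLY) {
            return -EINVAL;
        }
        if (msg->count > vfio_user_max_xfer()) {
            return -E2BIG;
        }

        /* count is now bounded, so this cannot overflow on 32-bit hosts */
        size = msg->count + sizeof(*msg);
        ...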

> +
> +    if (msg->hdr.flags & VFIO_USER_NO_REPLY) {
> +        return -EINVAL;
> +    }
> +    if (msg->count > vfio_user_max_xfer()) {
> +        return -E2BIG;
> +    }

Does vfio-user allow the request to be smaller than the reply? In other
words, is it okay that we're not checking msg->count against hdr->size?

> +
> +    buf = g_malloc0(size);
> +    memcpy(buf, msg, sizeof(*msg));
> +
> +    pci_dma_read(pdev, msg->offset, buf + sizeof(*msg), msg->count);

The vfio-user spec doesn't cover how DMA errors should be reported, but
pci_dma_read() can return errors. Hmm...
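
If we do want to propagate those failures, the call could be checked along
these lines (hedged sketch; it assumes pci_dma_read() reports failures
through its return value and that the caller turns a negative return into
an error reply):

    ret = pci_dma_read(pdev, msg->offset, buf + sizeof(*msg), msg->count);
    if (ret != 0) {
        /* bail out instead of replying with data from a bad address */
        g_free(buf);
        return -EFAULT;
    }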

> +
> +    vfio_user_send_reply(vdev->vbasedev.proxy, buf, size);
> +    g_free(buf);
>      return 0;
>  }
>  
> +static int vfio_user_dma_write(VFIOPCIDevice *vdev,
> +                               VFIOUserDMARW *msg)
> +{
> +    PCIDevice *pdev = &vdev->pdev;
> +    char *buf = (char *)msg + sizeof(*msg);

Or:

  char *buf = msg->data;

> +
> +    /* make sure transfer count isn't larger than the message data */
> +    if (msg->count > msg->hdr.size - sizeof(*msg)) {
> +        return -E2BIG;
> +    }

msg->count cannot be accessed until we have checked that msg->hdr.size
is large enough for VFIOUserDMARW. Adding that check also eliminates the
underflow in the subtraction when msg->hdr.size is smaller than
sizeof(VFIOUserDMARW).
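
Concretely, something like this (untested sketch):

    /* validate hdr.size before touching any other VFIOUserDMARW fields */
    if (msg->hdr.size < sizeof(*msg)) {
        return -EINVAL;
    }

    /* the subtraction can no longer underflow */
    if (msg->count > msg->hdr.size - sizeof(*msg)) {
        return -E2BIG;
    }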

> +
> +    pci_dma_write(pdev, msg->offset, buf, msg->count);
> +
> +    if ((msg->hdr.flags & VFIO_USER_NO_REPLY) == 0) {
> +        vfio_user_send_reply(vdev->vbasedev.proxy, (char *)msg,
> +                             sizeof(msg->hdr));
> +    }
> +    return 0;
> +}
> +
> +static int vfio_user_pci_process_req(void *opaque, char *buf, VFIOUserFDs *fds)
> +{
> +    VFIOPCIDevice *vdev = opaque;
> +    VFIOUserHdr *hdr = (VFIOUserHdr *)buf;
> +    int ret;
> +
> +    if (fds->recv_fds != 0) {
> +        return -EINVAL;

Where are the fds closed?
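
If the handler owns the received descriptors at this point, the error path
probably needs to close them before returning, e.g. (hedged sketch; the fds
array member of VFIOUserFDs is an assumption on my part):

    if (fds->recv_fds != 0) {
        int i;

        /* none of the commands handled here accept fds, so drop them */
        for (i = 0; i < fds->recv_fds; i++) {
            close(fds->fds[i]);
        }
        return -EINVAL;
    }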
