Re: [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg


From: Eugenio Perez Martin
Subject: Re: [PATCH v4 3/7] vdpa: Make vhost_vdpa_net_cvq_map_elem accept any out sg
Date: Mon, 1 Aug 2022 08:42:15 +0200

On Mon, Jul 25, 2022 at 10:48 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/7/22 19:12, Eugenio Pérez wrote:
> > So it is generic enough to accept any out sg buffer, and we can inject
> > NIC state messages.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   net/vhost-vdpa.c | 29 +++++++++++++++--------------
> >   1 file changed, 15 insertions(+), 14 deletions(-)
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 1b82ac2e07..bbe1830824 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -302,35 +302,36 @@ dma_map_err:
> >   }
> >
> >   /**
> > - * Copy the guest element into a dedicated buffer suitable to be sent to NIC
> > + * Maps out sg and in buffer into dedicated buffers suitable to be sent to NIC
> >    *
> > - * @iov: [0] is the out buffer, [1] is the in one
> > + * @dev_iov: [0] is the out buffer, [1] is the in one
>
>
> This still makes an assumption about the layout. I wonder if it's better
> to simply use in_sg and out_sg.
>

Sure, I can resend that way.

It complicates the code a little bit because of the error paths: we
currently always send one out sg and one in sg. But we can make it more
generic for sure.
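
Roughly something along these lines (just a sketch of the shape, not the
actual resend; the explicit in/in_num and dev_out/dev_in parameters are
illustrative, and the caller would keep passing NULL/0 for the in sg for
now):

/*
 * Sketch only: map an arbitrary out sg and in sg into the dedicated
 * bounce buffers, without assuming positions [0]/[1] of one iovec array.
 */
static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
                                      const struct iovec *out, size_t out_num,
                                      const struct iovec *in, size_t in_num,
                                      struct iovec *dev_out,
                                      struct iovec *dev_in)
{
    size_t in_copied;
    bool ok;

    dev_out->iov_base = s->cvq_cmd_out_buffer;
    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, out, out_num,
                                vhost_vdpa_net_cvq_cmd_len(),
                                dev_out->iov_base, &dev_out->iov_len, false);
    if (unlikely(!ok)) {
        return false;
    }

    dev_in->iov_base = s->cvq_cmd_in_buffer;
    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, in, in_num,
                                sizeof(virtio_net_ctrl_ack),
                                dev_in->iov_base, &in_copied, true);
    if (unlikely(!ok)) {
        /* This is the error path that grows as the buffers get more generic */
        vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
        return false;
    }

    dev_in->iov_len = sizeof(virtio_net_ctrl_ack);
    return true;
}

The caller in vhost_vdpa_net_handle_ctrl_avail would then forward
elem->out_sg/elem->out_num (and, for now, NULL/0 for the in sg) instead
of relying on the layout of a single iovec array.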

Thanks!

> Thanks
>
>
> >    */
> > -static bool vhost_vdpa_net_cvq_map_elem(VhostVDPAState *s,
> > -                                        VirtQueueElement *elem,
> > -                                        struct iovec *iov)
> > +static bool vhost_vdpa_net_cvq_map_sg(VhostVDPAState *s,
> > +                                      const struct iovec *out, size_t out_num,
> > +                                      struct iovec *dev_iov)
> >   {
> >       size_t in_copied;
> >       bool ok;
> >
> > -    iov[0].iov_base = s->cvq_cmd_out_buffer;
> > -    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, elem->out_sg, elem->out_num,
> > -                                vhost_vdpa_net_cvq_cmd_len(), iov[0].iov_base,
> > -                                &iov[0].iov_len, false);
> > +    dev_iov[0].iov_base = s->cvq_cmd_out_buffer;
> > +    ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, out, out_num,
> > +                                vhost_vdpa_net_cvq_cmd_len(),
> > +                                dev_iov[0].iov_base, &dev_iov[0].iov_len,
> > +                                false);
> >       if (unlikely(!ok)) {
> >           return false;
> >       }
> >
> > -    iov[1].iov_base = s->cvq_cmd_in_buffer;
> > +    dev_iov[1].iov_base = s->cvq_cmd_in_buffer;
> >       ok = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, NULL, 0,
> > -                                sizeof(virtio_net_ctrl_ack), iov[1].iov_base,
> > -                                &in_copied, true);
> > +                                sizeof(virtio_net_ctrl_ack),
> > +                                dev_iov[1].iov_base, &in_copied, true);
> >       if (unlikely(!ok)) {
> >           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
> >           return false;
> >       }
> >
> > -    iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> > +    dev_iov[1].iov_len = sizeof(virtio_net_ctrl_ack);
> >       return true;
> >   }
> >
> > @@ -434,7 +435,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
> >       };
> >       bool ok;
> >
> > -    ok = vhost_vdpa_net_cvq_map_elem(s, elem, dev_buffers);
> > +    ok = vhost_vdpa_net_cvq_map_sg(s, elem->out_sg, elem->out_num, dev_buffers);
> >       if (unlikely(!ok)) {
> >           goto out;
> >       }
>



