qemu-devel

Re: [PATCH 2/5] vdpa: Add vhost_vdpa_net_load_mq


From: Eugenio Perez Martin
Subject: Re: [PATCH 2/5] vdpa: Add vhost_vdpa_net_load_mq
Date: Wed, 24 Aug 2022 11:18:40 +0200

On Wed, Aug 24, 2022 at 11:08 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Wed, Aug 24, 2022 at 5:06 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Wed, Aug 24, 2022 at 10:52 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Wed, Aug 24, 2022 at 3:47 PM Eugenio Perez Martin
> > > <eperezma@redhat.com> wrote:
> > > >
> > > > On Wed, Aug 24, 2022 at 6:23 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > >
> > > > >
> > > > > On 2022/8/20 01:13, Eugenio Pérez wrote:
> > > > > > Same way as with the MAC, restore the expected number of queues at
> > > > > > device's start.
> > > > > >
> > > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > > ---
> > > > > >   net/vhost-vdpa.c | 33 +++++++++++++++++++++++++++++++++
> > > > > >   1 file changed, 33 insertions(+)
> > > > > >
> > > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > > index 1e0dbfcced..96fd3bc835 100644
> > > > > > --- a/net/vhost-vdpa.c
> > > > > > +++ b/net/vhost-vdpa.c
> > > > > > @@ -391,6 +391,35 @@ static int vhost_vdpa_net_load_mac(VhostVDPAState *s,
> > > > > >       return 0;
> > > > > >   }
> > > > > >
> > > > > > +static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
> > > > > > +                                  const VirtIONet *n)
> > > > > > +{
> > > > > > +    uint64_t features = n->parent_obj.guest_features;
> > > > > > +    ssize_t dev_written;
> > > > > > +    void *cursor = s->cvq_cmd_out_buffer;
> > > > > > +    if (!(features & BIT_ULL(VIRTIO_NET_F_MQ))) {
> > > > > > +        return 0;
> > > > > > +    }
> > > > > > +
> > > > > > +    *(struct virtio_net_ctrl_hdr *)cursor = (struct virtio_net_ctrl_hdr) {
> > > > > > +        .class = VIRTIO_NET_CTRL_MQ,
> > > > > > +        .cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
> > > > > > +    };
> > > > > > +    cursor += sizeof(struct virtio_net_ctrl_hdr);
> > > > > > +    *(struct virtio_net_ctrl_mq *)cursor = (struct virtio_net_ctrl_mq) {
> > > > > > +        .virtqueue_pairs = cpu_to_le16(n->curr_queue_pairs),
> > > > > > +    };
> > > > >
> > > > >
> > > > > Such casting is not elegant, let's just prepare buffer and then do the
> > > > > copy inside vhost_vdpa_net_cvq_add()?
> > > > >
> > > >
> > > > I'm not sure what you propose here. I can pre-fill a buffer in the
> > > > stack and then do an extra copy in vhost_vdpa_net_cvq_add. The
> > > > compiler should be able to optimize it, but I'm not sure if it
> > > > simplifies the code.
> > > >
> > > > We can have a dedicated buffer for mac, another for mq, and one for
> > > > each different command, and map all of them at the device's start. But
> > > > this seems too much overhead to me.
> > >
> > > Considering we may need to support and restore a lot of other fields,
> > > this looks a little complicated.
> > >
> > > I meant the caller can simply do:
> > >
> > > struct virtio_net_ctrl_mq mq = { ...};
> > >
> > > Then we do
> > >
> > > vhost_vdpa_net_cvq_add(&mq, sizeof(mq), ...);
> > >
> > > Then we can do memcpy inside vhost_vdpa_net_cvq_add() and hide the
> > > cmd_out_buffer etc from the caller.
> > >
> >
> > We need to add the ctrl header too. But yes, that is feasible, something like:
> >
> > vhost_vdpa_net_cvq_add(&ctrl, &mq, sizeof(mq), ...);
> >
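[Editor's sketch: a minimal, self-contained illustration of the helper shape being proposed in this exchange. VhostVDPAState and struct virtio_net_ctrl_hdr below are stand-ins, not the real QEMU / linux-headers definitions, and the actual buffer mapping and device kick are elided; the signature itself is only the thread's proposal, not merged code.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

struct virtio_net_ctrl_hdr {
    uint8_t class;
    uint8_t cmd;
};

typedef struct VhostVDPAState {
    uint8_t cvq_cmd_out_buffer[4096];   /* page-sized, mapped to the device */
} VhostVDPAState;

/* Hypothetical signature following the proposal: the caller passes the
 * control header and the command payload, and the helper hides
 * cvq_cmd_out_buffer by doing the copy itself. */
static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s,
                                      const struct virtio_net_ctrl_hdr *ctrl,
                                      const void *data, size_t data_len)
{
    memcpy(s->cvq_cmd_out_buffer, ctrl, sizeof(*ctrl));
    memcpy(s->cvq_cmd_out_buffer + sizeof(*ctrl), data, data_len);
    /* ...add the buffer to the shadow CVQ, kick, wait for the ack... */
    return sizeof(*ctrl) + data_len;
}
```

With this shape, vhost_vdpa_net_load_mq() would only build a struct virtio_net_ctrl_mq on the stack and never touch the out buffer directly.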
> > > >
> > > > Some alternatives that come to my mind:
> > > >
> > > > * Declare a struct with both virtio_net_ctrl_hdr and each of the
> > > > control commands (using unions?), and cast s->cvq_cmd_out_buffer
> > > > accordingly.
> > > > * Declare a struct with all of the supported commands one after
> > > > another, and let qemu fill and send these accordingly.
> > > >
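[Editor's sketch: the first alternative above, a header followed by a union of command payloads, could be declared roughly as follows. The inner structs and the numeric constants are illustrative stand-ins for the linux/virtio_net.h definitions, and real code would use QEMU_PACKED and cpu_to_le16().]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct virtio_net_ctrl_hdr { uint8_t class; uint8_t cmd; };
struct virtio_net_ctrl_mq  { uint16_t virtqueue_pairs; };

/* One out-buffer layout for every CVQ command; cvq_cmd_out_buffer could
 * then be cast to this type instead of advancing a void *cursor. */
typedef struct CvqCmd {
    struct virtio_net_ctrl_hdr hdr;
    union {
        struct virtio_net_ctrl_mq mq;
        uint8_t mac[6];             /* VIRTIO_NET_CTRL_MAC_ADDR_SET payload */
    } cmd;
} CvqCmd;

/* Fill an MQ command and return the number of bytes to send; 4 and 0
 * stand for VIRTIO_NET_CTRL_MQ / VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET. */
static size_t fill_mq_cmd(CvqCmd *c, uint16_t pairs)
{
    c->hdr.class = 4;
    c->hdr.cmd = 0;
    c->cmd.mq.virtqueue_pairs = pairs;  /* cpu_to_le16() in QEMU proper */
    return offsetof(CvqCmd, cmd) + sizeof(c->cmd.mq);
}
```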
> > > > >
> > > > > > +    cursor += sizeof(struct virtio_net_ctrl_mq);
> > > > > > +
> > > > > > +    dev_written = vhost_vdpa_net_cvq_add(s, cursor - s->cvq_cmd_out_buffer,
> > > > > > +                                         sizeof(virtio_net_ctrl_ack));
> > > > > > +    if (unlikely(dev_written < 0)) {
> > > > > > +        return dev_written;
> > > > > > +    }
> > > > > > +
> > > > > > +    return *((virtio_net_ctrl_ack *)s->cvq_cmd_in_buffer) != VIRTIO_NET_OK;
> > > > >
> > > > >
> > > > > So I think we should have a dedicated buffer just for ack, then there's no need for such casting.
> > > > >
> > > >
> > > > You mean to declare cvq_cmd_in_buffer as virtio_net_ctrl_ack type
> > > > directly and map it to the device?
> > >
> > > Kind of, considering the ack is the only kind of structure in the near
> > > future, can we simply use the structure virtio_net_ctrl_ack?
> > >
> >
> > Almost, but we need to map to the device in a page size. And I think
> > it's better to allocate a whole page for that, so it does not share
> > memory with qemu.
>
> I guess using a union will solve the problem?
>

It was more a nitpick than a problem: whether we cast it to
virtio_net_ctrl_ack or not, we still need to allocate a whole page. In
other words, we must init status as "status =
g_malloc0(real_host_page_size())", not "g_malloc0(sizeof(*status))".

But I think the union is a good idea. The problem is that
qemu_real_host_page_size is not a constant, so we cannot declare a type
of that size.
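[Editor's sketch of the allocation point above, kept self-contained: sysconf(_SC_PAGESIZE) stands in for QEMU's qemu_real_host_page_size(), and aligned_alloc plus memset for a page-aligned g_malloc0.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef uint8_t virtio_net_ctrl_ack;

/*
 * The ack itself is one byte, but the in buffer is mapped to the device
 * at page granularity, so it must be a whole page that shares no memory
 * with other QEMU allocations.  Hence page size, not sizeof(*status).
 */
static virtio_net_ctrl_ack *alloc_status_page(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    virtio_net_ctrl_ack *status = aligned_alloc(page, page);
    if (status) {
        memset(status, 0, page);
    }
    return status;
}
```

And because the page size is only known at runtime, the buffer indeed cannot be expressed as a fixed-size union type; the allocation has to stay dynamic.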

> Thanks
>
> >
> > Other than that, yes, I think it can be declared as virtio_net_ctrl_ack directly.
> >
> > Thanks!
> >
>
