
Re: [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start


From: Eugenio Perez Martin
Subject: Re: [PATCH v4 6/7] vdpa: Add virtio-net mac address via CVQ at start
Date: Tue, 2 Aug 2022 19:37:24 +0200

On Mon, Aug 1, 2022 at 9:09 AM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Mon, Jul 25, 2022 at 11:32 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2022/7/22 19:12, Eugenio Pérez wrote:
> > > This is needed so the destination vdpa device sees the same state
> > > that the guest set in the source.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >   net/vhost-vdpa.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 61 insertions(+)
> > >
> > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > index 61516b1432..3e15a42c35 100644
> > > --- a/net/vhost-vdpa.c
> > > +++ b/net/vhost-vdpa.c
> > > @@ -365,10 +365,71 @@ static virtio_net_ctrl_ack vhost_vdpa_net_cvq_add(VhostShadowVirtqueue *svq,
> > >       return VIRTIO_NET_OK;
> > >   }
> > >
> > > +static int vhost_vdpa_net_start(NetClientState *nc)
> > > +{
> > > +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > > +    VirtIONet *n;
> > > +    uint64_t features;
> > > +    VhostShadowVirtqueue *svq;
> > > +
> > > +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > +
> > > +    if (!v->shadow_vqs_enabled) {
> > > +        return 0;
> > > +    }
> > > +
> > > +    if (v->dev->nvqs != 1 &&
> > > +        v->dev->vq_index + v->dev->nvqs != v->dev->vq_index_end) {
> > > +        /* Only interested in CVQ */
> > > +        return 0;
> > > +    }
> >
> >
> > I'd have a dedicated NetClientInfo for cvq.
> >
>
> I'll try and come back to you.
>
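
For concreteness, a dedicated NetClientInfo for the CVQ NetClientState
could look like the sketch below. This is an illustration, not code from
the series: the callback set is assumed to simply mirror the existing
net_vhost_vdpa_info in net/vhost-vdpa.c, with the .start callback this
series introduces.

/*
 * Sketch only: a NetClientInfo registered solely for the CVQ
 * NetClientState, so .start never runs for the data queues and the
 * nvqs / vq_index_end check above becomes unnecessary.
 */
static NetClientInfo net_vhost_vdpa_cvq_info = {
    .type = NET_CLIENT_DRIVER_VHOST_VDPA,
    .size = sizeof(VhostVDPAState),
    .receive = vhost_vdpa_receive,
    .start = vhost_vdpa_net_start,
    .cleanup = vhost_vdpa_cleanup,
    .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
    .has_ufo = vhost_vdpa_has_ufo,
    .check_peer_type = vhost_vdpa_check_peer_type,
};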
> >
> > > +
> > > +    n = VIRTIO_NET(v->dev->vdev);
> > > +    features = v->dev->vdev->host_features;
> > > +    svq = g_ptr_array_index(v->shadow_vqs, 0);
> > > +    if (features & BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR)) {
> > > +        const struct virtio_net_ctrl_hdr ctrl = {
> > > +            .class = VIRTIO_NET_CTRL_MAC,
> > > +            .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
> > > +        };
> > > +        uint8_t mac[6];
> > > +        const struct iovec out[] = {
> > > +            {
> > > +                .iov_base = (void *)&ctrl,
> > > +                .iov_len = sizeof(ctrl),
> > > +            },{
> > > +                .iov_base = mac,
> > > +                .iov_len = sizeof(mac),
> > > +            },
> > > +        };
> > > +        struct iovec dev_buffers[2] = {
> > > +            { .iov_base = s->cvq_cmd_out_buffer },
> > > +            { .iov_base = s->cvq_cmd_in_buffer },
> > > +        };
> > > +        bool ok;
> > > +        virtio_net_ctrl_ack state;
> > > +
> > > +        ok = vhost_vdpa_net_cvq_map_sg(s, out, ARRAY_SIZE(out), dev_buffers);
> >
> >
> > To speed up the state recovery, can we map those buffers during svq start?
> >
>
> Not sure if I follow you here. This is the callback that is called
> during the device startup.
>
> If you mean to make these buffers permanently mapped I think that can
> be done for this series, but extra care will be needed when we
> introduce ASID support to not make them visible from the guest. I'm ok
> if you prefer to make it that way for this series.
>
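
If the suggestion is to keep the buffers mapped for the device's whole
lifetime, the per-command mapping could move to CVQ start along the
lines of the sketch below. The helper name vhost_vdpa_cvq_map_buf and
its error handling are assumptions, not code from this series;
vhost_iova_tree_map_alloc and vhost_vdpa_dma_map are QEMU's existing
helpers.

/*
 * Hypothetical helper: map one host buffer into the device IOVA space
 * once, at CVQ start, so each control command no longer needs its own
 * map/unmap round trip.
 */
static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf,
                                  size_t size, bool write)
{
    DMAMap map = {
        .translated_addr = (hwaddr)(uintptr_t)buf,
        .size = size - 1,                  /* DMAMap sizes are inclusive */
        .perm = write ? IOMMU_RW : IOMMU_RO,
    };
    int r;

    /* Reserve an IOVA range for the buffer */
    r = vhost_iova_tree_map_alloc(v->iova_tree, &map);
    if (unlikely(r != IOVA_OK)) {
        return -ENOMEM;
    }

    /* Install the mapping in the device; it stays until client stop */
    r = vhost_vdpa_dma_map(v, map.iova, size, buf, !write);
    if (unlikely(r < 0)) {
        /* Roll back the IOVA allocation if the device map fails */
        vhost_iova_tree_remove(v->iova_tree, &map);
    }

    return r;
}

The mirror unmap (vhost_vdpa_dma_unmap plus vhost_iova_tree_remove)
would then run once at stop, and once ASID support lands the mapping
would live in the CVQ ASID so it is never visible to the guest.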

Sending v4 without this part; please let me know if it needs further changes.

Thanks!
