qemu-devel



From: Eugenio Perez Martin
Subject: Re: [PATCH 4/5] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends
Date: Fri, 26 Aug 2022 10:22:32 +0200

On Fri, Aug 26, 2022 at 6:29 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 8/24/2022 11:19 PM, Eugenio Perez Martin wrote:
> > On Thu, Aug 25, 2022 at 2:38 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
> >>
> >>
> >> On 8/23/2022 9:27 PM, Jason Wang wrote:
> >>> On 2022/8/20 01:13, Eugenio Pérez wrote:
> >>>> It was returned as an error before. Instead, simply update the
> >>>> corresponding field so qemu can send it in the migration data.
> >>>>
> >>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >>>> ---
> >>>
> >>> Looks correct.
> >>>
> >>> Adding Si Wei for double check.
> >> Hmmm, I understand why this change is needed for live migration, but
> >> this could easily cause userspace to go out of sync with the kernel in
> >> other use cases, such as link down or userspace fallback due to a vdpa
> >> ioctl error. Yes, these are edge cases.
> > The link down case is not possible at this moment because that cvq
> > command does not call virtio_net_handle_ctrl_iov.
> Right. Though shadow cvq would need to rely on extra ASID support from
> the kernel. For the case without shadow cvq we still need to look for an
> alternative mechanism.
>
> > A similar treatment
> > to the one for mq would be needed when supported, and the call to
> > virtio_net_set_status will be avoided.
> So, maybe the seemingly "right" fix for the moment is to prohibit manual
> set_link at all (for vDPA only)?

We can apply a similar solution and just save the link status, without
stopping any backend vqp. The code could be more elegant than checking
whether the backend is vhost-vdpa, of course, but what is the problem
with doing it that way?

> In the longer term we'd need to come up
> with appropriate support for applying the mq config regardless of asid
> or shadow cvq support.
>

What do you mean by applying "mq config"? To the virtio-net device
model in qemu? Is there any use case to apply it to the model outside
of live migration?

On the other hand, the current approach is not using ASID at all; it
will be added on top. Do you mean that it is needed for data
passthrough with CVQ shadowing?

> >
> > I'll double check device initialization ioctl failure with
> > n->curr_queue_pairs > 1 in the destination, but I think we should be
> > safe.
> >
> >> Not completely against it, but I
> >> wonder if there's a way we can limit the change scope to live migration
> >> case only?
> >>
> > The reason to update the device model is to send the curr_queue_pairs
> > to the destination in a backend agnostic way. To send it otherwise
> > would limit the live migration possibilities, but sure we can explore
> > another way.
> A hacky workaround that came off the top of my head was to allow sending
> curr_queue_pairs in the !vm_running case for vdpa. It doesn't look like
> it would affect other backends, I think. But I agree with Jason; this
> doesn't look decent, so I give up on this idea. Hence, for this patch,
>

I still don't get the problem. Also, the guest would need to reset the
device anyway, so that information will be lost, won't it?

Thanks!

> Acked-by: Si-Wei Liu <si-wei.liu@oracle.com>
>
> >
> > Thanks!
> >
>



