From: Si-Wei Liu
Subject: Re: [PATCH 4/5] virtio-net: Update virtio-net curr_queue_pairs in vdpa backends
Date: Thu, 25 Aug 2022 21:28:23 -0700
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.12.0



On 8/24/2022 11:19 PM, Eugenio Perez Martin wrote:
> On Thu, Aug 25, 2022 at 2:38 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>>
>> On 8/23/2022 9:27 PM, Jason Wang wrote:
>>> On 2022/8/20 01:13, Eugenio Pérez wrote:
>>>> It was returned as an error before. Instead, simply update the
>>>> corresponding field so qemu can send it in the migration data.
>>>>
>>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>>>> ---
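
(As context for the discussion below, a minimal sketch of the change
being described, assuming the VIRTIO_NET_CTRL_MQ handler in
hw/net/virtio-net.c has roughly the following shape; this is a
simplified illustration, not the literal diff:

    static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
                                    struct iovec *iov, unsigned int iov_cnt)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(n);
        NetClientState *nc = qemu_get_queue(n->nic);
        struct virtio_net_ctrl_mq mq;
        uint16_t queue_pairs;

        if (iov_to_buf(iov, iov_cnt, 0, &mq, sizeof(mq)) != sizeof(mq)) {
            return VIRTIO_NET_ERR;
        }
        queue_pairs = virtio_lduw_p(vdev, &mq.virtqueue_pairs);

        if (queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
            queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
            queue_pairs > n->max_queue_pairs) {
            return VIRTIO_NET_ERR;
        }

        /* Record the guest's request in the device model first, so it
         * becomes part of the migrated state. */
        n->curr_queue_pairs = queue_pairs;

        if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
            /* This branch used to return VIRTIO_NET_ERR before
             * curr_queue_pairs was updated; now only the backend
             * reconfiguration below is skipped. */
            return VIRTIO_NET_OK;
        }

        virtio_net_set_status(vdev, vdev->status);
        return VIRTIO_NET_OK;
    }

The behavioral change is confined to the vhost-vdpa branch: the
guest's request is recorded in the device model instead of being
rejected.)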

>>> Looks correct.
>>>
>>> Adding Si-Wei for a double check.
>> Hmmm, I understand why this change is needed for live migration, but
>> it could easily leave userspace out of sync with the kernel in other
>> use cases, such as a link down or a userspace fallback due to a vdpa
>> ioctl error. Yes, these are edge cases.
> The link down case is not possible at the moment because that cvq
> command does not call virtio_net_handle_ctrl_iov.
Right. Though shadow cvq would need to rely on extra ASID support from
the kernel. For the case without shadow cvq, we still need to look for
an alternative mechanism.

> A similar treatment to the one for mq would be needed once that is
> supported, and the call to virtio_net_set_status will be avoided.
So, maybe the seemingly "right" fix for the moment is to prohibit
manual set_link at all (for vDPA only)? In the longer term we'd need to
come up with appropriate support for applying the mq config regardless
of ASID or shadow cvq support.
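
(Purely as a hypothetical sketch of that stopgap, and not something in
this series: the link-state path could refuse vhost-vdpa peers until
the mq config can be applied properly. The helper name and its
placement are assumptions:

    /* Hypothetical guard: reject manual link-state changes when the
     * peer is a vhost-vdpa client, since the device model cannot yet
     * propagate the resulting reconfiguration to the backend. */
    static bool net_allows_manual_set_link(NetClientState *nc)
    {
        return !(nc->peer &&
                 nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
    }

qmp_set_link() in net/net.c could then consult this helper and fail
with error_setg() for vDPA-backed devices.)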


> I'll double check device initialization ioctl failure with
> n->curr_queue_pairs > 1 in the destination, but I think we should be
> safe.

>> Not completely against it, but I wonder if there's a way we can
>> limit the scope of the change to the live migration case only?

> The reason to update the device model is to send curr_queue_pairs to
> the destination in a backend-agnostic way. Sending it otherwise would
> limit the live migration possibilities, but sure, we can explore
> another way.
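
(The backend-agnostic part is that curr_queue_pairs is a plain
device-model field covered by virtio-net's vmstate, so it travels in
the migration stream without any backend involvement. Roughly, and
from memory rather than the exact tree:

    /* Only migrate the multiqueue fields when the device has more
     * than one queue pair. */
    static bool max_queue_pairs_gt_1(void *opaque, int version_id)
    {
        return VIRTIO_NET(opaque)->max_queue_pairs > 1;
    }

    /* ...among vmstate_virtio_net_device's fields... */
    VMSTATE_UINT16_TEST(curr_queue_pairs, VirtIONet, max_queue_pairs_gt_1),

Once the handler updates n->curr_queue_pairs on the source, the
destination restores it on load regardless of whether its backend is
vhost-kernel, vhost-user, or vhost-vdpa.)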
A hacky workaround that came off the top of my head was to allow
sending curr_queue_pairs in the !vm_running case for vdpa. I don't
think it would affect other backends, but I agree with Jason that this
doesn't look decent, so I'm giving up on that idea. Hence, for this
patch,

Acked-by: Si-Wei Liu <si-wei.liu@oracle.com>


> Thanks!




