
RE: About restoring the state in vhost-vdpa device


From: Parav Pandit
Subject: RE: About restoring the state in vhost-vdpa device
Date: Wed, 18 May 2022 12:43:58 +0000

> From: Jason Wang <jasowang@redhat.com>
> Sent: Monday, May 16, 2022 11:05 PM
> >> Although it's a longer route, I'd very much prefer an in-band virtio
> >> way to perform it rather than a linux/vdpa-specific one. It's one of
> >> the reasons I prefer the CVQ behavior over a vdpa-specific ioctl.
> >>
> > What is the in-band method to set last_avail_idx?
> > An in-band virtio method doesn't exist.
> 
> 
> Right, but it's part of the vhost API, which has been there for more than
> 10 years. This should be supported by all the vDPA vendors.
Sure. My point to Eugenio was that vdpa doesn't have to be limited by the virtio spec.
Plumbing exists to make vdpa work without the virtio spec.
And hence, an additional ioctl can be ok.

> >> layers of the stack need to maintain more state.
> > Mostly not. A complete virtio device state arriving from the source vdpa
> > device can be given to the destination vdpa device, without anyone else
> > looking in the middle, if this format is known/well defined.
> 
> 
> That's fine, and it seems the virtio spec is a better place for this,
> so we won't duplicate efforts?
> 
Yes. For the vDPA kernel, setting parameters doesn't need a virtio spec update.
It is similar to the avail index setting.
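
For reference, that existing plumbing looks roughly like this from user
space. A minimal sketch; VHOST_GET_VRING_BASE/VHOST_SET_VRING_BASE and
struct vhost_vring_state are the existing uapi in <linux/vhost.h>, with
error handling reduced to perror():

/* Minimal sketch: save/restore last_avail_idx through the existing
 * vhost uapi, with no virtio spec involvement. */
#include <linux/vhost.h>
#include <sys/ioctl.h>
#include <stdio.h>

static int save_avail_idx(int vdpa_fd, unsigned int vq, unsigned int *idx)
{
    struct vhost_vring_state state = { .index = vq };

    if (ioctl(vdpa_fd, VHOST_GET_VRING_BASE, &state) < 0) {
        perror("VHOST_GET_VRING_BASE");
        return -1;
    }
    *idx = state.num;       /* last_avail_idx as seen by the device */
    return 0;
}

static int restore_avail_idx(int vdpa_fd, unsigned int vq, unsigned int idx)
{
    struct vhost_vring_state state = { .index = vq, .num = idx };

    if (ioctl(vdpa_fd, VHOST_SET_VRING_BASE, &state) < 0) {
        perror("VHOST_SET_VRING_BASE");
        return -1;
    }
    return 0;
}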

> 
> >
> >> From the guest point of view, enabling all the queues with
> >> VHOST_VDPA_SET_VRING_ENABLE and not sending DRIVER_OK is the same
> >> as sending DRIVER_OK and not enabling any data queue with
> >> VHOST_VDPA_SET_VRING_ENABLE.
> > Enabling a queue with SET_VRING_ENABLE after DRIVER_OK has two basic
> > things broken.
> 
> 
> It looks to me the spec:
> 
> 1) For PCI, it doesn't forbid the driver from setting queue_enable to 1
> after DRIVER_OK.
The device initialization sequence sort of hints that vq setup should be
done before DRIVER_OK, in the snippet below.

"Perform device-specific setup, including discovery of virtqueues for the 
device, optional per-bus setup,
reading and possibly writing the device’s virtio configuration space, and 
population of virtqueues."

Even if we assume for a moment that a queue can be enabled after DRIVER_OK,
packets end up going to the incorrect queue, because the queue they are
supposed to go to is not enabled and its RSS is not set up.

So in the restore flow it is desirable to set the needed config before
setting DRIVER_OK.
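
A minimal sketch of that restore ordering on the destination, assuming
feature negotiation has already been completed on the vhost-vdpa fd; the
ioctls and status bits are the existing ones from <linux/vhost.h> and
<linux/virtio_config.h>:

/* Sketch of the desired restore ordering: configure and enable every
 * vq first, set DRIVER_OK only at the end, so that no packet can land
 * on a not-yet-enabled queue. */
#include <linux/vhost.h>
#include <linux/virtio_config.h>
#include <sys/ioctl.h>

static int restore_then_start(int vdpa_fd, unsigned int nvqs, __u8 status)
{
    for (unsigned int i = 0; i < nvqs; i++) {
        struct vhost_vring_state enable = { .index = i, .num = 1 };

        /* ... restore addresses/avail index for vq i before this ... */
        if (ioctl(vdpa_fd, VHOST_VDPA_SET_VRING_ENABLE, &enable) < 0)
            return -1;
    }

    /* Only now let the device start processing the rings. */
    status |= VIRTIO_CONFIG_S_DRIVER_OK;
    return ioctl(vdpa_fd, VHOST_VDPA_SET_STATUS, &status);
}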

> 2) For MMIO, it even allows the driver to disable a queue after DRIVER_OK
> 
> 
> > 1. The supplied RSS config and VQ config are not honored for several
> > tens to hundreds of milliseconds; it depends purely on how/when these
> > ioctls are made.
> > Due to this behavior, a packet supposed to arrive on VQ X arrives on VQ Y.
> 
> 
> I don't get why we end up with this situation.
> 
> 1) enable cvq
> 2) set driver_ok
> 3) set RSS
> 4) enable TX/RX
> 
> vs
> 
> 1) set RSS
> 2) enable cvq
> 3) enable TX/RX
> 4) set driver_ok
> 
> Is the latter faster?
> 
Yes, because the latter sequence can set up the steering config once.
The first sequence, by contrast, needs to incrementally update the RSS
setting on every new queue addition in step #4.
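
Concretely, the "set once" steering state includes e.g. the virtio-net
multiqueue command. A minimal sketch, where send_cvq_cmd() is a
hypothetical helper (not an existing API) and the command definitions
are the existing ones in <linux/virtio_net.h>:

#include <linux/virtio_net.h>
#include <stddef.h>

/* Hypothetical helper, assumed to post one command on the control vq. */
int send_cvq_cmd(unsigned char cls, unsigned char cmd,
                 const void *data, size_t len);

/* Set the final queue-pair count once, before DRIVER_OK, instead of
 * re-issuing steering updates on every individual queue enablement.
 * (Endianness conversion to __virtio16 omitted for brevity.) */
static int set_mq_once(unsigned short final_queue_pairs)
{
    struct virtio_net_ctrl_mq mq = {
        .virtqueue_pairs = final_queue_pairs,
    };

    return send_cvq_cmd(VIRTIO_NET_CTRL_MQ, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
                        &mq, sizeof(mq));
}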

> 
> >
> > 2. Enabling each VQ one at a time requires a constant steering update
> > for the VQ, while this information is already known. Trying to reuse
> > the existing callback results in this inefficiency.
> > So better to start with more reusable APIs that fit the LM flow.
> 
> 
> I agree, but the method proposed in the mail seems to be the only way
> that can work with all the major vDPA vendors.
> 
> E.g. the new API requires the device to have the ability to receive
> device state other than through the control virtqueue, which might not
> be supported by the hardware. (The device might expect a trap-and-emulate
> model rather than save-and-restore.)
> 
How a given vendor returns the values is up to the vendor-specific vdpa
driver, just like the avail_index, which does not come through the CVQ.
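
For reference, this is roughly the shape of those vendor hooks in
include/linux/vdpa.h at the time (abbreviated excerpt; the full ops
table has many more callbacks):

/* Abbreviated from include/linux/vdpa.h (circa Linux v5.18): the vq
 * state, including the avail index, flows through vendor driver
 * callbacks rather than through the CVQ. */
struct vdpa_vq_state_split {
    u16 avail_index;    /* split-ring last_avail_idx */
};

struct vdpa_vq_state {
    union {
        struct vdpa_vq_state_split split;
        /* ... packed-ring variant elided ... */
    };
};

struct vdpa_config_ops {
    /* ... */
    int (*set_vq_state)(struct vdpa_device *vdev, u16 idx,
                        const struct vdpa_vq_state *state);
    int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
                        struct vdpa_vq_state *state);
    /* ... */
};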

> From QEMU's point of view, it might need to support both models.
> 
> If the device can't do save and restore:
> 
> 1.1) enable cvq
> 1.2) set driver_ok
> 1.3) set device state (MQ, RSS) via control vq
> 1.4) enable TX/RX
> 
> If the device can do save and restore:
> 
> 2.1) set device state (new API for setting MQ,RSS)
> 2.2) enable cvq
> 2.3) enable TX/RX
> 2.4) set driver_ok
> 
> We can start from 1, since it works for all devices, and then add
> support for 2?
> 

How about:
3.1) create the cvq for the supported device
The cvq is not exposed to user space; it stays in the kernel. The vdpa
driver creates it.

3.2) set device state (MQ, RSS) via a user->kernel ioctl()
The vdpa driver internally decides whether to use the cvq or something
else (like the avail index). A purely illustrative sketch of such an
ioctl follows below.

3.3) enable tx/rx
3.4) set driver_ok
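
To make 3.2 concrete, a purely illustrative uapi sketch;
VHOST_VDPA_SET_DEV_STATE and struct vhost_vdpa_dev_state are made-up
names that do not exist in <linux/vhost.h>:

/* HYPOTHETICAL sketch for step 3.2 -- these names are not existing
 * uapi. The vdpa driver receiving this ioctl would internally decide
 * whether to apply the state via its in-kernel cvq or via another
 * vendor-specific channel (like the avail index today). */
#include <linux/ioctl.h>
#include <linux/types.h>
#include <linux/vhost.h>   /* for VHOST_VIRTIO */

struct vhost_vdpa_dev_state {
    __u16 num_queue_pairs;    /* MQ setting */
    __u32 rss_hash_types;     /* RSS setting, abbreviated */
    /* ... indirection table, hash key, etc. ... */
};

#define VHOST_VDPA_SET_DEV_STATE \
    _IOW(VHOST_VIRTIO, 0x80, struct vhost_vdpa_dev_state)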
