From: Eugenio Perez Martin
Subject: Re: [PATCH v4 12/15] vdpa: block migration if device has unsupported features
Date: Fri, 3 Mar 2023 09:58:14 +0100

On Fri, Mar 3, 2023 at 4:48 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> > On 2023/3/2 03:32, Eugenio Perez Martin wrote:
> > On Mon, Feb 27, 2023 at 9:20 AM Jason Wang <jasowang@redhat.com> wrote:
> >> On Mon, Feb 27, 2023 at 4:15 PM Jason Wang <jasowang@redhat.com> wrote:
> >>>
> >>> On 2023/2/24 23:54, Eugenio Pérez wrote:
> >>>> A vdpa net device must be initialized with SVQ in order to be migratable
> >>>> at this moment, and the initialization code verifies some conditions.  If
> >>>> the device is not initialized with the x-svq parameter, it will not expose
> >>>> _F_LOG, so the vhost subsystem will block VM migration from its
> >>>> initialization.
> >>>>
> >>>> Next patches change this, so we need to verify migration conditions
> >>>> differently.
> >>>>
> >>>> QEMU only supports a subset of net features in SVQ, and it cannot
> >>>> migrate state that it cannot track or restore in the destination.  Add a
> >>>> migration blocker if the device offers an unsupported feature.
> >>>>
> >>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >>>> ---
> >>>> v3: add migration blocker properly so vhost_dev can handle it.
> >>>> ---
> >>>>    net/vhost-vdpa.c | 12 ++++++++----
> >>>>    1 file changed, 8 insertions(+), 4 deletions(-)
> >>>>
> >>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> >>>> index 4f983df000..094dc1c2d0 100644
> >>>> --- a/net/vhost-vdpa.c
> >>>> +++ b/net/vhost-vdpa.c
> >>>> @@ -795,7 +795,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >>>>                                           int nvqs,
> >>>>                                           bool is_datapath,
> >>>>                                           bool svq,
> >>>> -                                       struct vhost_vdpa_iova_range iova_range)
> >>>> +                                       struct vhost_vdpa_iova_range iova_range,
> >>>> +                                       uint64_t features)
> >>>>    {
> >>>>        NetClientState *nc = NULL;
> >>>>        VhostVDPAState *s;
> >>>> @@ -818,7 +819,10 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> >>>>        s->vhost_vdpa.shadow_vqs_enabled = svq;
> >>>>        s->vhost_vdpa.iova_range = iova_range;
> >>>>        s->vhost_vdpa.shadow_data = svq;
> >>>> -    if (!is_datapath) {
> >>>> +    if (queue_pair_index == 0) {
> >>>> +        vhost_vdpa_net_valid_svq_features(features,
> >>>> +                                          &s->vhost_vdpa.migration_blocker);
> >>>
> >>> Since we do validation at initialization, is this necessary to valid
> >>> once again in other places?
> >> Ok, after reading patch 13, I think the question is:
> >>
> >> The validation seems to be independent to net, can we valid it once
> >> during vhost_vdpa_init()?
> >>
> > vhost_vdpa_net_valid_svq_features also checks for net features. In
> > particular, all the non-transport features must be in
> > vdpa_svq_device_features.
> >
> > This is how we ensure that the device / guest will never negotiate
> > things like VLAN filtering support, as SVQ still does not know how to
> > restore it at the destination.
> >
> > In the VLAN filtering case, CVQ is needed to restore the VLAN state, so
> > it is covered by patch 11/15. But other future features may need
> > support for restoring them in the destination.
>
>
> I wonder how hard it would be to have general validation code and let
> net-specific code advertise a blacklist, to avoid code duplication.
>

A blacklist does not work here, because I don't know whether SVQ will
need changes for future feature bits that are not yet in, or only
proposed to, the standard.
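
To make this concrete, below is a minimal standalone sketch of the
allow-list approach that vhost_vdpa_net_valid_svq_features effectively
takes: every non-transport feature bit the device offers must be in an
explicit allow-list, and anything else becomes a reason to block
migration. The bit positions, the transport mask and all identifiers in
the sketch are illustrative placeholders, not the real virtio values or
the actual QEMU code.

/*
 * Minimal standalone sketch of the allow-list check discussed above; it is
 * NOT the actual vhost_vdpa_net_valid_svq_features() from net/vhost-vdpa.c.
 * The feature bit positions and the transport mask are made-up placeholders.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical allow-list: net features SVQ knows how to track/restore. */
#define SVQ_SUPPORTED_NET_FEATURES  ((UINT64_C(1) << 0) | (UINT64_C(1) << 5))
/* Hypothetical mask for the transport feature bits, which are always accepted. */
#define TRANSPORT_FEATURES_MASK     (UINT64_C(0xff) << 24)

/* Return true if every offered non-transport feature is allow-listed. */
static bool svq_features_valid(uint64_t dev_features, uint64_t *invalid)
{
    *invalid = dev_features & ~TRANSPORT_FEATURES_MASK
                            & ~SVQ_SUPPORTED_NET_FEATURES;
    return *invalid == 0;
}

int main(void)
{
    /* Device offers one allow-listed bit plus one unknown bit (bit 9). */
    uint64_t offered = (UINT64_C(1) << 0) | (UINT64_C(1) << 9);
    uint64_t invalid;

    if (!svq_features_valid(offered, &invalid)) {
        /* In QEMU this is where the migration blocker would be filled in. */
        printf("unsupported feature bits 0x%" PRIx64 ": block migration\n",
               invalid);
    }
    return 0;
}

The point over a blacklist is that an unknown future feature bit is
rejected by default, instead of silently migrating state that SVQ may
not be able to restore at the destination.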

Regarding the code duplication, do you mean to validate transport
features and net-specific features in one shot, instead of having a
dedicated function for the SVQ transport features?

Thanks!

> Thanks
>
>
> >
> > Thanks!
> >
> >> Thanks
> >>
> >>> Thanks
> >>>
> >>>
> >>>> +    } else if (!is_datapath) {
> >>>>            s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
> >>>>                                                  vhost_vdpa_net_cvq_cmd_page_len());
> >>>>            memset(s->cvq_cmd_out_buffer, 0, vhost_vdpa_net_cvq_cmd_page_len());
> >>>> @@ -956,7 +960,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >>>>        for (i = 0; i < queue_pairs; i++) {
> >>>>            ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> >>>>                                         vdpa_device_fd, i, 2, true, opts->x_svq,
> >>>> -                                     iova_range);
> >>>> +                                     iova_range, features);
> >>>>            if (!ncs[i])
> >>>>                goto err;
> >>>>        }
> >>>> @@ -964,7 +968,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> >>>>        if (has_cvq) {
> >>>>            nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> >>>>                                     vdpa_device_fd, i, 1, false,
> >>>> -                                 opts->x_svq, iova_range);
> >>>> +                                 opts->x_svq, iova_range, features);
> >>>>            if (!nc)
> >>>>                goto err;
> >>>>        }
>



