From: Jason Wang
Subject: Re: [PATCH for 9.0 08/12] vdpa: add vhost_vdpa_load_setup
Date: Wed, 20 Dec 2023 13:21:05 +0800
On Sat, Dec 16, 2023 at 1:28 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Callers can use this function to set up the incoming migration thread.
>
> This thread is able to map the guest memory while the migration is
> ongoing, without blocking QMP or other important tasks. While this
> allows the destination QEMU not to block, it stretches the mapping
> work across the migration instead of completing it pre-migration.
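Just to check my reading, I assume the setup looks roughly like this
(a sketch only; vdpa_map_thread and the map_thread/helper names are my
approximation, not the actual patch):

#include "qemu/osdep.h"
#include "qemu/thread.h"

/* Background thread that performs all the DMA maps off the main
 * loop, overlapping them with the migration stream transfer. */
static void *vdpa_map_thread(void *opaque)
{
    VhostVDPAShared *s = opaque;

    vdpa_map_all_guest_memory(s);   /* walk and map each RAM section */
    return NULL;
}

int vhost_vdpa_load_setup(VhostVDPAShared *s)
{
    /* Joinable, so device start can wait for pending maps. */
    qemu_thread_create(&s->map_thread, "vdpa-map", vdpa_map_thread,
                       s, QEMU_THREAD_JOINABLE);
    return 0;
}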
If it's just QMP, can we simply use a bh with a quota here?
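Something like the below (also just a sketch; MAP_QUOTA and the
vdpa_map_next_region()/vdpa_has_pending_regions() helpers are
invented for the example):

#include "qemu/osdep.h"
#include "qemu/main-loop.h"

#define MAP_QUOTA 16    /* max regions mapped per bh invocation */

static void vdpa_map_bh(void *opaque)
{
    VhostVDPAShared *s = opaque;
    int budget = MAP_QUOTA;

    /* Do a bounded amount of mapping, then return to the main loop
     * so QMP and other handlers can run in between. */
    while (budget-- > 0 && vdpa_map_next_region(s)) {
    }

    if (vdpa_has_pending_regions(s)) {
        qemu_bh_schedule(s->map_bh);    /* continue on a later pass */
    }
}

static void vdpa_start_mapping(VhostVDPAShared *s)
{
    s->map_bh = qemu_bh_new(vdpa_map_bh, s);
    qemu_bh_schedule(s->map_bh);
}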
Btw, have you measured the hotspot that causes such slowness? Is it
the pinning or the vendor-specific mapping that slows down the
progress? And does VFIO have a similar issue?
>
> This thread is joined at vdpa backend device start, so it could
> happen that the guest memory is so large that we still have guest
> memory left to map by that time.
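I.e. the start path presumably ends with something like this, and the
join below is where a huge guest would still make us wait (sketch;
map_thread_started is an invented flag):

/* Continuing the sketch above: called from vhost-vdpa device start. */
static void vhost_vdpa_join_maps(VhostVDPAShared *s)
{
    if (s->map_thread_started) {
        qemu_thread_join(&s->map_thread);   /* blocks until maps end */
        s->map_thread_started = false;
    }
}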
So we would still hit the QMP stall in this case?
> This can be improved in later iterations, when the
> destination device can inform QEMU that it is not ready to complete the
> migration.
>
> If the device is not started, the cleanup of the mapped memory is
> done at .load_cleanup. This is far from ideal, as the destination
> machine has mapped all the guest RAM for nothing, and now it needs
> to unmap it. However, we don't have information about the state of
> the device, so it's the best we can do. Once iterative migration is
> supported, this will improve, as we will know the virtio state of
> the device.
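For the cleanup path described here, I picture roughly the following
(sketch; device_started and vdpa_unmap_all_guest_memory() are
invented names):

static int vhost_vdpa_load_cleanup(void *opaque)
{
    VhostVDPAShared *s = opaque;

    vhost_vdpa_join_maps(s);        /* wait for any in-flight maps */
    if (!s->device_started) {
        /* Device never started: every map above was for nothing
         * and has to be torn down again here. */
        vdpa_unmap_all_guest_memory(s);
    }
    return 0;
}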
>
> If the VM migrates before finishing all the maps, the source will
> stop but the destination will still not be ready to continue, and it
> will wait until all guest RAM is mapped. This is still an improvement
> over doing all the mapping when the migration finishes, but later
> patches use the switchover_ack method to prevent the source from
> stopping until all the memory is mapped at the destination.
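For reference, the switchover_ack flow mentioned above would look
something like this (qemu_loadvm_approve_switchover() is the existing
migration helper; the handler wiring is sketched):

/* The device declares it wants to delay switchover... */
static bool vhost_vdpa_switchover_ack_needed(void *opaque)
{
    return true;    /* destination must ack before the source stops */
}

/* ...and acks only once the mapping thread has finished, so the
 * source keeps running until all memory is mapped here. */
static void vdpa_maps_done(VhostVDPAShared *s)
{
    qemu_loadvm_approve_switchover();
}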
>
> Unmapping the memory when the device has not been started is awkward
> too, as ideally nothing would have been mapped in that case. This can
> be fixed when we migrate the device state iteratively and know for
> sure whether the device is started or not. At the moment we don't
> have that information, so there is no better alternative.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>
> ---
Thanks