qemu-devel

From: Jason Gunthorpe
Subject: Re: [PATCH 4/9] vfio/migration: Skip pre-copy if dirty page tracking is not supported
Date: Tue, 17 May 2022 14:39:37 -0300

On Tue, May 17, 2022 at 11:22:32AM -0600, Alex Williamson wrote:

> > > It seems like a better solution would be to expose to management
> > > tools that the VM contains a device that does not support the
> > > pre-copy phase so that downtime expectations can be adjusted.  
> > 
> > I don't expect this to be a real use case though..
> > 
> > Remember, you asked for this patch when you wanted qemu to have good
> > behavior when kernel support for legacy dirty tracking is removed
> > before we merge v2 support.
> 
> Is wanting good behavior a controversial point?  Did we define this as
> the desired good behavior?  Ref?  

As I said, this was intended as a NOP, which is what I thought we
agreed to. Missing the SLA checking that existed before seems like
something to fix in this patch. This is the discussion thread:

https://lore.kernel.org/kvm/20220324231159.GA11336@nvidia.com/

 "I guess I was assuming that enabling v2 migration in QEMU was dependent
  on the existing type1 dirty tracking because it's the only means we
  have to tell QEMU that all memory is perpetually dirty when we have a
  DMA device.  Is that not correct?"

The only point was to prepare qemu for kernels that don't support the
legacy dirty tracking API but do have a v2 migration interface. IIRC
something undesired happens if you do that right now.

We could also just dirty all memory in qemu and keep it exactly the
same so every SLA detail works. Or completely block pre-copy based
flows.

It is not intended to be a useful configuration; this is just covering
off backwards compat issues - so I'm not keen to do a bunch of
management work to support it.

Thanks,
Jason


