Re: [RFC PATCH v1 1/1] migration: Disable postcopy + multifd migration


From: Dr. David Alan Gilbert
Subject: Re: [RFC PATCH v1 1/1] migration: Disable postcopy + multifd migration
Date: Thu, 30 Mar 2023 16:59:09 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Thu, Mar 30, 2023 at 10:36:11AM -0400, Peter Xu wrote:
> > On Thu, Mar 30, 2023 at 03:20:14PM +0100, Daniel P. Berrangé wrote:
> > > On Mon, Mar 27, 2023 at 01:15:18PM -0300, Leonardo Bras wrote:
> > > > Since the introduction of multifd, it's possible to perform a multifd
> > > > migration and finish it using postcopy.
> > > > 
> > > > A bug introduced by yank (fixed in cfc3bcf373) was previously preventing
> > > > successful use of this migration scenario; it should now be working
> > > > in most cases.
> > > > 
> > > > But since there is not enough testing/support, nor any reported users of
> > > > this scenario, we should disable this combination before it causes any
> > > > problems for users.
> > > 
> > > Clearly we don't have enough testing, but multifd+postcopy looks
> > > like a genuinely useful scenario that we should be supporting.
> > > 
> > > Every post-copy starts with at least one pre-copy iteration, and
> > > using multifd for that will be important for big VMs where
> > > single-threaded pre-copy is going to be CPU bound. The more we can
> > > transfer in the pre-copy phase, the fewer page faults / latency
> > > spikes postcopy is going to see.
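> > > 
> > > As a concrete sketch, enabling multifd for the pre-copy phase is
> > > just a capability plus a channel count before starting the
> > > migration (illustrative QMP only; the channel count and URI here
> > > are made-up values):
> > > 
> > >   {"execute": "migrate-set-capabilities", "arguments":
> > >       {"capabilities": [{"capability": "multifd", "state": true}]}}
> > >   {"execute": "migrate-set-parameters", "arguments":
> > >       {"multifd-channels": 8}}
> > >   {"execute": "migrate", "arguments": {"uri": "tcp:dst-host:4444"}}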
> > 
> > If we're using a 1-round precopy + postcopy approach, the amount of
> > memory transferred will be the same either way: the guest mem size.
> > 
> > Multifd will make the round shorter, giving a better chance of fewer
> > re-dirtied pages during the iteration, but that effect is limited.  E.g.:
> > 
> >   - For a very idle guest, finishing the 1st round in 1min or 3min may
> >     not make a large difference, because most of the pages will be
> >     constant anyway, or
> > 
> >   - For a very busy guest, a similar amount of pages will probably be
> >     dirtied whether the round takes 1min or 3min.  Multifd will bring a
> >     benefit here, but the busier the guest, the smaller the effect
> >     (rough numbers below).
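> > 
> > To illustrate with made-up figures: if a guest re-dirties a ~4G
> > working set every ~10s, then essentially the whole 4G is dirty again
> > by the end of the round whether that round took 1min or 3min, so the
> > shorter round barely shrinks what's left for postcopy.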
> 
> I don't feel like that follows. If we're bottlenecked mostly on CPU
> but have sufficient network bandwidth, then multifd can be the
> difference between needing to switch to post-copy and successfully
> converging in pre-copy.
> 
> IOW, without multifd we can expect 90% of guests will get stuck and need
> a switch to post-copy, but with multifd 90% of guests will complete
> while in precopy mode and only 10% need to switch to post-copy. That's
> good because it means most guests will avoid the increased failure risk
> and the period of increased page fault latency from post-copy.

Agreed, although I think Peter's point was that in the cases where you
know the guests are crazy busy and you're always going to need postcopy,
it's a bit less of an issue.
(But still, getting multiple fd's in the postcopy phase is good to
reduce latency).

Dave

> 
> > > In terms of migration usage, my personal recommendation to mgmt
> > > apps would be that they should always enable the post-copy feature
> > > when starting a migration. Even if they expect to try to get it to
> > > complete using exclusively pre-copy in the common case, it's useful
> > > to have the post-copy capability flag enabled as a
> > > get-out-of-jail-free card: if migration ends up getting stuck in
> > > non-convergence, or they have a sudden need to urgently complete
> > > the migration, it is good to be able to flip to post-copy mode.
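> > > 
> > > A sketch of that flow in QMP (the URI is made up; when to flip to
> > > post-copy is the mgmt app's own policy):
> > > 
> > >   {"execute": "migrate-set-capabilities", "arguments":
> > >       {"capabilities": [{"capability": "postcopy-ram", "state": true}]}}
> > >   {"execute": "migrate", "arguments": {"uri": "tcp:dst-host:4444"}}
> > >   ...migration not converging / needs to finish urgently...
> > >   {"execute": "migrate-start-postcopy"}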
> > 
> > I fully agree.
> > 
> > It should only be left disabled when the hosts are not capable of it,
> > e.g., the dest host may not have the privilege to initiate userfaultfd
> > (QEMU postcopy requires kernel fault traps, so that's quite likely).
> 
> Sure, the mgmt app (libvirt) should be checking support for userfaultfd
> on both sides before permitting / trying to enable the feature.
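> 
> The basic probe is just attempting the syscall (a minimal C sketch,
> not what libvirt literally runs; note unprivileged use can also be
> gated by /proc/sys/vm/unprivileged_userfaultfd):
> 
>   #include <fcntl.h>
>   #include <sys/syscall.h>
>   #include <unistd.h>
> 
>   /* Return 0 if this process can create a userfaultfd. */
>   static int probe_userfaultfd(void)
>   {
>       long fd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
>       if (fd < 0) {
>           return -1;  /* e.g. ENOSYS (no kernel support) or EPERM */
>       }
>       close(fd);
>       return 0;
>   }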
> 
> 
> > > I'd suggest that we instead add a multifd+postcopy test case to
> > > migration-test.c and tackle any bugs it exposes. By blocking it
> > > unconditionally we ensure no one will exercise it to expose any
> > > further bugs.
> > 
> > That's doable.  But then we'd better also figure out how to identify
> > the two use cases below where both features are enabled:
> > 
> >   a. Enable multifd in precopy only, then switch to postcopy (currently
> >   mostly working but buggy; I think Juan can provide more information
> >   here; at least we need to rework the multifd flush when switching, and
> >   test over and over to make sure there's nothing else missing).
> > 
> >   b. Enable multifd in both the precopy and postcopy phases (currently
> >   definitely not supported).
> > 
> > So that the mgmt app will be aware of whether multifd will be enabled
> > in postcopy or not.  Currently we can't identify it.
> > 
> > I assume we can say by default "multifd+postcopy" means a) above, but we
> > need to document it, and when b) is wanted and implemented someday, we'll
> > need some other flag/cap for it.
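> > 
> > Today the flat capability list is all the mgmt app can see, e.g.
> > (abridged output):
> > 
> >   {"execute": "query-migrate-capabilities"}
> >   {"return": [..., {"capability": "multifd", "state": true},
> >               {"capability": "postcopy-ram", "state": true}, ...]}
> > 
> > and that can't distinguish a) from b).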
> 
> As I've mentioned a few times, I think we need to throw away the idea
> of exposing capabilities that mgmt apps need to learn about, and make
> the migration protocol fully bi-directional so src + dst QEMU can
> directly negotiate features. Apps shouldn't have to care about
> day-to-day improvements in the migration impl to the extent that they
> do today.
> 
> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



