qemu-devel

Re: [RFC PATCH v1 1/1] migration: Disable postcopy + multifd migration


From: Peter Xu
Subject: Re: [RFC PATCH v1 1/1] migration: Disable postcopy + multifd migration
Date: Thu, 30 Mar 2023 18:18:38 -0400

On Thu, Mar 30, 2023 at 04:59:09PM +0100, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > On Thu, Mar 30, 2023 at 10:36:11AM -0400, Peter Xu wrote:
> > > On Thu, Mar 30, 2023 at 03:20:14PM +0100, Daniel P. Berrangé wrote:
> > > > On Mon, Mar 27, 2023 at 01:15:18PM -0300, Leonardo Bras wrote:
> > > > > Since the introduction of multifd, it's possible to perform a multifd
> > > > > migration and finish it using postcopy.
> > > > > 
> > > > > A bug introduced by yank (fixed on cfc3bcf373) was previously
> > > > > preventing a successful use of this migration scenario, and now it
> > > > > should be working in most cases.
> > > > > 
> > > > > But since there is not enough testing/support nor any reported
> > > > > users for this scenario, we should disable this combination before
> > > > > it may cause any problems for users.
> > > > 
> > > > Clearly we don't have enough testing, but multifd+postcopy looks
> > > > like a genuinely useful scenario that we should be supporting.
> > > > 
> > > > Every post-copy starts with at least one pre-copy iteration, and
> > > > using multifd for that will be important for big VMs where single
> > > > threaded pre-copy is going to be CPU bound. The greater amount we
> > > > can transfer in the pre-copy phase, the fewer page faults / latency
> > > > spikes postcopy is going to see.
> > > 
> > > If we're using a 1-round precopy + postcopy approach, the amount of
> > > memory to transfer will be the same, i.e. the guest mem size.
> > > 
> > > Multifd will make the round shorter, so there's a better chance of fewer
> > > re-dirtied pages during the iteration, but that effect is limited.  E.g.:
> > > 
> > >   - For a very idle guest, finishing 1st round in 1min or 3min may not
> > >     bring a large difference because most of the pages will be constant
> > >     anyway, or
> > > 
> > >   - For a very busy guest, probably a similar amount of pages will be
> > >     dirtied no matter whether in 1min / 3min.  Multifd will bring a
> > >     benefit here, but the busier the guest, the smaller the effect.
> > 
> > I don't feel like that follows. If we're bottlenecking mostly on CPU
> > but have sufficient network bandwidth, then multifd can be the difference
> > between needing to switch to post-copy or being successful in converging
> > in pre-copy.
> > 
> > IOW, without multifd we can expect 90% of guests will get stuck and need
> > a switch to post-copy, but with multifd 90% of the guest will complete
> > while in precopy mode and only 10% need switch to post-copy. That's good
> > because it means most guests will avoid the increased failure risk and
> > the period of increased page fault latency from post-copy.

Makes sense.  We may need someone to look after that, though.  I am
aware that Juan used to plan doing work in this area.  Juan, have you
started looking into fixing multifd + postcopy (for the current phase, not
for complete support)?  If we're confident that resolving it is easy,
then I think it'll be worthwhile, and this patch may not be needed.

We should always keep in mind though that currently the user can suffer
from weird errors or crashes when using them together, and that's the major
reason Leonardo proposed this patch - we either fix things soon or we
disable them, which also makes sense to me.

I think time has somehow proven that it's non-trivial to fix them soon,
hence this patch.  I'll be more than happy if patches come along to prove
me wrong and fix things up (along with a multifd+postcopy qtest).

> 
> Agreed, although I think Peter's point was that in the cases where you
> know the guests are crazy busy and you're always going to need postcopy,
> it's a bit less of an issue.
> (But still, getting multiple fd's in the postcopy phase is good to
> reduce latency).

Yes, that'll be another story though, IMHO.

When talking about this, I'd guess it'll be easier (and much less code) to
just spawn more preempt threads rather than multifd ones: some of them can
service page faults only, while others just keep dumping pages concurrently
with the migration thread.

It should be easy because all preempt threads on the destination buffer
their reads, so each will be as simple as a wrapper around
ram_load_postcopy().  I think it could naturally just work, but I'll need
to check when we think about it more seriously.

> 
> Dave
> 
> > 
> > > > In terms of migration usage, my personal recommendation to mgmt
> > > > apps would be that they should always enable the post-copy feature
> > > > when starting a migration. Even if they expect to try to get it to
> > > > complete using exclusively pre-copy in the common case, it's useful
> > > > to have the post-copy capability flag enabled as a get-out-of-jail-free
> > > > card, i.e. if migration ends up getting stuck in non-convergence,
> > > > or they have a sudden need to urgently complete the migration it is
> > > > good to be able to flip to post-copy mode.
> > > 
> > > I fully agree.
> > > 
> > > It only needs to stay disabled when the hosts are not capable, e.g., the
> > > dest host may not have the privilege to initiate userfaultfd (since QEMU
> > > postcopy requires kernel fault traps, that's a very likely case).
> > 
> > Sure, the mgmt app (libvirt) should be checking support for userfaultfd
> > on both sides before permitting / trying to enable the feature.
> > 
> > 
> > > > I'd suggest that we instead add a multifd+postcopy test case to
> > > > migration-test.c and tackle any bugs it exposes. By blocking it
> > > > unconditionally we ensure no one will exercise it to expose any
> > > > further bugs.
> > > 
> > > That's doable.  But then we'd better also figure out how to identify the
> > > below two use cases of both features enabled:
> > > 
> > >   a. Enable multifd in precopy only, then switch to postcopy (currently
> > >   mostly working but buggy; I think Juan can provide more information
> > >   here; at least we need to rework the multifd flush when switching, and
> > >   test over and over to make sure there's nothing else missing).
> > > 
> > >   b. Enable multifd in both precopy and postcopy phase (currently
> > >   definitely not supported)
> > > 
> > > So that the mgmt app will be aware of whether multifd will be enabled in
> > > postcopy or not.  Currently we can't identify it.
> > > 
> > > I assume we can say by default "multifd+postcopy" means a) above, but we
> > > need to document it, and when b) is wanted and implemented someday, we'll
> > > need some other flag/cap for it.
> > 
> > As I've mentioned a few times, I think we need to throw away the idea
> > of exposing capabilities that mgmt apps need to learn about, and make
> > the migration protocol fully bi-directional so src + dst QEMU can
> > directly negotiate features. Apps shouldn't have to care about the
> > day-to-day improvements in the migration impl to the extent that they
> > are today.

I agree that setting the same caps on both sides is ugly, but isn't this a
separate problem?  We should still allow the user to choose (no matter
whether that applies to the src only, or to both sides).

To be explicit, I am thinking that even if full multifd+postcopy support
gets implemented, for some reason the user may still want to use multifd
only during precopy but not postcopy.  I'm afraid automatically choosing
the latest supported behavior may not always work for the user, for
whatever reason, and could have other implications here.

In short, IMHO it's an ABI breakage if the user enables both features and
then the behavior changes after upgrading to a QEMU with full
multifd+postcopy support added.
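For reference, this is how a mgmt app enables both capabilities today over
QMP (standard migrate-set-capabilities command; whether multifd then also
covers the postcopy phase is exactly the ambiguity above):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": {
    "capabilities": [
      { "capability": "multifd",      "state": true },
      { "capability": "postcopy-ram", "state": true }
    ]
  }
}
```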

Thanks,

-- 
Peter Xu



