
Re: [PATCH v5 00/21] migration: Postcopy Preemption


From: Peter Xu
Subject: Re: [PATCH v5 00/21] migration: Postcopy Preemption
Date: Mon, 16 May 2022 12:06:18 -0400

On Mon, May 16, 2022 at 08:55:50PM +0530, manish.mishra wrote:
> 
> On 26/04/22 5:08 am, Peter Xu wrote:
> > This is v5 of postcopy preempt series.  It can also be found here:
> > 
> >    https://github.com/xzpeter/qemu/tree/postcopy-preempt
> > 
> > RFC: https://lore.kernel.org/qemu-devel/20220119080929.39485-1-peterx@redhat.com
> > V1:  https://lore.kernel.org/qemu-devel/20220216062809.57179-1-peterx@redhat.com
> > V2:  https://lore.kernel.org/qemu-devel/20220301083925.33483-1-peterx@redhat.com
> > V3:  https://lore.kernel.org/qemu-devel/20220330213908.26608-1-peterx@redhat.com
> > V4:  https://lore.kernel.org/qemu-devel/20220331150857.74406-1-peterx@redhat.com
> > 
> > v4->v5 changelog:
> > - Fixed all checkpatch.pl warnings
> > - Picked up leftover patches from Dan's tls test case series:
> >   https://lore.kernel.org/qemu-devel/20220310171821.3724080-1-berrange@redhat.com/
> > - Rebased to v7.0.0 tag, collected more R-bs from Dave/Dan
> > - In migrate_fd_cleanup(), use g_clear_pointer() for s->hostname [Dan]
> > - Mark postcopy-preempt capability for 7.1 not 7.0 [Dan]
> > - Moved migrate_channel_requires_tls() into tls.[ch] [Dan]
> > - Mention the bug-fixing side effect of patch "migration: Export
> >    tls-[creds|hostname|authz] params to cmdline too" on tls_authz [Dan]
> > - Use g_autoptr where proper [Dan]
> > - Drop a few (probably over-cautious) asserts on local_err being set [Dan]
> > - Quite a few renamings in the qtest in the last few test patches [Dan]
> > 
> > Abstract
> > ========
> > 
> > This series contains two parts now:
> > 
> >    (1) Leftover patches from Dan's tls unit tests v2, which is the first half
> >    (2) Leftover patches from my postcopy preempt v4, which is the second half
> > 
> > This series adds a new migration capability called "postcopy-preempt".
> > It can be enabled when postcopy is enabled, and it'll simply (but
> > greatly) speed up the postcopy page request handling process.
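> > 
> > As a rough illustration (a sketch, not part of the patches themselves),
> > the capability is meant to be enabled on both sides before starting the
> > migration, e.g. from the HMP monitor; the destination URI below is a
> > placeholder:
> > 
> >     (qemu) migrate_set_capability postcopy-ram on
> >     (qemu) migrate_set_capability postcopy-preempt on
> >     (qemu) migrate -d tcp:192.168.1.2:4444
> >     (qemu) migrate_start_postcopy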
> > 
> > Below are some initial postcopy page request latency measurements with
> > the new series applied (latencies measured with the bpf script in [1]).
> > 
> > For each page size, I measured page request latency for three cases:
> > 
> >    (a) Vanilla:                the old postcopy
> >    (b) Preempt no-break-huge:  preempt enabled, x-postcopy-preempt-break-huge=off
> >    (c) Preempt full:           preempt enabled, x-postcopy-preempt-break-huge=on
> >                                (this is the default option when preempt enabled)
> > 
> > The x-postcopy-preempt-break-huge parameter was added in v2 to
> > conditionally disable, for debugging purposes, the behavior of breaking
> > off a huge page that precopy is in the middle of sending.  When it's
> > off, postcopy will not preempt precopy while it sends a huge page, but
> > postcopy will still use its own channel.
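> > 
> > As a sketch of how this debugging knob can be flipped (assuming it is
> > exposed as an experimental property on the migration object, like other
> > x- prefixed knobs; the command line below is hypothetical):
> > 
> >     # disable break-huge while keeping the preempt channel enabled
> >     qemu-system-x86_64 ... \
> >         -global migration.x-postcopy-preempt-break-huge=off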
> > 
> > I tested the two knobs separately to give a rough idea of how much each
> > part of the change helped.  The overall benefit is the comparison
> > between cases (a) and (c).
> > 
> >    |-----------+---------+-----------------------+--------------|
> >    | Page size | Vanilla | Preempt no-break-huge | Preempt full |
> >    |-----------+---------+-----------------------+--------------|
> >    | 4K        |   10.68 |               N/A [*] |         0.57 |
> >    | 2M        |   10.58 |                  5.49 |         5.02 |
> >    | 1G        | 2046.65 |               933.185 |      649.445 |
> >    |-----------+---------+-----------------------+--------------|
> >    [*]: This case is N/A because 4K pages do not involve huge pages at all
> > 
> > [1] 
> > https://github.com/xzpeter/small-stuffs/blob/master/tools/huge_vm/uffd-latency.bpf
> 
> Hi Peter, I just wanted to understand what setup was used for these
> experiments (number of vCPUs, workload, network bandwidth) so that I can
> make sense of these

40 vCPUs, 20GB of memory; the workload is mig_mon running a single-threaded
dirtying workload:

https://github.com/xzpeter/mig_mon

The network is a single 10Gbps port.

Another thing to mention is that all these numbers are average page
latencies.

> 
> numbers. Also, I could not understand the reason for such a large
> difference between Preempt full and Preempt no-break-huge, especially for
> the 1G case, so please share a little more detail on this.

The break-huge change covers the case where the requested page arrives while
one huge page is being sent on the precopy channel: we can halt sending that
page and jump quickly to the postcopy channel to send the huge page there
instead.  With the no-break-huge option we still use the separate channel,
but we won't start sending the requested page (via the postcopy preempt
channel) until precopy finishes sending the current huge page.
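
To make the distinction concrete, here is a minimal C sketch of the decision
(hypothetical names, not the actual QEMU code): the precopy sender checks,
between chunks of a huge page, whether an urgent postcopy request justifies
abandoning the rest of the page.

    #include <stdbool.h>

    /* Hypothetical sender-side state, not QEMU's real RAMState. */
    typedef struct {
        bool huge_page_in_flight;    /* midway through sending a huge page */
        bool break_huge_enabled;     /* x-postcopy-preempt-break-huge */
        bool urgent_request_pending; /* page fault queued from destination */
    } SenderState;

    /*
     * Checked between chunks while sending a huge page on the precopy
     * channel.  If it returns true, the sender abandons the rest of the
     * huge page so the preempt channel can service the fault right away;
     * otherwise the fault waits until the whole huge page is sent.
     */
    static bool should_break_huge_page(const SenderState *s)
    {
        return s->huge_page_in_flight &&
               s->break_huge_enabled &&
               s->urgent_request_pending;
    }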

Please don't put too much trust in the 1G case, because the samples are very
limited (a total of 20 pages, and my test memory was only around 10+GB; I
forget the exact size).

Thanks,

-- 
Peter Xu



