
From: Leonardo Bras Soares Passos
Subject: Re: [PATCH v1 0/3] QIOChannel flags + multifd zerocopy
Date: Wed, 1 Sep 2021 16:21:30 -0300

Hello Peter,

On Tue, Aug 31, 2021 at 6:24 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Aug 31, 2021 at 08:02:36AM -0300, Leonardo Bras wrote:
> > Results:
> > So far, the resource usage of __sys_sendmsg() was reduced 15-fold, and the
> > overall migration took 13-18% less time, based on a synthetic workload.
>
> Leo,
>
> Could you share some of the details of your tests?  E.g., what's the
> configuration of your VM for testing?  What's the migration time before/after
> the patchset is applied?  What is the network you're using?
>
> Thanks,
>
> --
> Peter Xu
>

Sure,
- Both the sending and receiving hosts have 128 GB of RAM and a 10 Gbps
network interface.
  - The network interfaces are directly connected to each other.
- The guest has 100 GB of RAM and is started with mem-lock=on and enable-kvm.
- Before migrating, I use a simple application to write a unique, non-zero
value into every guest page, so the migration stream sees no duplicate pages
and no zero pages (rough sketch below).
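
The filler is roughly something along these lines (a simplified sketch for
illustration only; the actual tool isn't part of this series, and it only
stamps the first 8 bytes of each page, which is already enough to defeat
zero-page and duplicate-page detection):

/*
 * Illustrative sketch of a guest-side page filler: touch every page of a
 * large allocation and stamp it with a unique, non-zero value, so the
 * migration stream sees no zero pages and no duplicate pages.
 * Usage (inside the guest): ./fill <bytes>
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    size_t bytes = argc > 1 ? strtoull(argv[1], NULL, 0) : (size_t)1 << 30;
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    uint8_t *buf = malloc(bytes);

    if (!buf) {
        perror("malloc");
        return 1;
    }

    /* One unique 64-bit stamp per page; +1 keeps the first page non-zero. */
    for (size_t off = 0; off + sizeof(uint64_t) <= bytes; off += page) {
        uint64_t stamp = off / page + 1;
        memcpy(buf + off, &stamp, sizeof(stamp));
    }

    printf("filled %zu pages\n", bytes / page);
    pause();    /* keep the memory resident while the migration runs */
    return 0;
}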

In a single test run:

Without zerocopy (qemu/master):
- Migration took 123355 ms, with an average throughput of 6912.58 Mbps
With zerocopy:
- Migration took 108514 ms, with an average throughput of 7858.39 Mbps

This represents a throughput improvement of around 13.6%.
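(As a sanity check on those numbers: going from 123355 ms down to 108514 ms
is about 12% less wall-clock time, which lines up with the ~13.6% higher
average bandwidth.)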

Comparing perf data recorded during a default migration and a zerocopy migration:
Without zerocopy:
- copy_user_generic_string() uses 5.4% of CPU time
- __sys_sendmsg() uses 5.19% of CPU time
With zerocopy:
- copy_user_generic_string() uses 0.02% of CPU time (~1/270 of the original)
- __sys_sendmsg() uses 0.34% of CPU time (~1/15 of the original)
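
(For anyone who wants to reproduce the comparison: sampling the source host
while the migration runs, e.g. perf record -a -g followed by perf report, is
enough to see the relative weight of those two symbols.)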



