
From: Zheng Chuan
Subject: Re: [RFC PATCH 0/8] *** A Method for evaluating dirty page rate ***
Date: Thu, 6 Aug 2020 15:36:14 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0


On 2020/8/5 0:19, Dr. David Alan Gilbert wrote:
> * Chuan Zheng (zhengchuan@huawei.com) wrote:
>> From: Zheng Chuan <zhengchuan@huawei.com>
> 
> Hi,
> 
>> Sometimes it is necessary to evaluate the dirty page rate before migration.
>> Users can then decide whether to proceed with the migration based on that
>> evaluation, avoiding VM performance loss under a heavy workload.
>> Unlike simulating a dirty-log sync, which could hurt the running VM,
>> we provide a sample-hash method that compares hash results for sampled pages.
>> In this way, it has hardly any impact on VM performance.
>>
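To make the idea above concrete, here is a rough standalone sketch of the
sample-hash approach. The sample size, the test memory size and the hash
function (FNV-1a) are illustrative assumptions, not the code from this series:

/*
 * Minimal standalone sketch of the sample-hash idea (illustrative only,
 * not the code from this series): hash a random sample of pages, wait
 * for the sample period, re-hash the same pages, and scale the changed
 * fraction up to a dirty rate for the whole region.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE    4096
#define SAMPLE_PAGES 512    /* pages sampled per region (assumption) */

/* FNV-1a, a stand-in for whatever hash the real series uses. */
static uint64_t hash_page(const uint8_t *page)
{
    uint64_t h = 0xcbf29ce484222325ULL;

    for (size_t i = 0; i < PAGE_SIZE; i++) {
        h ^= page[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Estimate the dirty rate (MB/s) of 'ram' over 'sample_seconds'. */
static double estimate_dirty_rate(uint8_t *ram, size_t ram_pages,
                                  unsigned sample_seconds)
{
    size_t idx[SAMPLE_PAGES];
    uint64_t before[SAMPLE_PAGES];
    size_t dirty = 0;

    for (size_t i = 0; i < SAMPLE_PAGES; i++) {
        idx[i] = (size_t)rand() % ram_pages;            /* random sample */
        before[i] = hash_page(ram + idx[i] * PAGE_SIZE);
    }

    sleep(sample_seconds);                              /* sample period */

    for (size_t i = 0; i < SAMPLE_PAGES; i++) {
        if (hash_page(ram + idx[i] * PAGE_SIZE) != before[i]) {
            dirty++;
        }
    }

    /* Scale the sampled dirty fraction up to the whole region. */
    double dirty_pages = (double)dirty * ram_pages / SAMPLE_PAGES;

    return dirty_pages * PAGE_SIZE / (1024.0 * 1024.0) / sample_seconds;
}

int main(void)
{
    size_t ram_pages = 262144;  /* 1 GiB of anonymous test memory */
    uint8_t *ram = calloc(ram_pages, PAGE_SIZE);

    if (!ram) {
        return 1;
    }
    printf("estimated dirty rate: %.1f MB/s\n",
           estimate_dirty_rate(ram, ram_pages, 1));
    free(ram);
    return 0;
}

The real patches of course work per-RAMBlock inside QEMU rather than on a
test buffer; this is only meant to show why the sampling has almost no
impact on the guest.
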
>> We evaluated the dirty page rate on a running VM.
>> The VM specifications for migration are as follows:
>> - the VM uses 4K pages;
>> - the number of VCPUs is 32;
>> - the total memory is 32 GB;
>> - the 'mempress' tool is used to pressurize the VM (mempress 4096 1024);
>>
>> ++++++++++++++++++++++++++++++++++++++++++
>> |                      |    dirtyrate    |
>> ++++++++++++++++++++++++++++++++++++++++++
>> | no mempress          |     4MB/s       |
>> ++++++++++++++++++++++++++++++++++++++++++
>> | mempress 4096 1024   |    1204MB/s     |
>> ++++++++++++++++++++++++++++++++++++++++++
>> | mempress 4096 4096   |    4000MB/s     |
>> ++++++++++++++++++++++++++++++++++++++++++
> 
> This is quite neat; I know we've got other people who have asked
> for a similar feature!
> Have you tried to validate these numbers against a real migration - e.g.
> try setting mempress to dirty just under 1GByte/s and see if you can
> migrate it over a 10Gbps link?
> 
> Dave
> 
Hi, Dave.
Thank you for your review.

Note that the original intention is to evaluate the dirty rate before migration.

However, I tested the dirty rate against a real migration over a 10Gbps link
with various mempress workloads, as shown below:
++++++++++++++++++++++++++++++++++++++++++
|                      |    dirtyrate    |
++++++++++++++++++++++++++++++++++++++++++
| no mempress          |     8MB/s       |
++++++++++++++++++++++++++++++++++++++++++
| mempress 4096 1024   |    1188MB/s     |
++++++++++++++++++++++++++++++++++++++++++

It still looks close to the actual dirty rate :)

Test results against a real migration will be posted in V2.

>> Test the dirty rate via QMP commands like this:
>> 1.  virsh qemu-monitor-command [vmname] '{"execute":"cal_dirty_rate", "arguments": {"value": [sampletime]}}'
>> 2.  virsh qemu-monitor-command [vmname] '{"execute":"get_dirty_rate"}'
>>
>> Further, test the dirty rate via the libvirt API like this:
>> virsh getdirtyrate [vmname] [sampletime]
>>
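The split into two commands exists because the calculation runs in a
separate thread (see patch 1, get_dirtyrate_thread()) rather than blocking
the monitor for the whole sample time. Below is a rough, self-contained
illustration of that control flow only, using plain pthreads instead of
QEMU's thread/QAPI code; the function name mirrors the patch title, but the
measurement body is just a placeholder:

/*
 * Rough illustration of the two-step flow: "cal_dirty_rate" kicks off a
 * measurement thread with the given sample time, and "get_dirty_rate"
 * later reports whatever result that thread stored.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static double last_dirty_rate = -1.0;   /* MB/s; -1 means "not measured yet" */

/* Stand-in for the sampling/hashing work sketched earlier (assumption). */
static double measure_dirty_rate(unsigned sample_seconds)
{
    sleep(sample_seconds);
    return 0.0;
}

static void *get_dirtyrate_thread(void *arg)
{
    unsigned sample_seconds = *(unsigned *)arg;

    last_dirty_rate = measure_dirty_rate(sample_seconds);
    return NULL;
}

int main(void)
{
    pthread_t thread;
    unsigned sample_seconds = 1;

    /* "cal_dirty_rate": start the measurement without blocking the caller. */
    pthread_create(&thread, NULL, get_dirtyrate_thread, &sample_seconds);

    /* ... the caller stays responsive while the sample period elapses ... */
    pthread_join(thread, NULL);

    /* "get_dirty_rate": report the last stored result. */
    printf("dirty rate: %.1f MB/s\n", last_dirty_rate);
    return 0;
}
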
>> Zheng Chuan (8):
>>   migration/dirtyrate: Add get_dirtyrate_thread() function
>>   migration/dirtyrate: Add block_dirty_info to store dirtypage info
>>   migration/dirtyrate: Add dirtyrate statistics series functions
>>   migration/dirtyrate: Record hash results for each ramblock
>>   migration/dirtyrate: Compare hash results for recorded ramblock
>>   migration/dirtyrate: Implement get_sample_gap_period() and
>>     block_sample_gap_period()
>>   migration/dirtyrate: Implement calculate_dirtyrate() function
>>   migration/dirtyrate: Implement
>>     qmp_cal_dirty_rate()/qmp_get_dirty_rate() function
>>
>>  migration/Makefile.objs |   1 +
>>  migration/dirtyrate.c   | 424 ++++++++++++++++++++++++++++++++++++++++++++++++
>>  migration/dirtyrate.h   |  67 ++++++++
>>  qapi/migration.json     |  24 +++
>>  qapi/pragma.json        |   3 +-
>>  5 files changed, 518 insertions(+), 1 deletion(-)
>>  create mode 100644 migration/dirtyrate.c
>>  create mode 100644 migration/dirtyrate.h
>>
>> -- 
>> 1.8.3.1
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> .
> 



