Re: [PATCH v3 00/14] File-based migration support and fixed-ram features


From: Claudio Fontana
Subject: Re: [PATCH v3 00/14] File-based migration support and fixed-ram features
Date: Mon, 20 Mar 2023 12:14:53 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.4.0

(adding Fabiano to the thread)

On 2/10/23 16:35, Daniel P. Berrangé wrote:
> On Thu, Feb 09, 2023 at 02:32:01PM +0100, Claudio Fontana wrote:
>> Hello Daniel and all,
>>
>> resurrecting this series from end of last year,
>>
>> do we think that this is the right approach and first step to
>> be able to provide good performance for virsh save and virsh
>> restore?
> 
> I've looked through the series in some more detail now and will
> send review comments separately. Overall, I'm pretty pleased with
> the series and think it is on the right path. The new format it
> provides should be amenable to parallel I/O with multifd and
> be able to support O_DIRECT to avoid burning through the host I/O
> cache.

Just wanted to add a thought we had with Fabiano a few days ago:

experimentally, it is clear that fixed-ram is an optimization, but the actual
scalability seems to come from the subsequent parallel I/O done with multifd
on top of it.
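
To illustrate what we mean (just a sketch, not the actual QEMU code; the
struct layout and the function names are made up for the example): since
fixed-ram gives every block of guest RAM a fixed offset in the output file,
each multifd channel can write its own slice with pwrite() and never has to
serialize on a shared stream position:

    #include <pthread.h>
    #include <unistd.h>

    struct channel {
        int fd;              /* shared output file descriptor */
        const char *pages;   /* this channel's slice of guest RAM */
        size_t len;          /* bytes to write */
        off_t offset;        /* fixed offset of the slice in the file */
    };

    static void *write_channel(void *opaque)
    {
        struct channel *c = opaque;
        size_t done = 0;

        /* Each channel writes at its own fixed offset: no shared
         * file position, so the channels never coordinate. */
        while (done < c->len) {
            ssize_t n = pwrite(c->fd, c->pages + done,
                               c->len - done, c->offset + done);
            if (n < 0) {
                return NULL;   /* real code would report errno */
            }
            done += (size_t)n;
        }
        return NULL;
    }

    static void write_all_channels(struct channel *c, int nchannels)
    {
        pthread_t tid[nchannels];

        for (int i = 0; i < nchannels; i++) {
            pthread_create(&tid[i], NULL, write_channel, &c[i]);
        }
        for (int i = 0; i < nchannels; i++) {
            pthread_join(tid[i], NULL);
        }
    }

Without the fixed-offset property, the channels would have to agree on who
writes where in the stream, which is exactly the serialization we are trying
to get rid of.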

Since the goal is being able to transfer the whole 30G of memory _to disk_
(fdatasync) in 5 seconds, the need to split the cpu-intensive work into
smaller tasks remains, and the main scalability solution seems to come from
the multifd part of the work (or another way to split the problem), combined
with O_DIRECT friendliness to avoid the trap of cache thrashing.
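
For reference, the system-call pattern we have in mind for the O_DIRECT side
is roughly the following (again only a sketch, assuming a 4K block size;
O_DIRECT needs buffer, length and file offset aligned, which is why the
bounce buffer is there):

    #define _GNU_SOURCE       /* O_DIRECT is Linux-specific */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define ALIGN 4096        /* assumed block size */

    int save_region(const char *path, const void *ram, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0600);
        if (fd < 0) {
            return -1;
        }

        /* O_DIRECT writes fail with EINVAL unless buffer, length
         * and offset are all aligned, hence the aligned copy. */
        size_t alen = (len + ALIGN - 1) & ~(size_t)(ALIGN - 1);
        void *buf;
        if (posix_memalign(&buf, ALIGN, alen)) {
            close(fd);
            return -1;
        }
        memset(buf, 0, alen);
        memcpy(buf, ram, len);

        ssize_t n = pwrite(fd, buf, alen, 0);

        /* With O_DIRECT the data bypasses the page cache; the
         * fdatasync() still flushes metadata and gives a
         * well-defined "everything is on disk" completion point. */
        int ret = (n == (ssize_t)alen && fdatasync(fd) == 0) ? 0 : -1;

        free(buf);
        close(fd);
        return ret;
    }

The point of the pattern is that the guest memory never transits the host
page cache, so a 30G save cannot evict 30G of other people's cached data.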

Not adding much, just highlighting that fixed-ram _alone_ does not seem to
suffice; we probably need all the pieces of the puzzle in place.

Thanks!

Claudio

> 
> There is obviously a bit of extra complexity from having a new
> way to map RAM to the output, but it looks fairly well contained
> in just a couple of places of the code. The performance wins
> should be able to justify the extra maint burden IMHO.
> 
>> Do we still agree on this way forward, any comments? Thanks,
> 
> I'm not a migration maintainer, but overall I think it is
> good.
> 
> With regards,
> Daniel



