qemu-devel
Re: [PATCH 0/5] Live Migration Acceleration with IAA Compression


From: Juan Quintela
Subject: Re: [PATCH 0/5] Live Migration Acceleration with IAA Compression
Date: Thu, 19 Oct 2023 17:31:06 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.3 (gnu/linux)

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Oct 19, 2023 at 03:52:14PM +0100, Daniel P. Berrangé wrote:
>> On Thu, Oct 19, 2023 at 01:40:23PM +0200, Juan Quintela wrote:
>> > Yuan Liu <yuan1.liu@intel.com> wrote:
>> > > Hi,
>> > >
>> > > I am writing to submit a code change aimed at enhancing live migration
>> > > acceleration by leveraging the compression capability of the Intel
>> > > In-Memory Analytics Accelerator (IAA).
>> > >
>> > > Enabling compression functionality during the live migration process can
>> > > enhance performance, thereby reducing downtime and network bandwidth
>> > > requirements. However, this improvement comes at the cost of additional
>> > > CPU resources, posing a challenge for cloud service providers in terms of
>> > > resource allocation. To address this challenge, I have focused on
>> > > offloading the compression overhead to the IAA hardware, resulting in
>> > > performance gains.
>> > >
>> > > The implementation of the IAA (de)compression code is based on the
>> > > Intel Query Processing Library (QPL), an open-source software project
>> > > designed for IAA high-level software programming.
>> > >
>> > > Best regards,
>> > > Yuan Liu
>> > 
>> > After reviewing the patches:
>> > 
>> > - why are you doing this on top of the old compression code, which is
>> >   obsolete, deprecated and buggy?
>> >
>> > - why are you not doing it on top of multifd?
>> > 
>> > You just need to add another compression method on top of multifd.
>> > See how it was done for zstd:
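
(For reference, the zstd method Juan points at lives in migration/multifd-zstd.c
and is essentially a MultiFDMethods vtable registered for its compression enum
value. A QPL-backed method would plausibly follow the same shape; in the sketch
below the qpl_* hooks and MULTIFD_COMPRESSION_QPL are hypothetical names, not
existing code.)

/*
 * Hypothetical migration/multifd-qpl.c, mirroring the shape of
 * migration/multifd-zstd.c.  All qpl_* names and MULTIFD_COMPRESSION_QPL
 * are made up for illustration.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "multifd.h"

static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
{
    /* allocate IAA/QPL jobs and output buffers for this channel */
    return 0;
}

static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
{
    /* free per-channel QPL state */
}

static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
{
    /* deflate the packet's pages into the output iov via the IAA path */
    return 0;
}

static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
{
    return 0;
}

static void qpl_recv_cleanup(MultiFDRecvParams *p)
{
}

static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
{
    /* inflate the received stream back into guest pages */
    return 0;
}

static MultiFDMethods multifd_qpl_ops = {
    .send_setup = qpl_send_setup,
    .send_cleanup = qpl_send_cleanup,
    .send_prepare = qpl_send_prepare,
    .recv_setup = qpl_recv_setup,
    .recv_cleanup = qpl_recv_cleanup,
    .recv_pages = qpl_recv_pages
};

static void multifd_qpl_register(void)
{
    multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops);
}

migration_init(multifd_qpl_register);
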
>> 
>> I'm not sure that is the ideal approach.  IIUC, the IAA/QPL library
>> is not defining a new compression format. Rather, it is providing
>> a hardware accelerator for the 'deflate' format, which can be made
>> compatible with zlib:
>> 
>>   https://intel.github.io/qpl/documentation/dev_guide_docs/c_use_cases/deflate/c_deflate_zlib_gzip.html#zlib-and-gzip-compatibility-reference-link
>> 
>> With multifd we already have a 'zlib' compression format, and so
>> this IAA/QPL logic would effectively just be providing a second
>> implementation of zlib.
>> 
>> Given the use of a standard format, I would expect to be able
>> to use software zlib on the src, mixed with IAA/QPL zlib on
>> the target, or vice versa.
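
(To make that mixing concrete, here is a minimal, generic zlib sketch of the
receive side: a zlib-wrapped deflate stream is inflated with plain software
zlib regardless of whether the producer was software zlib or an IAA/QPL
engine emitting the same format. This is ordinary zlib API usage, not the
migration code itself.)

/*
 * Interoperability sketch: inflate a zlib-wrapped deflate stream with
 * plain software zlib.  The producer could equally be software zlib or
 * a hardware deflate engine emitting the same format.
 */
#include <assert.h>
#include <string.h>
#include <zlib.h>

static int inflate_zlib(const unsigned char *in, size_t in_len,
                        unsigned char *out, size_t out_cap,
                        size_t *out_len)
{
    z_stream zs;
    int ret;

    memset(&zs, 0, sizeof(zs));
    /* windowBits 15: expect a zlib header/trailer (use -15 for raw deflate) */
    if (inflateInit2(&zs, 15) != Z_OK) {
        return -1;
    }
    zs.next_in = (Bytef *)in;
    zs.avail_in = in_len;
    zs.next_out = out;
    zs.avail_out = out_cap;
    ret = inflate(&zs, Z_FINISH);
    *out_len = zs.total_out;
    inflateEnd(&zs);
    return ret == Z_STREAM_END ? 0 : -1;
}

int main(void)
{
    const unsigned char msg[] = "a page worth of guest memory";
    unsigned char comp[128], plain[128];
    uLongf comp_len = sizeof(comp);
    size_t plain_len = 0;

    /* compress2() produces a zlib stream, standing in for either producer */
    assert(compress2(comp, &comp_len, msg, sizeof(msg), 1) == Z_OK);
    assert(inflate_zlib(comp, comp_len, plain, sizeof(plain), &plain_len) == 0);
    assert(plain_len == sizeof(msg) && memcmp(plain, msg, plain_len) == 0);
    return 0;
}
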
>> 
>> IOW, rather than defining a new compression format for this,
>> I think we could look at a new migration parameter for
>> 
>> "compression-accelerator": ["auto", "none", "qpl"]
>> 
>> with 'auto' the default, such that we can automatically enable
>> IAA/QPL when 'zlib' format is requested, if running on a suitable
>> host.
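
(A rough sketch of how that 'auto' policy could resolve at setup time; the
enum, the probe helper and the resolver below are hypothetical, since no such
parameter exists yet.)

/*
 * Hypothetical handling of the proposed "compression-accelerator"
 * parameter.  None of these names exist in QEMU today.
 */
#include <stdbool.h>

typedef enum {
    COMPRESSION_ACCEL_AUTO,
    COMPRESSION_ACCEL_NONE,
    COMPRESSION_ACCEL_QPL,
} CompressionAccelerator;

/* Placeholder: would probe for a usable IAA device / QPL hardware path. */
static bool qpl_hw_usable(void)
{
    return false;
}

/*
 * With 'auto' (the default), pick the IAA/QPL implementation of 'zlib'
 * when the hardware is usable and silently fall back to software zlib
 * otherwise, so src and dst can mix hardware and software freely.
 */
static CompressionAccelerator
resolve_accelerator(CompressionAccelerator requested)
{
    if (requested == COMPRESSION_ACCEL_AUTO) {
        return qpl_hw_usable() ? COMPRESSION_ACCEL_QPL
                               : COMPRESSION_ACCEL_NONE;
    }
    return requested;
}
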
>
> I was also curious, when reading, about how this compression format
> compares to the software ones.
>
> Would there be a use case where one would prefer software compression even
> if a hardware accelerator existed, whether on src or dst?
>
> I'm wondering whether we can avoid adding one more parameter and instead
> always use hardware acceleration whenever possible.

I asked for some benchmarks.
But they need to be against not using compression (i.e. plain precopy)
or against using multifd-zlib.

For a single page, I don't know whether the added latency will make this a
win in general.

Later, Juan.



