qemu-devel

Re: [PATCH v2 1/1] osdep: asynchronous teardown for shutdown on Linux


From: Claudio Imbrenda
Subject: Re: [PATCH v2 1/1] osdep: asynchronous teardown for shutdown on Linux
Date: Fri, 5 Aug 2022 09:02:17 +0200

On Thu, 4 Aug 2022 17:58:34 +0100
Daniel P. Berrangé <berrange@redhat.com> wrote:

> On Thu, Aug 04, 2022 at 09:20:59AM +0100, Daniel P. Berrangé wrote:
> > On Thu, Aug 04, 2022 at 07:56:49AM +0200, Claudio Imbrenda wrote:  
> > > On Wed, 3 Aug 2022 18:34:45 +0100
> > > Daniel P. Berrangé <berrange@redhat.com> wrote:
> > >   
> > > > On Wed, Aug 03, 2022 at 07:31:41PM +0200, Claudio Imbrenda wrote:  
> > > > > This patch adds support for asynchronously tearing down a VM on Linux.
> > > > > 
> > > > > When qemu terminates, either naturally or because of a fatal signal,
> > > > > the VM is torn down. If the VM is huge, it can take a considerable
> > > > > amount of time for it to be cleaned up. In case of a protected VM, it
> > > > > might take even longer than a non-protected VM (this is the case on
> > > > > s390x, for example).
> > > > > 
> > > > > Some users might want to shut down a VM and restart it immediately,
> > > > > without having to wait. This is especially true if management
> > > > > infrastructure like libvirt is used.
> > > > > 
> > > > > This patch implements a simple trick on Linux to allow qemu to return
> > > > > immediately, with the teardown of the VM being performed
> > > > > asynchronously.
> > > > > 
> > > > > If the new command line option -async-teardown is used, a new
> > > > > process is spawned from qemu at startup, using the clone syscall,
> > > > > in such a way that it will share its address space with qemu.
> > > > > 
> > > > > The new process will then simply wait until qemu terminates, and
> > > > > then it will exit itself.
> > > > > 
> > > > > This allows qemu to terminate quickly, without having to wait for the
> > > > > whole address space to be torn down. The teardown process will exit
> > > > > after qemu, so it will be the last user of the address space, and
> > > > > therefore it will take care of the actual teardown.
> > > > > 
> > > > > The teardown process will share the same cgroups as qemu, so both
> > > > > memory usage and cpu time will be accounted properly.
> > > > > 
> > > > > This feature can already be used with libvirt by adding the following
> > > > > to the XML domain definition:
> > > > > 
> > > > >   <commandline xmlns="http://libvirt.org/schemas/domain/qemu/1.0">
> > > > >   <arg value='-async-teardown'/>
> > > > >   </commandline>    
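
For the record, the trick described above boils down to something like
the sketch below. This is only an illustration, not the code from the
patch; the function names, the PR_SET_PDEATHSIG/SIGHUP-based wait and
the fixed-size stack are assumptions made for the example.

  /* Sketch of an asynchronous teardown helper (error handling omitted
   * for brevity). The helper shares qemu's address space via CLONE_VM,
   * waits for qemu to die, and only then exits, so the expensive
   * address space teardown is paid for by the helper instead of
   * delaying qemu's own exit. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <signal.h>
  #include <sys/prctl.h>
  #include <unistd.h>

  static char teardown_stack[64 * 1024] __attribute__((aligned(16)));

  static int teardown_fn(void *arg)
  {
      sigset_t set;
      int sig;

      (void)arg;
      sigemptyset(&set);
      sigaddset(&set, SIGHUP);
      sigprocmask(SIG_BLOCK, &set, NULL);

      /* ask the kernel to send SIGHUP when the parent (qemu) exits */
      prctl(PR_SET_PDEATHSIG, SIGHUP);

      /* qemu might already have exited before the prctl took effect */
      if (getppid() == 1) {
          _exit(0);
      }

      /* wait for qemu to terminate, then simply exit; as the last user
       * of the shared address space, this process ends up doing the
       * actual teardown */
      sigwait(&set, &sig);
      _exit(0);
  }

  void start_async_teardown(void)
  {
      /* CLONE_VM: the child shares, and therefore keeps alive, the
       * address space of qemu */
      clone(teardown_fn, teardown_stack + sizeof(teardown_stack),
            CLONE_VM, NULL);
  }

Something like start_async_teardown() would then be called once at
startup when -async-teardown is given on the command line.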
> > > > 
> > > > How does this work in practice?  Libvirt should be blocking until
> > > 
> > > I don't know the inner details of how libvirt works..
> > >   
> > > > all processes in the cgroup have exited, including this cloned
> > > > child process.  
> > > 
> > > ..but I tested it and it works
> > > 
> > > my impression is that libvirt by default is only waiting for the
> > > main qemu process.  
> > 
> > If true, that would be a bug that needs fixing and should not be
> > relied on.  
> 
> Libvirt is invoking 'TerminateMachine' DBus call on systemd-machined.
> That in turn iterates over every process in the cgroup and kills
> them off.
> 
> Docs are a little vague and I've not followed the code perfectly, but
> that should mean TerminateMachine doesn't return until every process in
> the cgroup has exited.
> 
> That said, since this is a DBus API call, libvirt will probably time
> out waiting for the DBus reply after something like 30-60 seconds IIRC.
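
For reference, the machined API in question here is
org.freedesktop.machine1.Manager.TerminateMachine. A minimal sd-bus
sketch of such a call (the machine name is just a placeholder, and
libvirt obviously goes through its own code paths) looks like this:

  #include <stdio.h>
  #include <systemd/sd-bus.h>

  int terminate_machine(const char *name)
  {
      sd_bus *bus = NULL;
      sd_bus_error error = SD_BUS_ERROR_NULL;
      int r;

      r = sd_bus_open_system(&bus);
      if (r < 0) {
          return r;
      }

      /* machined walks the machine's cgroup and kills every process in
       * it, which would include the cloned teardown helper */
      r = sd_bus_call_method(bus,
                             "org.freedesktop.machine1",
                             "/org/freedesktop/machine1",
                             "org.freedesktop.machine1.Manager",
                             "TerminateMachine",
                             &error, NULL,
                             "s", name);
      if (r < 0) {
          fprintf(stderr, "TerminateMachine failed: %s\n", error.message);
      }

      sd_bus_error_free(&error);
      sd_bus_unref(bus);
      return r;
  }

called as, for example, terminate_machine("qemu-guest1").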

I have not observed any delays.

Could it be that DBus doesn't wait for the process to be completely
dead, but only for the signal to be delivered?

And which signal will DBus use?

> 
> >   
> > > the only issue I have found is the log file, which stays open as long
> > > as the file descriptors the cloned process inherits from the main
> > > qemu process stay open. A new VM cannot be started if its log file is
> > > still held open by the logger process. The close_range() call solves
> > > the issue.  
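
In terms of the sketch further up, that would mean something like the
line below at the start of teardown_fn() (where exactly the actual
patch places it is not shown here):

  /* drop every descriptor above stdio that was inherited from qemu,
   * so that log files and other per-VM resources are not kept open by
   * the helper; close_range() needs _GNU_SOURCE with <unistd.h> and a
   * kernel that provides the syscall (Linux >= 5.9) */
  close_range(3, ~0U, 0);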
> 
> With regards,
> Daniel



