qemu-devel

From: Stefan Hajnoczi
Subject: Re: [External] [PATCH] hostmem: Add clear option to file backend
Date: Thu, 2 Mar 2023 11:14:38 -0500

On Thu, Mar 02, 2023 at 02:56:43PM +0100, David Hildenbrand wrote:
> On 02.03.23 12:57, Feiran Zheng wrote:
> > 
> > 
> > > On 2 Mar 2023, at 11:44, Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > 
> > > On Thu, Mar 02, 2023 at 12:31:46PM +0100, David Hildenbrand wrote:
> > > > On 02.03.23 12:09, Fam Zheng wrote:
> > > > > This adds a memset to clear the backing memory. This is useful in the
> > > > > case of PMEM DAX to drop dirty data, if the backing memory is handed
> > > > > over from a previous application or firmware which didn't clean up
> > > > > before exiting.
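> > > > > 
> > > > > (For illustration, a minimal standalone sketch of the clearing
> > > > > step; the device path, size, and error handling below are
> > > > > placeholders, not the actual patch hunk:)
> > > > > 
> > > > >     #include <fcntl.h>
> > > > >     #include <string.h>
> > > > >     #include <sys/mman.h>
> > > > >     #include <unistd.h>
> > > > > 
> > > > >     int main(void)
> > > > >     {
> > > > >         size_t size = 2UL << 30;              /* backing size */
> > > > >         int fd = open("/dev/dax0.0", O_RDWR); /* placeholder path */
> > > > >         void *p;
> > > > > 
> > > > >         if (fd < 0) {
> > > > >             return 1;
> > > > >         }
> > > > >         p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
> > > > >                  fd, 0);
> > > > >         if (p == MAP_FAILED) {
> > > > >             return 1;
> > > > >         }
> > > > >         /* Drop dirty data left behind by the previous user. */
> > > > >         memset(p, 0, size);
> > > > >         munmap(p, size);
> > > > >         close(fd);
> > > > >         return 0;
> > > > >     }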
> > > > > 
> > > > 
> > > > Why can't the VM manager do that instead? If you have a file,
> > > > that's certainly easy to do.
> > > 
> > > This feels conceptually similar to the case where you expose a host
> > > block device to the guest. If that block device was previously given
> > > to a different guest it might still have data in it. Someone needs
> > > to take responsibility for scrubbing that data. Since that may take
> > > a non-trivial amount of time, it is typical to do that scrubbing in
> > > the background after the old VM is gone rather than put it into the
> > > startup path for a new VM, which would delay boot.
> > > 
> > > PMEM is blurring the boundary between memory and disk, but the tradeoff
> > > is not so different. We know that in general merely faulting in guest
> > > memory is quite time-consuming and delays VM startup significantly as
> > > RAM size increases. Doing the full memset can only be slower still.
> > > 
> > > For prealloc we've created complex code to fault in memory across many
> > > threads, and even that's too slow, so we're considering doing it in the
> > > background as the VM starts up.
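> > > 
> > > (A sketch of that parallel-prefault idea, assuming pthreads and a
> > > fixed 4k page size; illustrative only, not QEMU's actual prealloc
> > > code:)
> > > 
> > >     #include <pthread.h>
> > >     #include <stddef.h>
> > > 
> > >     #define NTHREADS 8
> > >     #define PAGE_SZ  4096
> > > 
> > >     struct range { volatile char *base; size_t len; };
> > > 
> > >     /* Touch one byte per page; a read-modify-write faults the page
> > >      * in without changing its contents. */
> > >     static void *prefault(void *arg)
> > >     {
> > >         struct range *r = arg;
> > >         for (size_t off = 0; off < r->len; off += PAGE_SZ) {
> > >             r->base[off] = r->base[off];
> > >         }
> > >         return NULL;
> > >     }
> > > 
> > >     void prefault_parallel(char *base, size_t len)
> > >     {
> > >         pthread_t tid[NTHREADS];
> > >         struct range r[NTHREADS];
> > >         size_t chunk = (len / NTHREADS) & ~(size_t)(PAGE_SZ - 1);
> > > 
> > >         for (int i = 0; i < NTHREADS; i++) {
> > >             r[i].base = (volatile char *)base + (size_t)i * chunk;
> > >             r[i].len = (i == NTHREADS - 1) ? len - (size_t)i * chunk
> > >                                            : chunk;
> > >             pthread_create(&tid[i], NULL, prefault, &r[i]);
> > >         }
> > >         for (int i = 0; i < NTHREADS; i++) {
> > >             pthread_join(tid[i], NULL);
> > >         }
> > >     }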
> > > 
> > > IIUC, this patch just puts the memset in the critical serialized path.
> > > This will inevitably lead to a demand for improving performance by
> > > parallelizing across threads, but we know that's too slow already,
> > > and we can't play the background async game with memset, as that's
> > > actually changing guest-visible contents.
> > > 
> > > IOW, for large PMEM sizes, it does look compelling to do the clearing
> > > of old data in the background, outside the context of QEMU VM
> > > startup, to avoid delaying it.
> > > 
> > > I can still understand the appeal of a simple flag to set on QEMU from
> > > a usability POV, but I'm not sure it's a good idea to encourage
> > > this usage by mgmt apps.
> > 
> > I can totally see the reasoning about the latency here, but I’m a little
> > dubious whether multi-threading the memset can actually help reduce the
> > start-up time; the total cost is bound by the memory bandwidth between
> > the CPU and memory (even more so if it’s PMEM), which is limited.
> 
> Right, daxio is the magic bit:
> 
> daxio.x86_64 : Perform I/O on Device DAX devices or zero a Device DAX device
> 
> # daxio -z -o /dev/dax0.0
> daxio: copied 8587837440 bytes to device "/dev/dax0.0"

I think Dan's concerns are valid, but I noticed daxio also just calls
pmem_memset_persist(), so it's doing pretty much the same
single-threaded thing as the patch:
https://github.com/pmem/pmdk/blob/master/src/tools/daxio/daxio.c#L506
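
For comparison, zeroing a devdax device through libpmem looks roughly
like the following; this is a sketch, with the device path as a
placeholder and error handling trimmed:

    #include <libpmem.h>
    #include <stdio.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* len=0, flags=0: map the whole existing device. */
        void *addr = pmem_map_file("/dev/dax0.0", 0, 0, 0,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* One single-threaded zero-and-persist pass, as daxio does. */
        pmem_memset_persist(addr, 0, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }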

Stefan
