qemu-devel

Re: [PATCH] util: NUMA aware memory preallocation


From: Daniel P. Berrangé
Subject: Re: [PATCH] util: NUMA aware memory preallocation
Date: Wed, 11 May 2022 10:20:52 +0100
User-agent: Mutt/2.1.5 (2021-12-30)

On Wed, May 11, 2022 at 09:34:07AM +0100, Dr. David Alan Gilbert wrote:
> * Michal Privoznik (mprivozn@redhat.com) wrote:
> > When allocating large amounts of memory, the task is offloaded
> > onto threads. These threads then use various techniques to fully
> > allocate the memory (madvise(), writing into it). However, the
> > threads are free to run on any CPU, which becomes problematic on
> > NUMA machines: a thread may end up running on a node distant from
> > the memory it is touching.
> > 
> > Ideally, this is something that a management application would
> > resolve, but we are not anywhere close to that. Firstly, memory
> > allocation happens before the monitor socket is even available.
> > Granted, that's what -preconfig is for, but then the problem is
> > that 'object-add' would not return until all memory is
> > preallocated.
> > 
> > Long story short, a management application has no way of learning
> > the TIDs of the allocator threads, so it can't make them run
> > NUMA-aware.
> > 
> > But what we can do is propagate the 'host-nodes' attribute of the
> > MemoryBackend object down to where the preallocation threads are
> > created, and set their affinity according to that attribute.
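
For anyone reading along who hasn't seen the code: the mechanics are
roughly "pin each preallocation thread to the CPUs of its target node,
then touch the pages". Below is a minimal illustrative sketch using
libnuma and pthreads; the struct and helper names are mine, not the
patch's, and the real code has to cope with errors, older libnuma
versions, and more:

  /* Illustrative sketch only, not the patch itself.  Pin the current
   * thread to the CPUs of one host NUMA node (taken from the
   * backend's 'host-nodes' property), then fault in its share of the
   * pages.  Real code should first check numa_available() >= 0. */
  #define _GNU_SOURCE
  #include <numa.h>        /* libnuma */
  #include <pthread.h>
  #include <sched.h>
  #include <stddef.h>

  struct prealloc_arg {
      char *base;        /* start of this thread's chunk */
      size_t size;       /* bytes to touch */
      size_t page_size;  /* backing page size */
      int node;          /* host NUMA node from 'host-nodes' */
  };

  static void *prealloc_thread(void *opaque)
  {
      struct prealloc_arg *a = opaque;
      struct bitmask *cpus = numa_allocate_cpumask();
      cpu_set_t set;

      /* Translate the node into its CPUs and pin ourselves there. */
      if (numa_node_to_cpus(a->node, cpus) == 0) {
          CPU_ZERO(&set);
          for (unsigned int i = 0; i < cpus->size && i < CPU_SETSIZE; i++) {
              if (numa_bitmask_isbitset(cpus, i)) {
                  CPU_SET(i, &set);
              }
          }
          pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      }
      numa_free_cpumask(cpus);

      /* Touch one byte per page; reading and writing it back keeps
       * the contents intact while forcing a local fault. */
      volatile char *p = a->base;
      for (size_t off = 0; off < a->size; off += a->page_size) {
          p[off] = p[off];
      }
      return NULL;
  }

(Compile with -lnuma -lpthread.)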
> 
> Joe (cc'd) sent me some numbers for this which emphasise how useful it
> is:
>  | On systems with 4 physical NUMA nodes and 2-6 TB of memory, this NUMA-aware
>  | preallocation provided about a 25% speedup in touching the pages.
>  | The speedup gets larger as the NUMA node count and memory sizes grow.
> ....
>  | In a simple parallel 1 GB page-zeroing test on a very large system (32 NUMA
>  | nodes and 47 TB of memory), the NUMA-aware preallocation was 2.3X faster
>  | than letting the threads float wherever.
>  | We're working with someone whose large guest normally takes 4.5 hours to
>  | boot.  With Michal P's initial patch to parallelize the preallocation, that
>  | time dropped to about 1 hour.  Including this NUMA-aware preallocation
>  | would reduce the guest boot time to less than 1/2 hour.
> 
> so chopping *half an hour* off the startup time seems a worthy
> optimisation (even if most of us aren't fortunate enough to have
> 47 TB of RAM).

I presume this test was done with bare QEMU though, not libvirt-managed
QEMU, as IIUC the latter would not be able to set its affinity and so
would never see this benefit.
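
To make that concrete: libvirt typically confines the QEMU process with
a cpuset cgroup and/or an initial affinity mask, and a thread cannot
pin itself to CPUs outside the set it was permitted
(sched_setaffinity() fails with EINVAL if the requested mask contains
no permitted CPUs). A trivial way to inspect the inherited constraint,
purely for illustration:

  /* Print the CPU affinity mask this process inherited, e.g. the
   * one libvirt confined QEMU to at startup. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
      cpu_set_t set;

      if (sched_getaffinity(0, sizeof(set), &set) == 0) {
          for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
              if (CPU_ISSET(cpu, &set)) {
                  printf("%d ", cpu);
              }
          }
          printf("\n");
      }
      return 0;
  }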


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



