Re: [PATCH 0/3] hvf x86 correctness and efficiency improvements
From: Paolo Bonzini
Subject: Re: [PATCH 0/3] hvf x86 correctness and efficiency improvements
Date: Mon, 16 Oct 2023 18:48:54 +0200
On Mon, Oct 16, 2023 at 6:45 PM Phil Dennis-Jordan <lists@philjordan.eu> wrote:
>
> Hi Paolo,
>
>
> On Mon, 16 Oct 2023 at 16:39, Paolo Bonzini <pbonzini@redhat.com> wrote:
> >
> > On 9/22/23 16:09, Phil Dennis-Jordan wrote:
> > > Patch 1 enables the INVTSC CPUID bit when running with hvf. This can
> > > enable some optimisations in the guest OS, and I've not found any reason
> > > it shouldn't be allowed for hvf based hosts.
> >
> > It can be enabled, but it should include a migration blocker. In fact,
> > probably HVF itself should include a migration blocker because QEMU
> > doesn't support TSC scaling.
>
> I didn't think QEMU's HVF backend supported migration in any form at this
> point anyway? Or do you mean machine model versioning of the default setting?
If it doesn't support migration, it needs to register a migration blocker.
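For context, the usual pattern for this in QEMU is a sketch along the following lines, modeled on how the KVM code blocks migration for INVTSC; the error message and the placement in HVF init are illustrative assumptions, and the `migrate_add_blocker(Error *, Error **)` signature assumed here is the one in use around this time (it later changed to take `Error **`):

```c
/* Hedged sketch, not the actual patch: register a migration blocker
 * during HVF accelerator/vCPU init, mirroring the KVM INVTSC blocker. */
static Error *hvf_mig_blocker;

static void hvf_block_migration(Error **errp)
{
    error_setg(&hvf_mig_blocker,
               "HVF does not support TSC scaling; migration is not supported");
    if (migrate_add_blocker(hvf_mig_blocker, errp) < 0) {
        /* errp is set; the caller fails accelerator init. */
        error_free(hvf_mig_blocker);
        hvf_mig_blocker = NULL;
    }
}
```

With a blocker registered, an attempted `migrate` command fails cleanly with the given reason instead of producing a silently broken guest on the destination.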
> switching to hv_vcpu_run_until() WITHOUT hv_vcpu_interrupt()
> causes some very obvious problems where the vCPU simply
> doesn't exit at all for long periods.)
Yes, that makes sense. It looks like hv_vcpu_run_until() has an
equivalent of a "do ... while (errno == EINTR)" loop inside it.
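That retry pattern can be shown in isolation; `fake_vcpu_run()` below is a made-up stand-in for a run call that is interrupted (fails with `EINTR`) twice before a genuine exit:

```c
#include <errno.h>

/* Hypothetical stand-in for hv_vcpu_run(): fails twice with EINTR,
 * then returns 0 to signal a real VM exit. */
static int calls;
static int fake_vcpu_run(void)
{
    if (++calls < 3) {
        errno = EINTR;
        return -1;
    }
    return 0;
}

/* The "do ... while (errno == EINTR)" loop that hv_vcpu_run_until()
 * appears to apply internally: restart the run call instead of
 * surfacing spurious interruptions to the caller. */
static int run_until_real_exit(void)
{
    int ret;
    do {
        ret = fake_vcpu_run();
    } while (ret < 0 && errno == EINTR);
    return ret;
}
```

The caller only ever sees exits that need handling, which is exactly why a vCPU parked in such a loop needs an explicit `hv_vcpu_interrupt()` kick to be forced out.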
> 1. hv_vcpu_run() exits very frequently, and often there is actually
> nothing for the VMM to do except call hv_vcpu_run() again. With
> QEMU's current hvf backend, each exit causes a BQL acquisition,
> and VMs with a bunch of vCPUs rapidly become limited by BQL
> contention according to my profiling.
Yes, that should be fixed anyway, but I agree it is a separate issue.
Paolo