qemu-devel

From: Philippe Mathieu-Daudé
Subject: Re: [PATCH v11 05/10] arm/hvf: Add a WFI handler
Date: Thu, 16 Sep 2021 06:49:11 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 9/15/21 8:10 PM, Alexander Graf wrote:
> From: Peter Collingbourne <pcc@google.com>
> 
> Sleep on WFI until the VTIMER is due but allow ourselves to be woken
> up on IPI.
> 
> In this implementation IPI is blocked on the CPU thread at startup and
> pselect() is used to atomically unblock the signal and begin sleeping.
> The signal is sent unconditionally so there's no need to worry about
> races between actually sleeping and the "we think we're sleeping"
> state. It may lead to an extra wakeup but that's better than missing
> it entirely.
> 
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> [agraf: Remove unused 'set' variable, always advance PC on WFX trap,
>         support vm stop / continue operations and cntv offsets]
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
> Reviewed-by: Sergio Lopez <slp@redhat.com>
> 
> ---

> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> index 8fe008dab5..49f265cc08 100644
> --- a/target/arm/hvf/hvf.c
> +++ b/target/arm/hvf/hvf.c
> @@ -2,6 +2,7 @@
>   * QEMU Hypervisor.framework support for Apple Silicon
>  
>   * Copyright 2020 Alexander Graf <agraf@csgraf.de>
> + * Copyright 2020 Google LLC
>   *
>   * This work is licensed under the terms of the GNU GPL, version 2 or later.
>   * See the COPYING file in the top-level directory.
> @@ -490,6 +491,7 @@ int hvf_arch_init_vcpu(CPUState *cpu)
>  
>  void hvf_kick_vcpu_thread(CPUState *cpu)
>  {
> +    cpus_kick_thread(cpu);

Doesn't this belong in the previous patch?

>      hv_vcpus_exit(&cpu->hvf->fd, 1);
>  }

> +static void hvf_wfi(CPUState *cpu)
> +{
> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    hv_return_t r;
> +    uint64_t ctl;
> +    uint64_t cval;
> +    int64_t ticks_to_sleep;
> +    uint64_t seconds;
> +    uint64_t nanos;
> +
> +    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
> +        /* Interrupt pending, no need to wait */
> +        return;
> +    }
> +
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
> +    assert_hvf_ok(r);
> +
> +    if (!(ctl & 1) || (ctl & 2)) {
> +        /* Timer disabled or masked, just wait for an IPI. */
> +        hvf_wait_for_ipi(cpu, NULL);
> +        return;
> +    }
> +
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
> +    assert_hvf_ok(r);
> +
> +    ticks_to_sleep = cval - hvf_vtimer_val();
> +    if (ticks_to_sleep < 0) {
> +        return;
> +    }
> +
> +    nanos = ticks_to_sleep * gt_cntfrq_period_ns(arm_cpu);
> +    seconds = nanos / NANOSECONDS_PER_SECOND;

muldiv64()? (The plain multiplication can overflow 64 bits for a large
enough tick count.)

> +    nanos -= (seconds * NANOSECONDS_PER_SECOND);
> +
> +    /*
> +     * Don't sleep for less than the time a context switch would take,
> +     * so that we can satisfy fast timer requests on the same CPU.
> +     * Measurements on M1 show the sweet spot to be ~2ms.
> +     */
> +    if (!seconds && nanos < (2 * SCALE_MS)) {
> +        return;
> +    }
> +
> +    struct timespec ts = { seconds, nanos };

QEMU coding style still declares variables at the top of the
function/block.

> +    hvf_wait_for_ipi(cpu, &ts);
> +}


