
Re: [PATCH v5 02/10] target/ppc: PMU basic cycle count for pseries TCG


From: Daniel Henrique Barboza
Subject: Re: [PATCH v5 02/10] target/ppc: PMU basic cycle count for pseries TCG
Date: Mon, 8 Nov 2021 16:35:25 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0



On 11/5/21 09:56, Matheus K. Ferst wrote:
On 01/11/2021 20:56, Daniel Henrique Barboza wrote:
diff --git a/target/ppc/power8-pmu.c b/target/ppc/power8-pmu.c
index 3c2f73896f..a0a42b666c 100644
--- a/target/ppc/power8-pmu.c
+++ b/target/ppc/power8-pmu.c

<snip>

+static bool pmc_is_active(CPUPPCState *env, int sprn)
+{
+    if (sprn < SPR_POWER_PMC5) {
+        return !(env->spr[SPR_POWER_MMCR0] & MMCR0_FC14);
+    }
+
+    return !(env->spr[SPR_POWER_MMCR0] & MMCR0_FC56);
+}
+
+static void pmu_update_cycles(CPUPPCState *env)
+{
+    uint64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    uint64_t time_delta = now - env->pmu_base_time;
+    int sprn;
+
+    for (sprn = SPR_POWER_PMC1; sprn <= SPR_POWER_PMC6; sprn++) {
+        if (!pmc_is_active(env, sprn) ||
+            getPMUEventType(env, sprn) != PMU_EVENT_CYCLES) {
+            continue;
+        }
+
+        /*
+         * The pseries and powernv clock runs at 1GHz, meaning
+         * that 1 nanosecond equals 1 cycle.
+         */
+        env->spr[sprn] += time_delta;
+    }
+
+    /*
+     * Update base_time for future calculations if we updated
+     * the PMCs while the PMU was running.
+     */
+    if (!(env->spr[SPR_POWER_MMCR0] & MMCR0_FC)) {
+        env->pmu_base_time = now;
+    }
+}
+
+/*
+ * A cycle count session consists of the basic operations we
+ * need to do to support PM_CYC events: redefine a new base_time
+ * to be used to calculate PMC values and start overflow timers.
+ */
+static void start_cycle_count_session(CPUPPCState *env)
+{
+    /* Just define pmu_base_time for now */
+    env->pmu_base_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+}
+
+void helper_store_mmcr0(CPUPPCState *env, target_ulong value)
+{
+    target_ulong curr_value = env->spr[SPR_POWER_MMCR0];
+    bool curr_FC = curr_value & MMCR0_FC;
+    bool new_FC = value & MMCR0_FC;
+
+    env->spr[SPR_POWER_MMCR0] = value;

I'm not sure if this is the right place to update MMCR0. If we set both FC and 
FC14 in one write, the code will call pmu_update_cycles, but PMCs 1-4 will not 
be updated because pmc_is_active will use the new value to check if the PMCs 
are frozen.


We can't postpone this MMCR0 update because the PMU might trigger
an overflow when updating the counters, and the event branch would
then see an outdated MMCR0 value. hflags would also be outdated when
the thread leaves that branch, conflicting with the current hflags
value (computed with hreg_compute_hflags).

You're right about the problem with handling FC14/FC56 bits in the
same MMCR0 write though. What I ended up doing was to pass the old
MMCR0 value to pmu_update_cycles(). That way we'll avoid the problem
you described above.
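
Roughly what I have in mind (just a sketch to illustrate the idea,
not the final code; the pmc_is_active() signature change is only
there to make the point, and helper_store_mmcr0() would pass the
saved curr_value down):

static bool pmc_is_active(target_ulong mmcr0, int sprn)
{
    if (sprn < SPR_POWER_PMC5) {
        return !(mmcr0 & MMCR0_FC14);
    }

    return !(mmcr0 & MMCR0_FC56);
}

static void pmu_update_cycles(CPUPPCState *env, target_ulong old_mmcr0)
{
    uint64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    uint64_t time_delta = now - env->pmu_base_time;
    int sprn;

    for (sprn = SPR_POWER_PMC1; sprn <= SPR_POWER_PMC6; sprn++) {
        /*
         * Check the freeze bits against the MMCR0 value that was in
         * effect while the cycles were being counted, not against
         * the value that is being written now.
         */
        if (!pmc_is_active(old_mmcr0, sprn) ||
            getPMUEventType(env, sprn) != PMU_EVENT_CYCLES) {
            continue;
        }

        /* 1ns = 1 cycle at the 1GHz pseries/powernv clock */
        env->spr[sprn] += time_delta;
    }

    /* rebase only if the PMU was running during this period */
    if (!(old_mmcr0 & MMCR0_FC)) {
        env->pmu_base_time = now;
    }
}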

I took it a step further and also applied to the freeze bits the
same handling that was already being done for the overflow bits
(patch 7). This means that we'll be able to freeze/update the
counters individually while the PMU is running.
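
Something along these lines at the end of helper_store_mmcr0() (again,
just a sketch, the details might change in the next version):

    /*
     * Same idea for the per-group freeze bits: if FC14 or FC56
     * changed, fold the cycles counted so far into the PMCs using
     * the old MMCR0 (so the groups that were running get credited)
     * and keep counting from here with the new configuration.
     */
    if ((curr_value & (MMCR0_FC14 | MMCR0_FC56)) !=
        (value & (MMCR0_FC14 | MMCR0_FC56))) {
        pmu_update_cycles(env, curr_value);
    }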



Thanks,


Daniel


+
+    /* MMCR0 writes can change HFLAGS_PMCCCLEAR and HFLAGS_MMCR0FC */
+    if (((curr_value & MMCR0_PMCC) != (value & MMCR0_PMCC)) ||
+        (curr_FC != new_FC)) {
+        hreg_compute_hflags(env);
+    }
+
+    /*
+     * In a frozen count (FC) bit change:
+     *
+     * - if PMCs were running (curr_FC = false) and we're freezing
+     * them (new_FC = true), save the PMCs values in the registers.
+     *
+     * - if PMCs were frozen (curr_FC = true) and we're activating
+     * them (new_FC = false), set the new base_time for future cycle
+     * calculations.
+     */
+    if (curr_FC != new_FC) {
+        if (!curr_FC) {
+            pmu_update_cycles(env);
+        } else {
+            start_cycle_count_session(env);
+        }
+    }
+}



