Re: [PATCH 18/23] plugins: add an API to read registers


From: Alex Bennée
Subject: Re: [PATCH 18/23] plugins: add an API to read registers
Date: Wed, 21 Feb 2024 14:14:21 +0000
User-agent: mu4e 1.11.28; emacs 29.1

Akihiko Odaki <akihiko.odaki@daynix.com> writes:

> On 2024/02/21 19:02, Alex Bennée wrote:
>> Akihiko Odaki <akihiko.odaki@daynix.com> writes:
>> 
>>> On 2024/02/20 23:14, Alex Bennée wrote:
>>>> Akihiko Odaki <akihiko.odaki@daynix.com> writes:
>>>>
>>>>> On 2024/02/17 1:30, Alex Bennée wrote:
>>>>>> We can only request a list of registers once the vCPU has been
>>>>>> initialised, so the user needs to either call the get function on
>>>>>> vCPU initialisation or during the translation phase.
>>>>>> We don't expose the reg number to the plugin, instead hiding it
>>>>>> behind an opaque handle. This allows for a bit of future proofing
>>>>>> should the internals need to be changed, while also being hashed
>>>>>> against the CPUClass so we can handle different register sets
>>>>>> per-vCPU in heterogeneous situations.
>>>>>> Having an internal state within the plugins also allows us to expand
>>>>>> the interface in future (for example providing callbacks on register
>>>>>> change if the translator can track changes).
>>>>>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1706
>>>>>> Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
>>>>>> Message-Id: <20240103173349.398526-39-alex.bennee@linaro.org>
>>>>>> Based-on: <20231025093128.33116-18-akihiko.odaki@daynix.com>
>>>>>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>>>>>> Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>> <snip>
>>>>>> +/*
>>>>>> + * Register handles
>>>>>> + *
>>>>>> + * The plugin infrastructure keeps hold of these internal data
>>>>>> + * structures which are presented to plugins as opaque handles. They
>>>>>> + * are global to the system and therefore additions to the hash table
>>>>>> + * must be protected by the @reg_handle_lock.
>>>>>> + *
>>>>>> + * In order to future proof for up-coming heterogeneous work we want
>>>>>> + * different entries for each CPU type while sharing them in the
>>>>>> + * common case of multiple cores of the same type.
>>>>>> + */
>>>>>> +
>>>>>> +static QemuMutex reg_handle_lock;
>>>>>> +
>>>>>> +struct qemu_plugin_register {
>>>>>> +    const char *name;
>>>>>> +    int gdb_reg_num;
>>>>>> +};
>>>>>> +
>>>>>> +static GHashTable *reg_handles; /* hash table of PluginReg */
>>>>>> +
>>>>>> +/* Generate a stable key - would xxhash be overkill? */
>>>>>> +static gpointer cpu_plus_reg_to_key(CPUState *cs, int gdb_regnum)
>>>>>> +{
>>>>>> +    uintptr_t key = (uintptr_t) cs->cc;
>>>>>> +    key ^= gdb_regnum;
>>>>>> +    return GUINT_TO_POINTER(key);
>>>>>> +}
>>>>>
>>>>> I have pointed out this is theoretically prone to collisions and
>>>>> unsafe.
>>>> How is it unsafe? The aim is to share handles for the same CPUClass
>>>> rather than having a unique handle per register/cpu combo.
>>>
>>> The intention is legitimate, but the implementation is not safe. It
>>> assumes (uintptr)cs->cc ^ gdb_regnum is unique, but there is no such
>>> guarantee. The key of GHashTable must be unique; generating hashes of
>>> keys should be done with hash_func given to g_hash_table_new().
>> This isn't a hash, it's a non-unique key. It is however unique for
>> the same register on the same class of CPU, so each vCPU in a system
>> can share the same opaque handles.
>> The hashing is done internally by glib. We would assert if there was
>> a duplicate key referring to a different register.
>> I'm unsure what you want here? Do you have a suggestion for the key
>> generation algorithm? As the comment notes I did consider a more complex
>> mixing algorithm using xxhash but that wouldn't guarantee no clash
>> either.
>
> I suggest using a struct that holds both of cs->cc and gdb_regnum, and
> pass g_direct_equal() and g_direct_hash() to g_hash_table_new().

We already do:

        if (!reg_handles) {
            reg_handles = g_hash_table_new(g_direct_hash, g_direct_equal);
        }

But we can't use g_direct_equal with something that exceeds the width of
gpointer as it is a straight equality test of the key. What you are
suggesting requires allocating memory for each key and de-referencing
with a custom GEqualFunc. 
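For illustration, a collision-free variant along those lines might look
something like this (untested sketch; the RegKey name and helpers are
invented here, not part of the patch):

  typedef struct RegKey {
      CPUClass *cc;
      int gdb_regnum;
  } RegKey;

  static guint reg_key_hash(gconstpointer k)
  {
      const RegKey *rk = k;
      /* fold the class pointer and regnum into one hash value */
      return g_direct_hash(rk->cc) ^ g_int_hash(&rk->gdb_regnum);
  }

  static gboolean reg_key_equal(gconstpointer a, gconstpointer b)
  {
      const RegKey *ka = a, *kb = b;
      return ka->cc == kb->cc && ka->gdb_regnum == kb->gdb_regnum;
  }

  /* keys are heap-allocated, so let the table free them */
  reg_handles = g_hash_table_new_full(reg_key_hash, reg_key_equal,
                                      g_free, NULL);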

This seems overkill for something that, as I have said, doesn't happen.
The reason it doesn't happen is that you will never see two CPUClass
instances so close to each other that they share all bits apart from
where gdb_regnum is being xor'd. We could assert that is the case with
something like:

  #define MAX_GDBREGS 300

  /* Generate a stable key - would xxhash be overkill? */
  static gpointer cpu_plus_reg_to_key(CPUState *cs, int gdb_regnum)
  {
      uintptr_t key = (uintptr_t) cs->cc;

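      /*
       * Assumption: distinct CPUClass allocations are at least
       * sizeof(CPUClass) bytes apart, so xor'ing in a regnum below
       * MAX_GDBREGS should not land on another class's key.
       */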
      qemu_build_assert(sizeof(*cs->cc) >= MAX_GDBREGS);
      g_assert(gdb_regnum < MAX_GDBREGS);

      key ^= gdb_regnum;
      return GUINT_TO_POINTER(key);
  }

although MAX_GDBREGS is currently a guess based on aarch64. In practice
though there are so many allocations that things are much farther apart. As we
can see in the one heterogeneous model we support at the moment (the
last 2 CPUs are cortex-r5f's):

  ./qemu-system-aarch64 -M xlnx-zcu102 -audio none -smp 6 -serial mon:stdio -s -S -smp 6
  cpu_common_class_init: k = 0x5565bebf10f0
  arm_cpu_initfn: 0x7f32ee0a8360 -> klass = 0x5565bee50e00
  aarch64_cpu_instance_init: 0x7f32ee0a8360 -> klass = 0x5565bee50e00
  arm_cpu_initfn: 0x7f32ee0be1f0 -> klass = 0x5565bee50e00
  aarch64_cpu_instance_init: 0x7f32ee0be1f0 -> klass = 0x5565bee50e00
  arm_cpu_initfn: 0x7f32ee0d4080 -> klass = 0x5565bee50e00
  aarch64_cpu_instance_init: 0x7f32ee0d4080 -> klass = 0x5565bee50e00
  arm_cpu_initfn: 0x7f32ee0e9f10 -> klass = 0x5565bee50e00
  aarch64_cpu_instance_init: 0x7f32ee0e9f10 -> klass = 0x5565bee50e00
  arm_cpu_initfn: 0x7f32ee0ffda0 -> klass = 0x5565bed0fad0
  arm_cpu_initfn: 0x7f32ee115c30 -> klass = 0x5565bed0fad0

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


