qemu-devel

Re: Plugin Register Accesses


From: Aaron Lindsay
Subject: Re: Plugin Register Accesses
Date: Thu, 7 Jan 2021 15:45:15 -0500

On Jan 07 16:49, Alex Bennée wrote:
> 
> Aaron Lindsay <aaron@os.amperecomputing.com> writes:
> 
> > On Dec 08 14:44, Aaron Lindsay wrote:
> >> On Dec 08 17:56, Alex Bennée wrote:
> >> > Aaron Lindsay <aaron@os.amperecomputing.com> writes:
> >> > > On Dec 08 12:17, Alex Bennée wrote:
> >> > >> Aaron Lindsay <aaron@os.amperecomputing.com> writes:
> >> >>   Memory is a little trickier because you can't know at any point if
> >> >>   a given virtual address is actually mapped to real memory. The
> >> >>   safest way would be to extend the existing memory tracking code to
> >> >>   save the values saved/loaded from a given address. However if you
> >> >>   had register access you could probably achieve the same thing after
> >> >>   the fact by examining the opcode and pulling the values from the
> >> >>   registers.
> >> > >
> >> > > What if memory reads were requested by `qemu_plugin_hwaddr` instead
> >> > > of by virtual address? `qemu_plugin_get_hwaddr()` is already
> >> > > exposed, and I would expect being able to successfully get a
> >> > > `qemu_plugin_hwaddr` in a callback would mean it is currently
> >> > > mapped. Am I overlooking something?
> >> > 
> >> > We can't re-run the transaction - there may have been a change to
> >> > the memory layout caused by that instruction (see tlb_plugin_lookup
> >> > and the interaction with io_writex).
> >> 
> >> To make sure I understand, your concern is that such a memory access
> >> would be made against the state from *after* the instruction's execution
> >> rather than before (and that my `qemu_plugin_hwaddr` would be a
> >> reference to before)?
> >> 
> >> > However I think we can expand the options for memory instrumentation
> >> > to cache the read or written value.
> >> 
> >> Would this include any non-software accesses as well (i.e. page table
> >> reads made by hardware on architectures which support doing so)? I
> >> suspect you're going to tell me that this is hard to do without exposing
> >> QEMU/TCG internals, but I'll ask anyway!
> >> 
> >> > > I think I might actually prefer a plugin memory access interface be in
> >> > > the physical address space - it seems like it might allow you to get
> >> > > more mileage out of one interface without having to support accesses by
> >> > > virtual and physical address separately.
> >> > >
> >> > > Or, even if that won't work for whatever reason, it seems
> >> > > reasonable for a plugin call accessing memory by virtual address to
> >> > > fail in the case where it's not mapped. As long as that failure
> >> > > case is well-documented and easy to distinguish from others within
> >> > > a plugin, why not?
> >> > 
> >> > Hmmm I'm not sure - I don't want to expose internal implementation
> >> > details to the plugins because we don't want plugins to rely on them.
> >> 
> >> Ohhh, was your "you can't know [...] mapped to real memory" discussing
> >> whether it was currently mapped on the *host*?
> >> 
> >> I assumed you were discussing whether it was mapped from the guest's
> >> point of view, and therefore expected that whether a guest VA was mapped
> >> was a function of the guest code being executed, and not of the TCG
> >> implementation. I confess I'm not that familiar with how QEMU handles
> >> memory internally.
> >
> > I'm trying to understand the issue here a little more... does calling
> > `cpu_memory_rw_debug()` not work because it's not safe to call in a
> > plugin instruction-execution callback? Is there any way to make that
> > sort of arbitrary access to memory safely?
> 
> It would be safe on a halted system. However you might not get the same
> data back as the load/store instruction just executed if the execution
> of the instruction caused a change in the page table mappings. For
> example, on ARM M-profile, writing to the MMIO MPU control registers
> will flush the current address mappings:
> 
>   1. access page X
>   2. update architecture page tables for page X -> Y
>   3. write to MPU control register, trigger tlb_flush
>   4. access page X from plugin -> get Y results
> 
> IOW accessing cpu_memory_rw_debug from an instrumented load/store helper
> for the address just accessed would be fine for that particular address,
> because nothing will replace the TLB entry while in the helper. However,
> for a generalised access to memory, things may have changed.

To make sure I understand - are you saying that calling
`cpu_memory_rw_debug()` will always reflect a consistent view of memory,
just that it's the view of memory from *after* the current instruction
has executed (and with contents potentially modified by other CPUs in
the meantime)?
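
For concreteness, I'm imagining something like the sketch below, where
plugin_read_guest_vaddr() is a made-up wrapper (cpu_memory_rw_debug()
itself is internal and not part of the plugin API):

  static void vcpu_insn_exec(unsigned int cpu_index, void *udata)
  {
      uint64_t vaddr = *(uint64_t *)udata;
      uint8_t buf[8];

      /* Hypothetical wrapper around cpu_memory_rw_debug(). This would
       * translate vaddr through the page tables as they stand *now*,
       * i.e. after the instruction (and any tlb_flush it triggered)
       * has executed - per your X -> Y example, a stale vaddr could
       * read back page Y's contents. */
      if (plugin_read_guest_vaddr(cpu_index, vaddr, buf, sizeof(buf)) == 0) {
          /* ... use buf ... */
      }
  }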

> I think we can store enough data for a helper like:
> 
>   qemu_plugin_hwaddr_get_value(const struct qemu_plugin_hwaddr *haddr)
> 
> but we would certainly want to cache the values io_readx and io_writex
> use as they will otherwise be lost into the depths of the emulation.

I think this would be helpful, but it wouldn't get you arbitrary access
to memory, correct? (Since you wouldn't be able to create a
`qemu_plugin_hwaddr` for an arbitrary, non-instrumented address you
wanted to inspect.)
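
For reference, here's roughly how I'd picture using it from a memory
callback - sketch only, with qemu_plugin_hwaddr_get_value() being the
helper you suggested above (return type assumed) rather than existing
API:

  /* Registered per-instruction with qemu_plugin_register_vcpu_mem_cb() */
  static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                       uint64_t vaddr, void *udata)
  {
      /* Only covers the access being instrumented - there's no way to
       * conjure a qemu_plugin_hwaddr for some other address we might
       * want to inspect. */
      struct qemu_plugin_hwaddr *haddr = qemu_plugin_get_hwaddr(info, vaddr);
      if (haddr) {
          uint64_t value = qemu_plugin_hwaddr_get_value(haddr);
          /* ... */
      }
  }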

-Aaron


