
Re: [Qemu-devel] [PATCH memory v3 0/9] Memory Region QOMification


From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [PATCH memory v3 0/9] Memory Region QOMification
Date: Tue, 3 Jun 2014 08:37:43 +1000

On Tue, Jun 3, 2014 at 12:26 AM, Peter Maydell <address@hidden> wrote:
> On 2 June 2014 05:13, Peter Crosthwaite <address@hidden> wrote:
>> This patch series QOMifies Memory regions. This is the Memory API
>> specific subset of patches forming part of the Memory/GPIO/Sysbus
>> QOMification.
>>
>> I think Paolo already has P1 enqueued; I'm including it for ease of
>> review. There are some QOM patches in P2-3 that cut down on later
>> boilerplate. TBH I could live without them if they're not liked, but
>> they make life better IMO.
>>
>> For fuller context please see:
>>
>> http://lists.gnu.org/archive/html/qemu-devel/2014-05/msg03265.html
>
> So could you sketch an example of how this would work
> (for board model construction, not command line arguments)?
> I'm guessing something like:
>    dev = qdev_create(NULL, "my-device");
>    object_property_set_something(OBJECT(dev), "container", my_memregion);

object_property_set_link

Although my current thinking is that the slaves would not handle the
actual containment device-side like this. A device would instead expose
its slave aperture via sysbus and let the machine do the mapping.

dev = qdev_create(NULL, "my-device");
memory_region_add_subregion(my_memory_region, 0xF00,
        sysbus_mmio_get_region(SYSBUS_DEVICE(dev), 0));

And we patch the existing public memory_region_add_subregion() API to
back onto the new link setters (that one is a follow-up that I haven't
got to just yet).

The machine could also just set the QOM properties itself, although it
may be too verbose:

dev = qdev_create(NULL, "my-device");
MemoryRegion *slave_ap = sysbus_mmio_get_region(SYSBUS_DEVICE(dev), n);
object_property_set_link(OBJECT(slave_ap), OBJECT(my_memregion),
                         "container", &error_abort);
object_property_set_int(OBJECT(slave_ap), 0xF00, "addr", &error_abort);
/* priority and may-overlap could be set too... */

Size would be set by the device itself in init or realize (preserves
current semantics).

I have some patches demoing this. Will RFC.

>    object_property_set_uint64(OBJECT(dev), "addr", 0x40000);
>    qdev_init_nofail(dev);
>
> Code wise it looks OK but it feels oddly back-to-front to
> put a subregion into a container by setting properties on
> the subregion. At least personally I think of the mapping
> operation as an operation on the container that says "put
> this object X in at address Y", not as an operation on
> the object X that says "your container is C and you're
> at address Y in it". But I can see how this approach
> pretty much falls out of our current MemoryRegion data
> structure, so perhaps I just need to reverse the
> orientation of my brain...
>

If it helps, you can think of it as a slave-decoded bus. It's
reasonable to implement a memory-mapped bus (the "container" in this
case) as "I'm just a dumb bus that sits there, spams addresses and
data to everyone, and sees who responds", leaving the actual address
decoding to the many slaves. So IMO this approach, where the slaves
understand their own mappings, has a very real hardware analogue.
Flatview generation will invert this relationship for the actual QEMU
implementation, but an API where slaves insert themselves into a space
is valid.

> (Also it doesn't make sense to set only one of the
> (container,address) property pair, but I guess two
> properties which we can already handle makes more sense
> than having one which would need a custom parser of some
> kind.)
>
> The other question is how you see this fitting into our
> other use-case for passing MemoryRegions around: what
> would the code for passing a container region to a
> memory transaction source like a CPU object look like?
>

I have those patches already too, and will RFC them along with the
rest. My series converts a board (microblaze ML605) to not use
&address_space_memory at all, and includes a DMA controller as well.
The cover letter will elaborate on some of your questions.

Regards,
Peter


> thanks
> -- PMM
>


