
Re: video mem access with oskit-mach


From: Roland McGrath
Subject: Re: video mem access with oskit-mach
Date: Mon, 10 Dec 2001 17:12:20 -0500 (EST)

> Ugh, except that I can't map a part of the mem device, because of libstore
> breakage:

Actually, libstore is right about the kernel being broken.  The only thing
that actually works in gnumach or in oskit-mach is to device_map with an
offset of zero, getting a memory object that covers the device's entire
address space, and then vm_map just the part of it you actually need.
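
For concreteness, the working pattern looks roughly like this from a user
task (a sketch only; error handling is minimal, and the "mem" device plus
the VGA offset are just example numbers, not part of the original message):

    #include <mach.h>
    #include <hurd.h>
    #include <device/device.h>
    #include <device/device_types.h>
    #include <error.h>

    int
    main (void)
    {
      mach_port_t master, dev;
      memory_object_t memobj;
      vm_address_t addr = 0;
      /* Example numbers only: one page of "mem" at the VGA window.  */
      const vm_offset_t wanted_offset = 0xA0000;
      const vm_size_t wanted_size = 0x1000;
      kern_return_t err;

      err = get_privileged_ports (NULL, &master);
      if (err)
        error (1, err, "get_privileged_ports");

      err = device_open (master, D_READ|D_WRITE, "mem", &dev);
      if (err)
        error (1, err, "device_open");

      /* Ask for a memory object with OFFSET zero; a nonzero offset
         is not honored by the kernel anyway.  */
      err = device_map (dev, VM_PROT_READ|VM_PROT_WRITE,
                        0, wanted_offset + wanted_size, &memobj, 0);
      if (err)
        error (1, err, "device_map");

      /* Map only the part we actually need, selecting it with the
         vm_map OFFSET argument instead of the device_map one.  */
      err = vm_map (mach_task_self (), &addr, wanted_size, 0, 1,
                    memobj, wanted_offset, 0,
                    VM_PROT_READ|VM_PROT_WRITE,
                    VM_PROT_READ|VM_PROT_WRITE, VM_INHERIT_NONE);
      if (err)
        error (1, err, "vm_map");

      return 0;
    }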

> We should fix Mach where it ignores the offsets and just make this function
> do the obvious thing.  Shouldn't we?

Probably.

> I am not sure it's libstore's job to hide the Mach breakage where it
> exists.

Sure it is.  That doesn't mean we shouldn't fix Mach.  But it is exactly
libstore's job to hide the details of the kernel device interfaces and
whatever nonsense they might entail.  

> OSKit Mach seems to deal with offsets, at least in the mem device.

Actually, it does not.  You are looking at the dev->ops->map call in
ds_device_map, and thinking that makes it work.  It doesn't.  That call
actually does nothing at all except to check that the offset and size are
acceptable choices for the device.  The actual work is all done in the
"device pager" code that lives in device/dev_pager.c, which I mostly did
not change from the gnumach (actually, CMU Mach 3.0) version.  
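
To make that division of labor concrete, here is a toy model (invented
names, not the gnumach source) of what the per-device map entry amounts to:
a page-by-page "is this offset mappable?" check whose answer is thrown away
beyond yes/no, with the memory object built elsewhere:

    #include <mach/machine/vm_types.h>

    /* A mem-like device's map entry: return a page frame number for a
       mappable offset, or -1 for an unmappable one.  */
    static int
    toy_mem_mmap (vm_offset_t off)
    {
      return (int) (off >> 12);
    }

    /* All device_map does with it: verify that the requested range is
       acceptable.  The frame numbers are not used here to build the
       memory object; that happens later, in the device pager.  */
    static int
    toy_offsets_acceptable (vm_offset_t off, vm_size_t size)
    {
      vm_offset_t o;
      for (o = off; o < off + size; o += 0x1000)
        if (toy_mem_mmap (o) == -1)
          return 0;
      return 1;
    }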

(Mostly I just ripped out the "block io pager" support, which was for Mach
native block devices that weren't really mappable, but device_map would
give you a memory object of regular pages backed by doing i/o--like what
you get from io_map on a storeio device.  No extant drivers in gnumach
support that mode, since the Linux block driver glue doesn't.  The
filesystems need back-door synchronization with their pagers and so can't
use device_map with the current external-pager interface anyway.  So the
only benefit of that support would be for mmap'ing storeio devices, which
would be made faster by the external pager interfaces being all in-kernel
instead of talking to the storeio task and bouncing the pages around for
device_read/device_write.  I guess something like a database with its own
disk formats might do that, so it's worth supporting this eventually.  It
shouldn't be too hard to add back.  But later for that.)

In device_pager_setup, you'll note that the OFFSET argument is completely
ignored, and the SIZE argument is just stored in a field that never gets used.
It seems straightforward to store these arguments in new struct dev_pager
fields and then apply/enforce them in device_pager_data_request.
I've written a patch for that, if you want to try it.
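
A minimal sketch of the enforcement half of that idea, assuming the window
gets recorded at setup time (the struct and names below are illustrative,
not the actual dev_pager fields or the patch itself):

    #include <mach/machine/vm_types.h>
    #include <mach/boolean.h>

    /* Window recorded when the pager is set up: the OFFSET and SIZE
       that were passed to device_map.  */
    struct map_window
    {
      vm_offset_t offset;
      vm_size_t size;
    };

    /* Translate a paging request's offset within the memory object
       into a device offset, refusing requests that fall outside the
       window.  This is the kind of check device_pager_data_request
       would apply.  */
    static boolean_t
    apply_window (const struct map_window *w, vm_offset_t req_offset,
                  vm_size_t req_length, vm_offset_t *dev_offset)
    {
      if (w->size != 0 && req_offset + req_length > w->size)
        return FALSE;
      *dev_offset = w->offset + req_offset;
      return TRUE;
    }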

But there is another gotcha with doing that.
Note the lines commented /* HACK */ in device_pager_setup.
It caches one dev_pager under the device port and returns
it again for subsequent calls.

I don't really have a problem with multiple device_map calls returning
multiple memory objects and that being suboptimal (storeio could just cache
the memory object).  But I have no idea what will happen if there are two
objects made with vm_object_page_map that use the same physical pages.  I
suspect it will violate assumptions of other parts of the VM system and
cause it to panic and/or misbehave in weird ways.  So perhaps we ought to
enforce some kind of constraint rather than simply removing the code that's
there now, which always returns that same single object.
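
One possible shape for such a constraint (again just a sketch with invented
names): hand the cached object back only when the new request describes the
same window, and fail the device_map call otherwise, so we never end up with
two vm_object_page_map'd objects over the same physical pages:

    #include <mach/machine/vm_types.h>
    #include <mach/boolean.h>

    struct cached_window
    {
      vm_offset_t offset;
      vm_size_t size;
    };

    /* TRUE if the pager cached with window C may be reused for a new
       device_map request asking for OFFSET/SIZE; FALSE means the
       request should be rejected rather than creating a second
       object over the same pages.  */
    static boolean_t
    window_matches (const struct cached_window *c,
                    vm_offset_t offset, vm_size_t size)
    {
      return c->offset == offset && c->size == size;
    }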


