bug-hurd
Re: libusb+librump patch


From: Antti Kantee
Subject: Re: libusb+librump patch
Date: Sat, 17 Oct 2015 13:16:38 +0000

On 16/10/15 16:17, Olaf Buddenhagen wrote:
> Let's start with the *ideal* architecture we would like to have. There
> should be a PCI server process, which presents device files for every
> PCI device present; along with a libpciaccess backend which can be
> pointed at any of these device files, giving access to the associated
> hardware device(s) -- and only these. For each device we would run a
> Rump instance in another server, in turn presenting a device file giving
> access to the device functionality. In some cases -- such as USB --
> these first-level drivers would ideally only give raw access; and for
> things like USB mass storage for example, we would in turn run separate
> driver instances on top of the raw device files, which would finally
> present device files giving access to the logical functionality. (Such
> as block device files for storage devices; audio device files for sound
> cards; etc.)
>
> Now fully implementing this architecture would be pretty ambitious
> (especially the last part about separating raw drivers from logical
> drivers, as Rump itself doesn't have such a separation yet AIUI?) -- so
> we might take some shortcuts: for example letting the first level
> drivers directly export the higher-level functionality too. Or possibly
> handling multiple devices in a single server instance.

Ok, so you'd decide how to limit the visibility and arbitrate bus access in your PCI backend (I think this was discussed already?).
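
For reference, the consumer side of that PCI story would just be the stock libpciaccess API; here's a quick, untested sketch of the calls such a per-device-file backend would have to serve (the backend itself is hypothetical, and the slot numbers are made up):

#include <stdio.h>
#include <pciaccess.h>

int
main(void)
{
	struct pci_device *dev;

	if (pci_system_init() != 0) {
		fprintf(stderr, "pci_system_init failed\n");
		return 1;
	}

	/* made-up example slot: domain 0, bus 0, device 2, function 0 */
	dev = pci_device_find_by_slot(0, 0, 2, 0);
	if (dev != NULL && pci_device_probe(dev) == 0)
		printf("vendor %04x device %04x\n",
		    dev->vendor_id, dev->device_id);

	pci_system_cleanup();
	return 0;
}

The point being that your server decides which devices those calls can see at all.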

On the rest, I'll speculate.

I guess splitting the stack, especially in the case of USB, would be possible, since the host controller is supposed to export "pipes" to the rest of the USB stack. The next place you can split it is at the USB device protocol level, a la ugen. And on top of that you really already have the actual devices.
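
To give a feel for the ugen level, here's a completely untested sketch of poking a device through ugen from inside a local rump kernel (the device path is an assumption, and you'd need to link against a USB-capable component set, ugenhc et al., for there to be any devices at all):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <dev/usb/usb.h>

#include <fcntl.h>
#include <stdio.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

int
main(void)
{
	struct usb_device_info udi;
	int fd;

	if (rump_init() != 0)	/* boot the local rump kernel */
		return 1;

	/* in the ugen scheme, ugenN.00 is the control endpoint */
	fd = rump_sys_open("/dev/ugen0.00", O_RDONLY);
	if (fd == -1) {
		fprintf(stderr, "no ugen device\n");
		return 1;
	}

	if (rump_sys_ioctl(fd, USB_GET_DEVICEINFO, &udi) == 0)
		printf("found: %s %s\n", udi.udi_vendor, udi.udi_product);

	rump_sys_close(fd);
	return 0;
}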

I'm not sure how much pluggability you would gain, especially when splitting between the host controller and the upper-level USB stack, i.e. whether you could use alternative implementations on either side. IIRC the pipes are specified in the USB standard, but I'm not sure they're really standard in practice. Plus, you'd probably have to write some code, and I'm not sure that code would be upstreamable, so you might have to maintain it yourself too.

For the USB protocol splitting, you could run one rump kernel instance so that it exports ugen, and another one which uses ugenhc and provides the high-level drivers. Yeah, I can see why it would be attractive to plug into the intermediate protocol and translate it left and right. However, I'm not sure it's a good approach if you want things to, you know, work. I guess it's possible to get that approach to limp along, though.
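
The server half of that split is not much code in itself; a minimal, untested sketch of a sysproxy-enabled rump kernel (the socket URL is arbitrary, and which drivers it offers is decided by the components you link in):

#include <stdio.h>
#include <unistd.h>

#include <rump/rump.h>

int
main(void)
{
	if (rump_init() != 0)
		return 1;

	/* accept remote rump_sys_* clients on a local socket */
	if (rump_init_server("unix:///tmp/usbserv") != 0) {
		fprintf(stderr, "rump_init_server failed\n");
		return 1;
	}

	pause();	/* serve until killed */
	return 0;
}

The missing piece for the rump-on-rump stacking would be making ugenhc in the second instance attach to the first instance's ugen rather than the host's; that's the kind of code I mean you'd have to write and maintain.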

> However, Bruno (initially at least) doesn't actually want to do any of
> this. Rather, for a starting point he wants to try for the simplest
> possible architecture that somehow gives applications access to USB
> devices (especially mass storage devices). So he is thinking of
> something along the lines of what Robert did (giving mplayer access to
> USB sound devices through librump) -- in the case of USB mass storage,
> that would mean giving the filesystem servers access to USB storage
> devices through librump. He doesn't seem to be very clear on how that
> would work exactly, though...

The rump kernel would export the block device, and you'd access it from the fs server with rump_sys_read() and rump_sys_write(). Notably, rump kernels support remote system calls, so driving things remotely already works; the keyword is "sysproxy". Some of the transport code probably needs a revamp before it can support whatever RPC mechanism the Hurd uses, but I've been meaning to do that revamp for other reasons anyway. Ok, I've been meaning to do it for several years, so ...
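
Roughly, the fs server's end of the conversation would look like this (untested sketch; the device node name is a guess, and you'd point RUMP_SERVER at wherever the rump kernel is listening, then link against librumpclient):

#include <fcntl.h>
#include <stdio.h>

#include <rump/rumpclient.h>
#include <rump/rump_syscalls.h>

int
main(void)
{
	char buf[512];
	int fd;

	/* connects to the rump kernel named by RUMP_SERVER */
	if (rumpclient_init() != 0) {
		fprintf(stderr, "is the server up?\n");
		return 1;
	}

	fd = rump_sys_open("/dev/rsd0d", O_RDONLY);
	if (fd == -1) {
		fprintf(stderr, "open failed\n");
		return 1;
	}

	/* read the first sector; writes go through rump_sys_pwrite() */
	if (rump_sys_pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf))
		fprintf(stderr, "short read\n");

	rump_sys_close(fd);
	return 0;
}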

If your fs server does the appropriate block-level caching, that approach should be reasonably performant, too.


