From: Bas Wijnen
Subject: Re: Broken dream of mine :(
Date: Sun, 20 Sep 2009 20:29:37 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

On Fri, Sep 18, 2009 at 01:35:02AM +0200, Michal Suchanek wrote:
> 2009/9/17 Bas Wijnen <address@hidden>:
> >> Is this somehow solved in your system?
> >
> > Yes, IMO it is, but judge for yourself.  The problem on Coyotos in fact
> > comes from constructors, which don't exist on my system.  Here's why
> > this creates the problem:
> 
> Your system is basically equivalent for the purposes of being "drm
> capable" but loses some functionality.

Indeed, and if the sysadmin really wanted to, it would not be too hard
to add those parts (which aren't missing, but removed).

> I would expect that the user on Coyotos is free to run any session
> manager because it needs not be trusted by the system.

Yes, it does need to be trusted.  The session manager's most important
task is to handle access to terminal devices: keyboard, display, sound
card, etc.  Those devices should only be usable by the logged-in user.
For example, I should not be able to leave a keyboard sniffer active
after logging out.
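
To make that concrete, here is a toy sketch in C (all names invented
for illustration; this is not actual Iris code) of a session manager
handing out a revocable proxy for the keyboard and cutting it off at
logout:

    /* Toy sketch: the session manager hands out a revocable proxy to
     * the keyboard; logout clears it, so a sniffer reads nothing. */
    #include <stdbool.h>
    #include <stdio.h>

    struct keyboard_cap {
        bool valid;           /* cleared by the session manager on logout */
    };

    /* What an untrusted user program sees: reads fail once revoked. */
    int read_key(struct keyboard_cap *cap)
    {
        if (!cap->valid)
            return -1;        /* capability revoked: no more keystrokes */
        return 'x';           /* stand-in for a real keystroke */
    }

    int main(void)
    {
        struct keyboard_cap session_kbd = { .valid = true };

        printf("while logged in: %c\n", read_key(&session_kbd));

        /* The session manager's logout action: revoke the proxy. */
        session_kbd.valid = false;

        /* A sniffer that kept the capability now gets nothing. */
        printf("after logout: %d\n", read_key(&session_kbd));
        return 0;
    }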

> On the other hand, an administrator would typically instantiate the
> session so it would run a standard process anyway, and you can make
> the top level session manager minimal with possibility of running
> different user shells, just like Windows'.

I have no idea how Windows works, but you are right that the trusted
session manager is very minimal.  Almost all visible things are
delegated to untrusted user programs.

> There are drivers that are neither critical system services nor user
> drivers.
> 
> For example, a driver for an USB camera is not critical for proper
> system operation yet it should be usable by any user of the system.

On my system, the USB bus driver would be a system driver.  Normal
devices like cameras would communicate with this bus driver.  They need
not be trusted themselves.

When the user logs out and the camera should no longer be accessible,
the session will block access to it.  The driver is built to handle
this.  (Currently I'm expecting to have an emulation library for each
driver, which will emulate the device and save its state while it's
not connected.  When it's reconnected, because the user logs in again,
the state is restored and the emulation is replaced by the real device
again.  This is not well thought-out yet, though.)
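
Very roughly, the emulation idea looks like this (a toy C sketch with
invented names, glossing over how the session actually blocks access):

    #include <stdbool.h>
    #include <stdio.h>

    struct cam_state { int brightness; };

    static struct cam_state real_device;  /* stands in for the hardware     */
    static struct cam_state emulation;    /* holds state while disconnected */
    static bool attached = true;

    /* The driver always goes through here, so it never notices the swap. */
    static struct cam_state *backend(void)
    {
        return attached ? &real_device : &emulation;
    }

    static void user_logs_out(void)
    {
        emulation = real_device;          /* save the device state   */
        attached = false;                 /* real device is cut off  */
    }

    static void user_logs_in(void)
    {
        real_device = emulation;          /* restore the saved state */
        attached = true;
    }

    int main(void)
    {
        backend()->brightness = 7;        /* logged-in user configures it  */
        user_logs_out();                  /* state parked in the emulation */
        user_logs_in();
        printf("brightness: %d\n", backend()->brightness);  /* still 7 */
        return 0;
    }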

For a USB camera, there are two options.  The first is that it is
connected (semi-)permanently and considered part of the terminal.  In
that case the driver is run by the administrator, and cannot be
restarted by normal users in case of failure.  However, it is supposed
to behave well.  Also, with the system as I described it, "being a
system driver" really only means that it can get access to opaque
memory.  It is still destroyable by the user.  So if abandoning the
driver is an acceptable recovery, no administrator action is required.

The other option is that it is a device that is not normally
connected, but brought by the user.  In that case the USB port is the
system driver of interest.  The user may access it (while logged in)
and can thereby use a driver in user space, fully under his/her
control.

> So the driver cannot run in each user's session, there is one camera
> and multiple users. Should the driver fail the users should be able to
> recover from this failure although administrator (= user with
> capabilities that allow privileged access to that particular part of
> the system) intervention may be required to restart the driver.

This is the first scenario above, which does indeed work as you
describe.

> It might be theoretically possible to terminate the driver every time
> a different user wants to use the camera and this option is reasonable
> for a single purpose device like a webcam but it might require
> administrator intervention when one user does not terminate the driver
> but other user wants to use the device. And this is not possible for
> all drivers.

I don't know if it's practical, but I want all driver interfaces to
have calls which can be used in any order.  So it is impossible that
the next user's calls will not work because the previous user was
halfway through a sequence.  This often means that the driver
interface is more than simply allowing the user to send bytes to the
device.  I see that as a good thing.
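
As a toy illustration of the difference (an invented interface, not a
real driver): a call that carries the whole request cannot be broken
by a previous user who stopped halfway, the way a begin/send/end
sequence can.

    #include <stddef.h>
    #include <stdio.h>

    /* In a sequenced interface (begin, send, end), a user who stops
     * after begin() leaves the driver stuck for everyone else.  Here,
     * instead, one call carries the whole request, so no ordering can
     * be violated. */
    struct print_job { const char *text; };

    int submit_job(const struct print_job *job)
    {
        /* Validate and execute atomically; there is no cross-call
         * state for a previous user to leave half-finished. */
        if (job == NULL || job->text == NULL)
            return -1;
        printf("printing: %s\n", job->text);
        return 0;
    }

    int main(void)
    {
        struct print_job a = { "user 1's page" };
        struct print_job b = { "user 2's page" };
        submit_job(&a);       /* any user, any order, always valid */
        submit_job(&b);
        return 0;
    }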

> Similar situation arises in a TCP stack but this component has to be
> used by all users in parallel to be reasonably effective (eg ports are
> allocated from a shared space). If you have local access to a computer
> then a TCP stack is a non-vital disposable component. It is more
> important for a networked system but it still can be restarted
> occasionally if it fails to perform reasonably well without serious
> disruption of the whole system. You can use SSL which cannot be broken
> by the TCP stack any more than by any other man-in-the-middle if done
> properly.

I'm not sure what problem you want me to solve here.  The TCP stack will
be a system service; single-port communication capabilities can be
retrieved and used by users.  I don't think any part of the TCP stack
should be in untrusted user space.  However (almost?) all things done
with it, such as SSL, should be.
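
To illustrate the split (a hypothetical interface, not the actual
one): the trusted stack owns the shared port space and hands out
capabilities scoped to a single port, and SSL layers on top of that in
untrusted user space.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPORTS 16
    static bool port_taken[NPORTS];   /* shared space, owned by the stack */

    struct port_cap { int port; };    /* authority over exactly one port */

    /* System service side: allocate a port from the shared space. */
    int stack_bind(struct port_cap *cap)
    {
        for (int p = 0; p < NPORTS; p++)
            if (!port_taken[p]) {
                port_taken[p] = true;
                cap->port = p;
                return 0;
            }
        return -1;                    /* port space exhausted */
    }

    /* User side: may only talk through the port its capability names. */
    void user_send(const struct port_cap *cap, const char *data)
    {
        printf("port %d <- %s\n", cap->port, data);
    }

    int main(void)
    {
        struct port_cap a, b;
        stack_bind(&a);
        stack_bind(&b);                  /* two users, no collision  */
        user_send(&a, "TLS handshake");  /* SSL runs above the stack */
        user_send(&b, "hello");
        return 0;
    }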

> So all that is needed for performing drm is installing (part of) a
> media player as a trusted system driver.

Yes.

> Needless to say, users that
> have that privilege do that daily, and users that don't demand that it
> be done for them.

No.  The player cannot detect how it is running.  A user can install the
player as a normal driver, tell it that it is a system driver, and it
cannot know that it isn't true.  So yes, if a system administrator wants
to allow drm to his/her users, that is possible.  But on Coyotos,
changes to the kernel would be needed to debug a drm application.  This
is not the case on Iris.
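
A toy illustration of why the player cannot tell: it can only ask the
environment that instantiated it, and a user-built environment can
answer every such question exactly like the real one.

    #include <stdio.h>

    struct env {
        int (*is_system_driver)(void);  /* the only way the player can ask */
    };

    static int real_answer(void) { return 1; }
    static int fake_answer(void) { return 1; }  /* the user's environment
                                                   answers identically */

    static void player(const struct env *e)
    {
        if (e->is_system_driver())
            printf("player: I believe I am a system driver\n");
    }

    int main(void)
    {
        struct env real = { real_answer };
        struct env fake = { fake_answer };
        player(&real);   /* installed by the administrator */
        player(&fake);   /* installed by a user: indistinguishable */
        return 0;
    }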

The philosophy is that on my system the user can debug everything he/she
owns.  But the computer owner is always in control, and can set up
things in ways I don't like.  I have no intention to try to stop that.
(Trusted/Treacherous Computing is an attempt to stop it, and I have no
intention to support it.)

> But you cannot use services that are shared between users and are not
> trusted, not without creating a quite direct communication path
> between all the processes using the service, and possibly further
> disrupting the service by allowing all users to modify its state.

Indeed.  So installing a driver as a system driver should not be done
lightly.  Still, it is not as bad as it sounds.  Before this
communication channel can work, the communicating agents must have a
capability to the device.  So it's not as bad as /tmp; it's only a
communication channel for "all programs that use a webcam", for
example.  Still not a good idea, of course.

> > Does this answer your question?
> 
> In a way, yes.
> 
> This is not a solution in my view. You only removed unwanted features
> without changing the technical facilities that allow implementing them

Yes, but I need the technical facilities, and can't remove them
anyway: they're not part of the kernel, so any system administrator
can create a single user which implements these top-level
authentication drivers, and include drm features in them.  As I said,
it is even possible with the system as I intend to write it.

> yet these features removed seem vital for a working system with
> resource management.

Sort of.  See below.

> It may be that opaque memory is not in itself sufficient for a working
> system or it may not be necessary at all but only seeing a detailed
> system design decides this. Until then opaque memory seems to be a
> promising building block in such a system.

There is a big difference between technically opaque and socially
opaque.  My system supports technically opaque memory, even between
different users: this is memory given to process S by process C, paid
for by process C, but not usable by it.  This is trivial to implement,
and as I said I shall do this in the session manager.
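
As a toy model of technically opaque memory (my own illustration;
quotas and pages are heavily simplified): C pays for the page, S gets
the only access, and C keeps nothing but the right to reclaim.

    #include <stdio.h>
    #include <stdlib.h>

    struct page {
        int owner;                   /* who may access the contents (S) */
        int payer;                   /* whose quota is charged (C)      */
        char data[64];
    };

    static int quota[3] = { 0, 2, 2 };  /* pages left, per process id */

    /* C donates a page to S: charged to C, usable only by S. */
    struct page *grant_opaque(int payer, int owner)
    {
        if (quota[payer] == 0)
            return NULL;
        quota[payer]--;              /* C pays...                      */
        struct page *p = malloc(sizeof *p);
        p->payer = payer;
        p->owner = owner;            /* ...but only S may use the page */
        return p;
    }

    char *page_access(struct page *p, int who)
    {
        return who == p->owner ? p->data : NULL;
    }

    /* The one right C keeps: reclaiming, which returns its quota. */
    void reclaim(struct page *p)
    {
        quota[p->payer]++;
        free(p);
    }

    int main(void)
    {
        enum { C = 1, S = 2 };
        struct page *p = grant_opaque(C, S);
        printf("S's view: %p\n", (void *)page_access(p, S));  /* usable */
        printf("C's view: %p\n", (void *)page_access(p, C));  /* NULL   */
        reclaim(p);
        return 0;
    }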

Socially opaque is about people.  By this I mean that person A gives
resources to person B, where A pays and B uses the resources.  A has
lost control over them, except the right to reclaim them.  This seems
very similar, but isn't.  The difference is that memory which is
technically opaque may still be inspected, and even changed, by the
*person* providing it.  Doing so (especially changing it) might
violate a social agreement between A and B.  IMO it is important that
such a social agreement doesn't become a technical one.  There seems
to be no use case worth implementing where A should be technically
prevented from debugging his own resources.

Writing a driver which can handle its own memory changing underneath it
may be tricky.  Then again, I don't think it often happens that users
will connect to each other's drivers and provide memory for them.
Either they run their own copy of the driver, or the driver allocates
its own memory.  I would expect the latter, for example, for services
which allow connections from multiple users simultaneously.  Such
servers often also listen on network sockets and need to allocate
memory for those connections, so this is part of their design anyway.

> It also seems to emerge naturally in systems that attempt to approach
> resource management better than "best effort", it's not specific to
> the coyotos lineage of systems.

Technically opaque memory, yes.  Socially opaque, no.  It is easy to
overlook the difference initially, but after thinking about it this has
become clear to me.

Thanks,
Bas
