Re: Broken dream of mine :(


From: Michal Suchanek
Subject: Re: Broken dream of mine :(
Date: Mon, 21 Sep 2009 12:19:05 +0200

2009/9/20 Bas Wijnen <address@hidden>:
> On Fri, Sep 18, 2009 at 01:35:02AM +0200, Michal Suchanek wrote:
>> 2009/9/17 Bas Wijnen <address@hidden>:
>> >> Is this somehow solved in your system?
>> >
>> > Yes, IMO it is, but judge for yourself.  The problem on Coyotos in fact
>> > comes from constructors, which don't exist on my system.  Here's why
>> > this creates the problem:
>>
>> Your system is basically equivalent for the purposes of being "drm
>> capable" but loses some functionality.
>
> Indeed, and if the sysadmin would really want to, it would not be too
> hard to add those parts (which aren't missing, but removed).
>
>> I would expect that the user on Coyotos is free to run any session
>> manager because it needs not be trusted by the system.
>
> Yes, it does need to be trusted.  The session manager's most important
> task is to handle access to terminal devices: keyboard, display, sound
> card, etc.  Those devices should only be usable for the logged-in user.
> For example, I should not be able to leave a keyboard sniffer active
> after I logged out.

As I understand it, the user session in EROS or Coyotos is connected
to the terminal by a system service which authenticates users. That
is, the decision to connect or disconnect the devices that make up the
terminal hardware is not made within the user session. The user can
indeed install a keyboard sniffer and leave it running at all times,
but it will only record keys while his session is connected, so he
will not even see his own password.
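
To make this concrete, a rough sketch of the arrangement (all names
invented for illustration, not a real EROS, Coyotos or Iris
interface):

  /* The terminal service owns the keyboard and forwards events only to
   * the session that is currently attached.  Attachment is decided by
   * the authentication service, never from inside a user session. */
  #include <stddef.h>

  struct session {
      /* user-supplied handler, possibly a sniffer */
      void (*deliver_key)(struct session *, int keycode);
  };

  static struct session *attached;   /* set by the system, not by users */

  /* Called by the authentication service after a successful login. */
  void terminal_attach(struct session *s) { attached = s; }

  /* Called on logout; the old session keeps running but gets no keys. */
  void terminal_detach(void) { attached = NULL; }

  /* Called from the keyboard driver for every keystroke. */
  void terminal_key_event(int keycode)
  {
      if (attached)
          attached->deliver_key(attached, keycode);
  }

A sniffer left behind in a detached session keeps running, but
terminal_key_event never calls into it, so it records nothing.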

Obviously, if your system image includes a user with access to this
system service, or to the drivers it uses, that user can install a
sniffer that records all keystrokes.

Since you put more trust into the user's top-level session manager,
it may be somewhat useful to think of this manager as part of the
system rather than as part of the user's session. It has privileges
that the user does not have (and cannot invoke freely).

>
>> Similar situation arises in a TCP stack but this component has to be
>> used by all users in parallel to be reasonably effective (eg ports are
>> allocated from a shared space). If you have local access to a computer
>> then a TCP stack is a non-vital disposable component. It is more
>> important for a networked system but it still can be restarted
>> occasionally if it fails to perform reasonably well without serious
>> disruption of the whole system. You can use SSL which cannot be broken
>> by the TCP stack any more than by any other man-in-the-middle if done
>> properly.
>
> I'm not sure what problem you want me to solve here.  The TCP stack will
> be a system service; single-port communication capabilities can be
> retrieved and used by users.  I don't think any part of the TCP stack
> should be in untrusted user space.  However (almost?) all things done
> with it, such as SSL, should be.

>
> Writing a driver which can handle its own memory changing underneath it
> may be tricky.  Then again, I don't think it often happens that users
> will connect to each other's drivers and provide memory for them.
> Either they run their own copy of the driver, or the driver allocates
> its own memory.  I would expect the latter for example for services
> which allow connections from multiple users simultaneously.  Such
> servers often also listen on network sockets, and they need to allocate
> for those connections, so this is part of their design anyway.

Both listening and established connections are handled on behalf of
some process. How many connections is the service going to handle?

If it runs on its own memory then it is either limited to a fixed
number of connections, or the system can be overloaded by a process
(or group of processes) that initiates many connections. The only way
such a service can be reasonably accountable, in my view, is for each
connection to be initiated on behalf of a process that provides the
resources for it.
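
A rough sketch of what I mean (types and functions made up, not any
real interface): the shared TCP service stores per-connection state
only in memory provided by the requesting process.

  #include <stddef.h>

  struct tcb { int local_port, remote_port; /* timers, buffers, ... */ };

  /* Memory the client donated for this one connection. */
  struct mem_grant { void *base; size_t len; };

  struct tcb *tcp_connect(struct mem_grant grant, int remote_port)
  {
      /* Refuse rather than dip into the service's own shared memory. */
      if (grant.len < sizeof(struct tcb))
          return NULL;

      struct tcb *conn = grant.base;  /* state lives in the client's grant */
      conn->remote_port = remote_port;
      conn->local_port  = 0;          /* picked later from the shared port space */
      return conn;
  }

A client that opens a thousand connections pays for a thousand control
blocks; the service never exhausts its own memory on the client's
behalf.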

>
>
>> So all that is needed for performing drm is installing (part of) a
>> media player as a trusted system driver.
>
> Yes.
>
>> Needless to say, users that
>> have that privilege do that daily, and users that don't demand that it
>> be done for them.
>
> No.  The player cannot detect how it is running.  A user can install the
> player as a normal driver, tell it that it is a system driver, and it
> cannot know that it isn't true.  So yes, if a system administrator wants
> to allow drm to his/her users, that is possible.  But on Coyotos,
> changes to the kernel would be needed to debug a drm application.  This
> is not the case on Iris.
>
> The philosophy is that on my system the user can debug everything he/she
> owns.  But the computer owner is always in control, and can set up
> things in ways I don't like.  I have no intention to try to stop that.
> (Trusted/Treacherous Computing is an attempt to stop it, and I have no
> intention to support it.)
>

This can probably be done on Coyotos as well. You can tell the
application anything you want, perhaps with some proxy service or a
small modification to the kernel if required, and I assume you can do
that out of the box on Iris.
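
A sketch of such a proxy (again with made-up names, not a real Coyotos
or Iris interface): it hands out ordinary, debuggable memory but
answers the opacity query with yes, and the player cannot tell.

  #include <stdbool.h>
  #include <stdlib.h>

  struct mem_reply { void *pages; bool opaque; };

  /* What the real system service would answer. */
  struct mem_reply real_alloc(size_t len)
  {
      return (struct mem_reply){ malloc(len), false };  /* debugger-visible */
  }

  /* The proxy interposed between the player and the system. */
  struct mem_reply proxy_alloc(size_t len)
  {
      struct mem_reply r = real_alloc(len);
      r.opaque = true;            /* simply claim opacity */
      return r;
  }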

The problem is that if you get a version of either kernel that does
not lie to the process, and you can verify that with some scheme
involving a TPM or similar, you can now make an application that
refuses to run unless it has access to truly opaque memory.

And then you have to attach the debugger to a FireWire port or the RAM bus.

So the actual difference is minor, if any.

>> But you cannot use services that are shared between users and are not
>> trusted, not without creating a quite direct communication path
>> between all the processes using the service, and possibly further
>> disrupting the service by allowing all users to modify its state.
>
> Indeed.  So installing a driver as a system driver should not be done
> lightly.  Still, it is not as bad as it sounds.  Before this
> communication channel can work, the communicating agents must have a
> capability to the device.  So it's not as bad as /tmp, it's only a
> communication channel for "all programs that use a webcam", for example.
> Still not a good idea, of course.
>
>> > Does this answer your question?
>>
>> In a way, yes.
>>
>> This is not a solution in my view. You only removed unwanted features
>> without changing the technical facilities that allow implementing them
>
> Yes, but I need the technical facilities, and can't remove them anyway:
> they're not part of the kernel, so any system administrator can create a
> single user which implements these top-level authentication drivers and
> include drm features in them.  As said, it is even possible with the
> system as I intend to write it.
>
>> yet these features removed seem vital for a working system with
>> resource management.
>
> Sort of.  See below.
>
>> It may be that opaque memory is not in itself sufficient for a working
>> system or it may not be necessary at all but only seeing a detailed
>> system design decides this. Until then opaque memory seems to be a
>> promising building block in such a system.
>
> There is a big difference between technically opaque and socially
> opaque.  My system supports technically opaque memory, even between
> different users: this is memory given to process S by process C, paid
> for by process C, but not usable by it.  This is trivial to implement,
> and as I said I shall do this in the session manager.
>
> Socially opaque is about people.  With this I mean that person A gives
> resources to person B, where A pays and B uses the resources.  A has
> lost control over them, except the control to reclaim them.  This seems
> very similar, but isn't.  The difference is that memory which is
> technically opaque may still be inspected and even changed by the
> *person* providing it.  Doing so (especially changing) might violate a
> social agreement between A and B.  IMO it is important that such a
> social agreement doesn't become a technical agreement.  There seems to
> be no use case worth implementing where A should be technically
> prevented from debugging his own resources.

No, memory that is technically opaque is opaque under certain
technical conditions. While those conditions are met it really is
inaccessible. Processes and techniques are part of the system; users
aren't. They live outside the system, access it, and use the technical
facilities it provides. Which techniques the users invoke is driven in
part by what they need or want and in part by what you or somebody
else implements.

The problem with drm is not that opaque memory can be created; you
can just have a key store in the Linux kernel, and it may be secure
enough for drm purposes. The problem is that, by social engineering, a
group holding a substantial share of the resources that users want to
access may coerce them into using a version of the system that makes
the opaque memory really opaque for all practical purposes. Then the
technical conditions in effect are no longer controlled by the user.
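
As an illustration of the key store remark, a minimal sketch using the
existing Linux keyutils interface (built with -lkeyutils; the
"mydrm:content-key" name is made up). A "logon" key is held in kernel
memory and cannot be read back from user space, which is already a
form of opaque storage:

  #include <keyutils.h>
  #include <stdio.h>

  int main(void)
  {
      const char secret[] = "0123456789abcdef";   /* pretend content key */

      /* "logon" keys can be used by the kernel but, unlike "user" keys,
       * keyctl_read() on them is not permitted. */
      key_serial_t id = add_key("logon", "mydrm:content-key",
                                secret, sizeof secret,
                                KEY_SPEC_SESSION_KEYRING);
      if (id < 0) {
          perror("add_key");
          return 1;
      }

      char buf[64];
      long n = keyctl_read(id, buf, sizeof buf);   /* expected to fail */
      printf("key %d created; read back %s\n", (int)id,
             n < 0 ? "denied (opaque to user space)" : "allowed");
      return 0;
  }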

>> It also seems to emerge naturally in systems that attempt to approach
>> resource management better than "best effort", it's not specific to
>> the coyotos lineage of systems.
>
> Technically opaque memory, yes.  Socially opaque, no.  It is easy to
> overlook the difference initially, but after thinking about it this has
> become clear to me.
>

In my view a system only implements technical features. In an
open-source system, omitting a feature that is logically part of the
system while providing the building blocks for it is of little
relevance.

Social contracts can be specified in a license, suggested in
documentation, or restricted by law, for example. Still, the system is
purely a set of technical features that the user may or may not
invoke.

Thanks

Michal



