l4-hurd

Re: Broken dream of mine :(


From: Bas Wijnen
Subject: Re: Broken dream of mine :(
Date: Mon, 21 Sep 2009 18:44:20 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

On Mon, Sep 21, 2009 at 12:19:05PM +0200, Michal Suchanek wrote:
> 2009/9/20 Bas Wijnen <address@hidden>:
> > On Fri, Sep 18, 2009 at 01:35:02AM +0200, Michal Suchanek wrote:
> >> I would expect that the user on Coyotos is free to run any session
> >> manager because it needs not be trusted by the system.

Note that we're probably using a different definition of "session
manager" here.  My definition is the part which needs to be trusted.
Most of what you seem to mean by it will indeed be replaceable on Iris
as well.

> > Yes, it does need to be trusted.  The session manager's most important
> > task is to handle access to terminal devices: keyboard, display, sound
> > card, etc.  Those devices should only be usable for the logged-in user.
> > For example, I should not be able to leave a keyboard sniffer active
> > after I logged out.
> 
> As I understand it the user session in EROS or Coyotos is connected to
> the terminal by a system service which authenticates users.

I would call that service the session manager.  As things currently
look, each user will run their own copy.  But it may end up being one
program for all users as well.

> That is, the decision to connect or disconnect the devices which are
> part of the terminal hardware is not within the user session, and the
> user can indeed install a keyboard sniffer and have it running at all
> times but it will only record keys while his session is connected so
> he will not even see his own password.

The same is true on Iris, because programs the user starts will talk to
an emulated device while the user is not logged in.
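
To make that concrete, here is a minimal sketch (C++, with invented
names; nothing here is the real Iris interface) of how such a session
manager could route keystrokes: while the session is connected, keys go
to whatever the user runs, sniffer or not; while it is disconnected, the
user's programs only ever see an emulated device, so a sniffer cannot
record the next login.

// Hypothetical sketch, not the real Iris interface: a session-manager
// style keyboard proxy.  The trusted driver always delivers keys to the
// session manager; the session manager only forwards them to the user's
// side while the session is connected.

#include <iostream>
#include <string>

struct KeyboardSink {
    virtual void key(char c) = 0;
    virtual ~KeyboardSink() = default;
};

// Whatever the user chose to run; it may well be a sniffer.
struct UserProgram : KeyboardSink {
    void key(char c) override { std::cout << "user program saw: " << c << '\n'; }
};

// The emulated device the user's programs talk to while logged out.
struct EmulatedKeyboard : KeyboardSink {
    void key(char) override { /* silently discard */ }
};

class SessionManager {
    UserProgram user_side;
    EmulatedKeyboard emulated;
    bool connected = false;
public:
    void login()  { connected = true; }
    void logout() { connected = false; }
    // Called by the (trusted) keyboard driver for every keystroke.
    void deliver(char c) {
        KeyboardSink &sink = connected
            ? static_cast<KeyboardSink &>(user_side)
            : static_cast<KeyboardSink &>(emulated);
        sink.key(c);
    }
};

int main() {
    SessionManager sm;
    sm.login();
    sm.deliver('a');                                     // the user's programs see this
    sm.logout();
    for (char c : std::string("secret")) sm.deliver(c);  // nobody sees this
}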

> Obviously, if your system image includes a user with access to this
> system service or the drivers it uses that user can install a sniffer
> that will record all keystrokes.

Of course.  I'm still not sure if that user should exist at all, but
even if it does, it should hardly ever be used (only for upgrading
critical parts of the system).

> Since you put more trust into the user's top-level session manager, it
> may be somewhat useful to think of this manager as part of the system
> rather than as part of the user's session.  It has privileges that the
> user does not have (cannot invoke freely).

Yes, I do think of it as part of the system.  I'm sorry if my choice of
words confused the matter for you.

> > Writing a driver which can handle its own memory changing underneath it
> > may be tricky.  Then again, I don't think it often happens that users
> > will connect to each other's drivers and provide memory for them.
> > Either they run their own copy of the driver, or the driver allocates
> > its own memory.  I would expect the latter for example for services
> > which allow connections from multiple users simultaneously.  Such
> > servers often also listen on network sockets, and they need to allocate
> > for those connections, so this is part of their design anyway.
> 
> Listening and established connections alike are done on behalf of a
> process.  How many connections is the service going to handle?

The point is: when a connection is made over an ethernet socket, there
is no way you can let the caller provide the memory for the service.  So
a server which allows random calls from ethernet must have some
mechanism to make sure it doesn't run out of memory.  If it has that
mechanism anyway, it can be used for internal calls as well, even though
those could provide their own memory.

> If it runs on its own memory then it is either limited to a fixed
> number of connections or the system can be overloaded by a process
> (group of processes) that initiates multiple connections.  The only way
> such a service can be reasonably accountable, in my view, is that each
> connection is initiated on behalf of a process that provides resources
> for initiating it.

Which is impossible in the general case.  It means any public service
can be DoSsed given enough resources to attack.  Note that the scheduler
can help a bit here: giving the server a higher priority than any client
will make sure that local requests are handled without a queue.
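
For concreteness, a sketch of one such mechanism (C++, all names
invented): charge every connection against a per-client quota and refuse
further connections rather than running out of memory.  It bounds what a
single client can tie up; as said above, an attacker with enough
distinct clients can still flood the server, so this limits the problem
rather than eliminating it.

// Sketch only, all names made up: per-client accounting for a server
// whose callers (local or remote) do not donate storage.  New work is
// charged against a quota and refused when the quota is exhausted, so
// the server itself never runs out of memory.

#include <cstddef>
#include <iostream>
#include <map>
#include <string>

class ConnectionTable {
    std::map<std::string, std::size_t> used;           // bytes charged per client
    static constexpr std::size_t kQuota = 64 * 1024;    // arbitrary example limit
public:
    // Returns false instead of allocating beyond the client's quota.
    bool open(const std::string &client, std::size_t bytes_needed) {
        std::size_t &u = used[client];
        if (u + bytes_needed > kQuota)
            return false;
        u += bytes_needed;
        return true;
    }
};

int main() {
    ConnectionTable table;
    std::cout << std::boolalpha;
    // The same accounting path is used for remote and local callers.
    std::cout << table.open("10.0.0.1", 4096) << '\n';   // true: within quota
    std::cout << table.open("10.0.0.1", 70000) << '\n';  // false: would exceed it
}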

> > The philosophy is that on my system the user can debug everything he/she
> > owns.  But the computer owner is always in control, and can set up
> > things in ways I don't like.  I have no intention to try to stop that.
> > (Trusted/Treacherous Computing is an attempt to stop it, and I have no
> > intention to support it.)
> 
> This can probably be done on Coyotos as well.  You can tell the
> application anything you want, perhaps with some proxy service or a
> small modification to the kernel if required, and you can do that out
> of the box on Iris, I assume.

Yes, I know this is possible on Coyotos (probably with a small
modification indeed).  I did not mean to suggest that this is something
Coyotos is unable to do.  You seem to think I am writing Iris because I
want to improve on this point of Coyotos; that is not the case, even
though my default user space will be better suited to it than what
Coyotos was supposed to provide.

> The problem is that if you get a version of either kernel that does
> not lie to the process, and you can verify that with some scheme
> involving TPM or similar, you can now make an application that refuses
> to run unless it has access to truly opaque memory.

Indeed, when people start using TPM, the users lose.

> And then you have to attach the debugger to a FireWire port or the RAM bus.
> 
> So the actual difference is minor if any.

Perhaps.  It does however mean that users will want only certified
versions of Iris, and it will be very hard to get patches used by
people.  I hope that not many kernel patches will be needed, but I don't
think it's a good thing if it's hard to get them accepted.

> > There is a big difference between technically opaque and socially
> > opaque.  My system supports technically opaque memory, even between
> > different users: this is memory given to process S by process C, paid
> > for by process C, but not usable by it.  This is trivial to implement,
> > and as I said I shall do this in the session manager.
> >
> > Socially opaque is about people.  With this I mean that person A gives
> > resources to person B, where A pays and B uses the resources.  A has
> > lost control over them, except the control to reclaim them.  This seems
> > very similar, but isn't.  The difference is that memory which is
> > technically opaque may still be inspected and even changed by the
> > *person* providing it.  Doing so (especially changing) might violate a
> > social agreement between A and B.  IMO it is important that such a
> > social agreement doesn't become a technical agreement.  There seems to
> > be no use case worth implementing where A should be technically
> > prevented from debugging his own resources.
> 
> No, memory that is technically opaque is opaque under certain
> technical conditions.

In the above I was trying to define those terms for this discussion, so
there really is no point in disagreeing; it's just the definition I use.
;-)  You have a point that I have perhaps chosen the wrong term again;
you might even have preferred the terms the other way around.  For this
discussion I find it important to be clear about things, so very well,
let's reverse the terms.

So:
technically opaque: opaque through technical protection
socially opaque: opaque through social agreement between people
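
As an illustration of the kind of grant described in the quoted
paragraph above, here is a small C++ sketch (invented types; the real
kernel interface looks nothing like this, and the process separation is
only modelled by which functions each side is meant to call): process C
pays for the storage and can reclaim it, process S uses it, and the
person behind C keeps a debug handle, so nothing technically prevents
them from inspecting their own resources.

// Illustrative only; invented types, and the C/S process split is only
// modelled by which functions each side is meant to call.

#include <cassert>
#include <cstddef>
#include <vector>

struct DebugHandle;                    // kept by the person paying for the storage

class StorageGrant {
    std::vector<std::byte> pages;
    bool revoked = false;
    friend struct DebugHandle;
public:
    explicit StorageGrant(std::size_t bytes) : pages(bytes) {}
    // S (the receiver) uses the memory through this.
    std::byte *map_for_receiver() { return revoked ? nullptr : pages.data(); }
    // C (the payer) gets no read interface, only the right to reclaim.
    void revoke() { revoked = true; }
};

// The debug handle bypasses the grant: the paying person can always look.
struct DebugHandle {
    StorageGrant *g;
    std::byte peek(std::size_t i) const { return g->pages.at(i); }
};

int main() {
    StorageGrant grant(4096);              // C pays for 4096 bytes
    DebugHandle dbg{&grant};               // ...and keeps the right to inspect them
    std::byte *p = grant.map_for_receiver();
    p[0] = std::byte{42};                  // S stores something in its opaque memory
    assert(dbg.peek(0) == std::byte{42});  // the payer can still debug it
    grant.revoke();                        // and can take the storage back
}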

> The problem with DRM is not that opaque memory can be created; you can
> just have a key store in the Linux kernel and it may be secure enough
> for DRM purposes.  The problem is that, by social engineering, a group
> holding a substantial share of the resources that users want to access
> may coerce them into using a version of the system that makes the
> opaque memory really opaque for all practical purposes.  Then the
> technical conditions in effect are no longer controlled by the user.

In other words, the technical problem to solve is that the user must be
unable to prove to that group of people that he is doing what they want;
they must rely on his honesty.  That is exactly what I try to do with
Iris: make it impossible for a program to see whether the storage it
receives is really opaque.
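
A small sketch of what that means for the interface a program sees (C++,
invented names): there is simply nothing to ask.  Both backings below
behave identically through the only interface the receiving program
gets, so it cannot make running conditional on the memory being "truly"
opaque.

// Sketch, invented names: the receiving program compiles against Storage
// and nothing else.  There is deliberately no is_opaque() query, and both
// backings behave identically, so the program cannot tell whether the
// payer kept a way to inspect the memory.

#include <array>
#include <cstddef>
#include <iostream>

struct Storage {                       // all a receiving program ever sees
    virtual std::byte *data() = 0;
    virtual std::size_t size() const = 0;
    virtual ~Storage() = default;
};

struct OpaqueBacking : Storage {       // the payer kept no access
    std::array<std::byte, 16> mem{};
    std::byte *data() override { return mem.data(); }
    std::size_t size() const override { return mem.size(); }
};

struct DebuggableBacking : Storage {   // the payer can still inspect `mem`
    std::array<std::byte, 16> mem{};
    std::byte *data() override { return mem.data(); }
    std::size_t size() const override { return mem.size(); }
};

// Whatever a DRM-minded program does with a Storage&, the answers are
// the same either way; the difference is simply not expressible here.
void receiver(Storage &s) { std::cout << "got " << s.size() << " bytes\n"; }

int main() {
    OpaqueBacking a;
    DebuggableBacking b;
    receiver(a);
    receiver(b);
}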

> In my view a system only implements technical features.  In an
> open-source system, omitting a feature that is logically part of the
> system while providing the building blocks for it is of little
> relevance.

It depends on how the building is done, and on who can check.  If a user
wants to lock himself out, that's his problem.  But if other people tell
him to do so, he should be able to lie about whether he did it.

> Social contracts can be specified in a license, suggested in
> documentation, or restricted by law, for example.  Still, the system is
> purely a set of technical features that the user might invoke or not.

IMO social agreements are very important when it comes to what a
computer should and shouldn't do.  In particular, for a computer to
completely do what the user wants (that's what Iris aims for, and it's a
social goal at the core), it must also be able to lie on the user's
behalf.

Thanks,
Bas
