
Re: Broken dream of mine :(


From: Michal Suchanek
Subject: Re: Broken dream of mine :(
Date: Mon, 21 Sep 2009 19:45:00 +0200

2009/9/21 Bas Wijnen <address@hidden>:
> On Mon, Sep 21, 2009 at 12:19:05PM +0200, Michal Suchanek wrote:
>> 2009/9/20 Bas Wijnen <address@hidden>:

>> That is, the decision to connect or disconnect the devices which are
>> part of the terminal hardware is not within the user session, and the
>> user can indeed install a keyboard sniffer and have it running at all
>> times, but it will only record keys while his session is connected, so
>> he will not even see his own password.
>
> The same is true on Iris, because programs the user starts will talk to
> the emulated device while not logged in.
>
>> Obviously, if your system image includes a user with access to this
>> system service or the drivers it uses that user can install a sniffer
>> that will record all keystrokes.
>
> Of course.  I'm still not sure if that user should exist at all, but
> even if it does, it should hardly ever be used (only for upgrading
> critical parts of the system).
>
>> Since you put more trust into the user's toplevel session manager it
>> may be somewhat useful to think of this manager as part of the system
>> rather than as part of the user's session.  It has privileges that the
>> user does not have (cannot invoke freely).
>
> Yes, I do think of it as part of the system.  I'm sorry if my choice of
> words confused the matter for you.

No problem.

When I see "session manager" it makes me think of the X11 session
manager, the login shell. login.exe, or similar process that is
started to initiate the user session once the user authenticates and
whose exit terminates the user session. That's where the term was used
previously.

>> > Writing a driver which can handle its own memory changing underneath it
>> > may be tricky.  Then again, I don't think it often happens that users
>> > will connect to each other's drivers and provide memory for them.
>> > Either they run their own copy of the driver, or the driver allocates
>> > its own memory.  I would expect the latter for example for services
>> > which allow connections from multiple users simultaneously.  Such
>> > servers often also listen on network sockets, and they need to allocate
>> > for those connections, so this is part of their design anyway.
>>
>> Both listening and established connections are made on behalf of a
>> process.  How many connections is the service going to handle?
>
> The point is: when a connection is made over an ethernet socket, there
> is no way you can let the caller provide the memory for the service.  So
> a server which allows random calls from ethernet must have some mechanism
> to make sure it doesn't run out of memory.  If it has that mechanism
> anyway, it can be used for internal calls as well, even though those
> could provide their own memory.

If a call from the internet is received, it means that some process
registered a listening port and has to pay for the memory and CPU time
required to receive the call.  Otherwise it will not be received.

It can run out of memory; system resources are finite.  The system's
job is to constrain this service to the resources that were assigned
to it, so that the rest of the system can run without undue
interference from this service.
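
As a rough sketch of what I mean (in C; the port number and the pool
size are arbitrary placeholders, so this is purely illustrative): the
service allocates a fixed pool of connection buffers up front, out of
its own resources, and simply refuses further calls once the pool is
full.

/* Illustrative only: a TCP echo service that pays for incoming calls
 * out of a fixed pool it allocated up front, and refuses further
 * connections once that pool is full.  Port 7000 and the pool size
 * are arbitrary. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_CONNS 16          /* limit the service imposes on itself */
#define BUF_SIZE  4096        /* per-connection receive buffer       */

int main(void)
{
    static char buf[MAX_CONNS][BUF_SIZE];   /* paid for once, up front */
    struct pollfd pfd[MAX_CONNS + 1];
    int nconn = 0;

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7000);
    if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(srv, 8) < 0) {
        perror("listen");
        return 1;
    }

    pfd[0].fd = srv;
    pfd[0].events = POLLIN;

    for (;;) {
        if (poll(pfd, (nfds_t)(nconn + 1), -1) < 0)
            continue;

        if (pfd[0].revents & POLLIN) {
            int c = accept(srv, NULL, NULL);
            if (c >= 0 && nconn < MAX_CONNS) {
                pfd[1 + nconn].fd = c;        /* take a slot from the pool */
                pfd[1 + nconn].events = POLLIN;
                pfd[1 + nconn].revents = 0;
                nconn++;
            } else if (c >= 0) {
                close(c);        /* pool exhausted: refuse, do not grow */
            }
        }

        for (int i = 1; i <= nconn; i++) {
            if (!(pfd[i].revents & (POLLIN | POLLHUP | POLLERR)))
                continue;
            ssize_t n = read(pfd[i].fd, buf[i - 1], BUF_SIZE);
            if (n > 0) {
                write(pfd[i].fd, buf[i - 1], (size_t)n);   /* echo back */
            } else {
                close(pfd[i].fd);          /* connection gone: free slot */
                pfd[i] = pfd[nconn];
                nconn--;
                i--;
            }
        }
    }
}

The point is only that the cost of receiving calls is bounded and known
in advance; what the service does with the data is beside the point.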

On Linux there is an illusion of infinite memory: an allocation can
never fail.  However, as the amount of used memory grows, the system
swaps out more and more pages and slows down, eventually locking up or
starting the OOM killer, which (almost) randomly kills processes to
free up memory.
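
A toy program showing this behaviour (the chunk size is arbitrary, and
it should obviously not be run on a machine you care about): with the
default overcommit settings malloc() keeps succeeding, and the trouble
only starts once the pages are actually touched.

/* Keep allocating and touching memory until something gives.  Under
 * Linux's default overcommit policy the malloc() calls succeed; the
 * swapping and the OOM killer arrive only when the pages are written. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (64UL * 1024 * 1024)     /* 64 MiB per step, arbitrary */

int main(void)
{
    unsigned long total = 0;

    for (;;) {
        char *p = malloc(CHUNK);
        if (p == NULL) {          /* rarely reached with overcommit on */
            printf("malloc failed after %lu MiB\n", total >> 20);
            return 0;
        }
        memset(p, 1, CHUNK);      /* touching the pages makes them real */
        total += CHUNK;
        printf("committed %lu MiB\n", total >> 20);
    }
}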

>
>> If it runs on its own memory then it is either limited to a fixed
>> number of connections, or the system can be overloaded by a process
>> (or group of processes) that initiates multiple connections.  The only
>> way such a service can be reasonably accountable, in my view, is that
>> each connection is initiated on behalf of a process that provides
>> resources for initiating it.
>
> Which is impossible in the general case.  It means any public service
> can be DoSsed given enough resources to attack.  Note that the scheduler

Yes, any public service can be DoSed given enough resources to attack.

> can help a bit here: giving the server a higher priority than any client
> will make sure that local requests are handled without a queue.
>
>> > The philosophy is that on my system the user can debug everything he/she
>> > owns.  But the computer owner is always in control, and can set up
>> > things in ways I don't like.  I have no intention to try to stop that.
>> > (Trusted/Treacherous Computing is an attempt to stop it, and I have no
>> > intention to support it.)
>>
>> This can probably be done on Coyotos as well.  You can tell the
>> application anything you want, perhaps with some proxy service or a
>> small modification to the kernel if required, and you can do that out
>> of the box on Iris, I assume.
>
> Yes, I know this is possible on Coyotos (probably with a small
> modification indeed).  I did not mean to suggest that this is something
> Coyotos is unable to do (you seem to think I am writing Iris because I
> want to improve this point of Coyotos; this is not the case, even though
> my default user space will be more suited to it than what Coyotos was
> supposed to provide).

At the time your "toy system" was first announced I got the impression
that the situation was different and that the new kernel would have
resource accounting very different from that of Coyotos, possibly
avoiding opaque memory completely.

>
>> The problem is that if you get a version of either kernel that does
>> not lie to the process, and you can verify that with some scheme
>> involving TPM or similar, you can now make an application that refuses
>> to run unless it has access to true opaque memory.
>
> Indeed, when people start using TPM, the users lose.
>
>> And then you have to attach the debugger to a FireWire port or the RAM bus.
>>
>> So the actual difference is minor if any.
>
> Perhaps.  It does however mean that users will want only certified
> versions of Iris, and it will be very hard to get patches used by
> people.  I hope that not many kernel patches will be needed, but I don't
> think it's a good thing if it's hard to get them accepted.
>
>> > There is a big difference between technically opaque and socially
>> > opaque.  My system supports technically opaque memory, even between
>> > different users: this is memory given to process S by process C, paid
>> > for by process C, but not usable by it.  This is trivial to implement,
>> > and as I said I shall do this in the session manager.
>> >
>> > Socially opaque is about people.  With this I mean that person A gives
>> > resources to person B, where A pays and B uses the resources.  A has
>> > lost control over them, except the control to reclaim them.  This seems
>> > very similar, but isn't.  The difference is that memory which is
>> > technically opaque may still be inspected and even changed by the
>> > *person* providing it.  Doing so (especially changing) might violate a
>> > social agreement between A and B.  IMO it is important that such a
>> > social agreement doesn't become a technical agreement.  There seems to
>> > be no use case worth implementing where A should be technically
>> > prevented from debugging his own resources.
>>
>> No, memory that is technically opaque is opaque under certain
>> technical conditions.
>
> In the above I was trying to define those terms for this discussion.  So
> there really is no point in disagreeing, it's just the definition I use.
> ;-)  You have a point that I have perhaps chosen the wrong term again.
> Perhaps you would have preferred to have the terms reversed, even.  For
> this discussion I find it important to be clear about things, so very
> well, let's reverse the terms.
>
> So:
> technically opaque: opaque through technical protection
> socially opaque: opaque through social agreement between people

In your definition I am missing one part that you seem to imply.

socially opaque: opaque through social agreement between people, in a
situation where technical means for ensuring that the memory is indeed
opaque are missing or not used.

DRM is then a situation where memory is both socially and technically
opaque, with the technical conditions enforcing a stricter policy than
would be possible to uphold through a social contract alone.

>
>> The problem with DRM is not that opaque memory can be created; you
>> can just have a key store in the Linux kernel and it may be secure
>> enough for DRM purposes.  The problem is that, by social engineering,
>> a group holding a substantial share of the resources that the users
>> want to access may coerce them into using a version of the system
>> that makes the opaque memory really opaque for all practical purposes.
>> Then the technical conditions in effect are no longer controlled by
>> the user.
>
> In other words, the technical problem to solve is that the user must be
> unable to prove to that group of people that he is doing what they want.
> They must rely on his honesty.  Which is exactly what I try to do with
> Iris: make it impossible for a program to see if the storage it receives
> is really opaque.
>
>> In my view a system only implements technical features.  In an
>> open-source system, omitting a feature that is logically part of the
>> system while still providing the building blocks for it is of little
>> relevance.
>
> It depends how the building is done, and who can check.  If a user wants
> to lock himself out, that's his problem.  But if other people tell him
> to do so, he should be able to lie about whether he did so.
>
>> Social contracts can be specified in a license, suggested in
>> documentation, or restricted by law, for example.  Still, the system
>> is purely a set of technical features that the user might invoke or
>> not.
>
> IMO social agreements are very important when it comes to what a
> computer should and shouldn't do.  In particular, for a computer to
> completely do what the user wants (that's what Iris aims for, and it's a
> social goal at the core), it must also be able to lie for the user.
>

The problem here does not lie in the technical features of the system.
The problem would possibly lie in the availability of a technique that
ensures that the system cannot lie, and in the pressure on users to
employ this technique in their systems.

The more secure the system and the hardware used for verification, the
more confidence the verification gains.

By implementing a more secure and reliable system you improve its
usability for everyday tasks and its protection from viruses and
software errors, but you also increase the confidence in the
verification, because with a more reliable system it is harder to break
the verification.

Thanks

Michal



