l4-hurd

Re: bit-split, or: the schizophrenia of trusted computing


From: Michal Suchanek
Subject: Re: bit-split, or: the schizophrenia of trusted computing
Date: Mon, 1 May 2006 22:42:50 +0200

On 5/1/06, Jonathan S. Shapiro <address@hidden> wrote:
> On Mon, 2006-05-01 at 20:30 +0200, Marcus Brinkmann wrote:
> >
> > > I will go further: in the absence of OS support, such violations cannot
> > > (in general) even be *detected*, so the suggestion that they can be
> > > deferred to social or legal enforcement actually means that you are
> > > declaring that these types of encapsulation can be violated without any
> > > human consequence at all -- or at least that the possibility of such a
> > > violation with serious human consequence places the problem domain, by
> > > definition, outside of the applications that are "of interest to the
> > > Hurd".
> >
> > I can't parse that paragraph.
>
> Sorry. Let me try to explain.
>
> If we say "mechanical prevention has other bad consequences, so we will
> leave problem X for social enforcement" we have a problem. In order for
> social enforcement to actually occur, we must be able to detect that the
> undesirable action X actually occurred. Any means of detecting this is
> necessarily built on top of enforcing primitives that are used "softly".
>
> So if we say that we wish to remove those primitives, we are saying that
> it is not important to be able to detect the undesired actions X. In
> consequence, we are declaring that they are not important enough to
> deserve "enforcement" in the social sense either.


Ehm, it looks like this discussion is getting out of focus, and the
argument is no longer understandable.

As I understand it, there are a few things we might (or might not)
support in the system.

a) It looks like we all want _confinement_: the ability to determine
that a process can only get a certain set of capabilities. This means
that a potentially buggy or malicious process can only scribble over
the piece of memory we allocated for it and should not leak any
information we gave it. And that we can give it only the information
that is needed to perform its service, nothing more.
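
To make that concrete, here is a minimal sketch in C of what the
confinement check amounts to. Everything in it (the cap_t handle, the
allow-list, the numbers) is invented for illustration; this is not a
real Hurd or EROS interface.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef unsigned cap_t;              /* opaque capability handle */

  /* True iff every capability handed to the process also appears on
     the allow-list -- the whole of what confinement guarantees here. */
  static bool is_confined(const cap_t *given, size_t ngiven,
                          const cap_t *allowed, size_t nallowed)
  {
      for (size_t i = 0; i < ngiven; i++) {
          bool found = false;
          for (size_t j = 0; j < nallowed; j++) {
              if (given[i] == allowed[j]) {
                  found = true;
                  break;
              }
          }
          if (!found)
              return false;  /* the process could leak through this one */
      }
      return true;
  }

  int main(void)
  {
      cap_t allowed[] = { 1 /* scratch memory */, 2 /* reply port */ };
      cap_t given[]   = { 1, 2 };

      printf("confined: %s\n",
             is_confined(given, 2, allowed, 2) ? "yes" : "no");
      return 0;
  }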

b) We may need _isolation_ to perform some services. Isolation is
support from the OS for running programs that cannot be inspected.
They can be confined as well. This means that the program is an
isolated black box that gets some input and, in the ideal case,
provides some output.
In EROS this is implemented by constructors: services that can
instantiate new processes, and can instantiate them so that their
memory is not revealed to the client instantiating the process
through the constructor.
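
As a toy model (the names and numbers are invented; this is not
actual EROS code), the essential property is that the memory
capability never crosses over to the client:

  #include <stdio.h>

  struct process {
      int memory_cap;    /* capability to read/write its memory */
      int invoke_cap;    /* capability to send it requests      */
  };

  struct handle { int invoke_cap; };   /* all the client ever sees */

  /* The memory capability stays inside the constructor, which is
     what makes the new process an opaque black box to the very
     client that asked for it. */
  static struct handle construct_isolated(void)
  {
      static struct process p = { .memory_cap = 42, .invoke_cap = 7 };
      return (struct handle){ .invoke_cap = p.invoke_cap };
  }

  int main(void)
  {
      struct handle h = construct_isolated();
      printf("client holds invoke cap %d; the memory cap never "
             "leaves the constructor\n", h.invoke_cap);
      return 0;
  }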

Marcus suggests that isolation support should be removed from
constructors, or constructors dropped entirely. Instead, isolation
should be provided between user sessions by some trusted service that
creates the sessions. This is more or less equivalent to having
constructors, because a process that wants to be isolated can always
ask to be run in a separate user session.

Note that a user who does not possess the administrative right to
create new user sessions is possibly not able to run such software in
this case.
But this can be worked around by running a sub-hurd (a sub-OS) that
appears from the inside to be a separate OS. It can proxy all space,
hardware, and other capabilities so that they appear genuine within
its context, similar to machine emulation but hopefully with only low
overhead. Now the process is isolated within the emulated environment,
but the whole environment is open to the user.
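
A minimal sketch of the proxying idea, with a capability reduced to a
callable endpoint (all names are hypothetical):

  #include <stdio.h>

  typedef int (*invoke_fn)(int request);

  struct cap { invoke_fn invoke; };   /* a capability, reduced to a
                                         callable endpoint */

  /* Stand-in for a genuine service outside the sub-hurd. */
  static int real_service(int request)
  {
      return request * 2;
  }

  static struct cap real_cap = { real_service };

  /* The proxy presents the same interface as the real capability,
     so the inner system cannot tell the difference... */
  static int proxy_invoke(int request)
  {
      /* ...but everything that passes through is visible to the
         outer user who set the proxy up. */
      printf("outer user observes request %d\n", request);
      return real_cap.invoke(request);
  }

  static struct cap proxied_cap = { proxy_invoke };

  int main(void)
  {
      /* From inside the sub-hurd the proxied capability behaves
         exactly like the genuine one. */
      printf("reply: %d\n", proxied_cap.invoke(21));
      return 0;
  }

The point is that the interposition is transparent: as long as every
capability the isolated process ever sees is such a proxy, it has no
way to tell from the inside whether it runs under a top OS or a
sub-hurd.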

The user can also simply drop the capabilities that allow inspecting
some processes, if she wishes so. However, the process never knows
whether it can be inspected or not. In contrast, constructors (with
isolation) allow checking which constructor created a particular
object, and this allows a program to check whether it was isolated.
But without verification the user may create a 'constructor' that
says it creates isolated processes but gives away the capabilities to
their memory anyway.
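
Roughly, the check a program would perform looks like this. The
built_by field stands in for the kernel-attested record of which
constructor created the process, so this only models the guarantee
rather than showing how EROS implements it:

  #include <stdbool.h>
  #include <stdio.h>

  struct process { int built_by; };  /* the brand; kernel-attested in EROS */

  /* Identity of the one constructor we trust to withhold the memory
     capability of the processes it builds. */
  static const int TRUSTED_CONSTRUCTOR = 1;

  static bool built_isolated(const struct process *p)
  {
      return p->built_by == TRUSTED_CONSTRUCTOR;
  }

  int main(void)
  {
      struct process honest = { .built_by = 1 };
      struct process forged = { .built_by = 2 };  /* lying 'constructor' */

      printf("honest isolated: %s\n", built_isolated(&honest) ? "yes" : "no");
      printf("forged isolated: %s\n", built_isolated(&forged) ? "yes" : "no");
      return 0;
  }

The forged process fails the check, which is why the check means
something only as long as the brand itself cannot be faked - and
verifying that is exactly what (c) below is about.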

c) Verification (of system software, and specifically of isolation)
by means of TPM chips. This allows verifying the identity of a
computer system (both hardware and software). Although there is going
to be an enormous number of variations and upgrades across systems,
it may allow verifying that a system can be trusted to implement some
feature (such as isolation) properly and irreversibly.

Note that this service should also be able to tell a 'top os' from a
'sub-os', which may require disclosing a lot of information. I am not
sure how this would affect the system architecture.
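
For illustration, the measurement chain behind such verification can
be modeled as follows. A real TPM extends SHA-1 or SHA-256 PCRs and
signs the result with a vendor-certified key; the FNV-1a hash below
is only there to keep the sketch self-contained and runnable:

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t fnv1a(uint64_t h, const char *data)
  {
      for (; *data; data++) {
          h ^= (uint8_t)*data;
          h *= 1099511628211ULL;
      }
      return h;
  }

  /* extend(pcr, stage) is the only way the register changes, so a
     stage cannot be un-measured or reordered without changing the
     final value. */
  static uint64_t extend(uint64_t pcr, const char *stage)
  {
      return fnv1a(pcr, stage);
  }

  int main(void)
  {
      uint64_t pcr = 14695981039346656037ULL;  /* FNV offset basis */

      pcr = extend(pcr, "bootloader-1.2");
      pcr = extend(pcr, "kernel-4.7");
      pcr = extend(pcr, "isolation-service-0.9");

      /* A verifier that knows the good measurements recomputes the
         digest and compares; any extra or altered stage (say, a
         sub-os slipped under the isolation service) changes the
         result. */
      printf("attested digest: %016llx\n", (unsigned long long)pcr);
      return 0;
  }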

The TPM verification (and the possibility for an application to
request isolation) leads to software that may hold some information
indefinitely and only release it at will - i.e. DRM mechanisms. This
would allow data on your machine to be accessible only when some
third party consents to such access.
The difference from encryption is that the access can be granted for
a period of time, and revoked irreversibly.

This opens up possibilities like writing a word processor that is the
only application that can open its documents. Note that people would
cheerfully use such software; the fact that so many people use MS
Office proves that. MS Office is basically the only software that can
open MS documents, even today.
However, DRM would allow the software maker to require renewing the
license annually to access your existing documents, and similar
tricks. Sure, it will first be used for movies and the like. But once
people fall for that and the mechanisms become ubiquitous, they might
not even notice that their word processor is now using them as well.

Even the case with movies is quite evil. You are guaranteed some
freedoms when you buy the right to view a movie (listen to a song,
...) - i.e. a DVD. { This is not very important here, but you are
also not allowed to circumvent any mechanisms that restrict your
freedom to use that movie, at least in the US. } The copyright on the
movie lasts for some time, and after that the movie should be free
for anybody to use (unless the movie owners successfully lobby for
prolonging the period indefinitely). But if it is irreversibly sealed
by DRM, nobody will ever see it again.
Not that sealing away some amusing film is terribly evil. It is an
annoyance, but one can make a new one. Some documentary films,
however, are unique and probably cannot be remade.

The verification also disables debugging and reverse engineering of
programs that request to be isolated. The user can verify what a
program can access on her computer (and should be able to prevent the
program from leaking any information it processes) but does not know
how it calculates its results.

The verification could also be useful in other scenarios: you could
trust a service to be reliable (in not disclosing your data) if you
could verify it by means of TPM, even without much trust in the party
providing the service. But there may be a not yet discovered bug in
the signed software that allows the service provider to spy on you
anyway... So using encryption is desirable as well.


To me it looks like (b) is needed in some form, but (c) should be
avoided. Yet (c) might be quite easy to add on top of (b).


As for the ownership of digital works: I believe it is the author's
decision how his work can (or cannot) be shared (within the framework
of copyright law). The current problem is that authors sell their
rights to 'producers' who own the majority of rights to all kinds of
works (digital or not), and want to hold on to these rights. This
restricts the freedom of both authors and consumers. And it is not
something that can (or should) be solved in an OS discussion.

Thanks

Michal
