Re: Part 2: System Structure


From: Bas Wijnen
Subject: Re: Part 2: System Structure
Date: Tue, 23 May 2006 20:53:27 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Tue, May 23, 2006 at 01:36:02PM -0400, Jonathan S. Shapiro wrote:
> > You are (perhaps intentionally) forgetting the option that the default is
> > opaque.
> 
> I did not forget it. I omitted it because it seemed from previous
> discussion that this decision about the default was already made. The
> option certainly exists technically.

I expected this; I just mentioned it for completeness.

> > > It appears to me that you are choosing position (2).
> > 
> > No, actually if I have to choose from those two, I think I'm tempted to
> > choose 1.  What I said is that I would like to choose 2 if that would not
> > raise any problems.  I now think that the ping case may actually show a
> > real problem for which users will want to provide read-only memory.
> 
> Technically, I do not see how to choose (1) without introducing opaque
> banks.

The system can provide a means for the user to give out banks that she is
unable to change or inspect herself.  This doesn't need to be enforced by
making the space bank opaque; it can be done by not giving the user the
capabilities to perform those actions.

In practice, to give out an opaque space bank, the user must give out a
capability for creating sub-space-banks which are guaranteed not to be
disclosed to siblings.  If the user session is providing this service, the
user can give a capability to this service away.  The receiver of that
capability must then be able to check that it refers to a real session (so
that it knows it can trust the answer), and then ask that session whether
this is indeed an "opaque" capability.  For this, a capability which can be
used for asking that question must be passed to the receiving program.  Hmm,
this is getting messy.  Here's the protocol:

The admin can create new sessions by using a capability to the session
manager (SM).  Assume SM has created a user session US at some point.  US is
running a program P, which wants to give opaque storage to server S.  Here's
what they do:

1. P gives S a capability to US which allows creating opaque banks.  This
   capability also allows checking that these banks are opaque.

2. S has its own capability to SM.  It uses this to ask SM whether the
   received capability really refers to US (that is, to a real session that
   was created by SM).  If not, S rejects the space bank.

3. S now knows that US is a real session and can be trusted.  So it asks US
   whether the capability is indeed for opaque space banks (in particular,
   whether it is implemented by US for that purpose).  If not, S rejects the
   space bank.

4. The capability checks out, so S uses it to create a new space bank, which
   it can use for whatever it wanted to do in secret.
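
To make this concrete, here is a minimal sketch in C of the check S
performs (steps 2 to 4; step 1, P handing over the capabilities, has
already happened by the time the function is called).  Every name here,
including cap_t, sm_is_real_session, session_is_opaque_bank_cap and
bank_create_from, is invented for illustration; this is just the protocol
above written out, not an existing Hurd or Coyotos interface:

  typedef unsigned long cap_t;  /* a capability handle */

  /* Hypothetical primitives (prototypes only): */
  int   sm_is_real_session (cap_t sm, cap_t session);
  int   session_is_opaque_bank_cap (cap_t session, cap_t c);
  cap_t bank_create_from (cap_t bank_maker, unsigned long quota);

  /* S holds SM, its own capability to the session manager; US and
     BANK_MAKER are the capabilities it received from P.  */
  int
  accept_opaque_bank (cap_t sm, cap_t us, cap_t bank_maker,
                      unsigned long quota, cap_t *new_bank)
  {
    /* Step 2: only SM can say authoritatively whether US is a
       session it created, so its answer can be trusted.  */
    if (!sm_is_real_session (sm, us))
      return -1;                /* forged session: reject.  */

    /* Step 3: ask the now-trusted session whether BANK_MAKER really
       creates opaque banks, i.e. whether US implements it for that
       purpose.  */
    if (!session_is_opaque_bank_cap (us, bank_maker))
      return -1;                /* not opaque: reject.  */

    /* Step 4: create the bank.  It is a direct child of US and a
       sibling of P's bank, so P cannot inspect it, but US and its
       ancestors still can.  */
    *new_bank = bank_create_from (bank_maker, quota);
    return 0;
  }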

This way, S can hide things from P, but it cannot hide things from their
common parent.  Usually the only common parent is the TCB, so there's no need
to hide.  But in the case of a sub-Hurd, the common parent is in fact
interested in debugging _and_ has all the rights to do so, because it owns
both of them.  This should not be prevented.

> An opaque bank must be opaque even to users of its parent banks.
> It is possible that there is an alternative implementation that I am not
> seeing.

I think you are assuming that a program can only give away child banks of
its own bank.  I am proposing that for opaque banks, it actually gives out a
capability to make siblings of its own bank (possibly with a quota), which
are direct children of the user session.  The user session is part of the
TCB and in some cases can be set up not to allow disclosure to the user
(that is, it is _not_ an agent for the user, the way her shell is).
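
In the same invented C interface as above (again, every name is
hypothetical), the difference between the two schemes would look something
like this:

  typedef unsigned long cap_t;

  /* Hypothetical primitives, for illustration only: */
  cap_t bank_create_child (cap_t parent_bank, unsigned long quota);
  cap_t session_create_sibling_bank (cap_t bank_maker,
                                     unsigned long quota);

  /* Ordinary storage: a child of P's own bank.  P keeps the parent
     capability, so it can inspect or destroy the child at will.  */
  cap_t
  give_inspectable (cap_t p_bank, unsigned long quota)
  {
    return bank_create_child (p_bank, quota);
  }

  /* "Opaque" storage: the bank is created by the user session US as
     a direct child of itself, that is, a sibling of P's bank.  P
     never holds a parent capability to it, so only US (part of the
     TCB) and its ancestors can look inside.  */
  cap_t
  give_opaque (cap_t us_bank_maker, unsigned long quota)
  {
    return session_create_sibling_bank (us_bank_maker, quota);
  }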

> > The point I was trying to make is that the system should not allow
> > programs to make things harder that are possible anyway.  That means that
> > the machine owner must simply be able to see everything in the system if
> > she desires this, and a program must not be able to hamper debugging it if
> > all users which provide capabilities to it agree that debugging is a good
> > idea.
> 
> I believe that this is technically inconsistent with option (1).

I don't think so.  Don't you think the above procedure would make it possible?

> In particular, it makes the user's voluntary decision contingent on the
> machine owner.

The machine owner can do whatever she wants with the machine (including
installing a different OS).  I think it is a good idea not to give out a
capability to the primary space bank at all by default.  But if the machine
owner wants it, it should not be useless.  However, for the machine owner case
this isn't even important.  As I said, I don't want the capability to be given
out anyway.

However, for sub-Hurds this is very relevant.  In many respects, a sub-Hurd is
simply a standalone version of the Hurd.  The main difference is that the
machine owner (that is, the user who started the sub-Hurd) very likely _does_
want the capability to the primary space bank.  That is actually very useful,
and it should be possible.

A limitation on this is like the limitation on Linux that you can't mount
your floppy as a file system, even if you do have all the rights to
/dev/fd0.  There's no technical reason for this (except that Linux is
crappy, and would become totally insecure if it allowed it, but I'll ignore
that here).  Allowing space banks to be made opaque even to their parents is
very similar: it prevents using a sub-Hurd for debugging without any
technical need.

> I see many use cases for Coyotos-OS where this is not okay (in the sense
> that violation by the machine owner must have prohibitive cost) for *good*
> social reasons (such as medical privacy).

This is about the total system.  If the machine owner doesn't get a capability
to the primary space bank (and by default it shouldn't IMO, so she would in
fact need to hack the code for it), then everything you want works.  You may
have a somewhat harder time proving that it does, though. ;-)

> Satisfying these cases relies on exploitation of DRM technology. I
> understand that the Hurd designers are opposed to DRM, and I have given
> up the expectation that Marcus will think with an open mind about the
> feasibility and desirability of technical solutions satisfying HIPAA and
> similar laws.

I do not know enough about HIPAA (or similar laws) to say much about this, but
you seem to be changing positions all the time.  First it needs DRM, then it
doesn't, and now it does again.

> I am simply trying to be clear about where our objectives seem to differ.

I am not sure that our objectives differ.  I want a system which offers
privacy as well.  I am simply saying that you cannot protect against the
machine owner anyway (in the absence of TC hardware).  So it just doesn't make
sense to try.  And it definitely isn't worth making life hard for the users
(because their sub-Hurds aren't debuggable anymore) in the attempt.

> > Since programs don't have a will...
> 
> This seems to be a deep philosophical assumption. I think that it may be
> over-simplified. It is certainly true that programs do not have a will
> in and of themselves. However, a program *embodies* the will of its
> developer in the form of decisions made by the developer about how the
> program executes.

That's exactly what I said in the other half of the sentence that you cut off.
;-)

> It therefore may be imagined to have a "will by proxy". This is widely
> recognized in the high-security community: the author of a program must be
> modeled as a principal, because executing programs obey the author's
> will.

Yes.  I'm saying that only users should be allowed to take part in agreements
on a system.  Not the programmer of a program.  So if I start a program in
opaque storage, that's _my_ decision, not the program's.  And if a service
asks me to provide opaque storage for it, that's the decision of the user who
owns it, not of the program that runs.

On a multi-user system, I think we have enough trouble with just the people
(and the system code as an extra party).  There's no reason at all to allow
programmers to be part of agreements as well.

> Ignore programs for a minute and consider only humans.  I believe that
> we would agree that if two people A and B wish to engage in some legal
> and ethical action, then (1) it is legitimate for them to agree to do
> so, (2) it is legitimate for B to say to A "I will take care of some
> part of the problem X, but I will do so using undisclosed means", (3) it
> is legitimate for A to *choose* to trust B or not, and (4) provided A
> decides to trust B, it is permissible for A to lend money to B for the
> undisclosed action.

Actually, (2) is something which should be avoided if it can be.  I agree
that it can be legitimate anyway.  But that's only because it's useful.
This is exactly why we have been asking for use cases.  If it's not useful,
it should be avoided.  And what better way to avoid something than by making
it impossible?  :-)

> Money is similar to storage in two ways: (1) it does not disclose how it
> is used, and (2) it can be paid back. It is an opaque resource. We have
> agreed that A may choose to trust B, and we have agreed that in the real
> world it is acceptable for A to provide B with opaque resource.

I don't think this question is fully answered for programs.  If there are
valid use cases (and there seem to be), then it must of course be possible
to pay in storage that is restricted for the provider.

> Now let us return to the programmatic scenario: we appear to be saying
> that all of the above is *not* acceptable if the form of B's behavior is
> to embody that behavior in a program that is executed by A.

It is something I'd like to avoid.  I just fear that it may not be possible
(well, not while still having a robust, flexible, usable system).

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
