Re: DRM vs. Privacy


From: Bas Wijnen
Subject: Re: DRM vs. Privacy
Date: Tue, 8 Nov 2005 17:58:07 +0100
User-agent: Mutt/1.5.11

On Tue, Nov 08, 2005 at 09:41:13AM -0500, Jonathan S. Shapiro wrote:
> > If there is trust between people, then no digitally guaranteed trust is
> > needed.  That is: if you want to build a cluster of machines with mutual
> > trust, then you can do that by moving the trust from the software domain to
> > the social domain. This will not be a problem for remote authentication of
> > your own computer (in fact, I do it on a daily basis with ssh public key
> > encryption).
> 
> I agree in substance, but I would add a minor caveat: you are exchanging
> trust in *hardware* (the TPM/TCPA chip) for a combination of social
> trust (you are trusting the remote administrator) and physical security
> (you are trusting that only the remote administrator has potentially
> threatening access).

This is true, and potentially unwanted.  However, if the machine is cracked
and the kernel is changed at run time, then I don't expect the chip to
notice.  As far as I understand it, the chip only attests to which system
was booted, not to what happens once it is running.  But I am not very well
informed about this, so it is quite possible that I am wrong.
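
To sketch the idea as I understand it (the names below are invented for
illustration, they are not any real TPM or firmware interface): each stage
of the boot chain hashes the next stage into the chip before running it,
so the chip can later attest to what was booted, but nothing measures the
kernel again once it is running.

  #include <stddef.h>

  /* Invented helpers, only to name the steps. */
  extern void sha1(const void *data, size_t len, unsigned char out[20]);
  extern void tpm_pcr_extend(int pcr, const unsigned char digest[20]);
  extern void jump_to(const void *image);

  #define PCR_BOOT 4        /* arbitrary register number for the sketch */

  void boot_next_stage(const void *image, size_t len)
  {
      unsigned char digest[20];

      sha1(image, len, digest);           /* hash the next boot stage     */
      tpm_pcr_extend(PCR_BOOT, digest);   /* record the hash in the chip  */
      jump_to(image);                     /* hand over control            */
  }

  /* Nothing here runs again after boot: an attacker who patches the
     kernel in memory never passes through boot_next_stage(), so the
     recorded measurements are still those of the clean system. */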

> >   In the case of ATM machines it may be a bit trickier, because
> > you don't trust the administrator on the other machine.
> 
> To be honest, I don't see Hurd running on ATM machines any time soon,
> but I *can* imagine Hurd being used in distributed gaming scenarios. One
> of the endemic problems in distributed gaming is that the players cheat.
> I don't care if the player hacks the game -- that is not what I mean by
> cheating. But in my view, when a player signs on to a collaborative
> gaming system, part of what they are saying is "I promise to play by the
> rules of the game." Given the actually observed player behavior, it is
> not unreasonable to check that this promise is being upheld.

True.  But does such a chip solve this problem?  Even if we know we're
running on a genuine Hurd system, we're nowhere near authenticating that
it's a genuine version of the game.  Of course that could be implemented,
but it would mean that some programs are not debuggable.  IMO that is
something to avoid (as you noted already).
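
To make explicit why it conflicts with debuggability, such a certification
would have to look roughly like this (all of these names are invented; this
is not a proposal for an actual interface):

  typedef struct cap  cap_t;
  typedef struct cert cert_t;

  /* Invented helpers, only to name the steps. */
  extern int     process_is_debuggable(cap_t *process);
  extern cert_t *sign_with_system_key(const unsigned char *hash);
  extern const unsigned char *image_hash(cap_t *process);

  cert_t *certify_program(cap_t *process)
  {
      if (process_is_debuggable(process))
          return NULL;        /* anyone holding a debug capability could
                                 rewrite the program and forge whatever
                                 the certificate claims                  */
      return sign_with_system_key(image_hash(process));
  }

The system can only vouch for a program's image if nobody can rewrite that
program from the outside, which is exactly the property I don't want to
make universal.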

> > It is probably acceptable to demand this hardware authentication.  I think
> > it is also acceptable if the Hurd doesn't support such systems.
> 
> It is always legitimate to ask a question. Nobody can be required to
> answer.

Sure.  What I was saying is that it's fine with me if the Hurd doesn't
support answering, or more specifically, if it doesn't support any
protocols on top of it which certify authentic programs.  If it's trivial
to do and doesn't cost anything, I don't have a problem with it.

> > > 2. If we disclose the master disk encryption key, then we similarly
> > > cannot build highly trusted federations, and we expose our users to
> > > various forms of search -- some legal, others not. I am not sure that I
> > > want to build a system in which an employer can examine the disk of an
> > > employee without restriction.
> > 
> > On any system I envision, any other system does not exist.  If the employer
> > wants to be able to search without restriction, he can arrange that it's
> > possible.  The problem here is that he doesn't need to install the system
> > unmodified.  In fact, he doesn't need to install the system at all, he may
> > choose a different system if spying is so important for him.
> 
> True, but the user can determine whether an authentic Hurd is being run.
> The employer is free to set up a spyware-compatible environment. The
> user is then free not to use it (with the possible consequence of
> termination, but it is still a choice).

That sounds fine to me.

> > > The basic scenarios of "secrecy by protection" all boil down to
> > > variations on one idea: a program holds something in its memory that it
> > > does not wish to disclose.
> > 
> > IMO this should only work as far as the user in control wishes it to work.
> > The user in control is the one who created the constructor.
> 
> Then the user in control is me, personally.

Hehe, no, you're not the one I meant. :-)  I meant the one who called the
meta-constructor and thereby caused the constructor to be created.

> > Right.  However, be able to prevent is not the same as must prevent.  That
> > is: if the user (with sufficient authority) wants to debug a program, it
> > should be possible.  Any program must be debuggable.
> 
> This sounds like dogma. *Why* should any program be debuggable?

Because the user who creates the constructor for it (which once again isn't
always you ;-) ) _should_ be in control.  If I'm creating a constructor around
some downloaded binary, then I should be the one who controls what happens
with it.  I don't want untrusted code to be in control.

> Remember that there is no notion of UID, and the program has no way to know
> who is debugging it.

It can create a capability at constructor creation time.  Whoever holds
that capability is allowed to debug the program.  Obviously this will
break confinement unless the process calling the constructor authorizes
the capability.  But if I create the constructor this way, then I know
that no one else has the capability, so I do authorize it.
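
Concretely, something like this is what I have in mind (pseudo-C; none of
these are real Hurd or constructor interfaces, the names just label the
steps):

  typedef struct cap *cap_t;

  /* Invented calls, for illustration only. */
  extern cap_t cap_create(void);
  extern cap_t metaconstructor_create(cap_t program_image);
  extern void  constructor_add_capability(cap_t ctor, cap_t extra);

  static cap_t debug_cap;                        /* only I ever hold this */

  cap_t build_constructor(cap_t program_image)
  {
      cap_t ctor = metaconstructor_create(program_image);

      debug_cap = cap_create();                    /* minted at creation  */
      constructor_add_capability(ctor, debug_cap); /* its holder may debug
                                                      the instances built */
      return ctor;
  }

  /* When the constructor is later invoked, the confinement check reports
     debug_cap as authority reaching outside the program.  The caller has
     to authorize that explicitly; since I built the constructor and know
     that nobody else holds debug_cap, I can safely do so. */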

> > > The problem is that all software in a system is subject to the same
> > > rules, and the software that implements DRM can hide its secrets too.
> > 
> > It cannot do that against the combination of administrator and user, which
> > can debug any program they collectively start.
> 
> I believe that you persist in a deep misunderstanding of which
> separations of ownership and control are technically feasible and that
> you are reasoning from flawed premises.

Perhaps I wasn't too clear.  I was assuming here that the code to be
debugged was started by some constructor, which was either created by the
administrator (in the case of some system component) or by the user, or
perhaps by a combination.  Anyone who is involved in creating the
constructor can refuse to do so.  No one else has anything to do with it.
DRM code is either installed by the administrator, or external and
installed by the user, possibly with the help of some system tool which
adds capabilities.  In that case, that tool was installed by the
administrator.

Now if both the user and the administrator are cooperating, it must be
possible for them to debug the program.  This follows from the axiom that the
user must always be in control.  Obviously it is possible to build a system
where this isn't true.  I'm saying that we shouldn't do that.

Hey, that sounds like a design principle with which we can reject things. :-)

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://129.125.47.90/e-mail.html
