
Re: User sessions, system request


From: Bas Wijnen
Subject: Re: User sessions, system request
Date: Wed, 30 Jan 2008 22:46:55 +0100
User-agent: Mutt/1.5.17+20080114 (2008-01-14)

On Wed, Jan 30, 2008 at 02:42:02PM -0500, Jonathan S. Shapiro wrote:
> On Wed, 2008-01-30 at 18:30 +0100, Bas Wijnen wrote:
> > Right.  But my keyboard driver is working around all cleverness of
> > keyboards anyway.  It sends exactly one event for each make or break
> > that happens.  Fake shifts are removed, prefixes are interpreted (and
> > merged with the actual events), key repeat is ignored.
> 
> This will break a small number of programs, but they turn out to be
> depressingly important programs.

If they are programs which want direct access to the hardware, then I
don't have a problem with needing to change them in order to port them.
If they aren't, then I think they should be able to manage.  If it's
really a problem, it's always possible to create a wrapper for them
which expands all the events to what they expect. :-)  I already have a
similar wrapper, which gets key events on input, and sends text as
output.  This is used for the keyboard part of a terminal.
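
To sketch what such a wrapper looks like, here is a toy version in C;
the event structure, key codes and keymap are invented for the example
and are not the actual interface of my code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct key_event {
    uint16_t code;      /* normalized key code, prefixes already merged */
    bool     pressed;   /* true = make, false = break */
};

enum { KEY_A = 1, KEY_B, KEY_SHIFT };   /* toy key codes for the example */

static bool shift_down;

/* Translate one normalized event into zero or one characters of text. */
static void key_to_text(struct key_event ev)
{
    if (ev.code == KEY_SHIFT) {         /* track modifier state */
        shift_down = ev.pressed;
        return;
    }
    if (!ev.pressed)                    /* only make events produce text */
        return;

    static const char base[] = { 0, 'a', 'b' };     /* toy keymap */
    char c = base[ev.code];
    if (c)
        putchar(shift_down ? c - 'a' + 'A' : c);
}

int main(void)
{
    struct key_event demo[] = {
        { KEY_SHIFT, true }, { KEY_A, true }, { KEY_A, false },
        { KEY_SHIFT, false }, { KEY_B, true }, { KEY_B, false },
    };
    for (unsigned i = 0; i < sizeof demo / sizeof demo[0]; i++)
        key_to_text(demo[i]);
    putchar('\n');                      /* prints "Ab" */
    return 0;
}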

> > I don't think this is acceptable.  Say I want to use Alt as one of
> > the controls in a game.  This delay would make it unplayable.  Break
> > is unusable for this anyway, because it doesn't generate an event
> > when it is released.
> 
> Agreed. That is a good reason to use "bare" SysRq rather than ALT-SysRq
> as your system attention key. The *purpose* of SysRq was to serve as the
> system attention key.

This was the purpose of Alt+SysRq.  Without Alt, the key is PrintScreen.
This can be seen from the fact that the scancode changes depending on
whether Alt is pressed (0x54 with Alt vs. 0xe0 0x37 without).
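
As a sketch, a driver could tell the two cases apart from the raw
scancode stream roughly like this (the state machine and names are
invented; only the scancode values come from the paragraph above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum key_kind { KEY_OTHER, KEY_PRINTSCREEN, KEY_SYSRQ };

/* Feed raw scancode bytes one at a time. */
static enum key_kind classify(uint8_t byte)
{
    static bool saw_e0;

    if (byte == 0xe0) {                 /* prefix: an extended code follows */
        saw_e0 = true;
        return KEY_OTHER;
    }
    if (saw_e0) {
        saw_e0 = false;
        return byte == 0x37 ? KEY_PRINTSCREEN : KEY_OTHER;  /* 0xe0 0x37 */
    }
    return byte == 0x54 ? KEY_SYSRQ : KEY_OTHER;            /* plain 0x54 */
}

int main(void)
{
    printf("%d\n", classify(0x54));     /* Alt held down: 2 = KEY_SYSRQ */
    classify(0xe0);                     /* no Alt: prefix byte first ... */
    printf("%d\n", classify(0x37));     /* ... then 1 = KEY_PRINTSCREEN  */
    return 0;
}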

As you seem to agree, Alt+SysRq may have been designed for this purpose,
but it is badly designed and should not be used for it.

> > > Note that on modern machines this means that the low-level USB channel
> > > driver, keyboard driver, and all USB hub drivers must be trusted.
> > 
> > The USB bus driver must be trusted in any case.  The hub driver doesn't...
> 
> Since a keyboard can be plugged into a hub, the driver for any hub
> between the keyboard and the CPU must be trusted.

Yes, the driver for the hub which the keyboard is plugged into must be
trusted, but not all hub drivers.  However, I think it is a good idea to
detect USB hubs and handle them all with a trusted driver.  It might be
a good idea to do the same with keyboards.

> > > Here is a pair of "litmus test" questions:
> > > 
> > > If I am a user typing in a password,
> > > 
> > >   1. How does the receiving software know that the password is
> > >      coming from the user, and not from software simulating the user?
> > 
> > "Normal" programs don't get passwords, in the same way that they don't
> > open files.  They use their powerbox for this, and in reply they get a
> > message telling them if it succeeded; the actual password doesn't go to
> > the program.
> 
> This is the right goal. The problem is to ensure that a "normal" program
> cannot simulate a password box well enough to fool the user into
> entering a password into an unauthorized program.

The user needs to be educated for this: when entering a password,
_always_ press break first.  The program may have told the session that
a password is needed, and the session can respond to the break with a
password dialog (plus access to the usual menu for task switching,
killing programs, and whatever else it normally offers when the user
presses break).

Since the session cannot know that an application is trying to fake a
password entry, there is no other way.  Well, there is one, but it also
needs user education: a dedicated light can be turned on whenever the
system is in "trusted mode".  The user must then learn never to enter a
password while that light is off.  In that case it is possible to enter
a password without first pressing break.
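
Roughly, the session's side could look like the following sketch; all
names are invented, the point is only that the break event reaches the
session over a path no application can intercept, and that the trusted
light is driven from there:

#include <stdbool.h>
#include <stdio.h>

static bool password_requested;         /* set when a program asked for one */

static void set_trusted_light(bool on)  /* hypothetical dedicated indicator */
{
    printf("[trusted light %s]\n", on ? "ON" : "OFF");
}

/* Called from the trusted input path; applications never see this event. */
static void on_break_key(void)
{
    set_trusted_light(true);            /* the user may now believe the UI */

    if (password_requested)
        printf("session: showing the password dialog\n");
    else
        printf("session: showing the task switch / kill menu\n");

    /* ... the user interacts with the trusted UI here ... */

    set_trusted_light(false);
}

int main(void)
{
    password_requested = true;          /* a program told the session it
                                           needs a password */
    on_break_key();                     /* the user pressed break first */
    return 0;
}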

> > Some programs may need a password, for example a pdf reader which reads
> > encrypted pdf files.  But in those cases you're lost anyway, unless you
> > trust the program itself with your password.
> 
> Not necessarily. If the program is willing to delegate the password
> check to some password power box, it doesn't need to receive the
> password.

I was talking about programs which need to do something with the
password because it is, for example, a decryption key.  If the session
doesn't know how to do the decryption, the only thing you can do is hand
the key to the program.  Of course this need not be a big problem if the
program is confined, and the session can know whether it is.

> At a minimum, confusion should be avoided by *never* calling a string
> that goes to a program a password.

Well, _we_ can agree on that, but when people receive a
"password-protected document", I don't think we can convince them that
the thing they need to read it isn't a password. ;-)

> > If the program is started with a "debugging" powerbox, which is
> > simulating the user, there is no way it can find out.
> 
> True, but a debugging powerbox should never have access to the "real"
> password powerbox, and therefore should never have access to any
> sensitive password.

The real powerbox capability only provides a way to check a password; it
doesn't actually give access to the password itself.  In any case, the
simulating program (which implements the debugger and hands out the
debugging powerbox capability) itself has a capability to the real
powerbox, because it was started by the user.  This doesn't really help
it, though: the powerbox only allows it to request things from the user,
and the user can always refuse.  If the debugger wants to pass requests
on its fake powerbox on to the real one, then the user can consider
whether they trust the debugger with whatever rights it asks for.
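
To make the distinction concrete, here is a sketch with invented names;
the capability only exposes a check operation, so even a debugging
powerbox that forwards a request to the real one never learns the
password:

#include <stdbool.h>
#include <stdio.h>

/* All a program (or a fake/debugging powerbox) ever holds. */
struct password_check_cap {
    /* Ask the user for the password for `realm`; report success only. */
    bool (*check)(const char *realm);
};

/* The real powerbox asks the user over the trusted path; the password
 * itself never travels back over this capability. */
static bool real_check(const char *realm)
{
    printf("trusted dialog: password for '%s'? (typed on the trusted path)\n",
           realm);
    return true;                        /* pretend the user typed it right */
}

/* A debugging powerbox may forward the request to the real one, but the
 * user can always refuse, and the debugger never sees the password. */
static bool debug_check(const char *realm)
{
    printf("debugger: forwarding the check for '%s' to the real powerbox\n",
           realm);
    return real_check(realm);
}

int main(void)
{
    struct password_check_cap real = { real_check };
    struct password_check_cap dbg  = { debug_check };

    printf("ok: %d\n", real.check("mail server"));
    printf("ok: %d\n", dbg.check("mail server"));
    return 0;
}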

> > >   2. How does the user know that the password they type is going
> > >      to software that can be trusted to protect it, rather than
> > >      software that will broadcast the password to the entire world?
> > 
> > The user only types her password to the session manager, which is part
> > of the TCB.  Note that for sub-users, this isn't a guarantee that it is
> > safe.  But that's what you accept by being a sub-user.
> 
> Sub-users do not preclude a true trusted path, but confinement is
> required to achieve trusted path in this case.

This is the opaque memory thing.  My sub-users have a complete user
session in their TCB.  The person owning that session can do whatever he
likes with whatever happens to the sub-user.  This is intentional.  But
it does mean that they cannot have a true trusted path without trusting
the person owning the parent user.

If the system wants to allow users to make real new users, it can of
course also provide a capability for this.  Then the new user, being
real, doesn't have these problems.

> But the problem here is more subtle. How does the user know that they
> are typing to an authentic session manager, and that this session
> manager cannot leak their password.

This is simple: first of all, they know they are not a sub-user, because
they pressed shift-break to log in directly (and not through some other
user).  Secondly, before typing in the password, they pressed break.

Shift-break cannot be caught by anyone; it is always handled by the
uppermost user and will present the authentic top-level login screen.
This means that the combination of shift-break plus login makes sure
that you are logged in, and not someone else who is impersonating you.

Break is sent to the logged-in user's session, so because you know you
are logged in, it is your personal session that is responding, and not
some application pretending to be your session.
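
As a sketch of that routing rule (names invented):

#include <stdbool.h>
#include <stdio.h>

static void top_level_login(void)
{
    printf("authentic top-level login screen\n");
}

static void user_session_menu(void)
{
    printf("your own session's trusted menu\n");
}

/* Called from the trusted keyboard driver for every break press;
 * applications are never given this event. */
static void route_break(bool shift_held)
{
    if (shift_held)
        top_level_login();      /* shift-break: always the uppermost user */
    else
        user_session_menu();    /* break: the logged-in user's own session */
}

int main(void)
{
    route_break(true);          /* log in directly, not through someone */
    route_break(false);         /* ask your own session for a dialog */
    return 0;
}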

> > On Coyotos, it can happen that an untrusted program starts a trusted
> > program.  In that case, I see that you have a problem.  However, on my
> > system that is not possible: all parents are part of the TCB for a
> > program, so if the user doesn't trust the program's parent with the
> > password, she shouldn't give it to the child either.
> 
> This is a case of mis-defining a problem away rather than solving it.
> The problem to consider is that the user cannot know whether they are
> talking to a sub-program or not unless there is a trusted path that can
> be relied on to inform them that this is happening.

This is no problem.  A child of an untrusted program is always untrusted
itself.  If you use a window manager which doesn't let you know which of
your windows is talking to you (which one spawned this subwindow), then
you have a problem.  But that isn't a hard problem to solve.

If you have a trusted program which spawns an untrusted program, then
that trusted program can be trusted to make clear to the user which part
is untrusted.

The equivalent of an untrusted program starting a trusted program would
be for the untrusted program to ask a trusted program (most likely its
parent) to spawn another trusted program.  The parent can even set up
the spaces so that the new program effectively runs from the parent's
own storage.  In other words, no functionality is lost.
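
A sketch of that pattern, with invented names; the untrusted child only
asks, and the parent decides:

#include <stdbool.h>
#include <stdio.h>

/* Capability the untrusted child holds on its trusted parent. */
struct parent_cap {
    bool (*spawn_trusted)(const char *name);
};

static bool parent_spawn_trusted(const char *name)
{
    /* The parent uses its own storage for the new program, so the
     * trusted program ends up with a trusted parent. */
    printf("parent: spawning trusted program '%s' from my own storage\n",
           name);
    return true;
}

static void untrusted_child(struct parent_cap parent)
{
    /* The child can only request; it never holds the trusted program's
     * capabilities unless the parent decides to hand them over. */
    parent.spawn_trusted("password-box");
}

int main(void)
{
    struct parent_cap cap = { parent_spawn_trusted };
    untrusted_child(cap);
    return 0;
}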

> > In reality, this isn't a problem, I think.  For the user, programs are
> > started by herself.  When a program starts a new program, which asks for
> > a password, it looks the same as if the program itself asks for the
> > password directly.
> 
> That is exactly the outcome that must be avoided in a secure design.

Not at all, because from a security standpoint it is also the same: if a
program has an untrusted parent, it must not be trusted.

> > Hardware keyboard sniffers...
> > are a serious problem for people who want a secure system,
> > but they are of no concern to OS writers.
> 
> Actually, this is one of the issues where the larger-scale NGSCB design
> made productive, useful, and socially acceptable progress.

If we're talking about hardware hackers, then there is nothing that can
be done against them.  I can buy a secure, trusted keyboard and insert
an extra switch into every key.  I can use wireless communication to
transfer the key log to wherever I want.  This cannot be detected.  If
they do come up with something to prevent this, I'm sure some hardware
hacker will find a way to circumvent it.

> > > Both issues tend to prohibit designs in which arbitrary drivers can be
> > > replaced by untrusted users.
> > 
> > Yes.  Well, my kernel doesn't allow as much as you seem to think.
> 
> I did not assume one way or the other. I raised the issue because some
> people on l4-hurd believe that users should be able to replace almost
> anything at all, and you are working in the area that clearly
> illustrates why this may not be such a wonderful thing.

Ok.  I fully agree that some things must not be replaceable.  I don't
actually think people on l4-hurd think very differently from me on this.
The thing is that access to drivers is provided through capabilities.
Even if the real drivers aren't replaceable, the user can still hand out
fake capabilities which pretend to be the drivers.  From the
application's point of view, this means every driver can be replaced.

Except for the one handling the break key, of course.  There is no way
for an application to see a real break key event.
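
To illustrate what "every driver can be replaced" means from the
application's side (again with invented names):

#include <stdio.h>

/* The only thing an application sees of a "driver". */
struct sound_cap {
    void (*play)(const char *sample);
};

static void real_play(const char *s) { printf("real driver plays %s\n", s); }
static void fake_play(const char *s) { printf("fake driver logs %s\n", s);  }

static void application(struct sound_cap snd)
{
    /* The application cannot tell which implementation it was given. */
    snd.play("beep");
}

int main(void)
{
    struct sound_cap real = { real_play };
    struct sound_cap fake = { fake_play };

    application(real);          /* user passed the real driver capability */
    application(fake);          /* user passed a fake one; app can't tell */
    return 0;
}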

> > I encourage people to send encrypted e-mail (see http://www.gnupg.org).
> 
> Umm. Do you mean encrypted or cryptographically signed? :-)

Both.  And I'm not living in the US, so I'm assuming some common sense.
;-)  For those of you who are so unfortunate as to live there: I don't
encourage encryption when writing to a public mailing list. :-P

Thanks,
Bas

-- 
I encourage people to send encrypted e-mail (see http://www.gnupg.org).
If you have problems reading my e-mail, use a better reader.
Please send the central message of e-mails as plain text
   in the message body, not as HTML and definitely not as MS Word.
Please do not use the MS Word format for attachments either.
For more information, see http://pcbcn10.phys.rug.nl/e-mail.html
