l4-hurd

reuse of task IDs


From: Marcus Brinkmann
Subject: reuse of task IDs
Date: Mon, 5 May 2003 19:00:46 +0200
User-agent: Mutt/1.5.3i

Hi,

Now I stumbled upon a couple of serious issues.  They are all similar to the
task death notification issue.  The basic problem is synchronizing IPC with
task creation and death.

All proposals so far have assumed an object system with server-side managed
objects, and client handles that specify the server (thread id) and the
object (object id).  Clients send messages directly to servers.  (Remember
that in Mach, by contrast, all messages are routed through the port system
in the kernel; sending and receiving are decoupled, as are ports and tasks.)

Note that one consequence is that in Mach, it is easy to move a port
(receive right) from one task to another, while in L4, moving an object from
one server to another is downright impossible in this simplistic scheme.
Ludovic noticed this when considering persistency.  This is just a note, as
moving objects is not a requirement for the Hurd.

However, Ludovic's persistency remark has an analogous counterpart: it is
impossible to notice if a server was replaced by another task!  Imagine you
run vipw to edit the password table.  Then you go and make yourself a
coffee.  While doing that, the /etc filesystem is restarted (for example, it
died because of a bug, or because it is networked and the connection was
lost, whatever).  Now, an attacker tries to hijack the server's thread and
task id, and succeeds.  You come back and save your file by sending
io_write to the thread id you still have in memory, and the data ends up in
the attacker's hands.

Now, you might say that task death notifications solve this, but there is
still a race between receiving the notification and sending the io_write.
You can make this race as small as you want, but it is still there.
Contacting the task server for every RPC, just to get some guarantee that
the server is still alive, is stupid: then it would be better to have a
port server, which reroutes RPCs from the client to the server.

There are many other similar races: How does a server guarantee that it is
getting the RPC from the identical task which got the send right?  How, in a
handle transaction, does B guarantee it is using the same server that A used?
Every protocol involving two threads and RPCs from one thread to another has
to deal with this issue of reused thread and task ids.  And I am not happy.

After I recovered from my virtual heart attack, I looked for solutions.  I
basically came up with two schemes:

1. Use a port server.  A port server implements something like the ports
   system in Mach.  Our port server could be a bit simpler, for example, it
   could not buffer messages.  Or it could.  Just as we want.
   Pro: The port server is static, and all object management is internal to
   it.  It can be tightly integrated with the task server to make any guarantee
   we need, because it is trusted.
   Contra: Every message, even a simple one, has an imposed overhead of one
   or two context switches (the reply could probably be sent directly to the
   client, if it doesn't contain object handles).  Also, messages need to be
   analyzed for object handles which need to be converted from one IPC space to
   another (to use Mach terminology).  We wanted to avoid all that, because
   it really kills performance.

2. Use Zombie Dust.  Dissolve 25g Zombie Dust in holy water, stir the whole
   time and boil for fifteen minutes.  Drink before going to bed.

Seriously, zombie dust is what becomes of a zombie when it isn't
allowed to leave.  Consider that every task in the system can register that it
is interested in the fate of any other task.  It could receive death
notifications for this task, but the task server would also make another
guarantee: a task id will not be reused while there are still tasks
interested in it.  A task which is dead but not reaped will become
a zombie as we know it in Unix.  But a task which is dead and reaped will
become zombie dust, if there are still tasks interested in its fate.  Only
when the last task relinquishes its interest does the task id become free
for reuse.
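To make the life cycle concrete, here is a minimal sketch of the
bookkeeping the task server would do.  All names (task_entry,
task_register_interest, and so on) are made up for illustration; this is
not a real Hurd or L4 interface, just the reference-counted state machine
described above.

```c
#include <assert.h>

/* Hypothetical task server bookkeeping: a task id returns to the
   free pool only once the task is dead AND the last interested
   party has relinquished its interest.  */

enum task_state { TASK_ALIVE, TASK_ZOMBIE_DUST, TASK_FREE };

struct task_entry
{
  enum task_state state;
  unsigned int interest;    /* number of tasks interested in this id */
};

/* Some other task declares interest in the fate of task T.  */
void
task_register_interest (struct task_entry *t)
{
  t->interest++;
}

/* Interest is relinquished; the id is reused only when nobody
   cares about it any longer.  */
void
task_release_interest (struct task_entry *t)
{
  assert (t->interest > 0);
  if (--t->interest == 0 && t->state == TASK_ZOMBIE_DUST)
    t->state = TASK_FREE;
}

/* The task dies and is reaped.  With interest outstanding it
   becomes zombie dust instead of being freed.  */
void
task_die (struct task_entry *t)
{
  t->state = t->interest ? TASK_ZOMBIE_DUST : TASK_FREE;
}
```

Note that death and the release of the last interest can happen in either
order; the id is freed by whichever event comes second.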

This obviously solves our problem: A client of a server tells the task
server that it is interested in the fate of the server, as long as it uses
the server handle.  When the server dies, the message send will fail because
the thread id is invalid.  Then the client can invalidate its object handle,
relinquish its interest, and then the server's task id can be reused without
problems.
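The client side of this protocol could look something like the following
sketch.  Everything here is hypothetical: ipc_send is a stub that always
fails, standing in for L4 reporting an invalid destination thread id after
the server died, and task_release_interest stands for the message to the
task server.

```c
#include <assert.h>
#include <stdbool.h>

#define EINVALID_DEST 1

struct handle
{
  int server_task;    /* task id of the server, pinned by our interest */
  bool valid;
};

/* Stub: pretend the kernel rejects the send because the destination
   thread id no longer exists (the server died).  */
static int
ipc_send (int task, const char *msg)
{
  (void) task;
  (void) msg;
  return EINVALID_DEST;
}

/* Stub: relinquish our interest at the task server, allowing the
   dead server's task id to be reused.  */
static void
task_release_interest (int task)
{
  (void) task;
}

/* Send an RPC on a handle.  On an invalid-destination error the
   server is known dead, so invalidate the handle and relinquish
   interest; only then may the task id be reused.  */
int
rpc (struct handle *h, const char *msg)
{
  int err = ipc_send (h->server_task, msg);
  if (err == EINVALID_DEST)
    {
      h->valid = false;
      task_release_interest (h->server_task);
    }
  return err;
}
```

The crucial property is the ordering: the handle is invalidated before the
interest is released, so the client can never send to a reused task id.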

Similarly, a server will declare interest in all of its clients, and thus
ensure that a sender's task id is guaranteed to denote one of those clients
if it matches, and to denote none of them if it does not.

The third example is sending handles.  A has interest in S and B (because it
is sending messages to both, so it is a client of each of them).  B has
interest in A, because it is a server for A.  S has interest in A because A
is its client.  Now, A sends a server transaction handle (see my earlier
post about transferring handles) to B, and, importantly, A keeps its
interest in S and B alive the whole time.  This is because only A can
guarantee that S's task id is not reused while B establishes the connection.
Also, A has to keep its interest in B, because only A can guarantee that B's
task id isn't reused.  This is exactly how it should be: S doesn't care who
gets the handle, it got B's task id from A in the first place, and it is A's
responsibility to ensure the integrity of B.  Similarly for B with respect
to S.
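A's side of the transfer can be sketched as follows.  The stubs are pure
scaffolding (they only record the order of events), and all names are
illustrative; the point is the ordering constraint: A may release its
interest in S and B only after B has acknowledged registering its own
interest in S.

```c
#include <assert.h>
#include <string.h>

/* Event log so the ordering can be inspected.  */
static const char *events[8];
static int nevents;

static void
record (const char *e)
{
  events[nevents++] = e;
}

/* Stub: hand B a transaction handle naming server S.  */
static void
send_transaction_handle (int B, int S)
{
  (void) B; (void) S;
  record ("send");
}

/* Stub: wait until B reports it has declared its own interest in S
   at the task server.  */
static void
wait_for_ack (int B)
{
  (void) B;
  record ("ack");
}

/* Stub: relinquish A's interest in a task id.  */
static void
task_release_interest (int task)
{
  (void) task;
  record ("release");
}

/* A transfers a handle on server S to B.  A already holds interest
   in both S and B (it is a client of both), pinning their task ids.  */
void
transfer_handle (int S, int B)
{
  send_transaction_handle (B, S);
  wait_for_ack (B);   /* B now guarantees S's id itself */
  /* Only now may A drop its guarantees (assuming A itself is
     done with S and B).  */
  task_release_interest (S);
  task_release_interest (B);
}
```

If A released its interest before the acknowledgement, there would be a
window in which S's task id could be reused under B's feet, which is exactly
the race this whole scheme is meant to close.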

This shows that all these protocols can be made robust by just hanging onto
the task id and not reusing it without the consent of all tasks involved.
But let's look at the costs and drawbacks:

* A task has to explicitly release its interest in another task.  However,
  this is softened because the task server can clean up behind A if it fails
  to do so (when A dies).
* A task id is not reused while somebody is still interested in it.
  This opens the door for a simple DoS attack where you just declare
  interest in all tasks in the system, and thus starve the available task
  ids.  However, this is softened in two ways: First, "hanging on to a task
  id" is not anonymous.  The sysadmin will be able to track down who has
  interest in which tasks, and can kill tasks with abnormal interest.
  Second, the user could just as well simply create the same number of
  tasks (a fork bomb).  Maybe quotas can restrict the number of tasks you
  can declare interest in.  In any case, a couple of task ids should
  definitely be reserved for root's use.
* The runtime cost is very low, because you only need one additional message
  to the task server per client-server connection (namely at the end of the
  connection to release the task id).

So, what I described is simply global reference counting for task ids that
prevents their premature reuse.  You might say that this is dangerous
because task ids can be occupied by anybody, but I say that such zombie dust
can easily be seen (just like other zombies or a fork bomb), and that the
sysadmin can learn about this scenario and find out which tasks are
responsible with a special tool (ps).  It's a bit like open files and lsof:
if you cannot umount a filesystem, you can use lsof to find out which tasks
still hold on to it.  I can also imagine that there could be a command to
kill zombie dust without regard to other tasks interested in it.

If you want to use thread ids directly, and not a port server, the only
alternative is to let the task server reuse a task id unilaterally even
though there are still servers and users using it for IPC.  In that case
you have to use some timeout, and IPC partners have to check for that
timeout, and I believe that this would be a mess to get right, but maybe
I am wrong.

As always, any comments appreciated.

Thanks,
Marcus

-- 
`Rhubarb is no Egyptian god.' GNU      http://www.gnu.org    address@hidden
Marcus Brinkmann              The Hurd http://www.gnu.org/software/hurd/
address@hidden
http://www.marcus-brinkmann.de/



