From: Farid Hajji
Subject: Thoughts about the new X.2 spec...
Date: Sat, 2 Feb 2002 00:59:13 +0100 (CET)

[Cc-ing bug-hurd, because it is of general architectural interest.
 Please read on, even if you're not interested in the Hurd/L4 project,
 because parts of the Hurd could be affected by this. Thanks]

Hi,

As most of you on l4-hurd already know, the L4 community has released
the long-awaited X.2 specification, which is an experimental draft
of the upcoming final Version 4 L4 API/ABI:

  http://l4ka.org/documentation/files/l4-x2.pdf

This new spec is not implemented yet, but the L4ka team is working on
the Pistachio kernel, which should implement the final Version 4.

X.2 is an extremely terse, yet very clear and understandable
document. If you compare it with X.0, you'll notice a lot of
enhancements, not only in the API and semantics themselves,
but also in much improved names for data structures and
system calls. X.2 specifies a generic programming interface
(currently in a C++-like style, leaving open the issue of
C bindings) and also suggests names for convenience library
functions.

Interesting NEW properties of X.2 are, e.g.:
  1. Unlimited number of threads per address space
     (actually only limited by the amount of memory for
     the TCBs of each thread)
  2. Specification for 32 and 64 bit CPUs
  3. Specification for multiprocessor machines (!!!)
  4. Each thread now has not only a pager thread associated
     with it, but also a scheduler thread. [User-land scheduling!]
  5. Each thread can also register (or have registered on its
     behalf) an exception handler thread.
  6. Interrupts are handled by threads as before.
  7. Additional LIPC system call (lightweight IPC) for IPC
     between threads within the same address space.
  8. A very comprehensive definition of the synchronous
     IPC mechanisms.
  9. The spec clearly defines the various protocols between
     the kernel and its user-level threads.

As far as the Hurd is concerned, many points here are worth
considering carefully:

1: This permits us to use a 1:1 thread-mapping library. We don't
   need to fiddle with an n:1 or even n:m model anymore. That is
   very important, because IPC blocking/timeout semantics are
   heavily dependent on native threads.

   I don't know what thread library we should use here. Perhaps
   C-Threads could be adapted to this new environment, perhaps not.
   Maybe C-Threads is not the best threads library to use anyway?
   Hard to tell. Please read the X.2 spec w.r.t. threads _AND_ IPC
   and send some feedback to l4-hurd.
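
   Just to make point 1 concrete, a 1:1 wrapper could boil down to
   something like the following pseudo-C. Every name and signature
   here (headers included) is an assumption modelled on the X.2
   convenience interface; real C bindings don't exist yet:

      #include <l4/thread.h>   /* assumed header names */
      #include <l4/ipc.h>

      /* Create one kernel thread per library thread, inside our own
         address space, with ourselves as scheduler and pager.  */
      int
      hurd_thread_create (L4_ThreadId_t tid, void *stack_top,
                          void (*entry) (void))
      {
        /* ThreadControl (dest, space, scheduler, pager, utcb);
           UTCB placement is glossed over here.  */
        if (!L4_ThreadControl (tid, L4_Myself (), L4_Myself (),
                               L4_Myself (), (void *) -1))
          return -1;

        /* Hand the new thread its initial SP and IP (the X.2
           thread-start protocol, via an assumed L4_Start helper).  */
        L4_Start (tid, (L4_Word_t) stack_top, (L4_Word_t) entry);
        return 0;
      }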

2: We could theoretically be one of the first kernels to run
   in native 64-bit mode on Itanium, Alpha and UltraSPARCs ;-)
   Seriously though: Anyone thinking of porting oskit-mach
   or even gnumach to 64 bit? Hmmm...

3: Here too, Pistachio would probably implement clean, X.2-compliant
   SMP, so we could write a scheduler that distributes tasks across
   the various CPUs randomly, or according to some yet-to-be-determined
   measure, per task, per IPC pair or whatever. We would be free
   to do SMP scheduling in user space. Something like a
     /cpu/1
     /cpu/2
     ...
   translator hierarchy would even be thinkable (though perhaps
   not desirable?).
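
   For instance, a user-level placement policy could move a thread to
   a given CPU with nothing more than the Schedule syscall. A sketch
   in pseudo-C over the X.2 convenience interface; the "-1 means keep
   the current value" convention is my reading and needs to be checked
   against the spec:

      /* Migrate THREAD to processor CPU, leaving priority, timeslice
         and preemption settings untouched.  */
      void
      hurd_migrate_thread (L4_ThreadId_t thread, L4_Word_t cpu)
      {
        L4_Word_t old_timectl;

        L4_Schedule (thread,
                     (L4_Word_t) -1,   /* TimeControl: keep */
                     cpu,              /* ProcessorControl: target CPU */
                     (L4_Word_t) -1,   /* priority: keep */
                     (L4_Word_t) -1,   /* PreemptionControl: keep */
                     &old_timectl);
      }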

4: This essentially permits flexible real-time scheduling of
   important threads. I could imagine that we introduce a class
   of real-time threads, and leave normal Hurd operations to
   regular (prioritized, round-robin driven) scheduling. This way,
   things like control of real-time processes could run on the
   same kernel, without interference from the normal system.
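
   In pseudo-C, the scheduler thread designated for a thread (via
   ThreadControl) could promote it into such a real-time class simply
   by raising its priority through the same Schedule syscall as above;
   the priority band is invented and the "-1 = keep" convention is,
   again, an assumption:

      #define HURD_RT_PRIO 200   /* hypothetical real-time band */

      /* To be called by THREAD's designated scheduler thread.  */
      void
      hurd_make_realtime (L4_ThreadId_t thread)
      {
        L4_Word_t old_timectl;

        L4_Schedule (thread, (L4_Word_t) -1, (L4_Word_t) -1,
                     HURD_RT_PRIO, (L4_Word_t) -1, &old_timectl);
      }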

5: That would be one way to solve the dreaded no-senders notification
   problem. Such an exception thread could use its bookkeeping to
   notify some server of dead threads. Well, we should spend a little
   more time thinking about this special case, in light of the new
   X.2 IPC spec model.

6: This is the ideal way to implement user-land device drivers.
   Together with the mapping of I/O pages, we've got everything
   that is needed to fully drive most, if not all, hardware on
   a typical PC.

   The ideal vision would be to have an address space (the new term
   for a task) with, say, 15 interrupt handler threads. Thread i would
   handle hardware interrupt i, passing the request further down to any
   driver thread that registered for this interrupt, then [depending
   on parameter settings] re-enable the interrupt once the drivers
   accepted [and optionally handled] the interrupt. More on this
   in subsequent discussions.
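
   A per-IRQ handler thread could then be little more than the
   following loop, in pseudo-C. The wiring of this thread to the
   interrupt (and the exact label values of the X.2 interrupt
   protocol) is left out; dispatch_to_driver is a made-up helper:

      void
      irq_handler_loop (L4_ThreadId_t irq_tid)
      {
        L4_MsgTag_t tag;

        for (;;)
          {
            /* Block until the kernel turns the hardware interrupt
               into an IPC from the interrupt "thread".  */
            tag = L4_Receive (irq_tid);
            if (L4_IpcFailed (tag))
              continue;

            /* Pass the event on to whichever driver thread
               registered for this interrupt.  */
            dispatch_to_driver (irq_tid);

            /* Empty reply: asks the kernel to re-enable the IRQ.  */
            L4_LoadMR (0, 0);
            L4_Send (irq_tid);
          }
      }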

7: The use here is obvious: If a receiver thread gets a message
   from, say, another address space, it would relay this message
   to the inner thread that will handle the specified function call
   (a.k.a. unmarshalling/demuxing). This can be done very efficiently
   with X.2's LIPC call and the user-defined 'label' tag in each
   message. That could eventually be a great substitute for (or
   better: an L4-specific implementation of) usermux!
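
   In pseudo-C over the X.2 convenience interface, the outer receiver
   could look roughly like this. worker_for_label is a made-up lookup,
   and whether the message registers survive the relay exactly like
   this needs to be checked against the spec:

      void
      server_demux_loop (void)
      {
        L4_ThreadId_t from, worker;
        L4_MsgTag_t tag;
        L4_Msg_t msg;

        tag = L4_Wait (&from);        /* first external request */
        for (;;)
          {
            L4_MsgStore (tag, &msg);  /* save MRs into memory */

            /* The user-defined label selects the operation and thus
               the inner worker thread.  */
            worker = worker_for_label (L4_Label (tag));

            L4_MsgLoad (&msg);        /* reload MRs for the relay */
            L4_Lcall (worker);        /* lightweight in-space IPC */

            /* Send the worker's reply back to the client and wait
               for the next external request.  */
            tag = L4_ReplyWait (from, &from);
          }
      }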

8: This is highly recommended reading for all Hurd hackers willing
   to understand the L4 IPC model and think of ways to change the
   current Hurd asynchronous model into a more efficient/streamlined
   synchronous IPC model. Even if you read the X.0 spec before, please
   read the X.2 IPC spec as well!

   Sure, most of the IPC would be done by a decent code generator
   that would ideally support IDL as its input language. In this case,
   we would probably have covered most IPC cases between clients (libs)
   and the Hurd servers. The remaining cases could still be handled
   manually (read: through specially tailored libraries that use
   the specific IPC semantics of X.2).

   What are the special cases exactly? Let's (theoretically for now)
   try to write them in pseudocode using the X.2 generic programming
   interface. This would be highly instructive!
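
   For the common case, a generated (or hand-written) client stub
   would probably reduce to something like the following pseudo-C;
   the operation label IO_READ_OP and the argument layout are invented
   purely for illustration:

      #define IO_READ_OP 1   /* hypothetical RPC label */

      int
      io_read_stub (L4_ThreadId_t server, L4_Word_t offset,
                    L4_Word_t amount, L4_Word_t *result)
      {
        L4_Msg_t msg;
        L4_MsgTag_t tag;

        L4_MsgClear (&msg);
        L4_Set_MsgLabel (&msg, IO_READ_OP);
        L4_MsgAppendWord (&msg, offset);
        L4_MsgAppendWord (&msg, amount);
        L4_MsgLoad (&msg);

        tag = L4_Call (server);   /* send request, block for reply */
        if (L4_IpcFailed (tag))
          return -1;

        L4_MsgStore (tag, &msg);
        *result = L4_MsgWord (&msg, 0);   /* first untyped reply word */
        return 0;
      }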

9: This is especially important w.r.t. pagers, exception handlers,
   schedulers, etc. One example: the initial user-level pager
   sigma0 would be queried by OS-personality pagers once to get
   either the full physical memory (have it mapped/granted into their
   address spaces) or only a part of it (if two or more OS personalities
   are configured to run side by side).

   Now we'll "just" have to implement a decent pager for the Hurd
   that exports the Mach vm_*() semantics or anything else, if
   we decide to get rid of mach specific stuff a la MOs entirely.
   I still suggest that we base our work on UVM, but that is not
   a religious issue anyway.

   Because L4 supports multiple pagers, I could imagine that we
   will also have special kinds of persistent threads that register
   with a persistence pager. Anyway, we're free to use the pager
   we want for the threads we want. That's a Good Thing(tm).

   Another thing is that we should consider whether server threads
   should be allowed to map/grant pages with user data (say, e.g.,
   the contents of a file[-buffer]) to clients directly, bypassing
   the global [UVM?] pager. This is possible in L4, with the
   cooperation of the pager threads associated with both the sender
   and the receiver.

      Mapping pages as part of the IPC between a file translator
      and a client could already be a significant optimization
      compared to the current copying model. Bypassing the
      global pager would _perhaps_ result in even faster IPC,
      but that is not certain. More on this later...
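
      In pseudo-C, such a reply could carry a map item instead of the
      data itself. The buffer address, send base and page size are
      made up, and whether the mapping is accepted depends on the
      client's receive window (its acceptor):

         /* Reply to CLIENT by mapping the page that holds the file
            data read-only into the client's receive window.  */
         void
         reply_with_mapping (L4_ThreadId_t client, L4_Word_t buf_page,
                             L4_Word_t send_base)
         {
           L4_Msg_t msg;
           L4_Fpage_t page = L4_Fpage (buf_page, 4096);  /* 4 KiB */

           L4_Set_Rights (&page, L4_Readable);
           L4_MsgClear (&msg);
           L4_MsgAppendMapItem (&msg, L4_MapItem (page, send_base));
           L4_MsgLoad (&msg);
           L4_Send (client);
         }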

Okay, enough hype for now. Please do read the X.2 spec and let
us share thoughts about this ONLY on l4-hurd@gnu.org (no need to bog
down bug-hurd with this; this mail is the only announcement to
bug-hurd, as a friendly HEADS UP).

Thanks,

-Farid.

-- 
Farid Hajji -- Unix Systems and Network Admin | Phone: +49-2131-67-555
Broicherdorfstr. 83, D-41564 Kaarst, Germany  | farid.hajji@ob.kamp.net
- - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - - -
One OS To Rule Them All And In The Darkness Bind Them... --Bill Gates.



