bug-hurd

Re: Thread model


From: Marcus Brinkmann
Subject: Re: Thread model
Date: Wed, 12 Mar 2008 17:12:03 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.8 (Shijō) APEL/10.7 Emacs/23.0.60 (i486-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)

Hi,

At Tue, 11 Mar 2008 12:10:17 +0100,
Neal H. Walfield wrote:
> What you are suggesting is essentially using a user-level thread
> package.  (Compacting a thread's state in the form of a closure is a
> nice optimization, but the model essentially remains the same.)  The
> main advantage to a user-level thread package is that the thread
> memory is pageable and is thus less likely to exhaust the sparser
> kernel resources.  In the end, however, it suffers from the same
> problems as the current approach.
> resources.  In the end, however, it suffers from the same problems as
> the current approach.
> 
> The approach that I am taking on Viengoos is to expose an interface
> that is atomic and restartable.

Thinking about it, it seems to me that the atomicity and
restartability of the interfaces are quite unrelated to the threading
model and to resource-management issues in the implementation.  I was
surprised by this myself.  I think we conflate them too eagerly
because they usually appear on the scene at the same time, but for
different reasons.

That is not to say that these are not desirable properties, but I
don't want to get side-tracked.

> (This is explored by Ford et al. in
> [1].)  The basic design principle that I have adopted is that object
> methods should be designed such that the server can obtain all
> resources it needs before making any changes to visible state.  In
> this way, the server knows that once the operation starts, it will
> complete atomically.  When the server fails to obtain the resources,
> it frees all the resources it obtained so far and queues the message
> buffer on the blocked resource.  When the blocked resource becomes
> free, the message buffer is placed on the incoming queue and handled
> as if it just arrived.  The result is that no intermediate state is
> required!
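
For concreteness, here is a rough C sketch of that pattern.  All
names are invented for illustration; this is not the Viengoos code.
The handler tries to acquire every resource the method needs before
touching visible state; on failure it rolls back what it took and
parks the message buffer on the blocking resource, to be replayed
from scratch when that resource is freed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct msg_buffer
{
  struct msg_buffer *next;	/* intrusive queue link */
  int method;
  /* ... marshalled arguments ... */
};

struct resource
{
  bool available;
  struct msg_buffer *waiters;	/* messages blocked on this resource */
};

/* Try to acquire every resource the method needs.  On failure,
   release those acquired so far and queue the message on the
   blocker; no other intermediate state is kept.  */
static bool
handle_message (struct msg_buffer *msg,
		struct resource **needed, size_t n)
{
  for (size_t i = 0; i < n; i++)
    {
      if (!needed[i]->available)
	{
	  /* Roll back the resources already taken.  */
	  for (size_t j = 0; j < i; j++)
	    needed[j]->available = true;
	  /* Park the message; it is replayed as if newly arrived
	     once the resource is freed.  */
	  msg->next = needed[i]->waiters;
	  needed[i]->waiters = msg;
	  return false;
	}
      needed[i]->available = false;
    }

  /* All resources held: the operation now completes atomically.
     ... mutate visible state, send the reply, release resources ...  */
  return true;
}
```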

There is intermediate state in the form of the message buffer and the
queue item.  In the abstract sense, this suffers again from the same
problem as using user-level threads.  Quantitatively there is a big
difference, of course.

It is possible to let the caller pay for the message buffer and the
queue item (by allowing the server to push back the message buffer on
a kernel-managed queue), thereby really removing the intermediate
state entirely.  But this requires a very careful design of the IPC
system that may have other undesirable consequences.  (For example,
the server can no longer cache any results with regard to the
operation.)
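
A toy model of this "caller pays" variant, with an invented
kernel_requeue() standing in for whatever kernel-managed queue
primitive such an IPC system would provide (nothing here is a real
Mach or Viengoos interface): the server either completes the
operation or hands the untouched message buffer back whole, keeping
no per-request state of its own.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: the "kernel" owns one queue per blocked resource.  */

struct kmsg
{
  struct kmsg *next;
  int payload;
};

struct kqueue
{
  struct kmsg *head;
};

/* Kernel side: requeue a message the server could not handle.  The
   buffer and the queue slot are charged to the caller, not the
   server.  */
static void
kernel_requeue (struct kqueue *q, struct kmsg *m)
{
  m->next = q->head;
  q->head = m;
}

/* Server side: no allocation, no bookkeeping; either complete the
   operation or bounce the buffer back unmodified.  */
static bool
server_handle (struct kqueue *blocked, struct kmsg *m, bool resource_free)
{
  if (!resource_free)
    {
      kernel_requeue (blocked, m);
      return false;
    }
  /* ... perform the operation atomically and reply ... */
  return true;
}
```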

In practice, all Hurd RPCs are non-blocking[1] except for
select()-like activities.  The number of concurrent pending select()
operations
in a system can be considerable, but the costs for one pending
select() operation can be kept small (linked list pointers + select
flags + one reply port and associated overhead), so that queueing the
operations as you described is a reasonable design IMO.

[1] Or can be made non-blocking.  For example, the auth protocol is
currently needlessly blocking on both the server and the client side.
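
To make the cost estimate above concrete, here is roughly the
per-operation record that list suggests, sketched in C with a
stand-in mach_port_t (illustrative only, not actual Hurd server
code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t mach_port_t;	/* stand-in for <mach/port.h> */

/* One pending select() operation: linked-list pointers, the select
   flags, and the reply port to send the wakeup to.  */
struct pending_select
{
  struct pending_select *prev, *next;	/* linked-list pointers */
  int select_flags;			/* SELECT_READ | SELECT_WRITE | ... */
  mach_port_t reply_port;		/* where the wakeup goes */
};
```

Even with many thousands of pending operations, state of this size is
cheap to keep queued.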

As for the threading model, more than one kernel thread per real CPU
doesn't seem to make much sense in most cases.

Thanks,
Marcus

> An orthogonal concern is the use of locks.  An approach to reducing
> their number is the use of lock-free data structures.  See Valois'
> thesis for a starting point [2].
> 
> Neal
> 
> [1] "Interface and Execution Models in the Fluke Kernel" by Bryan
> Ford, Mike Hibler, Jay Lepreau, Roland McGrath and Patrick Tullmann.
> http://www.bford.info/pub/os/atomic-osdi99.pdf
> 
> [2] ftp://ftp.cs.rpi.edu/pub/valoisj/thesis.ps.gz