l4-hurd

From: Olaf Buddenhagen
Subject: Re: Future Direction of GNU Hurd?
Date: Sun, 14 Mar 2021 18:57:27 +0100
User-agent: NeoMutt/20170609 (1.8.3)

Hi,

On Fri, Feb 26, 2021 at 11:06:14AM -0800, Jonathan S. Shapiro wrote:
> On Fri, Feb 19, 2021 at 8:23 AM Olaf Buddenhagen <olafbuddenhagen@gmx.net> wrote:

> > (BTW, I didn't get the desired clarity: but perhaps you could chime
> > in? Is there a good generic term for a capability referencing the
> > receive end of an IPC port, such as the receive right in Mach?...)
>
> Not that I know of. One of the critical notions in capabilities is
> that the capability you wield names the object you manipulate. If the
> receive port can be transferred, this intuition is violated. In
consequence, few capability-based systems have implemented receive
> ports.

Interesting... Didn't realise that this is something capability designs
frown upon.

FWIW, I was personally never able to conclude whether the ability to
transfer receivers is a useful feature in general or not.

However, I have a specific use case for it, where I don't think it would
be a problem...

> No member of the KeyKOS family implemented such a notion. Coyotos
> comes closest. "Entry" capabilities actually point to Endpoint
> objects, which in turn contain a Process capability to the
> implementing process. A scheduler activation is performed within this
> process. This is comparable to a receive port capability because the
> process capability within the Endpoint object can be updated.

Will have to think about whether such a design would work for what I'm
trying to do.
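
To make the indirection concrete, here is a rough C sketch of the
arrangement described above -- all names (endpoint_t, invoke_entry,
and so on) are invented for illustration, not actual Coyotos
identifiers:

    #include <stdio.h>

    /* Illustrative stand-ins -- none of these are real Coyotos types. */
    typedef struct process { const char *name; } process_t;
    typedef struct message { const char *body; } message_t;

    /* The Endpoint object: an Entry capability points here, and the
       endpoint in turn holds a Process capability to the server. */
    typedef struct endpoint {
        process_t *recipient;   /* mutable: updating it retargets every
                                   Entry capability that names this
                                   endpoint, which is what makes it
                                   comparable to a receive right */
    } endpoint_t;

    /* Invocation follows the indirection Entry -> Endpoint -> Process;
       in the real design this runs a scheduler activation there. */
    static void invoke_entry(endpoint_t *ep, message_t *msg)
    {
        printf("activation in %s: %s\n", ep->recipient->name, msg->body);
    }

    int main(void)
    {
        process_t v1 = { "server-v1" }, v2 = { "server-v2" };
        endpoint_t ep = { &v1 };
        message_t m = { "read()" };

        invoke_entry(&ep, &m);  /* delivered to server-v1 */
        ep.recipient = &v2;     /* "receive end" moves to server-v2 */
        invoke_entry(&ep, &m);  /* same entry capability, new server */
        return 0;
    }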

> I went back and forth for a long time about how multiple processes
> might wait on a common receive port. The problem with this is that the
> objects implemented by these processes tend to have state, so if
> successive invocations can go to any of the participant processes you
> end up in a multi-threading shared memory regime anyway. When this
> takes the invocation across a CPU to CPU boundary, the cache coherency
> costs can be higher than the total invocation cost. We decided it
> would be better to use scheduler activations and an event-driven
> approach, and let the receiving process make its own decisions about
> threading.

I totally agree that it's probably not useful to have multiple active
listeners... It's not what I'm looking for :-)
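
For what it's worth, a tiny sketch of the event-driven alternative
(invented names, not any real kernel API): a single activation entry
point handles all invocations, so per-object state needs no locking,
and the server can still layer its own threading on top if it wants:

    #include <stdio.h>

    typedef struct object {
        const char *name;
        int calls;              /* per-object state, no lock needed:
                                   only one activation runs at a time */
    } object_t;

    typedef struct msg { object_t *target; } msg_t;

    /* Single activation entry point for the whole process. */
    static void on_activation(msg_t *m)
    {
        m->target->calls++;     /* safe: single logical receiver */
        printf("%s invoked %d time(s)\n",
               m->target->name, m->target->calls);
    }

    int main(void)
    {
        object_t file = { "file", 0 }, dir = { "dir", 0 };
        msg_t a = { &file }, b = { &dir }, c = { &file };
        on_activation(&a);
        on_activation(&b);
        on_activation(&c);
        return 0;
    }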

> This also has the advantage that all of the "pointers" (the object
> references) point from the invoker to the invokee. That turns out to
> be essential if you want to implement transparent orthogonal
> persistence. It rules out receive port capabilities.

That's funny: the thing that (I think) I need receiver capabilities
for is actually implementing a (not quite orthogonal) persistence
mechanism :-)

> > In the end, Neal's experimental "Viengoos" kernel used an approach
> > where the receiver (not the kernel) provides a receive buffer: but
> > the receive operation can nevertheless happen asynchronously from
> > the receiver's application threads. (Using some sort of
> > activation-based mechanism -- though I'm not sure about the
> > details.)
>
> I have not looked at Viengoos, but this sounds functionally similar to
> what Coyotos does. In Coyotos, the receiving process designates a
> scheduler activation block that says where the incoming data should
> go.

That seems very plausible :-) Although I don't know the full history,
the Viengoos approach is quite likely inspired by the Coyotos one; or
maybe both go back to the same design discussions: after all, Viengoos
was conceived not long after Neal and Marcus had some intense
discussions with you about what it would take to create a Hurd-like
system on top of Coyotos -- including async IPC, among other things...
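
As a rough illustration of the receiver-provided-buffer idea (again
with invented names -- not the actual Viengoos or Coyotos API): the
receiver registers a buffer plus an upcall, and delivery fills the
buffer and activates the upcall independently of the receiver's
application threads:

    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    struct activation_block {
        char   buf[256];                   /* where incoming data goes */
        size_t len;
        void (*upcall)(struct activation_block *);
    };

    /* Stand-in for the kernel's delivery path. */
    static void kernel_deliver(struct activation_block *ab,
                               const char *data, size_t n)
    {
        if (n > sizeof ab->buf)
            n = sizeof ab->buf;
        memcpy(ab->buf, data, n);          /* copy into receiver's buffer */
        ab->len = n;
        ab->upcall(ab);                    /* scheduler-activation upcall */
    }

    static void on_message(struct activation_block *ab)
    {
        printf("got %zu bytes: %.*s\n",
               ab->len, (int)ab->len, ab->buf);
    }

    int main(void)
    {
        struct activation_block ab = { .upcall = on_message };
        kernel_deliver(&ab, "ping", 4);
        return 0;
    }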

-antrik-


