
Re: Message passing in user-land

From: rreale
Subject: Re: Message passing in user-land
Date: Wed, 17 Jul 2002 14:53:27 +0200
User-agent: Mutt/1.2.5i

On Tue, Jul 16, 2002 at 10:58:40PM +0200, Niels Möller wrote:
> rreale@iol.it writes:
> > I'll try to roughly explain my idea, which doesn't apply
> > to every IPC type, but only to a small subset thereof. This 
> > subset includes the message passing between a client/server
> > pair and among the servers themselves, with the additional
> > constraint of very small pieces of data being passed at once.
> I think the problem is that you have some memory shared between
> threads (of different processes). For now, I'm assuming that this
> sharing is setup in advance by some memory manager or kernel
> mechanism.
> But as several threads need to access the memory in parallell, you
> need to use some syncronization primitives (mutexes, condition
> variables, see any book on programming with threads) to coordinate
> access. And as far as I can see, that's hard to do without kernel help.

First of all, thank you for your thorough analysis.
You have hit upon the crux of the whole matter. I think one approach
to the problem might be the following:

a) when we refer to a server we are talking about core servers like
   exec, init, and so on, which should be reasonably trustworthy;

b) we cannot of course rely upon any assumption about how trustworthy
   a client process is;

c) we should design the system-wide queue in such a way that each subqueue
   acts as a sort of ``water-tight compartment'';

d) mutexes might be implemented with flags in the data structure itself,
   thus providing a form of ``non-mandatory locking'' on a per-subqueue
   basis;

e) we clearly assume that both the server and the client (or clients)
   will honour the locking policy; if a client does not, only its
   own subqueue gets corrupted, and the server can detect this by
   some simple sanity checking on the data structure and terminate
   the client;

f) for critical operations we may still use the traditional IPC method.
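
As a rough illustration of points c) through e), here is a minimal
sketch of one per-client subqueue with a flag-based advisory lock and
a server-side sanity check. All the names (subqueue, subq_send,
SUBQ_MAGIC, and so on) are hypothetical, not taken from any real Hurd
interface, and the sketch assumes C11 atomics for the lock flag:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define SUBQ_MAGIC 0x53554251u  /* sanity marker, "SUBQ" in ASCII */
#define SUBQ_SLOTS 8
#define MSG_SIZE   64           /* "very small pieces of data" */

/* One per-client subqueue living in memory shared with the server.
 * The lock is just a flag in the structure itself: honouring it is
 * voluntary ("non-mandatory locking"), per point d). */
struct subqueue {
    uint32_t    magic;          /* checked by the server's sanity test */
    atomic_flag lock;           /* advisory spinlock flag */
    uint32_t    head, tail;     /* ring-buffer indices, always < SUBQ_SLOTS */
    char        slot[SUBQ_SLOTS][MSG_SIZE];
};

static void subq_init(struct subqueue *q)
{
    memset(q, 0, sizeof *q);
    q->magic = SUBQ_MAGIC;
    atomic_flag_clear(&q->lock);
}

/* A well-behaved client takes the advisory lock before touching its
 * subqueue; spinning is tolerable only because critical sections
 * here are a few memory operations long. */
static void subq_lock(struct subqueue *q)
{
    while (atomic_flag_test_and_set(&q->lock))
        ;
}

static void subq_unlock(struct subqueue *q)
{
    atomic_flag_clear(&q->lock);
}

/* Client side: returns 0 on success, -1 if the subqueue is full. */
static int subq_send(struct subqueue *q, const char *msg)
{
    int ret = -1;
    subq_lock(q);
    if ((q->tail + 1) % SUBQ_SLOTS != q->head) {
        strncpy(q->slot[q->tail], msg, MSG_SIZE - 1);
        q->slot[q->tail][MSG_SIZE - 1] = '\0';
        q->tail = (q->tail + 1) % SUBQ_SLOTS;
        ret = 0;
    }
    subq_unlock(q);
    return ret;
}

/* Server side: a misbehaving client can only corrupt its own
 * compartment, and the damage is cheap to detect (point e). */
static int subq_sane(const struct subqueue *q)
{
    return q->magic == SUBQ_MAGIC
        && q->head < SUBQ_SLOTS
        && q->tail < SUBQ_SLOTS;
}

/* Server side: returns 0 and copies a message out, -1 if the
 * subqueue is empty or fails the sanity check (in which case the
 * server would terminate the offending client). */
static int subq_recv(struct subqueue *q, char *out)
{
    int ret = -1;
    if (!subq_sane(q))
        return -1;
    subq_lock(q);
    if (q->head != q->tail) {
        memcpy(out, q->slot[q->head], MSG_SIZE);
        q->head = (q->head + 1) % SUBQ_SLOTS;
        ret = 0;
    }
    subq_unlock(q);
    return ret;
}
```

Note that the magic/bounds check only catches structural corruption;
a hostile client holding the advisory lock forever is a separate
denial-of-service problem, which is one reason point f) keeps the
traditional IPC path for critical operations.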

> One basic problem is: I'm an idle server process, and I want to wait
> for a client to send me a message. How do I do that? There are two
> ways: Either I poll the structures regularly, or I need some basic ipc
> primitive, typically a wakeup call to a thread in a different process.
> In the former case, I'll either waste cpu time or get slow ipc
> response time, and for the latter case, it seems hard to do it
> completely in user space.

Clearly, an IPC mechanism that makes no use of the kernel can only be
useful in non-real-time applications, because it provides no form of
asynchronous I/O on the communication channels.  I would suggest that
the system switch between the traditional and the ``new'' form of IPC
according to workload and need.
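
The latency/CPU trade-off of polling that Niels describes can be made
concrete with a small sketch; poll_for_message and its backoff
parameter are illustrative names, not any existing API:

```c
#include <stdatomic.h>
#include <time.h>

/* Busy-wait on a flag in shared memory, backing off with nanosleep
 * between checks.  A shorter backoff means lower IPC response time
 * but more wasted CPU; a longer backoff means the opposite.  A kernel
 * wakeup primitive avoids the trade-off entirely, which is why a
 * purely user-space scheme struggles here. */
static void poll_for_message(atomic_int *pending, long backoff_ns)
{
    struct timespec ts = { 0, backoff_ns };
    while (!atomic_load(pending))
        nanosleep(&ts, NULL);   /* the latency/CPU trade-off lives here */
}
```

A sender would simply store 1 to *pending after writing into the
shared structure; the waiting server observes it on its next poll.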

> So to give a good answer I think one first needs a thorough
> understanding of how threading primitives like mutexes and
> condition variables are implemented, and that's not really my area.

Not mine either unfortunately...

