l4-hurd

Re: self-paging


From: ness
Subject: Re: self-paging
Date: Wed, 30 Nov 2005 22:39:20 +0100
User-agent: Mozilla Thunderbird 1.0.7 (X11/20051031)

Bas Wijnen wrote:
> On Wed, Nov 30, 2005 at 08:14:32PM +0100, ness wrote:
>> Bas Wijnen wrote:
>>> On Wed, Nov 30, 2005 at 03:49:48PM +0100, Marcus Brinkmann wrote:
>>>> [...]
>>>> may be delayed for a long time.
>>
>> I think we have a covert channel whenever an application knows whether a
>> particular page is in memory or not: it can then count the pages in memory,
>> and that is something the OS has to vary depending on the behaviour of
>> other applications.


> There is a covert channel whenever one process has any influence at all on
> another.  This is not specific to self-paging.  Even if a process doesn't
> know which pages are in memory, it can still find out by reading from a page
> and checking whether the read took nanoseconds or milliseconds.  If the
> system is noisy (on purpose or not), the process may need to average over a
> number of tries.  That makes the communication slower, but it can still
> take place.

> The only way to close this channel is to never change a physical memory
> quota once it has been given out.  In practice this means that either we
> will not be able to start new processes any time soon, or memory-intensive
> processes will run much slower than necessary (they have to swap their
> pages out even when free memory is available, because it is not available
> _to them_).  This may be acceptable for systems which need hard real time
> (I think they don't really have a choice), but it isn't for the Hurd.


How is this solved in EROS/coyotos?
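As an aside, the averaging Bas describes can be sketched in a few lines. This is purely illustrative Python, not Hurd or L4 code, and all the latency numbers are made up:

```python
import random

# Illustrative sketch only: how a receiver can decode bits through the
# paging timing channel even on a noisy system, by averaging many
# probe-read latencies.  All latency figures below are assumptions.

RESIDENT_NS = 100          # assumed latency when the probe page is in memory
SWAPPED_NS  = 5_000_000    # assumed latency when the read causes a page fault
NOISE_NS    = 2_000_000    # noise the OS might add, on purpose or not

def sample_latency(resident, rng):
    """One noisy timing measurement of a single probe read."""
    base = RESIDENT_NS if resident else SWAPPED_NS
    return base + rng.uniform(0, NOISE_NS)

def decode_bit(resident, tries, rng):
    """Average `tries` samples; a slow mean means 'swapped out', i.e. bit 1."""
    mean = sum(sample_latency(resident, rng) for _ in range(tries)) / tries
    threshold = (RESIDENT_NS + SWAPPED_NS + NOISE_NS) / 2
    return 1 if mean > threshold else 0

rng = random.Random(42)
sent = [1, 0, 1, 1, 0]
# The sender encodes each bit by keeping a shared page resident (0) or
# evicted (1); the receiver recovers the bits despite the noise.
received = [decode_bit(resident=(bit == 0), tries=50, rng=rng) for bit in sent]
print(sent == received)   # True: averaging defeats the noise
```

Adding more noise or shrinking the latency gap only forces a larger `tries`, so the channel gets slower rather than closed, which is exactly the point Bas is making.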


>> So there still is a covert channel in your proposal (it only needs some
>> time to transfer the "messages").


> Definitely.  I never thought otherwise.  I'm sorry if that wasn't clear
> from the start.  What I was talking about was narrowing the channel, not
> closing it.
>
> Thanks,
> Bas

--
-ness-



