l4-hurd

Re: Questions


From: Marcus Brinkmann
Subject: Re: Questions
Date: Wed, 29 Oct 2003 21:06:02 +0100
User-agent: Mutt/1.5.4i

On Tue, Oct 28, 2003 at 06:57:30PM +0100, Martin Schaffner wrote:
> Does that mean that syscalls are also potentially a magnitude cheaper
> (assuming the syscall doesn't have to do much)?

I am not sure what you are asking about: the cost of the syscall itself
(ie, entering the kernel) or of its implementation (the operation you want to
perform).  I don't even know the answer to the first question, but I trust
the L4 people that they worked out whatever fastest way there is to make
a system call on any given architecture.  The second question is sort of
easy to answer: L4 doesn't implement many functions (in terms of policy or
operations), and the functions that are there are highly optimized.  As
Niels pointed out, the bulk of the functionality of the operating system is
implemented in servers and used via RPCs, so that's not easily comparable.

I think the only meaningful performance analysis, if you want to compare the
Hurd to Unix, is to make benchmarks of I/O operations and such.  For
that, we first have to implement them :)  There are a lot of things that
affect performance, and a lot of optimizations that you end up wanting to
implement at various levels.  I/O data buffer transfer (DMA), caches, etc.
have a tremendous impact on performance, of course.

> Marcus Brinkmann wrote:
> > There are two issues.  The first is how L4 implements IPC.  The second is
> > how the Hurd uses IPC.  The L4 IPC mechanism is very fast.  It does not
> > copy the message twice; it copies it directly from the source TCB to the
> > destination TCB.
> 
> TCB == task control block ?

thread control block
 
> > Of course, you _have_ to switch the context eventually to the server,
> > which then processes the RPC, and then you have to switch back.  This can
> > be a problem, but probably cache pollution and TLB flushing is a bigger
> > problem than the actual context switch.
> 
> TLB == translation lookaside buffer ?

Yes.
 
> > No, no.  For any IPC you make a system call to the kernel.  Even for local
> > IPC within a process, and definitely between processes.
> 
> Too bad :-(

Well, as Espen pointed out, this is not necessarily so, only if a fixup is
required (as I understand it now).  However, it seems you are easily stirred
up by such statements, so I'd better be careful :D
 
> >>which in turn RPCs
> >>the driver of the backing store (will probably reside in ring 0) for
> >>the data.
> > 
> > Yes.  In fact, there will probably be another task in between the driver
> > and the server.
> 
> Which task would that be?

The device access server, which is the Hurd glue to the device driver
framework, which should be implemented in a way that allows sharing it with
other OSes.  There might even be other tasks, if for example you have
drivers split up into tasks and some driver needs to talk to a bus
driver... however, those are implementation details that of course must be
asked and answered, but which are not fundamental to the design.

> >>Does this mean that the Linux Userland Filesystem
> >>(http://lufs.sourceforge.net/lufs/intro.html) poses security risks?
> > 
> > I don't know lufs, so you are the judge.
> 
> Unfortunately, I don't know it either :-)
> A friend pointed me to it, saying that the hurd won't be necessary since you
> can have user-filesystems on linux, too!

Well, that's like saying you don't need any music by J.S. Bach if you already
have a Britney Spears CD, if you excuse me for answering a simplification
with another one ;)

> I also think it would be better to have this feature designed into the
> system from the ground up. I look forward to trying hurd/l4 out.
> Unfortunately, I'm not well-versed enough in system-level programming to
> be a big help for coding; all I can do at the moment is to point out
> comma-mistakes in the doc ;-)

Every bit helps, except for those bits which don't.

> I have an idea for optimizing the hurd, don't know what it's worth:
> 
> Make a build option to assemble all always-used servers into one process.
> The servers, by design processes, would be just threads with this option.
> This option would effectively turn the hurd into a monoserver.  The
> disadvantage would be, of course, that each server thread could take down
> the system, but if we only include servers which we (have to) trust
> anyway, like the root fs, the hard disk driver, and other essential
> servers, a crash would render the system unusable anyway.  The advantage
> would be that IPC between servers would be able to be super-fast
> (according to my understanding).  The other advantages of the hurd
> (flexibility, easier development of new filesystems/drivers and their not
> bringing down the whole system) would be retained.

It's potentially possible to some extent, although I am doubtful that it
would help much.  It definitely can't be a simple compile-time switch; that
would be way too much work.

Thanks,
Marcus

-- 
`Rhubarb is no Egyptian god.' GNU      http://www.gnu.org    address@hidden
Marcus Brinkmann              The Hurd http://www.gnu.org/software/hurd/
address@hidden
http://www.marcus-brinkmann.de/



