Re: GSoC: the plan for the project network virtualization


From: olafBuddenhagen
Subject: Re: GSoC: the plan for the project network virtualization
Date: Sat, 28 Jun 2008 05:10:44 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,

On Thu, Jun 26, 2008 at 11:11:12PM +0200, zhengda wrote:
> olafBuddenhagen@gmx.net wrote:

>> If that is the case, we would again need a proxy for the master
>> device port, which would forward open() on the network device, but
>> block all others.
>>   
> Do you mean something like a translator which opens the network
> interface, and returns the port of the network interface to the
> socket server, such as pfinet?

Perhaps... The description does fit, but I'm not sure we really mean
the same thing here :-)
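
To pin it down, here is a minimal sketch of the check I have in mind,
assuming the Mach device interface. (The RPC demuxing and port
bookkeeping are omitted; `real_master' and the "eth0" name are just
placeholders.)

  #include <string.h>
  #include <mach.h>
  #include <device/device.h>
  #include <device/device_types.h>

  /* The real master device port, obtained at startup.  */
  static mach_port_t real_master;

  kern_return_t
  proxy_device_open (dev_mode_t mode, char *name, mach_port_t *devport)
  {
    /* Forward open() only for the network device we virtualize;
       refuse everything else on the master port.  */
    if (strcmp (name, "eth0"))
      return D_NO_SUCH_DEVICE;

    return device_open (real_master, mode, name, devport);
  }

The client ends up with a port to the one network device, and nothing
else on the master port is reachable through the proxy.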

>> The only disadvantage of separate components is that the packets may
>> need to traverse more servers before they arrive. But, as explained
>> on IRC, this probably could be optimized with some BPF magic, if it
>> turns out to be a serious problem.

> Last time on IRC, if I understood it correctly, you said the
> optimization is to make all packets go through the kernel, and have
> the kernel dispatch the packets with BPF.

Not quite. The idea was that if you have a multiplexer sitting directly
on the kernel interface, it could just upload the rules to the kernel,
instead of running the BPF implementation itself. But that is only a
minor additional optimization in a specific situation.
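
For reference, such a kernel-sitting multiplexer would hand its rules
to the kernel the same way pfinet installs its packet filter today,
through device_set_filter(). A minimal sketch; the accept-everything
program is essentially the trivial filter pfinet uses, and
`ether_port' and `rcv_port' are placeholders:

  #include <mach.h>
  #include <device/device.h>
  #include <device/net_status.h>  /* NETF_* filter opcodes */

  /* Trivial filter program: push literal 1, i.e. accept every packet.
     A kernel-sitting multiplexer would upload its combined dispatch
     rules here instead.  */
  static filter_t accept_all[] = {
    NETF_PUSHLIT | NETF_NOP,
    1,
  };

  kern_return_t
  upload_filter (mach_port_t ether_port, mach_port_t rcv_port)
  {
    /* Matching packets are delivered to rcv_port.  */
    return device_set_filter (ether_port, rcv_port,
                              MACH_MSG_TYPE_MAKE_SEND, 0,
                              accept_all,
                              sizeof accept_all / sizeof accept_all[0]);
  }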

The main idea was that if we have filter translators sitting on a
multiplexer, the filter rules could be combined with the user-supplied
rules and all be handled in the multiplexer's BPF implementation, rather
than actually filtering them twice...
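
To make that concrete, here is what such a combined program could look
like in classic BPF notation (as in libpcap's <pcap/bpf.h>; the actual
Mach filter encoding differs, so take this purely as an illustration).
The first two instructions stand for the multiplexer's dispatch rule,
the next two for a user-supplied rule, concatenated so each packet is
inspected only once:

  #include <pcap/bpf.h>  /* struct bpf_insn, BPF_STMT, BPF_JUMP */

  static struct bpf_insn combined[] = {
    /* Multiplexer's dispatch rule: EtherType == IPv4?  */
    BPF_STMT (BPF_LD  + BPF_H + BPF_ABS, 12),            /* load EtherType */
    BPF_JUMP (BPF_JMP + BPF_JEQ + BPF_K, 0x0800, 0, 3),  /* no: drop */
    /* User-supplied rule: IP protocol == TCP?  */
    BPF_STMT (BPF_LD  + BPF_B + BPF_ABS, 23),            /* load protocol */
    BPF_JUMP (BPF_JMP + BPF_JEQ + BPF_K, 6, 0, 1),       /* no: drop */
    BPF_STMT (BPF_RET + BPF_K, 0xffffffff),              /* accept */
    BPF_STMT (BPF_RET + BPF_K, 0),                       /* drop */
  };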

Maybe it would indeed be possible to upload all rules to the kernel
after sanitizing them, even through several layers of multiplexers and
filters, and not run any BPF code in user space at all. But I don't
think this would be really useful, so no need to consider it further :-)

> But if the multiplexer is responsible for dispatching the packets, I
> think we are back to the original idea (the hypervisor).

Not really. I think we are talking past each other here.

I understand that you always focused on the dispatching functionality,
and considered filtering only an afterthought. But that is not what I
was suggesting. When talking about the hypervisor, I was always thinking
primarily of the filtering functionality. A pure multiplexer that
doesn't filter has nothing to do with a hypervisor -- after all, it
doesn't hypervise anything :-)

When I first suggested splitting the multiplexing and filtering
functionality, I was seeing it as splitting the multiplexing out and
leaving the hypervisor with only the filtering, rather than the other
way round. But I realized that this causes confusion, and thus dropped
the "hypervisor" term altogether in the last mail, speaking only of
multiplexers and filters now. I am sure this is much clearer.

> By the way, is it possible to provide a mechanism that allows two
> components to share only the virtual memory (the user of the
> components can make the choice: to share the memory or not)? Maybe
> it's a bit like clone with the CLONE_VM flag in Linux. If it's
> possible, I think it could help a lot with the performance issue
> above.

Well, if you run two programs in a single address space, that's
effectively making them into a single process with two threads.
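
Just to illustrate, a minimal sketch of the CLONE_VM mechanism you
mention (Linux-specific, of course):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>

  static int shared;  /* lives in the one address space both tasks use */

  static int
  child_fn (void *arg)
  {
    shared = 42;  /* with CLONE_VM, this write is visible to the parent */
    return 0;
  }

  int
  main (void)
  {
    enum { STACK_SIZE = 64 * 1024 };
    char *stack = malloc (STACK_SIZE);

    /* CLONE_VM: the child runs in our address space, which really
       makes it a second thread in the same process.  */
    pid_t pid = clone (child_fn, stack + STACK_SIZE,
                       CLONE_VM | SIGCHLD, NULL);
    if (pid < 0)
      {
        perror ("clone");
        return 1;
      }

    waitpid (pid, NULL, 0);
    printf ("shared = %d\n", shared);  /* prints 42 */
    free (stack);
    return 0;
  }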

I guess this would be possible, but I doubt the usefulness. As I pointed
out in the last discussion about translator stacking/libchannel
(http://lists.gnu.org/archive/html/bug-hurd/2008-01/msg00002.html), it's
not really the address space boundaries that make RPC expensive...

-antrik-



