Re: GSoC: the plan for the project network virtualization


From: olafBuddenhagen@gmx.net
Subject: Re: GSoC: the plan for the project network virtualization
Date: Wed, 18 Jun 2008 04:18:32 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,


On Wed, Jun 11, 2008 at 12:36:59PM +0200, zhengda wrote:
> olafBuddenhagen@gmx.net wrote:
>> On Sat, Jun 07, 2008 at 10:12:21PM +0200, zhengda wrote:

>>>    If pfinet can open the interface with device_open(), I think we
>>>    need to write another program like boot to give pfinet the
>>>    pseudo master device port and help pfinet open the virtual
>>>    network interface.
>>
>> Why another program? I'm pretty sure "boot" is the right place to
>> handle this.
>   
> Yes, we can modify "boot" to let pfinet connect to the virtual network
> interface created by the hypervisor. But we also want the pfinet
> servers to use the hypervisor. If the hypervisor provides the virtual
> network interface, we have to find a way to give the pfinet server a
> pseudo master device port, so the pfinet server can call device_open()
> to connect to the virtual network interface. That is what I was
> thinking about. But how do we give the pfinet server running in the
> main Hurd the pseudo master device port? As you said, the pfinet
> server gets the master device port from the proc server. It can work
> in a subhurd, because the proc server in the subhurd can give a pseudo
> master device port which actually connects to "boot". But it should be
> more tricky in the main Hurd if we don't modify the code of pfinet,
> because the pfinet server always gets the real master device port of
> the kernel.

Indeed, I was thinking only about subhurds here... You are perfectly
right that if we want to be able to use the hypervisor for pfinets
running in the same Hurd instance as the hypervisor, we need some method
to make the pfinets connect to the virtual interface provided by the
hypervisor instead of the real interface.

One way to achieve this is to modify pfinet so it can connect to an
explicitly provided device instead of trying to open the kernel device
directly through the device master port.
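
To make that a bit more concrete, the device setup in pfinet could end
up looking roughly like the sketch below. The helper name and the
option handling are made up, and I haven't checked how well this maps
onto the actual pfinet code -- it is only meant to show the shape of
the change:

/* Rough sketch only: open_ether_device() and use_kernel_device are
   invented names, not existing pfinet code.  */
#define _GNU_SOURCE 1
#include <mach.h>
#include <hurd.h>
#include <device/device.h>
#include <device/device_types.h>
#include <fcntl.h>
#include <errno.h>
#include <error.h>

/* Open the ethernet device for pfinet.  If USE_KERNEL_DEVICE is set,
   do what pfinet does today: get the privileged device master port and
   open the kernel device NAME (e.g. "eth0") through it.  Otherwise
   treat NAME as a file system node implementing the device interface
   -- which is what the hypervisor's virtual interface could provide --
   so no privileged port is needed at all.  */
mach_port_t
open_ether_device (const char *name, int use_kernel_device)
{
  mach_port_t ether = MACH_PORT_NULL;
  kern_return_t err;

  if (use_kernel_device)
    {
      mach_port_t device_master;
      err = get_privileged_ports (0, &device_master);
      if (err)
        error (1, err, "cannot get device master port");
      err = device_open (device_master, D_READ | D_WRITE,
                         (char *) name, &ether);
      mach_port_deallocate (mach_task_self (), device_master);
    }
  else
    {
      mach_port_t node = file_name_lookup (name, O_RDWR, 0);
      if (node == MACH_PORT_NULL)
        error (1, errno, "%s", name);
      err = device_open (node, D_READ | D_WRITE, (char *) name, &ether);
      mach_port_deallocate (mach_task_self (), node);
    }

  if (err)
    error (1, err, "device_open %s", name);
  return ether;
}

A nice side effect of the second path is that such a pfinet wouldn't
need any privileges at all -- access to the node provided by the
hypervisor would be enough.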

The other way is to provide a proxy for the device master port by
overriding the proc server -- either using a full-blown proc only for
the pfinet (kind of a micro-subhurd), or a proxy proc that only diverts
the device master port, and forwards all other requests to the real
proc...
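
To illustrate the proxy variant: the only proc RPC the proxy would have
to answer itself is proc_getprivports; everything else would be
forwarded verbatim to the real proc. The sketch below leaves out all
the MIG dispatch and forwarding machinery, and the names are invented:

/* Purely illustrative -- the MIG server dispatch, the libports setup,
   and the forwarding of all other proc RPCs to the real proc server
   are omitted; proxy_info and pseudo_device_master are made-up names. */
#include <mach.h>

struct proxy_info
{
  mach_port_t real_host_priv;        /* as obtained from the real proc */
  mach_port_t pseudo_device_master;  /* send right to the hypervisor   */
};

/* Handler for proc_getprivports: hand out the pseudo device master
   port, so that pfinet's device_open() ends up at the hypervisor
   instead of the kernel.  */
kern_return_t
proxy_proc_getprivports (struct proxy_info *proxy,
                         mach_port_t *host_priv,
                         mach_port_t *device_master)
{
  *host_priv = proxy->real_host_priv;
  *device_master = proxy->pseudo_device_master;
  return KERN_SUCCESS;
}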

I'm not sure which is easier to implement. (It's kind of a
para-virtualization vs. full virtualization question again...)
Modifying pfinet might be more pragmatic; but replacing proc is
certainly more interesting -- this kind of partial subhurd-like
environment is precisely the direction I'd like to explore...

>>>    The second question is: who can create the virtual network
>>>    interface?     
>>
>> By default, the user who invokes the hypervisor. (I.e. the one
>> running "boot".)
[...]
> It seems you expect that several hypervisors can run at the same time
> and every user can run his own hypervisor. But I thought only root
> could run the hypervisor. The reason is that the hypervisor should be
> able to access the network interface.

Well, root could delegate access to the real network interface, so the
user could run a hypervisor. Or root could run a hypervisor himself,
giving the user access only to one IP address. Or root could even use
the hypervisor to give the user access to a range of IP addresses, and
the user could run another hypervisor to control individual IPs...

All of these variants have certain merits in different situations, and I
think we should support all of them.

> And I think we only need one hypervisor running in the system in most
> cases, because we want pfinet servers to be able to communicate with
> each other, and that is one of the main functions of the hypervisor.
> It's possible that one set of pfinet servers connects to one
> hypervisor and another set connects to another hypervisor. In that
> case, a pfinet server can only communicate with the ones in the same
> set.

I realize now that the actual hypervisor (filtering) functionality is
quite orthogonal to the routing (hub) functionality -- either can be
useful on its own... You could have one or more filters running on top
of a hub to restrict the actual pfinets, or you could have the pfinets
directly connect to the hub if you don't need the filtering, or you
could use filters alone without any routing between pfinets.

The more I think about it, the more I'm convinced that it would
actually make most sense to implement the filtering and the routing
functionality in independent translators. It should simplify things
and increase flexibility a lot.

> But the question is how the hypervisor does the check. It will be a
> lot of work if we want the hypervisor to understand all packets.

I don't think there is a need to understand all packets -- in most
cases, a simple IP-based filter should be sufficient. But of course, you
could employ other filters if necessary. The modular approach suggested
above makes this easy :-)
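
Just to show how little is needed: the core of such an IP-based filter
could look roughly like this (IPv4 only, ARP and everything else
ignored -- purely illustrative):

/* Sketch of the check a filter translator could apply to each outgoing
   ethernet frame; ALLOWED_IP would be the address bound to this
   virtual interface (network byte order).  */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <net/ethernet.h>   /* struct ether_header, ETHERTYPE_IP */
#include <netinet/ip.h>     /* struct iphdr */
#include <arpa/inet.h>      /* ntohs */

bool
frame_allowed (const uint8_t *frame, size_t len, uint32_t allowed_ip)
{
  const struct ether_header *eth;
  struct iphdr ip;

  if (len < sizeof (struct ether_header) + sizeof (struct iphdr))
    return false;

  eth = (const struct ether_header *) frame;
  if (ntohs (eth->ether_type) != ETHERTYPE_IP)
    return false;           /* a real filter would also handle ARP */

  /* Copy the IP header out to avoid unaligned access.  */
  memcpy (&ip, frame + sizeof (struct ether_header), sizeof ip);
  return ip.saddr == allowed_ip;
}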

On Fri, Jun 13, 2008 at 10:59:13PM +0200, zhengda wrote:

>   1. How many pfinet servers are allowed to connect to one hypervisor?
>   If only one pfinet server is allowed to connect to one hypervisor,
>   hypervisors must communicate with each other to route packets sent
>   by pfinet servers. If several pfinet servers are allowed to connect
>   to the same hypervisor, a hypervisor can route packets inside
>   itself. If so, does the hypervisor only route packets among the
>   pfinet servers that connect to it? If several pfinet servers are
>   allowed to connect to the same hypervisor, it's better for the
>   hypervisor to create multiple virtual network interfaces, so that
>   each pfinet server can attach to one interface.

The original idea was that the hypervisor can create multiple virtual
interfaces with different filter rules, *and* several pfinets can
connect to the same virtual interface if desired. (Just as several
pfinets can connect to the same real interface.) This would have made
for a rather complicated setup...

With the modular approach, it will be much simpler: each hypervisor
(filter) will provide exactly one virtual interface, and no routing. A
hub can be used explicitly to connect several hypervisors.

>   3. How does the routing work? It can always work if the packet is
>   broadcast to all pfinet servers that connect to the hypervisor.
>   Then pfinet servers can filter packets in the IP layer. But that
>   does not give good performance, and there may be a security
>   problem: every user can see others' packets.

A simple hub would always forward to all clients. Filters between the
hub and the actual clients can enforce security if necessary.
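
The forwarding core of such a hub is almost trivial -- something along
these lines, where the client list and the delivery function are of
course just placeholders for the real port handling:

/* Sketch of the hub's forwarding core; hub_deliver() stands in for
   whatever RPC actually hands the frame to a client (the external
   interface, if any, would simply be another entry in the list).  */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct client
{
  struct client *next;
  int id;                    /* stands in for the client's port */
};

void
hub_deliver (struct client *c, const uint8_t *frame, size_t len)
{
  /* Placeholder: a real hub would write the frame to the client's
     device port here.  */
  printf ("deliver %zu bytes to client %d\n", len, c->id);
  (void) frame;
}

/* Forward FRAME, received from SENDER, to every other client.  */
void
hub_broadcast (struct client *clients, struct client *sender,
               const uint8_t *frame, size_t len)
{
  struct client *c;
  for (c = clients; c != NULL; c = c->next)
    if (c != sender)
      hub_deliver (c, frame, len);
}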

On Sat, Jun 14, 2008 at 06:06:40PM +0200, zhengda wrote:

> 1. One solution is that the hypervisor broadcasts a packet to every
> pfinet server, as I said before.

> 2. The hypervisor can always track which packet comes from which
> virtual network interface, and a table can be built to record which
> interface has which IP. It sends a packet to the interface that owns
> the destination IP.

> The first solution can be seen as a hub, and the second one as a
> switch.

The nice thing about the modular approach is that you can use a simple
hub, or if this doesn't suffice for some reason, implement a switch or
even a true router...
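
And if a plain hub ever turns out to be too slow, the switch variant
you describe doesn't need much more: a table mapping addresses to
clients, learned from the source addresses of the frames passing
through. A rough sketch (made-up names again):

/* Sketch of the "switch" table: remember which client a source address
   was last seen on, and use that for later destination lookups.  */
#include <stdint.h>

#define SWITCH_TABLE_SIZE 256

struct switch_entry
{
  uint32_t ip;        /* IPv4 address, network byte order; 0 = unused */
  int client;         /* handle of the client it was seen on */
};

static struct switch_entry table[SWITCH_TABLE_SIZE];

/* Remember that SRC_IP was seen on CLIENT.  */
void
switch_learn (uint32_t src_ip, int client)
{
  int i, free_slot = -1;
  for (i = 0; i < SWITCH_TABLE_SIZE; i++)
    {
      if (table[i].ip == src_ip)
        {
          table[i].client = client;
          return;
        }
      if (table[i].ip == 0 && free_slot < 0)
        free_slot = i;
    }
  if (free_slot >= 0)
    {
      table[free_slot].ip = src_ip;
      table[free_slot].client = client;
    }
}

/* Return the client that owns DST_IP, or -1 if it is unknown.  */
int
switch_lookup (uint32_t dst_ip)
{
  int i;
  for (i = 0; i < SWITCH_TABLE_SIZE; i++)
    if (table[i].ip == dst_ip)
      return table[i].client;
  return -1;
}

A frame whose destination is not in the table would simply be broadcast
to all clients, just like with the hub.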

> An acceptable solution (at least for me) can be: when a virtual
> network interface is created, a network address must be bound to it,
> so the hypervisor can know where to send the packet.

I think this variant could be considered a router? I think it would
overlap with the functionality of filters though, destroying the
modularity...

> The user should also tell the hypervisor what the network address of
> the external network is, so the hypervisor can know when to send a
> packet to the external network. It's reasonable to do that, because
> the real network interface also connects to the network with a fixed
> network address.

I don't understand. Each pfinet has a static address, and it needs to
be an address that is valid and unique in the external network, if that
pfinet is ever to communicate over the external network. So how would
you use that to distinguish the packets that should be sent to the
external network?...

(It should be possible to set up NAT between the external and virtual
network, but I guess that's not what you were thinking about... :-) )

The simple hub should probably just forward all packets to the external
interface, the same as to all other clients. (If there is an external
interface at all -- after all, we might set up a purely virtual
network...) In fact, the external interface need not be treated any
differently from the other clients, I think.

But again, more sophisticated variants can be built if necessary :-)

-antrik-



