
Re: New network implementation proposal [was: Re: ipv6 on hurd]


From: Niels Möller
Subject: Re: New network implementation proposal [was: Re: ipv6 on hurd]
Date: 25 Oct 2002 10:25:00 +0200
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Olivier Péningault <peningault@free.fr> writes:

> On Thu 24-10-2002 at 21:49, Niels Möller wrote:
> ipv4 and ipv6 can run separately.

Sure, but do you really want them to run fully separately? It's common
(although not required by any standard, afaik), that an ipv6 socket
bound to the ipv6 wildcard interface should be able to accept ipv4
connections.

> ip-arp-icmp are a problem, because this layer is divided in three
> protocols, each one has different goals. hurd-net will know how to
> manage that.

arp is a lot different from icmp. arp is a link-layer mechanism that
only the ip-over-ethernet code should need to worry about. But in
order to implement tcp, the tcp, ip and icmp code must work
together. I'm all for code
separation, but I'm afraid that putting the implementations into
separate *processes* will just add unnecessary complexity to the
system, for no real gain.

> Ok, i wasn't clear. Users can create interfaces, but they are only used
> and seen by them. Users can replace default interfaces, and use them, so
> when hurd-net will check interfaces and verify which one to use, user's
> rights will be important information (but also the logical descriptor
> of the interface in hurd-net). When a user creates a new interface,
> rights are set to this user's rights.
> hurd-net can't be replaced, because it will contain important data,
> such as addresses. The only thing users can do is multiplexing
> protocols, by setting up new network interfaces.

I don't buy this. By adding ownership information and access control
into hurd-net, you make things a lot more complex than they need be.
You'll likely end up with something a lot more complex than the
current pfinet. A guiding principle is that anything that can't be
easily replaced by users should be as small as possible.

Also note that the current pfinet *can* be replaced by users, the only
real problem is that the ethernet device can't deal with that, so
you'll need a separate ethernet card for each pfinet. So a hurd-net
server that can't be replaced would actually be a step backwards.

> > If a user wants to hack his own networking stacks, it makes more sense
> > to me that he runs his own pfinet, delegating some interfaces to the
> > other pfinet.
> Yes, but you need to centralize some data; running many pfinets will
> not guarantee that the data is in sync in every pfinet (think about
> addresses: you must have a central repository, because you can't
> change them too easily).

That's not pfinet's, or even the hurd's, problem. If I start my own
pfinet, then I'm going to need a new ip address. So I'll get one from
the dhcp server, or ask whoever manages my local network for a static
address. For configuration purposes, different pfinets are as separate
as different machines, even if they may share the same ethernet
device. If different pfinets need to share a lot of information, the
design is broken.

> - it does not exist, we create a new thread, which will autoconfigure
> itself

Configuration should happen the same way as if you connect a new
machine to the network. If that can be done automatically (dhcp,
autoaddress configuration, etc), that's nice of course, but that's
mostly independent of the pfinet redesign, I think.

> I see what you mean. For the layer 3+ translators, at first I thought
> that it could be libraries, but concepts of the hurd are differents. How
> can users replace a library if they want ? Translators are better for
> this kind of features.

Have a look at libstore. It can handle various kinds of stores, like
partitions, devices, files, gzipped files, etc. These store modules
can even be loaded dynamically when needed. If you have a gzipped file
mounted as an ext2 filesystem, there will be RPCs to the underlying
filesystem where the gzipped file lives, and there will be clients
sending RPCs to your ext2 filesystem process. But there will *not* be
any translator representing the unzipped data. Because that's done by
a libstore module that is linked into your ext2 process.

As for hackability, if you have written your own store type, you can't
force other people's processes to use that; for instance, you can't tell
the root filesystem "hey, I'd like you to use my new store". But if
you run your own libstore-using processes, you can link them with any
new store type you like, at link time or runtime.

I think you could deal with ip-over-foo in an analogous way, where each
foo is a new "if type" in "libif", or some such. This kind of
modularization applies both to a "central" pfinet, and if you move
networking code into the user processes.

The more I think about it, the cooler the network-in-user-process
model seems. A process that creates a socket would talk to one or more
pfinets. Each pfinet offers a directory of interfaces. And if I bind
to the wildcard interface, my process will simply open each interface,
and ask for dir-notification on the directories so that it can pick up
any new interfaces that show up. The hard problem in making this work
is to define the interface-interface in such a way that different
users can't mess up each other's connections, allocate the same port
numbers, etc. That's probably non-trivial.

/Niels



