bug-hurd

From: Svante Signell
Subject: Re: Questions on the interoperability between libc, gnumach and hurd servers
Date: Sun, 16 Sep 2012 17:53:19 +0200

On Sun, 2012-09-16 at 13:04 +0200, Samuel Thibault wrote:
> Svante Signell, le Sun 16 Sep 2012 11:10:36 +0200, a écrit :
> > Q1: Where can I find out that the second piece of code is used when
> > building eglibc? Is it the presence of the second file that makes it
> > define that function?
> 
> Yes. The easiest way to know that is to just grep through the build
> logs. The precise rules are in the makefiles, but they are hard to read.

Yes.

> > And if there were no mach/hurd/*.c code, the stub would define
> > it?
> 
> Yes.

Good to know.
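
If I understand correctly, the generic stub would look roughly like this
(a hand-written sketch, not copied from the glibc tree; I assume the
real stub also deals with weak aliases and stub warnings):

/* Sketch of a generic glibc stub for a port without a real
   implementation: it only reports "not implemented".  */
#include <errno.h>
#include <sys/socket.h>

int
setsockopt (int fd, int level, int optname,
            const void *optval, socklen_t optlen)
{
  errno = ENOSYS;
  return -1;
}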

> > Q2: The build log does not show any traces of setsockopt.c being
> > compiled??
> 
> Are you sure that you didn't have an already-built setsockopt.o? The
> log on buildd.debian.org/eglibc clearly shows
> 

I might have a truncated build log. libc was built in two steps after an
error; maybe I overwrote the first build log.
> 
> What is puzzling exactly?  That said, you don't need to understand that
> part.

I want to know what's happening (not necessarily understand every
detail).

> > Q3: I cannot see where the code is included/the macros expanded, going
> > from setsockopt.c to RPC_socket_setopt.c?? What am I missing?
> 
> I don't see what you are missing. setsockopt.c calls __socket_setopt(),
> and RPC_socket_setopt.c defines __socket_setopt(). It's a mere C call,
> what don't you understand?

OK, it is a C call. The question is how RPC_socket_setopt.c is
created: I assume mig is behind this, and the only trace I found in the
build log is the invocation of mig followed by echoing weak_alias and a
move of tmp_${call}.c to RPC_${call}.c, in this case with call=socket_getopt.
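
For my own notes, the Hurd-specific file on the glibc side boils down to
something like this (a simplified sketch; I assume the real
sysdeps/mach/hurd/setsockopt.c adds weak_alias and other details):

/* Simplified sketch of glibc's Hurd setsockopt: resolve fd to its
   socket port and forward the call to the MIG-generated client stub
   __socket_setopt() from RPC_socket_setopt.c.  */
#include <sys/socket.h>
#include <hurd.h>
#include <hurd/fd.h>
#include <hurd/socket.h>

int
setsockopt (int fd, int level, int optname,
            const void *optval, socklen_t optlen)
{
  /* HURD_DPORT_USE looks up the io port behind fd and evaluates the
     expression with it; any Mach/Hurd error is converted to errno.  */
  error_t err = HURD_DPORT_USE (fd, __socket_setopt (port, level, optname,
                                                     optval, optlen));
  return err ? __hurd_dfail (fd, err) : 0;
}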

> > There is some mig stuff before the above in the build log:
> 
> Again, what is the question?

See above: it seems to be mig that creates RPC_socket_getopt.c from
its definition in socket.defs, correct?

> > Q4: Is that where the build-tree/hurd-i386-libc/mach/tmp_*.c functions
> > are created?
> 
> You don't need to understand that. RPC_socket_setopt.c simply provides
> the function that setsockopt.c calls. I don't see what more you want.

See above; the truth and nothing but the truth is enough :)

> > Then looking at the generated RPC_socket_setopt.c file it seems that
> > mach is invoked in the generated code:  __mach_msg(...)
> 
> Yes.

The trees in the wood are not so fearsome any longer, thanks!
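
Writing it down for later: the generated RPC_socket_setopt.c follows the
usual MIG client pattern, roughly like this hand-written sketch (the real
generated code uses mig's Request/Reply structures and type descriptors,
which I leave out here):

/* Hand-written sketch of a MIG client stub such as RPC_socket_setopt.c.  */
#include <mach.h>
#include <mach/mig_support.h>
#include <hurd/hurd_types.h>   /* socket_t */

kern_return_t
__socket_setopt (socket_t sock, int level, int option,
                 const char *optval, mach_msg_type_number_t optval_len)
{
  union
  {
    mach_msg_header_t head;
    char space[1024];          /* room for the packed arguments / reply */
  } msg;

  /* Pack the in-arguments (elided) and fill in the header: destination
     port, reply port, and the request message id from socket.msgids.  */
  msg.head.msgh_remote_port = sock;
  msg.head.msgh_local_port = mig_get_reply_port ();
  msg.head.msgh_id = 26013;    /* socket subsystem 26000 + routine 13 */

  /* Send the request and wait for the reply (id 26113) in one call;
     this is the __mach_msg(...) visible in the generated file, and the
     point where the RPC enters gnumach.  */
  kern_return_t kr = mach_msg (&msg.head, MACH_SEND_MSG | MACH_RCV_MSG,
                               sizeof msg, sizeof msg,
                               msg.head.msgh_local_port,
                               MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  if (kr != MACH_MSG_SUCCESS)
    return kr;

  /* The reply carries the RetCode produced by the server's
     S_socket_setopt(); extracting it is elided in this sketch.  */
  return KERN_SUCCESS;
}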

> > That was where I lost track of the function.
> 
> See rpc.mdwn, it's a system call. So it ends up in Mach, the kernel. And
> you don't want to go into the details inside mach. All you need to know
> is what is described in rpc.mdwn: mach_msg sends a message to the port.

Yes, and before rpc.mdwn was written, the gnumach and hurd reference
manuals did not give you that hint... Very nice indeed; reading the source
code alone wouldn't have helped much here, agreed?

> > According to the IRC discussion this is taken up again in the pflocal
> > hurd server via:  S_socket_setopt()
> 
> Yes, see rpc.mdwn, pflocal was waiting for a message on a port, and
> mach_msg returns it, simply. Then there is the demuxer etc. See
> rpc.mdwn.

I have.

> > hurd-20120710/pfinet/socket-ops.c:S_socket_setopt (struct sock_user
> > *user,
> > hurd-20120710/pflocal/socket.c:S_socket_setopt (struct sock_user *user,
> > 
> > Now we have two definitions of S_socket_setopt
> 
> Of course. pfinet and pflocal are two different translators.  They thus
> both have to define the function.

I didn't notice that the first one was from pfinet, not pflocal. Problem
solved.

> > Q5: Which one is used?
> 
> The one from pflocal of course, since IIRC you have an AF_UNIX socket,
> and I've described in rpc.mdwn that that's served by pflocal.

See above.

> > And from the build of hurd:
> > hurd-20120710/build/pflocal/socketServer.c:
> > mig_external kern_return_t S_socket_setopt
> > but there is also:
> > mig_internal void _Xsocket_setopt
> > 
> > Q6: Where do the _X and S_ definitions come into play?
> 
> See rpc.mdwn, it's mentioned there.

rpc.mdwn does not explain where and how these _X- and S_-prefixed
versions are hooked together with the RPC_*.c code.
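
As far as I can tell the hookup works like this: the client stub in
RPC_socket_setopt.c sends a message with id 26013; the generated
socketServer.c contains a demuxer that looks at msgh_id and calls the
matching _Xsocket_setopt(), which unpacks the arguments and calls the
hand-written S_socket_setopt() in pflocal (or pfinet). A hand-written
sketch of that generated pattern (the real demuxer uses a routine table
rather than a switch, and the unpacking is much more involved):

#include <mach.h>

/* In the generated code, mig_internal expands to static and
   mig_external to plain extern linkage.  */

/* Generated: check and unpack the typed arguments, then call the
   routine that the translator implements by hand.  */
mig_internal void
_Xsocket_setopt (mach_msg_header_t *inp, mach_msg_header_t *outp)
{
  /* ...type checking and unpacking elided...
     OutP->RetCode = S_socket_setopt (user, level, option,
                                      optval, optval_len);  */
}

/* Generated demuxer: map the incoming message id to the routine.
   socket_setopt is routine 13 of subsystem 26000, hence id 26013.  */
mig_external boolean_t
socket_server (mach_msg_header_t *inp, mach_msg_header_t *outp)
{
  switch (inp->msgh_id)
    {
    case 26013:
      _Xsocket_setopt (inp, outp);
      return TRUE;
    /* ...the other socket_* routines... */
    default:
      return FALSE;
    }
}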

> > We also have:
> > hurd-20120710/build/hurd/socket.msgids:socket 26000 socket_setopt 13
> > 26013 26113
> > hurd-20120710/build/hurd/hurd.msgids:socket 26000 socket_setopt 13 26013
> > 26113
> > 
> > The socket_setopt function is defined in hurd-20120710/hurd/socket.defs:
> > /* Set a socket option.  */
> > routine socket_setopt (
> >         sock: socket_t;
> >         level: int;
> >         option: int;
> >         optval: data_t SCP);
> 
> Yes, that's the definition that mig uses to create the stubs. That's
> what is referenced in rpc.mdwn btw.

Maybe the mig stuff could be explained in more detail.
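
My own attempt at a summary: from that single routine definition, mig
derives the request id (subsystem 26000 + routine index 13 = 26013, reply
26113, as the .msgids files show), a client-side prototype and a
server-side prototype. Roughly (the exact types are my guess; as far as I
can tell the Hurd uses intran/outtran "mutation" headers so that the
server sees a struct sock_user * instead of the raw port):

/* Client side: ends up in glibc as RPC_socket_setopt.c and is what
   sysdeps/mach/hurd/setsockopt.c calls.  */
kern_return_t __socket_setopt (socket_t sock, int level, int option,
                               const char *optval,
                               mach_msg_type_number_t optval_len);

/* Server side: declared in the generated socketServer.c; each
   translator (pflocal/socket.c, pfinet/socket-ops.c) implements it.  */
kern_return_t S_socket_setopt (struct sock_user *user, int level, int option,
                               const char *optval,
                               mach_msg_type_number_t optval_len);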

> > Note: the gnumach and hurd (incomplete) reference manuals do _not_
> > reveal this information (at least not easily).
> 
> documentation usually doesn't describe the whole build process, indeed.

rpc.mdwn is a step in the right direction :-)

> > The real code used seems to be in:
> > error_t
> > S_socket_setopt (struct sock_user *user,
> >                  int level, int opt, char *value, size_t value_len)
> 
> Yes, that's what is described in rpc.mdwn
> 
>
> When hacking, one does *not* need to keep all that in mind. All one needs
> to remember is that when the application program calls open(), the glibc
> implementation actually calls dir_lookup(), which triggers a call to
> diskfs_S_dir_lookup in the ext2fs translator. When the application program 
> calls
> lseek(), the glibc implementation calls __io_seek(), which triggers a call to
> diskfs_S_io_seek in the ext2fs translator. And so on...
>
> 
> That is really *ALL* you need to understand to hack on the Hurd.
> 
> > Q8: How can I modify it to support  SO_REUSEADDR option?
> >     case SOL_SOCKET:
> >       switch (opt)
> >         {
> >         case SO_REUSEADDR:
> > Q9: Where to find what to add here?
> 
> You have to invent it. I.e. check in the POSIX standard what it means
> for a local socket to enable/disable SO_REUSEADDR, check how pflocal is
> supposed to implement it (yes, that means reading the source code, we
> won't document the internal mechanisms of pflocal, just like the Linux
> kernel didn't document its own), and then implement it.

Sorry to hear that. A reference manual could include such stuff, but of
course not a user's manual.
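
Just to write down the shape such a change would take (a purely
hypothetical sketch: the reuse_addr field and the place where it would be
honoured are my inventions, not pflocal's actual internals):

/* Hypothetical fragment inside S_socket_setopt(); pflocal's real data
   structures differ and "reuse_addr" is an invented field.  */
case SOL_SOCKET:
  switch (opt)
    {
    case SO_REUSEADDR:
      if (value_len < sizeof (int))
        return EINVAL;
      /* Remember the setting; the bind path would then have to
         consult it before refusing an address already in use.  */
      user->sock->reuse_addr = (*(int *) value != 0);
      return 0;
    default:
      return ENOPROTOOPT;
    }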

> > Q10: If gnumach is kernel space and hurd user space, what is the eglibc
> > code?
> 
> Don't try to map monolithic-kernel vocabulary on the Hurd, that can't
> work.
> 
> > Is this really a client-server implementation?
> 
> Yes: glibc is the client, the translators are the servers, and the
> kernel is only the mailman (mach_msg)

So it looks like:
user_code <-> libc(client) <-> gnumach(mailman) <-> hurd(server)

Why make things so complicated via the RPC stuff? Implementing the
whole setsockopt function in eglibc would simplify a lot. And the reason
for not doing that is flexibility, the client-server design? How does the
above look for a monolithic kernel like Linux? (I could dig that up
myself, but maybe you already know.)
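
My current understanding of the monolithic case is that libc is only a
thin wrapper around one system call and all the option handling lives
inside the kernel, roughly like this (a sketch, not glibc's actual code;
on old i386 the call was multiplexed through socketcall instead):

/* Sketch of the Linux chain: user_code <-> libc (syscall wrapper) <-> kernel.  */
#include <sys/syscall.h>
#include <sys/socket.h>
#include <unistd.h>

int
my_setsockopt (int fd, int level, int optname,
               const void *optval, socklen_t optlen)
{
  /* One trap into the kernel; no user-space server involved.  */
  return syscall (SYS_setsockopt, fd, level, optname, optval, optlen);
}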

> > Don't say that the eglibc-gnumach-hurd combination is simple, then you
> > are not serious :(
> 
> I don't think we ever said that. It's not, and it's not really meant to,
> otherwise we wouldn't have so much powerful features.

What are the powerful features compared to a monolithic kernel? Sorry, I
cannot see them. I only find tons of unimplemented things and nasty
bugs.

> > And one can understand why people have problems
> > contributing to such a complicated software structure. 
> 
> I can tell you, it's definitely *more* complex to contribute to the
> Linux kernel nowadays than to the Hurd. Not only because it's a very
> complex and big piece of software, but also because the Linux kernel
> doesn't have internal documentation either!

Sad case for Linux too if that is true :(

> > Q11: Is the complexity mainly due to that most things should happen in
> > userspace, or is there any other reason?
> 
> "userspace" is not the reason. "flexibility" is. And it's just the same
> in the Linux kernel: there are a lot of function indirections, and one
> can very easily get lost.

Yes, obviously.

> > If somebody can explain this properly to me, I will write it down and
> > add to the existing (incomplete) documentation.
> 
> Please tell what is missing from rpc.mdwn. For now I believe there is
> already everything you need to know. The rest is details that are not
> needed for understanding RPCs.

Why not add to hurd.texi or create some overview document describing
the overall picture? The wiki is good, but I, at least, appreciate written
manuals too.

> > Source code is OK, but you should be able to know where to look too,
> 
> In most cases it's very simple: just grep.

I know grep -r is very useful.

> > especially when there are
> > three big chunks of code to go through: eglibc, gnumach and hurd ;)
> 
> You indeed need to know where to grep. Grep in them all, then.  You
> know what? That's *precisely* how I discovered what is described in
> rpc.mdwn. I didn't divine it.

OK, and thanks!



