
Re: Recent checkins


From: Roland McGrath
Subject: Re: Recent checkins
Date: Wed, 8 May 2002 18:50:05 -0400 (EDT)

> Roland McGrath <roland@gnu.org> writes:
> 
> > > MiG interfaces must be the *SAME* bit widths on all platforms.  That
> > > is, a MiG integer_t is 32 bits and must be 32 bits on *every*
> > > architecture.
> > 
> > That is not true at all.
> 
> Um, so can you elucidate?
> 
> A sample RPC is, say, file_chauthor.

A relevant sample RPC is one that uses integer_t.  integer_t is the signed
variant of natural_t, and is the Mach type name for the machine's natural
word size.  But that's only relevant to your comment about integer_t.  Now
you are talking about int.  The actual changes in question had to do with
mach_msg_type_number_t, which Mach defines as integer_t.
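
Concretely, on a 32-bit platform the chain looks roughly like this (an
illustrative sketch, not verbatim from the machine-dependent Mach
headers):

    /* Sketch of the type relationships described above, as they come
       out on a 32-bit platform.  The real definitions live in the
       machine-dependent Mach headers.  */
    typedef unsigned int natural_t;  /* the machine's natural word size */
    typedef int integer_t;           /* signed variant of natural_t */

    /* The changes in question concerned this type, which Mach defines
       in terms of integer_t: */
    typedef integer_t mach_msg_type_number_t;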

> then we have a bug, and the right fix is probably to stop using names
> like "short" and "int" in MiG, and instead use the names with explicit
> sizes.

The fact of the matter is that, de facto in this world, short means
int16_t, int means int32_t, and long is the only one that varies.  I
think we should stick to purpose-specific type names in MiG .defs files,
i.e. uid_t, pid_t, off_t, and so forth.  It would be clearest to define
those types in hurd_types.defs with explicit sizes, but the fact that
they don't vary among extant platforms is all that matters (if that).
(Right now they are all defined as int, unsigned, or short; those are
ambiguous in theory, but definitively 32, 32, and 16 bits in practice.)
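
In C terms, what those purpose-specific names bottom out in today is
something like this (simplified for illustration; the authoritative
definitions are in hurd_types.defs and the system headers):

    /* Simplified illustration of the point above: the purpose-specific
       names are built on int/unsigned/short, which are ambiguous in
       theory but fixed at 32/32/16 bits on every extant platform.
       These are not the verbatim definitions.  */
    typedef unsigned int uid_t;  /* "unsigned": 32 bits in practice */
    typedef int pid_t;           /* "int": 32 bits in practice */
    typedef int off_t;           /* likewise 32 bits in practice */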

The MiG presentations themselves use types such as mach_msg_type_number_t
and mach_msg_id_t, which vary by machine (i.e. they are natural_t).  Some of
our interfaces (including struct layouts) that refer to values from Mach
use those types as well, and so vary by machine.  That's just the way the
cookie crumbles if we want to refer to the natural uninterpreted Mach
values instead of having a widened universal set of constants in all our
interfaces.
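
To make "vary by machine" concrete, consider a hypothetical struct in
one of those interfaces (invented for illustration, not an actual Hurd
structure):

    /* Hypothetical illustration: a struct that carries raw Mach values
       inherits their machine-dependent widths.  With natural_t tied to
       the word size, these fields are 32 bits on a 32-bit machine and
       64 bits on a 64-bit one, so the layout differs between the two.  */
    typedef unsigned long natural_t;  /* word-sized in this sketch */
    typedef long integer_t;
    typedef integer_t mach_msg_id_t;
    typedef integer_t mach_msg_type_number_t;

    struct example_info  /* invented name */
      {
        mach_msg_id_t msg_id;               /* width follows the word */
        mach_msg_type_number_t data_count;  /* likewise */
      };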

Frankly, I am not really concerned about this question of type sizes
matching up on different machines.  If we ever have a netmsgserver kind
of reality, I find it entirely acceptable for it to be based on the OSF
Mach style of "self-describing to receiver" instead of "self-describing
to transport".  (That style seems to be the only thing anyone would
consider implementing on an IPC system like L4's.)  That is, rather than
every fine-grained item being self-describing and genericized at the
transport layer, the bits in a message are not examined by the transport
(except for special things like ports or VM); instead, every message
contains enough information about the encodings (word size, byte order,
etc.) used by its sender for an intelligent stub, tailored to the
particular RPC, to do the appropriate transformations.
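
A minimal sketch of what "self-describing to receiver" might look like,
with invented names (this is not OSF Mach's actual record layout):

    #include <stdint.h>

    /* Invented sketch: the transport leaves the payload bits alone;
       each message carries a small record describing the sender's
       encodings, and the receiving stub (which knows the RPC's
       argument layout statically) does any needed transformations.  */
    struct msg_encoding
      {
        uint8_t byte_order;  /* e.g. 0 = little-endian, 1 = big-endian */
        uint8_t word_size;   /* sender's natural word size in bytes */
        uint8_t char_rep;    /* sender's character representation */
        uint8_t pad;
      };

    /* A stub that statically knows a field is a 32-bit integer swaps
       it only when the sender's byte order differs from the local one: */
    static uint32_t
    decode_u32 (uint32_t x, const struct msg_encoding *enc,
                uint8_t local_byte_order)
    {
      if (enc->byte_order == local_byte_order)
        return x;
      return ((x >> 24) | ((x >> 8) & 0xff00)
              | ((x << 8) & 0xff0000) | (x << 24));
    }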

Some interfaces are for things like "amount of memory" or "words that
were in registers".  On a 64-bit machine, the amounts transferred can
actually exceed 4GB, so int32 for i/o amounts and the like is wrong.
For those cases, and for registers and such, it would be silly to widen
natural words to 64 bits on a 32-bit machine.
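
For instance (a trivial demonstration of the truncation, not any actual
interface):

    #include <stdint.h>
    #include <stdio.h>

    /* Trivial demonstration of the point above: an i/o amount that is
       representable in a 64-bit word does not survive being squeezed
       into a 32-bit field.  */
    int
    main (void)
    {
      uint64_t amount = 6ULL * 1024 * 1024 * 1024;  /* 6GB transferred */
      uint32_t narrow = (uint32_t) amount;          /* int32-sized field */
      printf ("actual %llu, truncated %u\n",
              (unsigned long long) amount, narrow);
      /* prints: actual 6442450944, truncated 2147483648 */
      return 0;
    }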

I think all of this is much better handled by intelligence in the stubs
generated by the IDL compiler than by intelligence in the transport layer
or rigid constraint on the protocol definitions.


