Hello
Is this really a problem for tcc? An old version of VC produces the
same sizes as tcc. The spec seems to say (not sure I'm reading this
right; it's the first time I've read the spec):
"An implementation may allocate any addressable storage unit large
enough to hold a bit-field ... snip ... the order of allocation of
bit-fields within a unit (high-order to low-order or low-order to
high-order) is implementation-defined. The alignment of the addressable
storage unit is unspecified."
This seems to suggest that each implementation can do what it wants
with bit-fields, and that passing structs containing them between code
built by different compilers is probably not portable.
Having said all that, I'm not overly worried if it gets changed; it
just seems like a risk for something that might not be broken. And I am
not an expert on C compiler internals. I do pass structures a lot
between tcc and other compilers, but they are all carefully crafted
with PACK directives/pragmas to ensure an exact memory layout, and I
don't use bit-fields in them - hence my interest in this.
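For what it's worth, here is a minimal sketch of the kind of packing I
mean. The struct and its members are made up for illustration;
#pragma pack(push/pop) is honored by gcc, MSVC, and (as far as I know)
tcc, but it is worth verifying on any compiler you target:
--------%<--------
#include <stdint.h>
#include <stdio.h>

/* pack(1) removes all padding, so every compiler that honors the
 * pragma produces the same byte-for-byte layout. */
#pragma pack(push, 1)
struct wire_msg {        /* hypothetical shared struct */
    uint8_t  kind;
    uint32_t length;
    uint8_t  flags;
};
#pragma pack(pop)

int main(void) {
    /* Prints 6 (1 + 4 + 1); without the pragma this would typically
     * be 12 because of alignment padding around 'length'. */
    printf("wire_msg size: %zu\n", sizeof(struct wire_msg));
    return 0;
}
-------->%--------
Note that even pack(1) does not pin down bit-field layout, which is one
more reason I keep bit-fields out of these shared structs.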
Cheers,
Richard
David Mertens wrote:
Hello everyone,
I recently uncovered some segfaulting code while compiling macros that
manipulate certain Perl structs on 64-bit Linux. I boiled the problem
down to a discrepancy between how tcc and gcc determine the size needed
by a series of bit-fields. The tcc-compiled function would get the Perl
interpreter struct produced by gcc-compiled code, then reach into the
wrong memory slot. A reduced example is provided below.
Question 1: Would anybody be opposed to changing tcc's behavior
to match gcc's behavior here? This could lead to binary incompatibility
with object code previously compiled with tcc, but that seems to me
highly unlikely to be a real problem for anyone.
Question 2: Does anybody know tccgen.c well enough to fix this?
I can work on it, but if anybody knows exactly where this goes wrong,
it would save me a few hours.
--------%<--------
#include <stdint.h>
#include <stdio.h>
struct t1 {
    uint8_t op_type:1;   /* bit-field declared with an 8-bit type */
    uint8_t op_flags;
};
struct t2 {
    uint32_t op_type:1;  /* bit-field declared with a 32-bit type */
    uint8_t op_flags;
};
struct t3 {
    unsigned op_type:1;  /* plain unsigned, i.e. 32-bit unsigned int */
    char op_flags;
};
int main(void) {
    /* sizeof yields a size_t, so %zu is the correct format. */
    printf("t1 struct size: %zu\n", sizeof(struct t1));
    printf("t2 struct size: %zu\n", sizeof(struct t2));
    printf("t3 struct size: %zu\n", sizeof(struct t3));
    return 0;
}
-------->%--------
With tcc, this prints:
t1 struct size: 2
t2 struct size: 8
t3 struct size: 8
With gcc, this prints:
t1 struct size: 2
t2 struct size: 4
t3 struct size: 4
This suggests that with tcc, the number of bytes given to a series of
bit-fields in a struct depends upon the declared integer type of the
bit-field: the field appears to be allotted a full storage unit of that
type, with the next member placed after it. In particular, plain old
"unsigned" means "unsigned int", which is 32 bits, so t2 and t3 behave
alike. gcc, by contrast, packs the following one-byte member into the
bit-field's storage unit, so the two layouts are incompatible.
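To see the discrepancy more directly, one can print the offset of
op_flags (a small diagnostic added here for illustration, not part of
the original test case):
--------%<--------
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};

int main(void) {
    /* gcc packs op_flags into the bit-field's 4-byte storage unit,
     * so it reports offset 1; tcc appears to start op_flags after
     * the whole unit, at offset 4, which explains the larger size. */
    printf("op_flags offset: %zu\n", offsetof(struct t2, op_flags));
    return 0;
}
-------->%--------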
The relevant code is, I think, in tccgen.c's struct_decl. However, I
can't quite tease apart where the declaration comes in and how it
affects the struct size calculation.
David
--
"Debugging is twice as hard as writing
the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." -- Brian Kernighan