bug-gnulib

Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."


From: Robert Dewar
Subject: Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."
Date: Sun, 31 Dec 2006 09:22:22 -0500
User-agent: Thunderbird 1.5.0.9 (Windows/20061207)

Vincent Lefevre wrote:

>> My point was that if you see this in a source program, it is in
>> fact a possible candidate for code that can be destroyed by
>> the optimization.
>
> Well, only for non-portable code (i.e. code based on wrap). I also
> suppose that this kind of code is used only to check for overflows.

No, you suppose wrong: this is an idiom for a normal range test. If
you have

   if (a > x && a < y)

you can often replace this by a single test with a wrapping subtraction.
As Richard said, you should do this unsigned, but I can easily imagine
those who are accustomed to signed arithmetic wrapping not bothering to
do so.
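To make the idiom concrete, here is a minimal sketch (the function names
and test values are mine, not something from the thread). The
single-comparison form does its subtraction in unsigned arithmetic,
where wrap-around is defined by C, so it is portable:

   #include <assert.h>

   /* Straightforward range test, x < a < y: two comparisons. */
   static int in_range_plain(int a, int x, int y)
   {
       return a > x && a < y;
   }

   /* Hand-optimized form: a single comparison.  The subtraction is done
      in unsigned arithmetic, whose wrap-around is well defined, so any
      a <= x or a >= y lands outside [0, y-x-2] and the test fails.  */
   static int in_range_wrapped(int a, int x, int y)
   {
       return (unsigned) a - (unsigned) x - 1u
              < (unsigned) y - (unsigned) x - 1u;
   }

   int main(void)
   {
       assert(in_range_plain(7, 5, 10) == in_range_wrapped(7, 5, 10));
       assert(in_range_plain(5, 5, 10) == in_range_wrapped(5, 5, 10));
       assert(in_range_plain(-3, 5, 10) == in_range_wrapped(-3, 5, 10));
       return 0;
   }

Write the same subtraction with plain signed ints and the test only
works if signed overflow wraps, which is exactly the kind of code the
optimization under discussion can silently break.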

And that's the trouble: this is an optimization which does improve
performance but may destroy existing code, and the very example you
gave to show improved performance is also a nice illustration of why it
may destroy existing code. In fact, the wrap-around range test is a
standard idiom for "hand optimization" of range tests.

> Yes, and the lack of optimization would be even worse.

Well, that's a claim without substantiation. No one has any data that
I know of that shows that the optimization of comparisons like this
is important. Sure, you can concoct an example, but that says nothing
about real-world code.

This is a long thread, but it is on an important subject. I find
that compiler writers (and hardware manufacturers too) tend to
be far too focused on performance, when what is important to the
majority of users is reliability.

Yes, there may be some people who look at gcc, try their code, care
about performance, and come away disappointed that gcc is x% slower.

But just as likely, perhaps more likely, are people with a big body
of code who try it out on gcc and are disappointed to find that it
doesn't work. Sure, it's their fault in a language-lawyer sense, but
they will still vote with their actions and avoid a compiler that,
from their point of view, does not work.

Getting wrong results fast is of dubious value, where by wrong
I mean wrong from the user's point of view.

I mentioned hardware manufacturers, and a similar phenomenon
happens with fudging IEEE floating-point semantics (e.g. the R10K not
handling denormals right) to get a bit more performance at
the expense of correctness. After all, gcc could enable
-ffast-math by default and still be a completely conforming
C compiler, but we recognize that as a bad idea. It is not so
clear that this is such a different case.
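As an aside, to make the analogy concrete, here is a small illustration
(my example, not something from the thread) of the kind of code
-ffast-math silently changes: it implies -ffinite-math-only, under
which gcc may assume NaNs never occur and fold the classic x != x test
away.

   #include <stdio.h>

   /* Classic self-comparison NaN test (the name is mine): under IEEE 754
      semantics, NaN != NaN is true.                                    */
   static int my_isnan(double x)
   {
       return x != x;
   }

   int main(void)
   {
       volatile double zero = 0.0;
       double x = zero / zero;   /* quiet NaN under IEEE 754 semantics */
       printf("my_isnan(x) = %d\n", my_isnan(x));
       /* Built with plain -O2 on IEEE hardware this prints 1.  With
          -ffast-math, gcc is allowed to assume NaNs never occur and may
          fold the test to 0: faster, and wrong from the user's point of
          view, which is the trade-off described above.                */
       return 0;
   }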





