Re: Precompiled headers do not speed compilation up?


From: Bernd Strieder
Subject: Re: Precompiled headers do not speed compilation up?
Date: Wed, 19 Nov 2008 09:57:09 +0100
User-agent: KNode/0.10.9

Hello,

Markus Dehmann wrote:

> I'm not sure what you mean. It is standard to compile each .cc file
> separately into an object file. So you'd usually not give more than
> one .cc file as g++ argument. Are you saying that, in this standard
> situation where there is only one .cc file on the command line, there
> will never be an advantage from precompiled headers? Then they really
> seem useless to me. But maybe I misunderstood.

If there are many .cc files on the command line they are still
compiled as separate compilation units; the compiler driver launches
one compiler process per file (e.g. "g++ -c a.cc b.cc" runs the actual
compiler once for a.cc and once for b.cc).

I meant that if the precompiled header itself is used in only a few
compilation units, then precompiling and saving it once, plus loading
it in those few compilation units, might take longer than just reading
the original headers there. Precompiled headers are significant in
size, so I/O time seems to matter. Precompilation is an initial cost
that can only be amortized if loading the precompiled header takes
less time than reading the individual headers, and if the precompiled
header is used often enough. Picture the two cost curves as straight
lines in the number of uses n: with a precompiled header the total is
P + n*L (one precompilation P plus a load L per compilation unit),
without it n*H (reading the headers directly each time), so it only
pays off once n > P/(H - L), and only if L < H in the first place.
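
As a concrete sketch of what that initial cost looks like with g++
(file names here are made up for illustration): the header is
precompiled explicitly once, and any later compile that includes it
first picks up the .gch automatically, provided the compile options
match.

  // common.h -- hypothetical project-wide header.  Precompile it once:
  //
  //   g++ -x c++-header common.h     (writes common.h.gch next to it;
  //                                   use the same options the .cc
  //                                   files are compiled with)

  // foo.cc -- includes common.h before anything else, so a plain
  //
  //   g++ -c foo.cc
  //
  // loads common.h.gch instead of re-parsing the header text.
  #include "common.h"

  int foo() { return 42; }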

If loading the precompiled header always takes longer than reading
the original headers, you will never get a win out of precompiled
headers. This happens very easily if you use some "all.h" strategy to
make precompilation easy and precompile that one file. Then every
compilation unit is burdened with reading the whole big fat
precompiled header, and my bet is that in most projects out there this
is a net loss.
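
The kind of setup I mean looks roughly like this (a made-up all.h and
a made-up tiny.cc); every compilation unit, however small, pays for
loading the whole precompiled blob:

  // all.h -- hypothetical "include the world" header, precompiled
  // into one big all.h.gch
  #ifndef ALL_H
  #define ALL_H
  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>
  // ... plus every project header ...
  #endif

  // tiny.cc -- needs only <string>, yet still has to load the whole
  // fat all.h.gch before its few lines are compiled:
  #include "all.h"

  std::string greeting() { return "hello"; }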

Maybe the situation could be improved if that information could be
kept in RAM by means of some gcc daemon.

The other important technique to improve build times is to #include
many .cc files into some all.cc and to compile only that, as sketched
below. By means of the include guards every header pulled in is read
and parsed only once; if the .cc files share many of the same
includes, the net win is considerable, and as an added gift the
compiler gets more opportunities to optimize and inline across the
files. Without enough RAM you will usually get into trouble here,
though, because some optimization techniques in gcc have superlinear
complexity in the size of the compilation unit. So again there is a
hard optimization problem lurking here: partitioning your project into
compilation units with the best overall compilation time.
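
A minimal sketch of that all.cc idea (the individual .cc file names
are made up): only the amalgamated file is handed to the compiler, and
any header the included files share is parsed a single time, because
its include guard skips the body on the second and later inclusions.

  // all.cc -- hypothetical amalgamation unit; compile this one file
  //
  //   g++ -c all.cc
  //
  // instead of compiling widget.cc, gadget.cc and main.cc separately.
  #include "widget.cc"
  #include "gadget.cc"
  #include "main.cc"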

Bernd Strieder


