Re: Parallel make

From: Kruppa, Jason
Subject: Re: Parallel make
Date: Wed, 29 Apr 2015 21:55:35 -0500

Last time I looked at distcc, it still ran the c++ preprocessor on the main 
machine (to eliminate the risk of different machines having different 
configurations), and then sent the preprocessed files to the various machines 
at its disposal.  This, in my experience, still does a huge portion of the 
build on the machine where make is invoked.

There are a number of things you can look at for reducing compile times.

You mention the job server capability as if it is new, which probably means you 
build on Windows machines.  My build performance on Linux is 3x faster via NFS 
than my Windows builds are on local disk, and network builds on Windows are 
much slower than local disk.  We found that when the files are on a local disk, 
Windows will cache the file contents in RAM until RAM is full, which 
effectively lets you keep your entire source tree in RAM.  I was unable to get 
the network share to behave this way on the Windows machine, likely because 
Windows does not assume it has exclusive access to the network file system.  To 
work around this in our Windows environment, we copy the source to a local 
disk, build via a subst'd drive, and then copy the build to the network for 
consumption by the team at large.
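The copy-local, build, publish pattern above can be sketched roughly as below. This is a toy illustration, not our actual scripts: all paths are temp directories created for the demo, and the "build" step is a stand-in command (on Windows the local directory would be the subst'd drive, and SRC/PUB would be network shares).

```shell
#!/bin/sh
# Toy demo of the copy-local / build / publish pattern.  All paths are
# stand-ins created here; the tr command stands in for the real build.
set -e
SRC=$(mktemp -d)    # pretend network share holding the source
LOCAL=$(mktemp -d)  # fast local disk, cached in RAM by the OS
PUB=$(mktemp -d)    # where the team picks up finished builds

echo 'hello' > "$SRC/input.txt"                      # stand-in source file
cp -a "$SRC/." "$LOCAL/"                             # 1. pull source to local disk
tr a-z A-Z < "$LOCAL/input.txt" > "$LOCAL/out.txt"   # 2. "build" on local disk
cp -a "$LOCAL/out.txt" "$PUB/"                       # 3. publish for the team
cat "$PUB/out.txt"
```

The point is that every read and write during step 2 hits the fast, RAM-cached local disk; the network is only touched once at each end.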

We also found that our Windows build performance did not scale well between 
24-core and 48-core machines.  Some resource is pegged that keeps the build 
time from dropping further.

We ran into similar things on Linux due to NFS, autofs, and NFS via AppArmor 
not scaling well when 100 compilers are searching for header files through a 
long list of source dirs.  Recent kernels have eliminated most of these 
spinlocks, but symlinks are still not scalable, and having them in your source 
dirs will kill performance if you've got enough cores.
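A quick way to find out whether symlinks are lurking in a source tree is a `find -type l`; the snippet below builds a tiny demo tree (the `demo` directory name is made up) so the command has something to report.

```shell
# Build a demo tree containing one symlink, then count symlinks in it;
# on a real source tree you would just run the find over your dirs.
mkdir -p demo/src
ln -sf src demo/link
find demo -type l    # any output here means symlinks are present
```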

I worked with someone today who thought their Linux build system was utilizing 
a precompiled header.  It wasn't actually being used, and ensuring the pch was 
getting used resulted in an immediate 33% reduction in build time.  Turning off 
the updating of directory and file access times on the 80-core box provided us 
another 50% reduction.  These two tweaks reduced a 12-minute build to a 
4-minute build.  Reducing the directory depth of the build should give us 
further improvement.

I hope some of these lessons are applicable to you.  Maybe faster builds are 
available to you without developing a distributed build system.  Good luck!

> On Apr 29, 2015, at 3:20 PM, Paul Smith <address@hidden> wrote:
>> On Wed, 2015-04-29 at 13:50 -0600, Ryan P. Steele wrote:
>> The multithreaded version of make (-j#) is wonderful, and we have made
>> great use of it.  Because we're dealing with some very large code, 
>> however, it would be great to be able to parallelize compilation over 
>> multiple machines.  I can't seem to find any option for doing this, 
>> though.  Does such functionality exist?  It would appear to be a
>> fairly straightforward extension of multithreading, especially on a
>> network file system, but thus far, we haven't been able to make it
>> work.  Any help would be appreciated.
> Just to be clear, GNU make is not multithreaded.  It simply spawns
> multiple processes and lets them run in parallel.
> GNU make has no built-in capability to use multiple machines:
> conceptually it may be a straightforward extension but the effort needed
> to communicate between multiple systems over a network, send and receive
> results reliably, kill jobs when someone stops the main make process
> with ^C, etc. is FAR out of GNU make's current wheelhouse.
> You can get a very cheap implementation, if you're willing to live with
> many prerequisite configuration requirements such as a networked
> filesystem, SSH access that doesn't require a password, etc. by writing
> your own script to forward the job via SSH and setting the GNU make
> SHELL variable to point to your script.
> Luckily, if you are building C or C++ code someone has already done all
> the necessary work for you.  I recommend you investigate the distcc
> package: https://code.google.com/p/distcc/
> Cheers!
> _______________________________________________
> Bug-make mailing list
> address@hidden
> https://lists.gnu.org/mailman/listinfo/bug-make
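Paul's SHELL-over-SSH trick above can be sketched as follows. This is a hedged illustration, not a tested setup: the host names are invented, it assumes a shared filesystem and passwordless ssh, and host selection is a naive random pick. Make invokes `$(SHELL) -c 'recipe'` for each recipe line, so the wrapper strips the `-c` and forwards the recipe.

```shell
# Write a wrapper script that forwards each make recipe to a random
# build host over ssh (hosts are made up; RSH is overridable so the
# wrapper can be exercised locally without real ssh access).
cat > remote-shell.sh <<'EOF'
#!/bin/sh
HOSTS="build1 build2 build3"              # made-up build hosts
RSH=${RSH:-ssh}                           # swap in another runner for testing
host=$(printf '%s\n' $HOSTS | shuf -n 1)  # crude random load balancing
[ "$1" = "-c" ] && shift                  # make runs: $(SHELL) -c 'recipe'
exec "$RSH" "$host" "cd '$PWD' && $1"     # rerun the recipe on the remote host
EOF
chmod +x remote-shell.sh
# Then:  make SHELL="$PWD/remote-shell.sh" -j16
```

As Paul notes, this only works because the shared filesystem makes every host see the same tree, and it inherits none of distcc's robustness around failures or interrupts.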
