Subject: RE: Parallel builds across makefiles
Date: Fri, 22 Jul 2016 09:19:47 +0000
Thanks for the suggestions. I have captured a bunch more data, hence the delay. First, I reproduced the behavior with 4.2.1:
gmake realclean >& cleanlog.txt
/usr/bin/gmake-4.2.1 -j32 -Otarget dirsparallel=all release >& gmake421dpa32.log; date
Tue Jul 19 11:24:47 EDT 2016
Tue Jul 19 20:43:17 EDT 2016
gmake realclean >& cleanlog.txt
/usr/bin/gmake-4.2.1 -j32 -Onone dirsparallel=all release >& gmake421dpn32.log ; date
Tue Jul 19 21:01:48 EDT 2016
Tue Jul 19 22:05:40 EDT 2016
Essentially the same as 4.1. This -Otarget run is not just “a little slower,” it is pathologically slow. The 9-hour build time is slower than if I had run the
build with no parallelization whatsoever. (The -Onone time is a bit faster than what I see doing parallelization at just the leaf level.)
Our build is a relatively straightforward “choreographed” build, as follows:
Makefile.libs – This is a list of directories containing C++ code getting built into different relocatable libraries (one per dir)
Makefile.bins – This is a list of directories that link executables
There are O(120) libraries to build, each of which contains 10-50 objects. There are about 30 executables to build. What “dirsparallel=all” does is to turn on parallel make for Makefile.libs, Makefile.bins, etc. The actual build is more complicated, having codegen, unit test, lint, and packaging activities, but it follows this simple model.
The defining characteristic here is a wide fan-out. There are a large number of relatively simple and brief makefiles. There will therefore be a few “middle management” makefiles that are waiting on children to complete (I imagine each of these occupies a make job??).
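The fan-out shape described above can be sketched in shell (purely illustrative; the directory names, object counts, and worker structure are stand-ins for the real build, not taken from it):

```shell
#!/bin/sh
# Illustrative sketch of the wide fan-out: a top-level driver starts
# several "middle management" workers, each of which spawns its leaf
# jobs and then just blocks in wait -- analogous to a sub-make that
# occupies a slot while doing nothing but waiting on its children.
middle() {
    dir=$1
    for obj in a b c; do
        echo "compiling $dir/$obj.o" &   # leaf job (stand-in for g++)
    done
    wait                                  # the "middle manager" blocks here
    echo "$dir built"
}
for d in lib1 lib2 lib3; do
    middle "$d" &
done
wait
echo "all directories done"
```

With -Otarget, each of those leaf jobs would additionally have its output spooled and replayed, which is where the per-job overhead multiplies across the fan-out.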
I have confirmed that my count of “g++” invocations is exactly the same for the two runs (4403).
The TMPDIR variable was not set. I set it to /tmp to ensure that it is on a physical device. (The build and our home directories are NFS-mounted.) The total elapsed time for the “-Otarget” run was then 6 hours. This is enough of a reduction to be a little suspicious, but not much more.
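For anyone wanting to repeat this check, here is a quick way to see what backs the temp directory (a sketch; GNU stat is assumed, and the `nfs` pattern match is just a heuristic):

```shell
#!/bin/sh
# Show where -O's temp files will land and what filesystem backs that
# directory, so a network-mounted TMPDIR can be spotted before a run.
tmp=${TMPDIR:-/tmp}
echo "temp dir: $tmp"
fstype=$(stat -f -c '%T' "$tmp")
echo "filesystem type: $fstype"
case $fstype in
    nfs*) echo "warning: temp dir is network-mounted" ;;
esac
```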
I ran with strace for both the “none” case and the “target” case. The summaries are attached. Unfortunately, I don’t think there is a smoking gun here.
I have also run the “-Otarget” case at a few different job levels (times are hours:minutes):
j8 – 6:34
j32 – 6:00
j256 – 9:49
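Collecting timings like these can be scripted; a minimal sketch with a placeholder build function (substitute the real gmake invocation, including the realclean between runs):

```shell
#!/bin/sh
# Time the same build at several -j levels. build() is a placeholder;
# the real body would be something like:
#   gmake realclean && gmake -j"$1" -Otarget dirsparallel=all release
build() { sleep 0; }
for j in 8 32 256; do
    start=$(date +%s)
    build "$j"
    end=$(date +%s)
    echo "j$j: $((end - start))s"
done
```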
From: David Boyce [mailto:address@hidden]
A couple of suggestions:
1. Check your value of TMPDIR if any. All -O is doing is redirecting output into a temp file and dumping it later. Effectively it turns command "foo" into "foo > $TMPDIR/blah; cat $TMPDIR/blah; unlink $TMPDIR/blah". This is why it seems almost impossible for it to slow things down appreciably, and a slow temp dir device might be one explanation. Along similar lines you could try "TMPDIR=/tmp make ..."
2. Try strace. It has a mode which will timestamp each system call and another which prints a table of system calls used and how much time it took. One of these will probably be instructive, especially when compared with the same without -O.
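The -O mechanism described in point 1 can be modeled with a small wrapper (a simplification: real make keeps the temp file open and spools per target or per line, and the function name here is illustrative):

```shell
#!/bin/sh
# Rough model of what -O does around each recipe command: run it with
# stdout/stderr redirected to a temp file, then dump and delete the
# file. A slow TMPDIR device makes both the redirect and the replay slow.
run_synced() {
    tmpfile=$(mktemp "${TMPDIR:-/tmp}/make-out.XXXXXX")
    "$@" > "$tmpfile" 2>&1   # command writes to temp storage, not the tty
    cat "$tmpfile"           # output replayed in one piece afterward
    rm -f "$tmpfile"
}
run_synced echo "hello from a recipe"
```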
On Sun, Jul 17, 2016 at 12:22 PM, Gardell, Steven <address@hidden> wrote: