Re: [bug #33138] .PARLLELSYNC enhancement with patch

From: Frank Heckenbach
Subject: Re: [bug #33138] .PARLLELSYNC enhancement with patch
Date: Tue, 23 Apr 2013 21:41:22 +0200

David Boyce wrote:

> The first thing is get the word "lock" out of your mind because we aren't
> really locking anything. Yes, that API is in use but it's only to create a
> semaphore or baton. Nobody is ever prevented from doing anything. It just
> happens that on Unix the most portable (i.e. oldest) way of implementing a
> semaphore is with the advisory locking API. All cooperating processes agree
> not to proceed unless and until they are able to acquire the exclusive lock
> on a shared file descriptor, but it's not necessary to ever actually write
> anything to that descriptor.

Just to clarify: It's not necessary to write anything to that
descriptor in order for the locking to work. We do actually write to
the descriptor when we have the lock, but that's just an
implementation detail, i.e. the lock could be something else, as
long as the different make instances agree not to write to the
descriptor without holding the lock.

> You're right that simply writing to temp files and dumping everything at
> once when the job finished would be likely to reduce the incidence of
> garbling even without the semaphore, but not to zero.
> It may be that the locking of stdout is only useful on Unix due to the fact
> that it's inherited into child processes. I don't know what Paul or Frank
> is thinking, and as mentioned I haven't looked at the current version, but
> my thinking originally was that Windows could easily handle this using its
> own far richer set of semaphore/locking APIs. I'd actually expect this to
> be easier and more natural on Windows than Unix. All that's required is to
> choose a semaphore to synchronize on, dump output to temp files, and copy
> it to stdout/stderr only after acquiring the semaphore. And remove the temp
> files of course.

Yes, as I wrote in another mail, even a completely global semaphore
should do. Sure, it excludes much more than necessary, but since
the critical section is very short, this shouldn't hurt much. (In
other words, if make jobs produce such huge output that copying it
around takes a significant amount of time, nobody's ever going to
read that output anyway, or someone will post-process it, which
will take much longer than the copying anyway.)

Indeed, as David wrote, under Unix, locking stdout/stderr is most
straightforward because it's available without further setup.
Incidentally, this way of locking is also as fine-grained as
possible (considering recursive makes). However, as I wrote, this
fine-grained locking is not really necessary, so it's not worth
worrying about replicating it on Windows if this causes trouble.

> On Tue, Apr 23, 2013 at 10:50 AM, Eli Zaretskii <address@hidden> wrote:
> > Please tell me that nothing in this feature relies on
> > 'fork', with its copying of handles and other data structures.

All it requires is inheriting the redirected stdout/stderr to child
processes. This was already possible under DOS (with the exception
that, since there was no fork, you had to redirect in the parent
process, call the child, then restore the redirection in the
parent, IIRC).

It's just like shell redirections, i.e. if you do "foo > bar & baz",
the stdout of foo and all processes called by foo is redirected to a
file called bar, while baz and the rest of the shell continue
running with their original stdout. If that's the same under Windows
(at least using bash or a similar shell; no idea what Windows's own
shell does), there should be no problem (or the bash sources should
reveal what needs to be done). Perhaps you know all of this already
and perhaps it's trivial, or perhaps it's impossible ... (I really
don't know how different things are under Windows.)

> > In an old thread, Paul explained something similar:
> >
> >     > David, can you explain why you needed to lock the files?  Also, what
> >     > region(s) of the file you are locking?  fcntl with F_WRLCK won't work
> >     > on Windows, so the question is how to emulate it.
> >
> >     David wants to interlock between ALL instances of make printing output,
> >     so that even during recursive makes no matter how many you have running
> >     concurrently, only one will print its output at a time.
> >
> >     There is no specific region of the file that's locked: the lockfile is
> >     basically a file-based, system-wide semaphore.  The entire file is
> >     "locked"; it's empty and has no content.
> >
> > Assuming this all is still basically true,

It's almost still true. As written above, we now don't use an
extra, empty file for locking, but stdout itself. Otherwise it's
still so: we don't lock a particular region of the file, but use
the whole file as a semaphore.

> > I guess I still don't
> > understand what exactly is being locked and why.  E.g., why do we only
> > want to interlock instances of Make, but not the programs they run?
> > Also, acquire_semaphore is used only in sync_output, which is called
> > only when a child exits.  IOW, nothing is locked while the child
> > runs, only when its output is ready.

As David wrote, that's necessary to preserve parallelism.

> > In addition, we are locking stdout.  But doesn't each instance of Make
> > have, or can have, its own stdout?  If so, how will the interlock
> > work?

They can have their own stdout, in particular with the
"--output-sync=make" option. But that's actually the harmless case:
Each sub-make runs with its stdout already redirected to a temp file
by the main make. In turn, it redirects the stdout of its children
to separate temp files, and when they are done, collects the data to
its stdout, i.e. the temp file from the main make. When the
sub-make is finished, the main make collects its output to the
original stdout. So unless I'm mistaken, no locking is actually
required in
this case.

It is required with "--output-sync=target" when all the recursive
makes share the original stdout and try to copy their children's
output to it, possibly at the same time.

Another situation you may be thinking of is when a recipe explicitly
redirects the stdout/stderr of a sub-make, like:

        $(MAKE) something > bar

Then, the output redirection bypasses the temp file we have just set
up for the sub-make, but that's all right since it writes to a
different place anyway. Of course, the output-sync option is passed
down, so the sub-make will again synchronize its children's output
to bar by using its own temp files and locking (with the current
code, it will then lock on bar; but again, if it's easier to share
a global lock, that's also fine; it may just needlessly wait a few
microseconds while some other make instance writes something else
somewhere else, but that's no big deal really).

Eli Zaretskii wrote:

> > So in addition to the temp file change above, you ALSO need a way to
> > synchronize the use of the single resource (stdout) that is being shared
> > by all instances of recursive make.  On UNIX we have chosen to use an
> > advisory lock on the stdout file descriptor: it's handy, and it's the
> > resource being contended for, so it makes sense.
> I still don't know how does Make achieve on Unix the interlocking with
> its sub-Make's.  Is it because the lock is inherited as part of fork?

The fd is inherited as part of fork. Each make instance that needs
to takes a lock on the fd. The lock is only held briefly (in
sync_output()), and no children are forked while it's held, so
locks are never inherited.

> If so, we will need a special command-line argument on Windows to pass
> the name of the mutex,

This may well be the case (probably similar to how the jobserver
info is passed down).

> (I wish the design and implementation of this feature were less
> Posix-centric...)

The implementation may be (though it's only the two functions,
acquire_semaphore() and release_semaphore(), that would have to be
completely replaced; note, again, that the fact that the stdout or
stderr fd also serves for locking is just an implementation detail
and not central to the design). Otherwise, I don't see how the
design could be much less Posix-centric.
