
From: Nick Bowler
Subject: Re: Automake's file locking (was Re: Autoconf/Automake is not using version from AC_INIT)
Date: Thu, 28 Jan 2021 15:30:15 -0500

On 2021-01-28, Zack Weinberg <zackw@panix.com> wrote:
> There is a potential way forward here.  The *only* place in all of
> Autoconf and Automake where XFile::lock is used, is by autom4te, to
> take an exclusive lock on the entire contents of autom4te.cache.
> For this, open-file locks are overkill; we could instead use the
> battle-tested technique used by Emacs: symlink sentinels.  (See
> https://git.savannah.gnu.org/cgit/emacs.git/tree/src/filelock.c .)
>
> The main reason I can think of, not to do this, is that it would make
> the locking strategy incompatible with that used by older autom4te;
> this could come up, for instance, if you’ve got your source directory
> on NFS and you’re building on two different clients in two different
> build directories.  On the other hand, this kind of version skew is
> going to cause problems anyway when they fight over who gets to write
> generated scripts to the source directory, so maybe it would be ok to
> declare “don’t do that” and move on.  What do others think?

I think it's reasonable to expect concurrent builds running on different
hosts to work if and only if they are in different build directories and
no rules modify anything in srcdir.  Otherwise "don't do that."

If I understand correctly, the issue at hand is that multiple rebuild
rules, run by a single parallel make invocation, each invoke autom4te
concurrently, and since file locking didn't work, they clobber each
other's cache and things go wrong.

I believe mkdir is the most portable mechanism to achieve "test and set"
type semantics at the filesystem level.  I believe this works everywhere,
even on old versions of NFS that don't support O_EXCL, and on filesystems
like FAT that don't support any kind of link.
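As a rough sketch (the lock directory name and retry loop here are my own
illustration, not anything autom4te actually does), the mkdir trick looks
something like this in portable shell:

```shell
#!/bin/sh
# Sketch of mkdir as a test-and-set primitive.  "autom4te.cache.lock" is
# an illustrative name.  mkdir either creates the directory and succeeds,
# or fails atomically because another process already holds the lock.
LOCKDIR=autom4te.cache.lock

acquire() {
  until mkdir "$LOCKDIR" 2>/dev/null; do
    sleep 1    # busy-wait; a real tool would also want a timeout
  done
}

release() {
  rmdir "$LOCKDIR"
}

acquire
echo "lock held; critical section would run here"
release
```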

The challenge with alternate filesystem locking methods compared to
proper file locks is that you need a way to recover when your program
dies before it can clean up its lock files or directories.
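One common (if imperfect) recovery scheme, sketched here purely as an
illustration and not as anything autom4te implements, is to record the
owner's PID inside the lock directory so a later run can steal locks
whose owner has died:

```shell
#!/bin/sh
# Hypothetical stale-lock recovery on top of the mkdir lock.  Note that
# kill -0 can only probe processes on the local host, so this does not
# help across NFS clients, and the steal itself is racy if two
# recovering processes run at once.
LOCKDIR=autom4te.cache.lock

if mkdir "$LOCKDIR" 2>/dev/null; then
  echo $$ > "$LOCKDIR/pid"            # we took the lock cleanly
else
  owner=$(cat "$LOCKDIR/pid" 2>/dev/null)
  if [ -n "$owner" ] && kill -0 "$owner" 2>/dev/null; then
    echo "lock held by live process $owner; backing off"
  else
    echo "recovering stale lock"
    echo $$ > "$LOCKDIR/pid"
  fi
fi
```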

Could the issue be fixed by just serializing the rebuild rules within
make?  This might be way easier to do.  For example, we can easily
do it in NetBSD make:

  all: recover-rule1 recover-rule2
  clean:
        rm -f recover-rule1 recover-rule2

  recover-rule1 recover-rule2:
        @echo start $@; sleep 5; :>$@; echo end $@

  .ORDER: recover-rule1 recover-rule2

Heirloom make has a very similar mechanism that does not guarantee
relative order:

  .MUTEX: recover-rule1 recover-rule2

Both of these will ensure the two rules are not run concurrently by a
single parallel make invocation.

GNU make has order-only prerequisites.  Unlike the prior methods, this
is trickier to do without breaking other makes, but I have used a method
like this one with success:

  # Goal: rule1_seq expands to "|recover-rule1" under GNU make, whose
  # $(.FEATURES) variable contains "order-only", and to the empty string
  # under other makes, where $(.FEATURES) expands to nothing.
  features = $(.FEATURES) # indirection works around a problem with old FreeBSD make
  orderonly = $(findstring order-only,$(features))
  rule1_seq = $(orderonly:order-only=|recover-rule1)

  recover-rule2: $(rule1_seq)

I don't have experience with parallel builds using other makes.

Cheers,
  Nick


