bug-gnulib

Re: a saner bootstrap script


From: Stefano Lattarini
Subject: Re: a saner bootstrap script
Date: Mon, 26 Sep 2011 12:27:06 +0200
User-agent: KMail/1.13.7 (Linux/2.6.30-2-686; KDE/4.6.5; i686; ; )

On Monday 26 September 2011, Gary V wrote:
> Hi Stefano,
> 
> On 25 Sep 2011, at 22:55, Stefano Lattarini wrote:
> > On Thursday 22 September 2011, Gary V wrote:
> >> Anyone:
> >> 
> >> It's beginning to look as though all this work is, once again, in very
> >> real danger of just slipping quietly through the cracks.
> >> 
> > Hi Gary.  While I don't pretend to really understand the scope and purposes
> > of your script, I might anyway throw in a few observations, ideas and
> > nits; hopefully they will be helpful.
> 
> Thanks for the reviews, all feedback much appreciated :)  I've applied
> them upstream (the copy kept in Zile currently does double-duty as the
> master copy while awaiting adoption into gnulib) wherever I didn't comment
> otherwise below.
>
Do you have a diff file to show what you've changed exactly?  I'd find that
really helpful.

> >> # CDPATH.
> >> (unset CDPATH) >/dev/null 2>&1 && unset CDPATH
> >> 
> > If you are not using `set -e`, you could just call "unset CDPATH" here IMHO.
> 
> In principle, I agree. But for the sake of having shared boilerplate
> code that won't blow up when adopted into a subsequent script, I'm
> loath to tidy it up here and end up scratching my head later if things
> change again.
>
Makes sense.  If anything, the fix should first go in autoconf, so that
we can re-sync from there (but this is very very low priority for both
bootstrap and autoconf).

> >> # func_append VAR VALUE
> >> # ---------------------
> >> # Append VALUE onto the existing contents of VAR.
> >> if (eval 'x=a; x+=" b"; test "x$x" = "xa b"') 2>/dev/null
> >> then
> >>  # This is an XSI compatible shell, allowing a faster implementation...
> >>  eval 'func_append ()
> >>  {
> >>    $debug_cmd
> >> 
> >>    eval "$1+=\$2"
> >>  }'
> >> else
> >>  # ...otherwise fall back to using expr, which is often a shell builtin.
> >>  func_append ()
> >>  {
> >>    $debug_cmd
> >> 
> >>    eval "$1=\$$1\$2"
> >>  }
> >> fi
> >> 
> > Why the empty line after '$debug_cmd'?  IMHO it does not improve
> > readability, but only wastes vertical space.  Ditto for other "one-liner"
> > functions below (e.g., 'func_hookable' and 'func_remove_hook'), but *not*
> > for longer functions (e.g., 'func_add_hook' and 'func_run_hooks').
> 
> For consistency, which I consider quite important.  I don't think it's a
> good idea to add choke points that break one's flow when writing code - the
> more often I have to stop and think "do I apply rule X or rule Y here", the
> more likely I'll lose track of some other important context in my mind.
>
OK, no big deal; it was a cosmetic issue anyway.

> Note also, there are many snippets of code donated from other scripts that
> have been tested for years in other projects (like libtoolize) if at times
> there is something that might look a little odd out of that context.
>
I agree that keeping such snippets as unchanged as possible is a good idea.
The real fix for the style inconsistencies they introduce is to add proper
comments referencing the place from which each snippet has been copied.

> Also, I can't claim complete authorship of every line of code, since the
> bulk of my testing involved making sure this bootstrap did a better job of
> bootstrapping a moderate selection of existing gnulib-bootstrap-using
> projects, and in the process absorbing common features of their
> bootstrap.conf customisations.
>

> >> # func_run_hooks FUNC_NAME [ARG]...
> >> # ---------------------------------
> >> # Run all hook functions registered to FUNC_NAME.
> >> func_run_hooks ()
> >> {
> >>    $debug_cmd
> >> 
> >>    case " $hookable_funcs " in
> >>      *" $1 "*) ;;
> >>      *) func_fatal_error "Error: \`$1' does not support hook funcions.n" ;;
> >>    esac
> >> 
> > Code duplication with 'func_add_hook' above, and with many other functions
> > throughout the script.  Couldn't the logic to determine whether an item is
> > already present in a space-separated list be factored out in its own
> > subroutine?  E.g.,
> > 
> >  # Usage: is_item_in_list ITEM [LIST ..]
> >  is_item_in_list ()
> >  {
> >    item=$1; shift;
> >    case " $* " in *" $item "*) return 0;; *) return 1;; esac
> >  }
> > 
> > Such a refactoring could as well be done in a follow-up patch, though
> > (in fact, I'd personally prefer to do it that way).
> 
> I've avoided using return under the impression that it is not entirely
> portable, although I would be happy to see evidence that there are no Bourne
> shells with function support and yet with broken return support.
>
Well, autoconf-generated configure scripts use "return", and no one has
complained so far, so my guess is that their use is pretty safe.  And if
you are going to worry about museum-piece shells that don't grasp `return',
you should probably also worry about the fact that some of them won't
preserve positional parameters after a call to a shell function; to quote
the autoconf manual:

  Some ancient Bourne shell variants with function support did not reset
  $1, $2, etc, upon function exit, so effectively the arguments of the
  script were lost after the first function invocation. It is probably
  not worth worrying about these shells any more.

And here <http://www.in-ulm.de/~mascheck/bourne/> (what an incredibly
useful resource, BTW!) I read that shell functions (and the "return"
built-in) were introduced as early as the SVR2 shell (1984), although the
positional parameters weren't local in that version yet; that limitation
was only fixed in the SVR3 shell (1986).

> Actually,
> in a couple of places where the code was inspired by the incumbent bootstrap
> script or some repeated bootstrap.conf snippets, there are still some latent
> return statements that I didn't get around to factoring out.
>
Given the above, you should instead start to use `return' more freely and
consistently.

> In the hypothetical follow-up patch, I should definitely either eliminate
> the last few return statements, or if they turn out to be a portable construct
> after all, then make better use of them throughout where some of the code
> to avoid them is unnecessarily tortuous.
>
Yes yes the second approach please :-)
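
To make the intended direction concrete, here is a rough sketch (illustrative
names only, none of this is taken from your script) of the kind of early-return
guard that `return' makes natural:

  func_require_file ()
  {
    test -f "$1" && return 0
    echo "required file \`$1' not found" >&2
    return 1
  }

versus the return-free equivalent, which has to thread its status through the
last command executed:

  func_require_file ()
  {
    test -f "$1" || { echo "required file \`$1' not found" >&2; false; }
  }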

> >>    eval hook_funcs="\$$1_hooks"
> >> 
> >>    # shift away the first argument (FUNC_NAME)
> >>    shift
> >>    func_run_hooks_result=${1+"$@"}
> >> 
> >>    for hook_func in $hook_funcs; do
> >>      eval $hook_func '"$@"'
> >> 
> > Useless use of eval I think, since $hook_func is not expected to contain
> > spaces nor metacharacters.  Simply using:
> >  $hook_func "$@"
> > should be enough.
> 
> You may be right, but the hook functions code is very delicate, and was
> quite tricky to get right.  I arrived at the current implementation after
> careful testing on many platforms and shells, and worry that changing it
> now might upset one of those combinations that I won't discover without
> going through the few days of rigorous testing all over again, which is
> too much work to save a single eval IMHO.
>
OK (even though, for every workaround or non-obvious indirection, I'd really
like to see comments explaining why and where it was needed).

> >>      # store returned options list back into positional
> >>      # parameters for next `cmd' execution.
> >>      eval set dummy "$func_run_hooks_result"; shift
> >> 
> > This (together with the `func_run_hooks_result' assignment above) will
> > not properly preserve spaces in positional arguments:
> > 
> >  $ bash -c '(set "x  y"; a="$@"; eval set dummy "$a"; shift; echo "$@")'
> >  x y
> > 
> > I don't know whether this matters or not, but is something to consider.
> 
> I can't think of a situation where whitespace preservation would be
> important in this case.
>
OK, but I've now noticed that it gets worse.  Your implementation does in
fact break up positional parameters containing whitespace:

 $ bash -c '(set "x y"; a="$@"; eval set dummy "$a"; shift; printf :%s:\\n "$@")'
 :x:
 :y:

and worse again, expands wildcards in the positional parameters:

 $ cd /tmp && touch f1 f2 f3
 $ bash -c '(set "f*"; a="$@"; eval set dummy "$a"; shift; printf :%s:\\n "$@")'
 :f1:
 :f2:
 :f3:

If these limitations are acceptable, you should state them explicitly in
the comments, and could then simplify your code as follows:

  func_run_hooks_result=$*
  ...
  set dummy $func_run_hooks_result; shift
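
And should whitespace/wildcard preservation ever become important here, a
quoting pass in the spirit of your own func_quote_for_eval (further down in
the script) would do the trick.  An untested sketch; only $sed_quote_subst is
taken from your script, the other names are made up:

  sed_quote_subst='s|\([`"$\\]\)|\\\1|g'

  func_save_args ()
  {
    func_save_args_result=
    for my_save_arg in "$@"; do
      my_save_arg=`printf '%s\n' "$my_save_arg" | sed "$sed_quote_subst"`
      func_save_args_result="$func_save_args_result \"$my_save_arg\""
    done
  }

  # Usage:
  #   func_save_args "x  y" "f*"
  #   eval set dummy $func_save_args_result; shift
  #   printf ':%s:\n' "$@"     # prints :x  y: and :f*: intact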

> >> # func_update_translations
> >> # ------------------------
> >> # Update package po files and translations.
> >> func_hookable func_update_translations
> >> func_update_translations ()
> >> {
> >>    $debug_cmd
> >> 
> >>    $opt_skip_po || {
> >>      test -d po && {
> >>        $require_package
> >> 
> >>        func_update_po_files po $package || exit $?
> >>      }
> >> 
> >>      func_run_hooks func_update_translations
> >>    }
> >> }
> >> 
> > I personally find the code flow here quite unclear.  Something like
> > this would be clearer IMHO:
> > 
> >  $opt_skip_po && return
> >  if test -d po; then
> >    $require_package
> >    func_update_po_files po $package || exit $?
> >  fi
> >  func_run_hooks func_update_translations
> > 
> > The same goes for similar usages throughout the script.
> 
> I suppose it's just a matter of which idioms your eyes have become accustomed
> to.  Personally, I find the additional noise of 'if', '; then' and 'fi'
> distracting, and have been happily using the braces idiom (when there's no
> 'else' branch to consider) for many many years as a result (e.g. the shell
> code in libtool).
> 
> If all that code churn and retesting is a prerequisite for acceptance into
> gnulib,
>
This is only for the gnulib maintainers to decide (I'm an "occasional
small contributor" at the very most, so my opinion carries no weight in
this regard).  What I can say is that the current style is very, *very*
confusing to me, and the need to respect it would make me shy away from the
idea of contributing to the bootstrap script.

> then I'll reluctantly go through and change the style to match...
> but that will also prevent cross-patching with donor code in libtool and
> various other scripts I have that share the current idioms.
>
As I've said, occasional style inconsistencies are OK if they are meant to
facilitate syncing with third-party projects (but I'd like to see big noisy
comments about this fact, for every snippet of inconsistent code).

> >> # require_autobuild_buildreq
> >> # --------------------------
> >> # Try to find whether the bootstrap requires autobuild.
> >> require_autobuild_buildreq=func_require_autobuild_buildreq
> >> func_require_autobuild_buildreq ()
> >> {
> >>    $debug_cmd
> >> 
> >>    printf "$buildreq"| func_grep_q '^[      ]*autobuild' || {
> >> 
> > Missing space before `|` here (cosmetic-only issue).
> 
> I've been using this style:
> 
>   produce \
>   |filter1 \
>   |filter2
> 
> productively for at least a couple of decades now (including my scripts
> in libtool and m4), but going through the entire bootstrap script, I see that
> I've been pretty inconsistent for some reason, so I normalized to this format
> everywhere.  Thanks for pointing out the inconsistency.
>
OK; I'm fine with this style, as long as it is used consistently.

> >  Also, this use of
> > printf can cause portability problems with grep, in case the string
> > "$buildreq" is not newline-terminated.  I would suggest to use this instead:
> > 
> >  printf '%s\n' "$buildreq" | func_grep_q '^[         ]*autobuild'
> 
> Nice catch, thanks.  I found a handful of other cases of this, and fixed them
> too.
>
Good!
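
For the archives, the failure mode is easy to demonstrate (behaviour varies:
GNU grep copes with an unterminated last line, while some traditional
implementations silently ignore an incomplete line, so the match can be
missed):

  printf 'autobuild'        | grep '^autobuild'   # last "line" lacks a newline
  printf '%s\n' 'autobuild' | grep '^autobuild'   # always a complete line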

> >> # require_gnulib_submodule
> >> # ------------------------
> >> # Ensure that there is a current gnulib submodule at `$gnulib_path'.
> >> require_gnulib_submodule=func_require_gnulib_submodule
> >> func_require_gnulib_submodule ()
> >> {
> >>    $debug_cmd
> >> 
> >> [SNIP]
> >>
> >>      trap - 1 2 13 15
> >> 
> > Not portable to at least Solaris 10 /bin/sh; quoting the Autoconf manual:
> > 
> >  Posix says that "trap - 1 2 13 15" resets the traps for the specified
> >  signals to their default values, but many common shells (e.g., Solaris
> >  /bin/sh) misinterpret this and attempt to execute a "command" named '-'
> >  when the specified conditions arise.  Posix 2008 also added a requirement
> >  to support "trap 1 2 13 15" to reset traps, as this is supported by a
> >  larger set of shells, but there are still shells like dash that mistakenly
> >  try to execute 1 instead of resetting the traps.  Therefore, there is no
> >  portable workaround, except for "trap - 0", for which "trap '' 0" is a
> >  portable substitute.
> 
> You've lost me.  So, I should write:
> 
>   trap '' 0
>
> instead of:
> 
>   trap - 1 2 13 15
> 
> ?
>
No; the point is that you *can't* portably reset the signal handlers to their
defaults.  I'm not offering a solution here, since I don't know of one.
Maybe you could restructure the code to avoid having to deal with signal
handlers altogether?  Or could you install proper handlers unconditionally?

> That seems like I might as well just miss out the signal resetting altogether
> (effectively what trap '' 0 seems to do anyway), and let the gnulib cleanup
> function fire impotently if those signals are trapped long after the window
> in which it would have been useful.
>
Could be a solution.  Or you might install a more general and flexible
exit trap earlier and unconditionally, and while you are at it, you could
make it user-extensible as well (isn't this the point of your rewrite
of bootstrap? ;-)
Of course, this could safely be done with a follow-up patch.
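
To sketch what I mean (untested, and all the names below are made up rather
than taken from your script): install a single exit trap once, and let
everything else merely register its cleanup work with it:

  exit_hooks=

  func_add_exit_hook ()
  {
    exit_hooks="$exit_hooks $1"
  }

  func_run_exit_hooks ()
  {
    exit_status=$?
    for exit_hook in $exit_hooks; do
      $exit_hook
    done
    exit $exit_status
  }

  trap func_run_exit_hooks 0

  # e.g. the gnulib checkout cleanup would then just do:
  #   func_add_exit_hook func_cleanup_gnulib_checkout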

> >> func_require_package_version ()
> >> {
> >>    $debug_cmd
> >> 
> >>    func_extract_trace AC_INIT
> >> 
> >>    save_ifs="$IFS"
> >> 
> > Useless use of quotes.
> 
> I think this is another style issue.  I'd rather be able to blindly write
> foo="$bar" and know that it will always work the way I expect, to avoid
> breaking the flow of my coding session by adding another stall point where
> I have to stop and think about whether there is any whitespace in the
> literal that needs to be protected, and whether quote marks are spurious
> in each case.
> 
> >> # func_unset VAR
> >> # --------------
> >> # Portably unset VAR.
> >> func_unset ()
> >> {
> >>    { eval $1=; unset $1; }
> >> }
> >> unset=func_unset
> >> 
> > What is the point of this function, if bootstrap does not run under
> > 'set -e'?  Using a simple 'unset' should be good enough, since even
> > in shells where it returns a non-zero exit status for already-unset
> > variables, it does not display any error message (at least not that
> > I know of).  Or are you just deliberately "erring on the side of
> > safety"?
> 
> It's boilerplate code that I have in all my scripts so that I can use
> func_unset without having to stop and think, whether or not the script
> is in set -e mode.
>
Still, future users/maintainers of your script will have to stop and
think about `func_unset', since they are not used to it, and its
purpose might not be clear to them.  At the very least, you should
add a more precise comment to the function definition, like this:

  Portably unset VAR.  In some shells, an `unset VAR' statement leaves
  a non-zero return status if VAR is already unset, which might be
  problematic if the statement is used at the end of a function (thus
  poisoning its return value) or when `set -e' is active (causing even
  a spurious abort of the script in this case).
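
For illustration (re-using your own definition), the difference only shows up
under `set -e' or when the exit status is examined:

  func_unset () { eval $1=; unset $1; }

  set -e
  func_unset CDPATH    # always succeeds, even if CDPATH was never set
  # unset CDPATH       # may return non-zero in some shells when CDPATH is
                       # already unset, aborting the script under `set -e'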

> >> func_cmp_s ()
> >> {
> >>    $CMP "$@" >/dev/null 2>&1
> >> }
> >> func_grep_q ()
> >> {
> >>    $GREP "$@" >/dev/null 2>&1
> >> 
> > Why also redirect the stderr of grep and cmp to /dev/null?  This will
> > hide potential problems due to e.g., incorrect regular expressions or
> > non-existent files ...
> 
> I could rename them to func_cmp_sshhh_dont_make_a_sound ;)
>
I mean, is the silencing of such errors really wanted, or an unintended
side effect?  Feature or bug?  If the former is true, then it should be
explicitly stated in the functions' descriptions IMO.

> >>    $SED -n '/(C)/!b go
> >>        :more
> >>        /\./!{
> >>          N
> >>          s|\n# | |
> >>          b more
> >>        }
> >>        :go
> >>        /^# Written by /,/# warranty; / {
> >>          s|^# ||
> >>          s|^# *$||
> >>          s|\((C)\)[ 0-9,-]*[ ,-]\([1-9][0-9]* \)|\1 \2|
> >>          p
> >>        }
> >>        /^# Written by / {
> >>          s|^# ||
> >>          p
> >>        }' < "$progpath"
> >> 
> >>    exit $?
> >> }
> >> 
> > Is this complexity really warranted by a function whose purpose is
> > simply to print a version message?
> 
> DRY.  All the meta-data is in the header comment, and anything else that
> wants to make use of that meta-data should extract it rather than create
> another copy.  I've been using this code as is in many scripts for many
> years, so it deals with all the corner cases very well.
>
IMVHO, in this case at least, copy & paste and duplication is preferable
to the added complexity your code entails, since even a botched copy
& paste won't cause any real bug.  Anyway, the complexity added by your
approach is pretty limited, so I won't hold my breath on this issue :-)

> >> # func_quote_for_eval ARG...
> >> # --------------------------
> >> # Aesthetically quote ARGs to be evaled later.
> >> # This function returns two values: FUNC_QUOTE_FOR_EVAL_RESULT
> >> # is double-quoted, suitable for a subsequent eval, whereas
> >> # FUNC_QUOTE_FOR_EVAL_UNQUOTED_RESULT has merely all characters
> >> # which are still active within double quotes backslashified.
> >> sed_quote_subst='s|\([`"$\\]\)|\\\1|g'
> >> func_quote_for_eval ()
> >> {
> >>    $debug_cmd
> >> 
> >>    func_quote_for_eval_result=
> >> 
> >>    while test $# -gt 0; do
> >>      case $1 in
> >>        *[\\\`\"\$]*)
> >>          my_unquoted_arg=`printf "$1" | $SED "$sed_quote_subst"` ;;
> >> 
> > Unportable use of sed, in case $1 is not newline-terminated.  Also,
> > potential problems with printf, in case $1 contains `\' characters:
> >  $ a='\t'; printf x"$a"x
> >  x       x
> > Use this instead:
> >  printf '%s\n' "$1"
> > 
> > There might be similar problems in other parts of the script?  I have not
> > checked in great detail ...  You might want to take a better look.
> 
> Thanks, I found and fixed a few.
>
Good!  But then I'm curious to know: how many exactly, and where?  A diff
file would be really helpful and instructive in this regard ;-)

> I inherited these problems from libtool, so I should go back and fix them
> there too when I have time.
> 
> >> # func_show_eval CMD [FAIL_EXP]
> >> # -----------------------------
> >> # Unless opt_silent is true, then output CMD.  Then, if opt_dryrun is
> >> # not true, evaluate CMD.  If the evaluation of CMD fails, and FAIL_EXP
> >> # is given, then evaluate it.
> >> func_show_eval ()
> >> {
> >>    $debug_cmd
> >> 
> >>    my_cmd="$1"
> >>    my_fail_exp="${2-:}"
> >> 
> >>    ${opt_silent-false} || {
> >>      func_quote_for_eval $my_cmd
> >>      eval func_truncate_cmd $func_quote_for_eval_result
> >>      func_echo "running: $func_truncate_cmd_result"
> >>    }
> >> 
> >>    if ${opt_dry_run-false}; then :; else
> >>      eval "$my_cmd"
> >>      my_status=$?
> >>      if test "$my_status" -eq 0;
> >> 
> > Useless use of quotes around $my_status.
> > 
> >>    then :; else
> >>        eval "(exit $my_status); $my_fail_exp"
> >>      fi
> >>    fi
> >> }
> >> 
> 
> Actually, yuck for the mixed styles in that function.  Refactored the last
> block to match the first:
> 
>     ${opt_dry_run-false} || {
>       eval "$my_cmd"
>       my_status=$?
>       test 0 -eq $my_status || eval "(exit $my_status); $my_fail_exp"
>     }
> 
Oh no!  Not these nested `||'!  They obfuscate a perfectly clear logic IMHO;
couldn't you use this instead, please?

    ${opt_dry_run-false} || {
      eval "$my_cmd"
      my_status=$?
      if test $my_status -ne 0; then
         eval "(exit $my_status); $my_fail_exp"
      fi
    }

> I should reflect that back into the upstream copy in libtool too at some
> point.
> 
> >> # func_get_version APP
> >> # --------------------
> >> # echo the version number (if any) of APP, which is looked up along your
> >> # PATH.
> >> func_get_version ()
> >> {
> >>    $debug_cmd
> >> 
> >>    app=$1
> >> 
> >>    { $app --version || $app --version </dev/null; } >/dev/null 2>&1 \
> >>      || return 1
> >> 
> > What's the point of the second `$app --version'?
> 
> To accommodate tools like git2cl, which error out with no stdin.
>
Then why not a simpler:
  $app --version </dev/null >/dev/null 2>&1
instead?

> >>    $app --version 2>&1 |
> >>    $SED -n '# extract version within line
> >>             ...'
> >> 
> > Comments in sed scripts are not portable, sigh :-(
> 
> This was pasted from one of the GNU bootstrap.conf files I used to flesh
> out the functionality of bootstrap... but I should have spotted it myself,
> thanks for the reminder :)
> 
> > Still, instead of obfuscating the whole script for the sake of some ancient
> > sed implementation, you might want to check at the beginning of the script
> > that $SED handles comments correctly, and abort if that is not the case
> > (pointing the user to GNU sed maybe?).
> 
> Agreed.  I've added it to my TODO list.
>
Yes, I agree that doing so in a follow-up patch would be acceptable, and even
preferable in fact (the resulting git history will be much more useful and
instructive this way).
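
For what it's worth, the startup check could be as small as this (an untested
sketch; the probe and the wording are mine, and it assumes your existing
func_fatal_error):

  : "${SED=sed}"
  sed_comment_probe=`printf '%s\n' x \
    | $SED -e 's|x|ok|' -e '# just a comment' 2>/dev/null`
  test "X$sed_comment_probe" = Xok \
    || func_fatal_error "'$SED' does not support comments in sed scripts; please use GNU sed (re-run with SED=/path/to/gnu/sed)"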

> >> # func_symlink_to_dir SRCDIR FILENAME [DEST_FILENAME]
> >> # ---------------------------------------------------
> >> # Make a symlink from FILENAME in SRCDIR to the current directory,
> >> # optionally calling the symlink DEST_FILENAME.
> >> func_symlink_to_dir ()
> >> {
> >>    $debug_cmd
> >> [SNIP]
> >> 
> > The body of this function is IMVHO complex enough to deserve some more
> > comments.
> 
> I rescued this implementation from one of Paul or Bruno's bootstrap.conf
> files during testing of my bootstrap rewrite, and commented as best I could.
>
Ah ok, pre-existing implementation and pre-existing limitation.  Sorry, I
didn't get that.  Feel free to ignore my comment.

> But I'll be the first to admit I don't really follow the slightly tortuous
> logic every step of the way.  Better comments, or clearer implementation
> gratefully accepted.
>

> >> func_update_po_files ()
> >> {
> >>    $debug_cmd
> >> 
> >> [SNIP]
> >> 
> >>      if ! test -f "$cksum_file"
> >> 
> > Special `!' command is unportable to at least Solaris 10 /bin/sh:
> > 
> >  $ /bin/sh -c '! test -f /none'
> > /bin/sh: !: not found
> > 
> > Possibly other similar usages throughout the script (I haven't checked
> > in detail).
> 
> Only func_update_po_files() and func_symlink_to_dir() use that construct,
> but these functions were both absorbed from other GNU project bootstrap.conf
> files, and are delicate enough that I'm afraid of breaking them at this
> stage.  Most likely the originals were written by Paul, Jim or Bruno, so
> they might be able to suggest alternative implementations?  Otherwise I will
> try to go back and find the original projects and pass the bug report back
> upstream.
>
Good idea.  Also, my suggestion below about requiring the use of a "real"
POSIX shell to run the bootstrap might make this point mostly moot (but
then, having seen your answer to that, maybe not :-(
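
Anyway, whenever you (or the original authors) get around to it, the rewrite
is mechanical: only the `!' reserved word is the problem, `test' itself is
portable.  A sketch (the variable in the one-liner is just a placeholder):

  if test -f "$cksum_file"; then :; else
    # ... the "checksum file is missing" branch goes here ...
    :
  fi

  # or, when the branch body is a single command:
  test -f "$cksum_file" || cksum_file_is_missing=: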

> >> # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
> >> # is ksh but when the shell is invoked as "sh" and the current value of
> >> # the _XPG environment variable is not equal to 1 (one), the special
> >> # positional parameter $0, within a function call, is the name of the
> >> # function.
> >> 
> > Something similar holds for Zsh if I'm not mistaken (it would be worth
> > mentioning that in this comment IMO).
> 
> Care to check, and suggest the additional comment text if necessary?
> 
> > Finally, a couple of more general observations:
> > 
> >  1. The bootstrap script is now complex enough to warrant the
> >     introduction of a testsuite.
> 
> That's an excellent notion.  But after a year or more of prodding and
> cajoling, I haven't even gotten the script itself accepted into gnulib.
> I'm not ready to burn another month or two of my GNU hacking time yet,
> at least until all the work I put into bootstrap itself is legitimized
> by its acceptance.
>
Makes sense.  As long as you agree that adding a testsuite once your
bootstrap rewrite has been accepted is something that should be done,
I'm really fine.

> Actually, I'd really like to see unit tests for at least the functions
> this script has in common with libtoolize and some of libtool's other
> m4sh generated scripts.  But I don't have the energy to pour into that
> just now.
>
Even a coarse test coverage would be enough at first.  The important thing
is that, in the future, once we hit a bug or a problematic corner case,
we'll be able to add a test for it easily and naturally.

> >  2. IMHO, since bootstrap is a maintainer tool anyway, we should
> >     assume that a POSIX shell is used to run it, thus allowing
> >     various simplifications and optimizations.  If we want to be
> >     helpful towards developers having an inferior default shell,
> >     we could run a sanity check on the shell itself early in the
> >     script, warning the user if his shell isn't able to handle
> >     the required constructs and usages.
> 
> Because of the amount of boilerplate in this script (from my own code,
> much of which I also share with libtool and m4) I'd rather not optimize
> any of it for bootstrap's particular use, since that reduces future
> sharing opportunities.
>
OK.  Still, I'm somewhat saddened by the fact that, when it comes to the
shell, we are still coding as if we were in the late eighties :-(
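
For the record, the kind of early sanity check I had in mind is tiny (an
untested sketch; the probes and the wording are mine): exercise a couple of
the constructs the script relies on, and bail out with a pointer to a better
shell if they fail.

  if (eval 'probe () { return 0; }; probe && x=$(echo ok) && test "$x" = ok') \
       >/dev/null 2>&1
  then :
  else
    echo "bootstrap: this shell lacks features needed by this script;" >&2
    echo "bootstrap: please re-run it with a POSIX shell, e.g. 'bash $0'" >&2
    exit 1
  fi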

> > HTH, and thanks for all your work on this!
> 
> I'm extremely happy to have another set of eyes on the code at last, so
> thanks again for making the time to review.
> 
> Cheers,
> -- 
> Gary V. Vaughan (gary AT gnu DOT org)

Regards, and thanks,
  Stefano


