Re: set -e yet again (Re: saving bash....)

From: Linda Walsh
Subject: Re: set -e yet again (Re: saving bash....)
Date: Fri, 12 Aug 2011 12:19:59 -0700
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv: Gecko/20100228 Thunderbird/ Mnenhy/

Greg Wooledge wrote:
 On Fri, Aug 12, 2011 at 08:18:42AM -0700, Linda Walsh wrote:
>     If I write a==0 on the bash command line, it will generate an error.
> a=0 does not.  'Bash' knows the difference between an assignment and
> an equality test in math.

 imadev:~$ ((a==0))
 imadev:~$ ((a=0))
 imadev:~$ a==0
 imadev:~$ a=0
 They all mean something different, but they are all valid
 commands.  Do I need to explain what each one does?
       Braindead: me:  forgot about a='=0' (duh!?)
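For anyone following along, a quick sketch of what those four forms
actually do (run without -e, since two of them return status 1 by
design):

```shell
set +e        # these lines inspect statuses, so make sure -e is off

a=0           # plain assignment: a holds the string "0"
((a==0))      # arithmetic comparison: true, so exit status 0
echo $?       # prints 0

((a=0))       # arithmetic assignment: sets a=0; status 1 because the value is 0
echo $?       # prints 1

a==0          # also a plain assignment: a now holds the string "=0"
echo "$a"     # prints =0
```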

> Maybe that's the 'compatible "out", we are, perhaps, all looking for.
 I am not looking for anything.  You, on the other hand, appear to be
 looking for something... I'm not sure what.
Compatibility with previous behavior in 3.x, which existed for quite a
while.  If you change a feature to be incompatible with a previous
version, then you need to enable the new functionality with a pragma:

   set -o bash_features="4.1"

or something similar; to do otherwise would seem to be irresponsible to
your userbase.  Why else would bash have several 'set -o compatX'
commands -- because features were broken.  I wouldn't want to see this
become another 'set -o compatX': being non-POSIX compliant has been the
default, and this is a matter of the default behavior of a non-POSIX
feature changing to become POSIX-compliant.

I just ran into another program that would fail under -e -- in running
an old prog, developed several months or a year back, I have:

   function validate_param_len {
       # return 1 on err, 0 on success
       local odd
       let odd=$strlen%2
       if [[ $odd == 1 || $strlen == 0 ]]; then err_exit $UNEVEN_DIGS; fi
   }

Under -e, it would fail on the 'let' statement and never reach my test.
It's not called as a 'self contained' function, but as an out-of-line
check on a param in the environment.  So it's not expected to 'fail'
other than by exiting.
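A minimal, self-contained reproduction of that failure mode (the
variable names here are just illustrative):

```shell
# Run the fragment in a child shell so the parent survives to report it:
bash -c '
    set -e
    strlen=4                # an even length -- the "good" case
    let odd=$strlen%2       # odd=0, so let returns status 1 ...
    echo "never reached"    # ... and under -e we never get here
' && echo "child succeeded" || echo "child exited with status $?"
```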

This is what I mean by saying it's a good idea to use "-e" long after
development is over.

The above code is unpredictable now -- as it would behave one way under
-e, while running a different way without it.

"-e" -- in a ****WELL DESIGNED PROG***, where errors are caught,
shouldn't cause a otherwise working program to fail.

That's what is happening, and that's what has changed.  It hasn't been
that way for 30 years despite your allusions to the contrary, as bash,
with let and (()), wasn't around 30 years ago. Now (turnabout being
fair play), maybe you expect every programmer to just jump up and say
.. "Wow, you're right, bash has been this way for 30 years, and it
should be that way forever -- in fact that -e doesn't get passed to
sub-functions as POSIX requires, is a bash-bug, heck -- should be in the
environment to all programs -- any program that returns a '0' value
internally should fail when -e is set!..."

Yeah, I can put lame lines in your mouth too.  But forcing bash to have
compat with 'let', and '(())', and fail on eval's to 0, makes as much
sense as forcing bash, in non-POSIX mode, to propagate "-e" (as POSIX
requires), and forcing such a standard on programs using features never
designed to be covered by such a check (i.e. C functions returning 0),
to also use such.  Bash's "(())" and internal "let" weren't defined
to be subject to "-e"'s constraints by their initial design -- as
they were meant to be calculations, not commands that returned an error
-- following a rule that one must *always* do "X" is a perfect example
of "a foolish consistency"  (as in "A foolish consistency
is the hobgoblin of little minds" - Emerson).

The -e check wasn't designed with an internal 'let' and (()) in mind, it
was meant to check for errors returned by simple commands -- NOT
errors in calculations.  Assignments don't return errors, unless
there was a syntax error in the expression.

The statements:

  ((a=0))
  let a=0

are both assignment statements (not 'commands') that should not cause -e
to exit -- it's contrary to the intention of "-e" -- that it check for
errors from simple commands -- and neither of those is an error.

This is an example of a problem in writing a spec -- the intent was
to make the spec conform to existing practice.  Existing practice in
bash was such that calculations performed with the internal let or with
(()) were not status checked for purposes of 'failing' a program.
They still *set* status, but it didn't trigger an immediate exit.
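That older contract is easy to state in code: the status is there to be
inspected, but inspection is up to the script (run without -e):

```shell
set +e          # inspect statuses explicitly rather than exiting

(( 0 ))         # arithmetic value 0 -> exit status 1
echo "(( )) status: $?"    # prints: (( )) status: 1

let x=0         # same: value 0 -> status 1
echo "let status: $?"      # prints: let status: 1
```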

Bash was *intelligently designed*, and, I believe,  unfortunately, (I
can see myself doing the same...so I don't fault anyone...  it's just
a problem that needs to be fixed, IMO)...the bash designer got too
close to the POSIX community/standard and forgot that bash's
design was 'bifurcated' -- by default it didn't operate in POSIX-compat
mode, and it required a --posix switch to have it enforce POSIX rules.

You can't just break compatibility in the normal features of bash
without expecting "feedback" ;-)..

It's NOT about POSIX compatibility -- it never has been!   So there's
no arguing that 'programmers have expected 'this or that' for N years,
because that's just not how bash has worked -- nor do those programmers
expect a shell touted to be 'extended' and not fully compatible unless
"--POSIX" is specified, to behave, in every way, like "POSIX" would
require.  So notions of 30 years of programming don't apply.  I've been
programming for 5 years longer than that -- so I would know.  I wrote
in bourne shell.  I loved the new features in ksh (never did like csh),
and ...well, bash was a "new heaven" for bourne-style shell languages.

Unfortunately, I feel that since the main bash dev is on the POSIX
committee, overseeing the standard that governs POSIX SHELL compliance,
he can't help but feel some obligation to implement such in bash, but
also, occasionally, forget that bash runs in EXTENDED, NON-POSIX mode by
default, and only in POSIX mode, would such a forced-exit be required.

Bash's  builtin-"let", and  (()) aren't POSIX -- so while it might be an
optimization to use a builtin "let" in POSIX mode, it would be safer
(though maybe unnecessary) to call the external "let", and let it return
its status and be handled as per POSIX convention.

 Some sort of argument you can produce which will make EVERY programmer
 in the rest of the world say "Wow, you're right, we should throw away
 30 years of compatibility and make set -e work like you expect!!"
       As soon as you show me 'bash' with (()) and builtin let, from 30
years ago, I'll agree -- if not, piddle on your poor attempts to put
bogosity in my mouth (I can do quite well by myself, thank-you! ;-)).

 I'm fairly confident this will not happen.

 Maybe you just want Chet to accept your argument that he should ignore
 POSIX and follow the Walsh Standard For Bash instead?  That's somewhat
 less impossible, I guess.  I won't speak for him.

>>  cd /foo || exit
>>  rm -rf bar
> Right, and as you are developing, you don't write your first code cd
> /foo && rm -fr bar?

 I only showed two lines.  In most programs, there would probably be
 several commands following the cd and the rm.
    Of course, but you see, I write differently.

These days, I never 'cd /foo; {do command dependent on being in /foo}'

I _start_ with a contingent syntax:

       cd /foo && {dep-code} && finalstuff

then evolve to:

       test -d /foo && cd /foo || {
               err_exit "no such /foo ($?)"
       }
       { depcode; } || {
               err_exit "depcode failed w/stat $?"
       }
       finalstuff || {
               err_exit "finalstuff failed w/stat $?"
       }

Or similar or more complex depending on the 'app'/script.  Depends on
how reusable I want the script to be....

> if the 'rm' fails I'd expect it to die...
 Because you use set -e.  I do not.  I would not expect a script to die
 if an rm command fails.
       It depends on the script.  Most of my scripts don't use -e.

       The one I'm working on now creates and destroys file systems, so I
really, really want it to die, ASAP, if anything is amiss.
Unfortunately, due to the non-propagation of "-e", it doesn't.  (I
didn't realize that, as a bash extension, -e wasn't propagated; I'd just
assumed it was -- then when it wasn't, I just figured I had been
'wrong'...now I know why I had that impression.)  I already figured out
my workaround: saving such flags in a global "FLAG" that is reapplied at
the beginning of each function I write that I want so checked...
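A sketch of that workaround, under the stated assumption that "-e" is
not in effect inside the function body (the names FLAGS, restore_flags,
and checked_func are illustrative, not from my actual script):

```shell
FLAGS=$-                    # capture the shell's current option flags, e.g. "ehB"

restore_flags() {
    case $FLAGS in
        *e*) set -e ;;      # reapply -e if it was on when FLAGS was captured
    esac
}

checked_func() {
    restore_flags           # the function body now honors -e again
    some_command            # (placeholder) any failing command aborts here
}
```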

 I would expect the script to carry on.  If I want the script to take
 action in the event of an rm command's failure, I will *add
 error-checking*.  I will make it do whatever is appropriate, which may
 be exiting, or returning from a function, or setting a variable to
 indicate "don't bother" later on, or printing a message into a log
 file, or who knows.
Same here. The current script I'm writing affects system state in a
major way.  I'd rather it fail immediately on any command failure, but
failing on a calculation is completely useless.

Same with any function -- if "-e" is NOT being propagated (as is the
case in bash-extended mode), then a function's 0/1 return status
shouldn't be checked for purposes of "-e" as though it were running in
POSIX mode.  In POSIX mode the function would likely have failed
internally: only in the explicit case of someone doing a 'return
<non-zero-val>' would a function fail in POSIX mode; any '$?' set to
non-zero before that would already have failed inside the function,
since "-e" is propagated.

But another example of code that will fail under -e -- but shouldn't
under non-POSIX-mode BASH:

   function utf8_str_len  {
           local str="${1:-""}"
           ((${#str}==0)) && return 0
           ((${#str}==1)) && return 1
           local n1=${str:0:1}
           local n2=${str:1:2}
           local -i v1=${hexdigs_to_vals[$n1]}
           local -i v2=${hexdigs_to_vals[$n2]}
           if (( (v1&8) == 0 )) ; then return 2
           elif (( (v1&0xe) == 0xc )) ; then return 4
           elif (( v1 == 14 )) ; then return 6
           elif (( v1 == 0xf )) ; then
                   if (( (v2&0x8) == 0 )) ; then return 8
                   elif (( (v2&0xc) == 8 )) ; then return 10
                   elif (( (v2&0xe) == 0xc )) ; then return 12
                   fi
           fi
           return 0
   }
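The underlying pattern -- encoding a small numeric answer in the exit
status -- is exactly what collides with -e.  A stand-in example
(str_class is a hypothetical name, simplified from the function above):

```shell
str_class() { return "${#1}"; }   # "returns" the argument's length via exit status

set +e                            # intended usage: read the answer out of $?
str_class "ab"
echo "length via \$?: $?"         # prints: length via $?: 2

# Under set -e, the very same call is treated as a failure and kills
# the script before the caller ever gets to look at $?.
```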

> I won't have an error check in a first script that comes from doing
> it  'interactively, bash line-re-edits, that are later saved as a
> script.
> Scripts born that way are just 'batch jobs' that need to be turned
> into  full on error checked scripts.

 Not all of us work that way.
       So you never enter commands interactively?

       Never re-edit them?

       Never press 'v' (in vi mode anyway), to invoke the editor when it
gets a bit long?

       Never decide you should save it to a file, and call it as a script?

       You never do any of those things?    Wow. -- you should use 'ash'
and forget about bash; you don't need it.  Some of us do have scripts
born that way.

       My prog to  auto-convert all .wav files in a dir to .mp3 or .flac
based on naming conventions and an optional datafile, started as me
passing command line params to 'lame'...too many to remember.

It grew to accept options and data input files, parse filename formats,
eventually outgrew bash and got rewritten into perl -- but the perl
started out looking a LOT like shell, as perl was born from shell and
other unix utils.

It was horrid perl code -- initially, but I wanted to get to 'working'
as a first priority.  On each iteration (getting a new CD to
rip)....wanting to add some new feature, ...etc., I'd clean up the
code...eventually, completely restructured.

Then the code was split to handle 'flac', ... then joined again and had
functionality determined by invocation name.

Now it does the entire conversion in parallel children based on #cpus,
converting an entire album into either mp3 or flac in about 30 seconds
-- it detects the status of the children as it waits for them and
reports any problems.

It's not yet 'finished'... ;-), I still have other things I'd like it to
do...but eh! it's not a high priority program...  Point was it grew
incrementally, starting maybe 10-12 years ago because I got tired of
typing in about 6-7 params that I could never remember.

You've never wanted to 'automate' tasks that you find yourself doing
multiple times?   That's why I fell in love w/computers in the first
place.  I could program them to do what I did manually, and it could
then take care of things for me -- of course, as I have gained more
knowledge about errors and contingencies, my programs have had to do so
as well -- as they need to function, at least at some level, acceptably,
when I'm not running them interactively, so life has gotten more complex
over time.

But that's where I've used the extensions provided by BASH -- and
having them yanked out from under my feet because the extensions weren't
POSIX compliant ain't my idea of fun!

