bug-bash

Re: process substitution error handling


From: Jason A. Donenfeld
Subject: Re: process substitution error handling
Date: Thu, 6 Aug 2020 17:31:10 +0200

On Thu, Aug 6, 2020 at 4:49 PM Chet Ramey <chet.ramey@case.edu> wrote:
>
> On 8/6/20 10:36 AM, Jason A. Donenfeld wrote:
> > Hi Chet,
> >
> > On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey <chet.ramey@case.edu> wrote:
> >>
> >> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
> >>> Hi,
> >>>
> >>> It may come as a surprise to some that this code always winds up
> >>> printing "done":
> >>>
> >>> $ cat a.bash
> >>> set -e -o pipefail
> >>> while read -r line; do
> >>>        echo "$line"
> >>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> >>> sleep 1
> >>> echo done
> >>>
> >>> $ bash a.bash
> >>> 1
> >>> 2
> >>> done
> >>>
> >>> The reason for this is that process substitution right now does not
> >>> propagate errors. It's sort of possible to almost make this better
> >>> with `|| kill $$` or some variant, and trap handlers, but that's very
> >>> clunky and fraught with its own problems.
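
For reference, a rough sketch of that workaround (some_producer is a
stand-in for whatever command feeds the loop): the substituted process
kills the main shell on failure, and a trap turns the signal into an
exit.

trap 'echo "producer failed" >&2; exit 1' TERM   # main shell catches SIGTERM
while read -r line; do
       echo "$line"
done < <(some_producer || kill $$)   # $$ is still the main shell's pid here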
> >>>
> >>> Therefore, I propose a `set -o substfail` option for the upcoming bash
> >>> 5.1, which would cause process substitution to propagate its errors
> >>> upwards, even if done asynchronously.
> >>>
> >>> Chet - thoughts?
> >>
> >> I don't like it, for two reasons:
> >>
> >> 1. Process substitution is a word expansion, and, with one exception, word
> >>    expansions don't contribute to a command's exit status and
> >>    consequently the behavior of errexit, and this proposal isn't compelling
> >>    enough to change that even with a new option; and
> >>
> >> 2. Process substitution is asynchronous. I can't think of how spontaneously
> >>    changing $? (and possibly exiting) at some random point in a script when
> >>    the shell reaps a process substitution will make scripts more reliable.
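
The "one exception" in point 1 is presumably command substitution in an
assignment statement, where the expansion's exit status does become the
command's and so feeds errexit. A minimal illustration:

set -e
var=$(false)   # the assignment's exit status is that of the substitution
echo "not reached"   # errexit has already fired; this line never runs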
> >
> > Demi (CC'd) points out that there might be security dangers around
> > patterns like:
> >
> > while read -r one two three; do
> >     add_critical_thing_for "$one" "$two" "$three"
> > done < <(get_critical_things)
> >
> > If get_critical_things returns a few lines but then exits with a
> > failure, the script will forget to call add_critical_thing_for, and
> > some kind of door will be held wide open. This is problematic and
> > arguably makes bash unsuitable for many of the sysadmin things that
> > people use bash for.
>
> If this is a problem for a particular script, add the usual `wait $!'
> idiom and react accordingly. If that's not feasible, you can always
> use some construct other than process substitution (e.g., a file).
> I don't see how this "makes bash unsuitable for many [...] sysadmin
> things."
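
A sketch of the file-based construct Chet mentions, reusing the
hypothetical commands from the example above: buffering the output in a
temporary file means the producer's exit status is checked before any
line is acted on.

set -e
tmp=$(mktemp)
get_critical_things > "$tmp"   # with set -e, a failure aborts right here
while read -r one two three; do
       add_critical_thing_for "$one" "$two" "$three"
done < "$tmp"
rm -f "$tmp"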
>
> >
> > Perhaps another, clunkier, proposal would be to add `wait -s` so that
> > the wait builtin also waits for process substitutions and returns
> > their exit codes and changes $?. The downside would be that scripts
> > now need to add a "wait" after all such loops, but on the
> > upside, it's better than the current problematic situation.
>
> You can already do this. Since process substitution sets $!, you can
> keep track of all of the process substitutions of interest and wait
> for as many of them as you like. `wait' will return their statuses
> and set $? for you.
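
Applied to the same loop, the idiom Chet describes would look something
like this (a sketch; it relies on $! still holding the substitution's
pid when wait runs):

set -e
while read -r one two three; do
       add_critical_thing_for "$one" "$two" "$three"
done < <(get_critical_things)
wait $!   # returns the substitution's exit status; with set -e a
          # failure aborts the script here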

That doesn't always work: anything else backgrounded in the meantime
clobbers $!, so the wait no longer refers to the substitution:

set -e
while read -r line; do
       echo "$line" &   # every background job overwrites $!
done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
sleep 1
wait $!   # waits on the last `echo', not the substitution; exit 77 is lost
echo done

Either way, tacking on `wait $!` everywhere and hoping it works the way
I want feels pretty flimsy. Are you sure you're opposed to a `set -o
procsuberr` that would do the right thing for the most common use cases?


