Re: process substitution error handling


From: Chet Ramey
Subject: Re: process substitution error handling
Date: Thu, 6 Aug 2020 11:53:00 -0400
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.11.0

On 8/6/20 11:31 AM, Jason A. Donenfeld wrote:
> On Thu, Aug 6, 2020 at 4:49 PM Chet Ramey <chet.ramey@case.edu> wrote:
>>
>> On 8/6/20 10:36 AM, Jason A. Donenfeld wrote:
>>> Hi Chet,
>>>
>>> On Thu, Aug 6, 2020 at 4:30 PM Chet Ramey <chet.ramey@case.edu> wrote:
>>>>
>>>> On 8/6/20 6:05 AM, Jason A. Donenfeld wrote:
>>>>> Hi,
>>>>>
>>>>> It may be a surprise to some that this code here winds up printing
>>>>> "done", always:
>>>>>
>>>>> $ cat a.bash
>>>>> set -e -o pipefail
>>>>> while read -r line; do
>>>>>        echo "$line"
>>>>> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
>>>>> sleep 1
>>>>> echo done
>>>>>
>>>>> $ bash a.bash
>>>>> 1
>>>>> 2
>>>>> done
>>>>>
>>>>> The reason for this is that process substitution does not currently
>>>>> propagate errors. It's possible to approximate better behavior with
>>>>> `|| kill $$` or some variant, combined with trap handlers, but that's
>>>>> very clunky and fraught with its own problems.
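(For illustration only, a minimal sketch of the `|| kill $$`-plus-trap
workaround mentioned above. The producer name some_failing_command and the
TERM trap are assumptions, not from the original message, and the producer's
real exit status is lost, which is part of why it is clunky:)

    set -e
    # Have the main shell abort if the substituted command signals it.
    trap 'echo "process substitution failed" >&2; exit 1' TERM
    while read -r line; do
            echo "$line"
    done < <(some_failing_command || kill $$)   # $$ is the main shell's PID
    # Note the signal may arrive only after the loop has already finished.
    echo done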
>>>>>
>>>>> Therefore, I propose a `set -o substfail` option for the upcoming bash
>>>>> 5.1, which would cause process substitution to propagate its errors
>>>>> upwards, even if done asynchronously.
>>>>>
>>>>> Chet - thoughts?
>>>>
>>>> I don't like it, for two reasons:
>>>>
>>>> 1. Process substitution is a word expansion, and, with one exception, word
>>>>    expansions don't contribute to a command's exit status (and consequently
>>>>    to the behavior of errexit). This proposal isn't compelling enough to
>>>>    change that, even with a new option; and
>>>>
>>>> 2. Process substitution is asynchronous. I can't think of how spontaneously
>>>>    changing $? (and possibly exiting) at some random point in a script when
>>>>    the shell reaps a process substitution will make scripts more reliable.
>>>
>>> Demi (CC'd) points out that there might be security dangers around
>>> patterns like:
>>>
>>> while read -r one two three; do
>>>     add_critical_thing_for "$one" "$two" "$three"
>>> done < <(get_critical_things)
>>>
>>> If get_critical_things returns a few lines but then exits with a
>>> failure, the script will silently skip the remaining
>>> add_critical_thing_for calls, and some kind of door will be held
>>> wide open. This is problematic and arguably makes bash unsuitable
>>> for many of the sysadmin things that people use bash for.
>>
>> If this is a problem for a particular script, add the usual `wait $!'
>> idiom and react accordingly. If that's not feasible, you can always
>> use some construct other than process substitution (e.g., a file).
>> I don't see how this "makes bash unsuitable for many [...] sysadmin
>> things."
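(A rough sketch of the file-based alternative mentioned above, reusing the
function names from the earlier example; the temp-file handling is my own
illustration, not something from the thread. The point is that the producer's
exit status is checked before any critical thing is added:)

    tmp=$(mktemp) || exit 1
    # Run the producer to completion first, so its exit status can be checked.
    if ! get_critical_things > "$tmp"; then
            echo "get_critical_things failed" >&2
            rm -f "$tmp"
            exit 1
    fi
    while read -r one two three; do
            add_critical_thing_for "$one" "$two" "$three"
    done < "$tmp"
    rm -f "$tmp"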
>>
>>>
>>> Perhaps another, clunkier, proposal would be to add `wait -s` so that
>>> the wait builtin also waits for process substitutions, returns their
>>> exit codes, and changes $?. The downside would be that scripts would
>>> need to add a "wait" after every such loop, but on the upside, it's
>>> better than the current problematic situation.
>>
>> You can already do this. Since process substitution sets $!, you can
>> keep track of all of the process substitutions of interest and wait
>> for as many of them as you like. `wait' will return their statuses
>> and set $? for you.
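(Again for illustration, a minimal sketch of the `wait $!' idiom described
above, using the function names from the earlier example; the error handling
is my own, and how long a process substitution's PID remains waitable varies
with the bash version:)

    while read -r one two three; do
            add_critical_thing_for "$one" "$two" "$three"
    done < <(get_critical_things)
    # Nothing else has been started in the background, so $! still refers
    # to the process substitution created by the redirection above.
    if ! wait $!; then
            echo "get_critical_things failed" >&2
            exit 1
    fi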
> 
> That doesn't always work:

Because you have structured the loop so that it's difficult to save the
last asynchronous command you want outside of it? There's an easy fix for
that: decide what your priority is and write the code in a way that makes
it happen (one possible restructuring is sketched at the end of this
message).

> 
> set -e
> while read -r line; do
>        echo "$line" &
> done < <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
> sleep 1
> wait $!
> echo done

> Either way, tacking on `wait $!` everywhere and hoping it works the
> way I want feels pretty flimsy. Are you sure you're opposed to a
> set -o procsuberr option that would do the right thing for most
> common use cases?

Yes. I don't think there's real consensus on what the right thing is,
and the "most common use cases" are already possible with existing
features. There will be more support in bash-5.1 to make certain use
cases easier, but at least they will be deterministic. I don't think
introducing more non-determinism into the shell is helpful.
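(For completeness, one possible restructuring of the quoted loop along the
lines suggested earlier, saving the process substitution's PID before the
backgrounded commands in the loop body can overwrite $!. This is a sketch,
not a recommendation: the {fd} redirection and the producer_pid name are
mine, it assumes $! is set by the exec redirection as described above, and
whether `wait' can still reap that PID at the end depends on the bash
version:)

    set -e
    exec {fd}< <(echo 1; sleep 1; echo 2; sleep 1; exit 77)
    producer_pid=$!        # saved before the loop can clobber $! with "echo ... &"
    while read -r -u "$fd" line; do
            echo "$line" &
    done
    exec {fd}<&-           # close the reading end of the substitution
    wait "$producer_pid"   # returns 77 here; with set -e the script stops
    echo done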

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet@case.edu    http://tiswww.cwru.edu/~chet/


