bug-bash

Re: BASH recursion segfault, FUNCNEST doesn't help


From: Gergely
Subject: Re: BASH recursion segfault, FUNCNEST doesn't help
Date: Tue, 07 Jun 2022 11:57:44 +0000

On 6/6/22 16:14, Chet Ramey wrote:

> On 6/2/22 4:00 PM, Gergely wrote:
>
>> I could not produce a scenario in 15 minutes that would indicate that
>> this corrupts other sections, as there is a considerable gap between the
>> stack and everything else. This is OS-dependent though and bash has no
>> control over what happens should this occur.
>
> Because you haven't forced bash to write outside its own address space or
> corrupt another area on the stack. This is a resource exhaustion issue,
> no more.

I did force it to write out of bounds, hence the segfault.

>> Well, the issue is not the fact that this is a resource exhaustion, but
>> rather the fact that it's entirely OS-dependent and the programmer has
>> zero control over it.
>
> The programmer has complete control over this, at least in the scenario you
> reported.

Not really. A programmer can't know how large the stack is or how many more
recursions bash can take, and this is kernel-, distro-, and platform-dependent
besides. I get that it's a hard limit to hit, but saying the programmer has
complete control is not quite true.
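To illustrate, the limit in question is the soft stack limit the environment
happens to set, which is easy to inspect but varies between systems (a quick
sketch; the numbers in the comments are common defaults, not guarantees):

```shell
# The soft stack limit caps how deep bash can recurse before SIGSEGV;
# it differs per kernel/distro and can change at runtime.
ulimit -s            # often 8192 (KiB) on Linux, but nothing promises that

# A child shell can run with a smaller soft limit, shrinking the safe
# recursion depth without the script ever knowing:
bash -c 'ulimit -S -s 2048; ulimit -s'
```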

There is also the point that this "protection" is OS-dependent. On embedded 
devices the stack might very well sit next to the heap, in which case this can 
be a legitimate issue. Even if Busybox is preferred on such devices, it's 
something worth considering (at least for IoT maintainers). Busybox is 
vulnerable to this as well, by the way.

>> What happens should the situation occur, is not up
>> to bash or the programmer. The behaviour is not portable and not
>> recoverable. A programmer might expect a situation like this, but there
>> is no knob to turn to prevent an abrupt termination, unlike FUNCNEST.
>
> If you think it's more valuable, you can build bash with a definition for
> SOURCENEST_MAX that you find acceptable. There's no user-visible variable
> to control that; it's just not something that many people request. But it's
> there if you (or a distro) want to build it in.

Recompiling works perfectly fine; however, there is no configure switch, so I 
had to edit the code. Maybe that's why distributions aren't setting it; I'm not 
sure. At least it's there.
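For reference, since there is no configure switch, the macro has to go in via
CFLAGS. This is a hedged build sketch only: SOURCENEST_MAX is the name from
Chet's reply, while the bash version and the value 500 are arbitrary
assumptions on my part (the tarball check just makes the sketch a no-op if the
source isn't present):

```shell
# Hedged sketch of a rebuild with a compiled-in source-nesting cap.
# No ./configure flag exposes this, so define the macro through CFLAGS.
if [ -f bash-5.1.tar.gz ]; then
    tar xf bash-5.1.tar.gz && cd bash-5.1
    ./configure CFLAGS='-g -O2 -DSOURCENEST_MAX=500'   # value 500 is arbitrary
    make
fi
```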

This will not help programmers though, who just want something that Just Works.

>> Speaking for myself, I'd find an error a much MUCH more palatable
>> condition than a segfault in this case. In the case of an error I at
>> least have a chance to do cleanup or emit a message, as opposed to just
>> terminating out of the blue. I don't think most bash programs are
>> written with the expectation that they might cease to run any moment
>> without any warning.
>
> I think anyone who codes up an infinite recursion should expect abrupt
> termination. Any other scenario is variable and controlled by resource
> limits.

Sure, for unmitigated disasters of code like infinite recursion, I agree with 
you. This problem is not about that, though. It's about a bounded, albeit 
large, number of recursions.

For the sake of example, consider a program with a somewhat slow signal 
handler. Such a program can be forced to segfault by another process that 
sends it a large number of signals in quick succession.

Something like this:

# terminal 1

$ cat signal.sh
#!/bin/bash
echo $$
export FUNCNEST=100
trap 'echo TRAP; sleep 0.01' SIGUSR1
while true
do
    sleep 1
    date
done
$ ./signal.sh
39817
Tue Jun  7 01:35:41 PM UTC 2022
...
TRAP
./signal.sh: line 1: echo: write error: Interrupted system call
Segmentation fault

# terminal 2

$ while :; do kill -SIGUSR1 39817; done
bash: kill: (39817) - No such process
...

Gergely
