Re: BASH recursion segfault, FUNCNEST doesn't help

From: Gergely
Subject: Re: BASH recursion segfault, FUNCNEST doesn't help
Date: Thu, 02 Jun 2022 20:00:20 +0000

Hi Martin,

>> There's a slim chance this might be exploitable.
> I would really be interested in an example.

In 15 minutes of trying I could not produce a scenario in which this
overflow corrupts other memory regions, as there is a considerable gap
between the stack and everything else. That layout is OS-dependent
though, and bash has no control over what happens should the overflow
occur.

It's not inconceivable that other OSes, or even (old) Linux in certain
configurations, will place the stack close to something valuable.
That said, I'm not insisting that this is a vulnerability. It
technically might be, but I do understand the hesitation to fix
something that is hard, and pretty much pointless, to exploit.
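For reference, the crash can be reproduced with a self-sourcing script
(a minimal sketch; the file name and the lowered stack limit are mine,
used only to make the crash quick):

```shell
# Minimal reproduction sketch: a script that sources itself recurses
# inside bash's C stack until the OS kills it with SIGSEGV.
# FUNCNEST does not limit `source` depth, so it offers no protection here.
dir=$(mktemp -d)
echo 'source ./self.sh' > "$dir/self.sh"
( cd "$dir" && ulimit -s 2048 && bash self.sh ) 2>/dev/null  # small stack -> fast crash
echo "exit status: $?"   # a value above 128 indicates death by signal
```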

> There are many ways to exhaust memory (and other) resources, recursion is one 
> of them. In your case a variable like SRCNEST (and all the code with its 
> performance impacts needed behind it) might help, but what exactly is the 
> advantage of a "maximum source nesting level exceeded" error over a 
> segmentation fault?
> Next we will need MAXARRUSAGE, MAXBRACEEXPAN, ...

Well, the issue is not resource exhaustion as such, but rather that the
outcome is entirely OS-dependent and the programmer has zero control
over it. What happens when the situation occurs is not up to bash or
the programmer. The behaviour is neither portable nor recoverable. A
programmer might anticipate a situation like this, but unlike with
FUNCNEST, there is no knob to turn to prevent an abrupt termination.

Speaking for myself, I'd find an error a much, MUCH more palatable
condition than a segfault in this case. With an error I at least have a
chance to do cleanup or emit a message, as opposed to just terminating
out of the blue. I don't think most bash programs are written with the
expectation that they might cease to run at any moment without warning.

On top of that, this already works just fine for FUNCNEST: even though
the default behaviour is still a segfault, a careful programmer now has
a fighting chance.
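To illustrate what I mean (a minimal sketch; the function name and the
limit of 50 are mine):

```shell
# With FUNCNEST set, runaway function recursion is stopped with a shell
# error ("maximum function nesting level exceeded") instead of SIGSEGV,
# so the calling script can detect the failure and react to it.
if bash -c 'FUNCNEST=50; f() { f; }; f' 2>/dev/null; then
  echo "recursion finished (unexpected)"
else
  echo "recursion stopped with status $?"   # ordinary error, not a signal
fi
```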

Regarding performance:

I don't think many extremely high performance applications are written
in bash, but that might just be my ignorance. In any case I did some
rudimentary testing:

I ran the same recursive function until it segfaulted, 10 times in a
loop, and measured the total execution time: once without FUNCNEST set
and once with FUNCNEST set to a very large number.

Here are the results:

- with FUNCNEST: 1m2.233s

- without: 1m1.691s

The difference is less than 1% with an insanely unrealistic workload.
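A rough sketch of the kind of measurement described above (the loop
count, recursion depth, and names are illustrative, not the original
harness):

```shell
# Illustrative timing harness: run a recursive function to exhaustion
# 10 times, optionally capped by FUNCNEST. Without a cap the recursion
# runs to the OS stack limit and typically dies with SIGSEGV.
bench() {
  local i
  for i in 1 2 3 4 5 6 7 8 9 10; do
    bash -c "${1:+FUNCNEST=$1; }f() { f; }; f" 2>/dev/null
  done
}
# Compare, e.g.:
#   time bench          # uncapped: each iteration runs until it crashes
#   time bench 100000   # FUNCNEST set to a very large number
```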

With this in mind I suggest that SRCNEST might not be
performance-prohibitive if the check only runs when sourcing is
involved, or if it can be rolled into FUNCNEST. I am not sure how
painful that would be to implement, but I for one would happily
sacrifice a bit of speed for a better coding experience.

That said, I am in no position to judge whether what I'm suggesting
makes sense, and since I am unable to do the work myself, I humbly
thank you for yours and wish you a very nice day!

