Re: Segfault after many stackframes

From: Ole Tange
Subject: Re: Segfault after many stackframes
Date: Fri, 19 Apr 2019 10:21:00 +0200

On Fri, Apr 12, 2019 at 7:18 PM Andrew Church <address@hidden> wrote:
> >This recursive function causes bash to segfault:
> >
> >$ re() { t=$((t+1)); if [[ $t -gt 8000000 ]]; then echo foo; return;
> >fi; re; }; re
> >Segmentation fault (core dumped)
> >
> >Ideally Bash ought to run out of memory before this fails. But an
> >acceptable solution could also be to say 'stack overflow'.
> That's exactly what bash is saying there.  I'm not sure what (if
> anything) POSIX specifies for stack overflow behavior, but at least on
> Linux, stack overflow raises SIGSEGV:
> $ echo 'int main(void) {return main();}' | cc -o foo -x c -
> $ ./foo
> Segmentation fault

I believe that is an unfair comparison: Bash is an interpreted
language, not a compiled one. It should be fairly easy for the
interpreter to count its own call frames and avoid raising SIGSEGV.
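For what it is worth, Bash already has a knob in this direction: the
FUNCNEST shell variable (available since bash 4.2) caps function
nesting depth. A minimal sketch, reusing the reproducer from the
original report (the limit of 1000 is an arbitrary choice of mine):

```shell
#!/usr/bin/env bash
# Sketch, not a statement about the maintainer's plans: with FUNCNEST
# set to a number, function calls nested deeper than that value are
# refused with a diagnostic instead of recursing until the C stack
# overflows with SIGSEGV.
FUNCNEST=1000

re() { t=$((t+1)); re; }
re
# bash: re: maximum function nesting level exceeded (1000)
```

The point is only that the interpreter can enforce such a limit
itself, the way zsh and ksh do by default; it just is not on unless
the user sets it.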

A better comparison would be perl:

perl -e 'sub ds{ ds() }; ds'
# Runs out of memory

Or Zsh:

re() { re; }; re
re: maximum nested function level reached

Or Ksh:

re() { re; }; re
ksh: re: recursion too deep

Or fish:

function re
  true; and re
end
re

/tmp/re (line 1): The function call stack limit has been exceeded. Do
you have an accidental infinite loop?
true; and re

Or Python:

def hop():
    hop()
hop()

RuntimeError: maximum recursion depth exceeded

Or R:

f <- function() { f() }
f()

Error: C stack usage  7971732 is too close to the limit

Or Octave:

function name
  name
endfunction
name

error: max_recursion_depth exceeded

Or Ruby:

def functionname(variable)
   return functionname(variable)
end
functionname(1)

rub:2:in `functionname': stack level too deep (SystemStackError)

Reading https://www.gnu.org/prep/standards/standards.html#Semantics

"""Avoid arbitrary limits on the length or number of any data
structure, including file names, lines, files, and symbols, by
allocating all data structures dynamically."""

You could argue that, since Bash is a GNU tool, it should do like
Perl: run out of memory before failing.

Do users assume SIGSEGV is normal? When they see a SIGSEGV, will
their first thought be: "Oh, my script probably caused a stack
overflow"? Or will they rather think there is a pointer bug in Bash?
IMHO fish's error message is quite a bit more helpful than Bash's: it
leaves no doubt that the problem is in my script.

Of course it is up to you, but if the current behaviour is a
controlled exit working the way it was designed, I find it odd that
there is no mention of it in the docs.

