Re: BASH recursion segfault, FUNCNEST doesn't help

From: Chet Ramey
Subject: Re: BASH recursion segfault, FUNCNEST doesn't help
Date: Thu, 9 Jun 2022 11:01:51 -0400
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.10.0

On 6/7/22 10:17 AM, Gergely wrote:
On 6/7/22 15:49, Chet Ramey wrote:
On 6/7/22 7:57 AM, Gergely wrote:

Because you haven't forced bash to write outside its own address space or
corrupt another area on the stack. This is a resource exhaustion issue,
no more.
I did force it to write out of bounds, hence the segfault.
That's backwards. You got a SIGSEGV, but it doesn't mean you forced bash to
write beyond its address space. You get SIGSEGV when you exceed your stack
or VM resource limits. Given the nature of the original script, it's
probably the former.

I am not saying the write was successful, but the only reason it wasn't is that the kernel doesn't map pages there. Bash not caring about this means it's relying on the kernel to behave "right".

There are plenty of places where applications rely on the kernel to keep
them from reading or writing memory that is not mapped into their address
space. I don't think that's unreasonable.

Here's a very trivial example that'll show $rsp containing an address that is outside of the stack:
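(The trivial example itself is not quoted in this excerpt. What follows is a hedged reconstruction of the kind of script under discussion, run under a deliberately small stack limit so the crash comes quickly.)

```shell
# Hedged reconstruction -- not the original script from the thread. With no
# FUNCNEST limit set, each shell-function call deepens bash's C stack until
# expansion fails and the kernel delivers SIGSEGV; at that point the saved
# stack pointer sits just below the lowest mapped stack page.
( ulimit -s 1024; exec bash -c 'f() { f; }; f' ) 2>/dev/null
status=$?
echo "exit status: $status"   # >128 means killed by a signal (139 = 128+SIGSEGV on Linux)
```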

Sure, because expanding the stack fails and the process gets SIGSEGV. I'm
not sure whether you think this is a vulnerability, but it's not. You can
do weird stuff if you block or catch SIGSEGV, but that crosses the line
between vulnerability and malicious behavior.

Not really: a programmer can't know how large the stack is or how many
more recursions bash can take. This is also kernel/distro/platform
dependent. I get that it's a hard limit to hit, but to say the programmer
has complete control is not quite true.
True, the programmer can't know the stack size. But in a scenario where you
really need to recurse hundreds or thousands of times (is there one?), the
programmer can try to increase the stack size with `ulimit -s' and warn the
user if that fails.
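That suggestion can be sketched as follows (the 65536k figure is a hypothetical requirement; `ulimit -s' takes kilobytes and can only raise the soft limit up to the hard limit):

```shell
#!/bin/bash
# Hedged sketch: try to raise the stack limit before recursing deeply,
# and warn the user when that fails, as suggested in the thread.
want_kb=65536                       # hypothetical requirement for our depth
cur=$(ulimit -s)
if [ "$cur" != "unlimited" ] && [ "$cur" -lt "$want_kb" ]; then
  if ulimit -s "$want_kb" 2>/dev/null; then
    echo "stack soft limit raised to $(ulimit -s)k" >&2
  else
    echo "warning: stack limit is ${cur}k; deep recursion may die with SIGSEGV" >&2
  fi
fi
```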

If there's a way for an attacker to make bash allocate very large stack frames, this number doesn't have to be very big.

To exceed the stack size resource limit? Again, what would this `attack'
gain? To make a process crash? That's the purpose of resource limits.

Sure, for unmitigated disasters of code like infinite recursions, I agree
with you. This problem is not about that though. It's about a bounded -
albeit large - number of recursions.
This is not an example of a bounded number of recursions, since the second
process sends a continuous stream of SIGUSR1s.

This was meant to be an example of some more reasonable code that is likely to exist in the wild.

Sure. Don't call it what it's not, though -- it's unbounded recursion,
externally forced, resulting from receiving a signal for which a trap has
been set while executing a trap handler.

For the sake of example, consider a program with a somewhat slow signal
handler. This program might be forced to segfault by another program that
can send it a large number of signals in quick succession.
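The scenario being described might look like this (a hypothetical sketch, not a script from the thread). This tame version only counts deliveries; the hazard under discussion arises when the handler is slow enough to still be running when the next signal lands, so the trap re-enters and the stack deepens each time.

```shell
#!/bin/bash
# Hypothetical sketch: a USR1 trap plus a burst of signals sent to ourselves.
count=0
handler() {
  count=$((count + 1))
}
trap handler USR1

# Play the attacker's role against our own process. Standard signals are
# not queued, so some deliveries may coalesce: expect at least one run.
for i in 1 2 3 4 5; do
  kill -USR1 $$
done
echo "handler ran $count time(s)"
```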
This is another example of recursive execution that results in a stack size
resource limit failure, and wouldn't be helped by any of the things we're
talking about -- though there is an EVALNEST_MAX define that could.
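The EVALNEST_MAX knob mentioned here is set at build time rather than at runtime. A hypothetical build that wants to bound nested eval depth might look like this (illustrative value; the exact define and its semantics should be checked against the bash source):

```shell
# Hypothetical build-time configuration -- illustrative value only.
# Unlike FUNCNEST, there is no runtime shell variable for this limit,
# so it has to be baked in when bash is compiled.
CFLAGS='-DEVALNEST_MAX=1024' ./configure
make
```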

Not sure what "stack size resource limit failure" means in this context, but this does end in a segfault.

It means the same thing it did before: the stack size resource limit is
exceeded, and the process receives a SIGSEGV.

``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet@case.edu    http://tiswww.cwru.edu/~chet/
