bug-bash

Re: Memory leak in wait


From: Chet Ramey
Subject: Re: Memory leak in wait
Date: Fri, 07 Nov 2014 08:24:07 -0500
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:24.0) Gecko/20100101 Thunderbird/24.6.0

On 11/7/14, 3:49 AM, Jean Delvare wrote:
> Hi Chet,
> 
> Thanks for the fast reply and the explanations, very appreciated.
> 
> On Thu, 06 Nov 2014 20:57:11 -0500, Chet Ramey wrote:
>> On 11/6/14 8:09 AM, Jean Delvare wrote:
>>> A memory leak has been reported in a bash script I maintain [1]. After
>>> investigation, I was able to shrink the test case down to:
>>>
>>> while true
>>> do
>>>     sleep 1 &
>>>     wait $!
>>> done
>>
>> This isn't a memory leak, and the memory use is bounded.  The shell, as
> 
> OK, this is in line with valgrind's claim that all allocated memory was
> still reachable. But how bounded is it? The shell script for which the
> issue was originally reported starts with about 3 MB of memory but
> reaches 32 MB over time (I don't know which version of bash did that
> though.) That seems to be a lot of memory just to record process exit
> statuses.

Maybe.  That depends on the number of children your maxproc resource says
you can create.  On my system, the struct that holds a pid's status is 16
bytes.
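
If you want a rough sense of the worst case on a given machine, something
like the following back-of-the-envelope estimate works (assuming the
~16-byte figure above, which can vary by platform and build, and assuming
`ulimit -u' reports a number rather than "unlimited"):

    # Approximate upper bound on memory spent remembering exit statuses:
    # one saved entry per process allowed by the maxproc limit.
    echo "$(( $(ulimit -u) * 16 )) bytes"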

>> per Posix, keeps the statuses of the last child_max processes.  It gets
>> child_max from the process's resource limit (ulimit -v, 709 (?) on my
>> system).  The list is FIFO, so when the number of background statuses
>> equals child_max, the oldest statuses are discarded.
> 
> "help ulimit" says:
> 
>       -v      the size of virtual memory
> 
> the scope of which seems to exceed just the number of child process
> statuses to remember?

Sorry, it's `ulimit -u'.

> So I think in the case of a shell script daemon, I have two options:
> 1* Use "wait" without argument. As we are always waiting for a single
>    process (sleep) to finish, I suppose this is equivalent to "wait
>    $?", right?

Yes, as long as you mean $!.
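
In other words, a sketch of your loop using the bare `wait' would look
like this and behave the same in your case, since the background sleep is
the only child running:

    while true
    do
        sleep 1 &
        wait    # no pid argument: waits for all children (here, just sleep)
    done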

> 2* Limit the value child_max. But using "ulimit -v" for that seems to
>    be a rather violent and risky way to achieve this?

        -u      the maximum number of user processes
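
If you do want to experiment with lowering it, one way to keep the change
contained is a subshell, along these lines (a sketch; my-daemon.sh is a
placeholder for your script, and note that this also caps how many
processes that shell can actually create, which is the risk you mention):

    # Lower the per-user process limit only for the daemon's subshell;
    # the parent shell's limit is untouched.
    ( ulimit -u 64; ./my-daemon.sh )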

> Also three questions remain, if you would be kind enough to answer them.
> 
> 1* Why is the memory consumption steady when using bash 3.0.16? Was
>    this version of bash not recording process exit statuses yet?

That feature came in with bash-3.1.

> 2* If bash remembers the process statuses, how does one access the
>    information? I couldn't find anything related to that in the manual
>    page, but it's huge so maybe I missed it.

You can use wait with a pid argument, maybe one that you saved earlier
in a script or obtained from `ps'.
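
For example, a minimal sketch of saving a pid and retrieving its status
later:

    sleep 5 &
    saved=$!                # remember the background job's pid
    # ... do other work ...
    wait "$saved"           # returns the saved exit status of that pid
    echo "sleep exited with status $?"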

> 3* If I never call "wait" and thus never free the recorded exit
>    statuses, does it mean that the system can't recycle the process IDs
>    in question until the shell script exits? That would be a major flaw
>    in my script :-(

No, the system will recycle the process ID as soon as bash reaps it with
waitpid(2).  It's bash that is responsible for making sure to discard a
saved status if fork(2) returns a PID that's been saved.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    chet@case.edu    http://cnswww.cns.cwru.edu/~chet/
