bug-bash

Re: Leak in BASH "named" file descriptors?


From: Mathieu Patenaude
Subject: Re: Leak in BASH "named" file descriptors?
Date: Thu, 28 Jan 2016 13:34:11 -0500

I guess the question remains: why does a "here string" assigned to a named FD rely on the system to do the clean-up, but when assigned to a regular, numbered FD, it does not?

The issue I see with relying on the bash EXIT to have the system do the cleanup is that when a script does its work in a forever loop, you end up with FD exhaustion when using a "named" FD with here strings (which is kind of what my original script was showing).  Say you have "ulimit -n 1024" and you do more than 1024 iterations of the function: you clearly end up with "Too many open files".  Again, that is not the case if I do the same thing with a regular FD number.

The scenario is similar if you replace the "here string" with the output of a process substitution...

The issue does not seem to be with the "here string" at all, but with the use of the "named" FD.

Again, here is a simple example of this idea, i.e. let's watch for a file and do something if it is found.  Assuming an open-files ulimit (ulimit -n) of 1024, this will stop printing at about 1011 on my system:

ittr=0
while :; do
  while read -r -u "$fh" fname; do
    if [[ $fname == file3 ]]; then
      printf '%s, ' "$((ittr++))"
      # we could delete the file here and wait for the
      # next time it appears, etc...  Just an example.
    fi
  done {fh}< <(ls -1)

  # we would normally wait some time here.
  #sleep 1

  rc=$?                          # capture the loop's status before [[ ]] overwrites it
  [[ $rc -ne 0 ]] && exit "$rc"
done
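
(One way to watch the leak, on Linux at least, is to count this shell's open descriptors via /proc while the loop runs.  A rough sketch of that idea follows; /proc/$$/fd is Linux-specific and I have not polished this, it is only there to make the growing count visible:)

ittr=0
while :; do
  while read -r -u "$fh" fname; do
    [[ $fname == file3 ]] && printf '%s, ' "$((ittr++))"
  done {fh}< <(ls -1)

  # Linux-specific: count this shell's currently open descriptors.
  # With the named FD this count should keep climbing toward the
  # ulimit; with a plain numbered FD it should stay flat.
  printf 'open fds: %s\n' "$(ls /proc/$$/fd | wc -l)"
done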

But this will go on forever:

ittr=0
while :; do
  while read -r -u 9 fname; do
    if [[ $fname == file3 ]]; then
      printf '%s, ' "$((ittr++))"
      # we could delete the file here and wait for the
      # next time it appears, etc...  Just an example.
    fi
  done 9< <(ls -1)
  # we would normally wait some time here.
  #sleep 1

  rc=$?                          # capture the loop's status before [[ ]] overwrites it
  [[ $rc -ne 0 ]] && exit "$rc"
done


Same thing with the "here" string: the following will print forever, but not if I replace the 9 with $fh...

ittr=0
while :; do
  while read -r -u 9 fname; do
    if [[ $fname == file3 ]]; then
      printf '%s, ' "$((ittr++))"
      # we could delete the file here and wait for the
      # next time it appears, etc...  Just an example.
    fi
  done 9<<<"file3"
  # we would normally wait some time here.
  #sleep 1

  rc=$?                          # capture the loop's status before [[ ]] overwrites it
  [[ $rc -ne 0 ]] && exit "$rc"
done
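
Incidentally, if I am reading the {varname} redirection docs right, closing the descriptor by hand after each pass should avoid the exhaustion, since it does explicitly what the numbered form appears to do automatically.  A minimal sketch of that manual-close idea (not what my original script does):

ittr=0
while :; do
  while read -r -u "$fh" fname; do
    [[ $fname == file3 ]] && printf '%s, ' "$((ittr++))"
  done {fh}< <(ls -1)

  # Close the shell-allocated descriptor ourselves; without this line
  # the {varname} form leaves it open in the parent shell.
  exec {fh}<&-
done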

Again, thanks for looking into this weirdness.  


On Thu, Jan 28, 2016 at 12:54 PM, Greg Wooledge <wooledg@eeg.ccf.org> wrote:
On Thu, Jan 28, 2016 at 12:40:57PM -0500, Mathieu Patenaude wrote:
> Yes, using ":" also illustrates the same (or similar) behavior that I'm
> experiencing with my script.  Using the "here" string creates the
> additional weirdness of showing that the temporary file content is actually
> "deleted", but the bash process keeps the FD open.  Which is quite strange,
> since it appears to have done half the job...

That's perfectly normal.  The here-document or here-string payload is
written to a temporary file, which is kept open, but unlinked.  That
way, when bash closes it (or is killed) the contents are simply deleted
by the file system, and bash doesn't have to do the clean-up.
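
If you want to see that for yourself on a Linux box, /proc shows the unlinked backing file directly; a quick illustration, nothing specific to your script, and the exact temp file path will vary:

# Open fd 9 on a here-string, then look at where it points; Linux
# marks the unlinked temporary file with a "(deleted)" suffix.
bash -c 'exec 9<<<"payload"; ls -l /proc/$$/fd/9'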

I still don't know whether your original issue is a bug or not, though.

