help-bash

Re: Multiple concurrent coprocesses


From: Zachary Santer
Subject: Re: Multiple concurrent coprocesses
Date: Tue, 30 Mar 2021 23:16:21 -0400

On Tue, Mar 30, 2021 at 11:14 AM Chet Ramey <chet.ramey@case.edu> wrote:
> It's just more bookkeeping, and I'd want to make sure that bash takes
> care of closing coproc file descriptors where necessary. I think it does,
> but more real-world test cases are always nice.
>
> Say you have a pipe file descriptor that's (inadvertently) shared by
> more processes than intended. If that file descriptor is open for read,
> the kernel won't send a SIGPIPE to a writer that anticipates getting
> one. Similarly, if a stray file descriptor open for write exists, a
> reader won't get the expected EOF. These situations can result in
> deadlock.
>
> The pipeline code passes around bitmaps of file descriptors to close to
> handle this.
>
> As I said, I think the multiple coprocs code takes care of these cases,
> but it's good to be sure.
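
Understood. For anyone reading along later, the no-EOF flavor of that
deadlock is easy to reproduce with a FIFO and a stray file descriptor
left open for writing. A contrived sketch of my own, nothing from the
multi-coproc code:

  #!/usr/bin/env bash
  mkfifo demo-fifo

  # Stray fd: opening the FIFO read-write never blocks, and it leaves
  # a write end open in this shell for the rest of the script.
  exec {stray}<>demo-fifo

  # The intended writer writes one line and exits, closing its end.
  printf 'done\n' > demo-fifo &

  # The reader gets the line, then blocks forever waiting for an EOF
  # that can't come, because fd $stray still holds the FIFO open for
  # writing.
  cat demo-fifo    # prints 'done', then hangs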

As for bash taking care of closing coproc file descriptors where
necessary, that's the manual's "Other than those created to execute
command and process substitutions, the file descriptors are not
available in subshells." From my perspective, the important part is
making sure that the file descriptors of coprocesses created earlier
don't make their way into the subshells of any coprocesses created
later.
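
If I wanted to check that by hand, something like the following
contrived test ought to do it. This assumes a bash built with the
multiple-coprocs support, and the names are my own:

  #!/usr/bin/env bash
  coproc FIRST { cat; }
  first_w=${FIRST[1]}    # the parent's write end to FIRST

  coproc SECOND {
    # Try to dup FIRST's write fd. If bash closed it before running
    # SECOND's body, the redirection fails and we report 'closed'.
    if { : >&"$first_w"; } 2>/dev/null; then
      echo "fd $first_w leaked into SECOND"
    else
      echo "fd $first_w closed in SECOND"
    fi
    cat
  }

  read -r -u "${SECOND[0]}" report
  printf '%s\n' "$report"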

The version of my code with FIFOs and automatic file descriptors
spawns the (explicit?) subshells before creating any of the automatic
file descriptors in the parent shell, mostly to avoid deadlocking
while waiting for the other end of each FIFO to be opened.
Incidentally, that also avoids the situation above. Using automatic
file descriptors this way, I've made no attempt to close those file
descriptors within pipelines. Both subshells stay alive until the end
of the script, at which point the automatic file descriptors in the
parent shell are closed before the FIFOs and any other shared files
are deleted. The version of the code with coprocesses is laid out
almost entirely the same way, just without having to start the
subshells and then open the file descriptors by hand.
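
Schematically, the FIFO version has this shape. This is a
stripped-down sketch, not the actual script, and the names are made
up:

  #!/usr/bin/env bash
  dir=$(mktemp -d)
  mkfifo "$dir"/to-child "$dir"/from-child

  # Spawn the subshell first; it blocks opening its ends of the FIFOs.
  (
    exec < "$dir"/to-child > "$dir"/from-child
    while IFS='' read -r line; do
      printf 'child: %s\n' "$line"
    done
  ) &

  # Only now open automatic fds in the parent, in the same order the
  # subshell opens its own ends, so neither side can deadlock.
  exec {to_child}> "$dir"/to-child {from_child}< "$dir"/from-child

  printf 'hello\n' >&"$to_child"
  IFS='' read -r reply <&"$from_child"
  printf '%s\n' "$reply"    # child: hello

  # End of script: close the automatic fds first, then delete the
  # FIFOs and anything else shared.
  exec {to_child}>&- {from_child}<&-
  wait
  rm -r -- "$dir"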

I don't anticipate coming up any time soon with a real-world test
case that closes the parent shell's file descriptors to a coprocess
while a pipeline spawned from that shell is still running. That's a
really specific thing.
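
Schematically, though, I take it the concern looks something like
this. My own contrived sketch, not the attached script:

  #!/usr/bin/env bash
  coproc CPROC { cat; }
  c_in=${CPROC[1]} c_out=${CPROC[0]}

  # Spawn a pipeline while the coproc fds are open in the parent. If
  # the pipeline's processes inherit copies of $c_in, the coproc
  # can't see EOF below until the whole pipeline exits.
  { sleep 5; printf 'pipeline done\n'; } | cat &

  # Close the parent's ends while the pipeline is still running.
  exec {c_in}>&- {c_out}<&-

  # With no stray copies of the write end, the coproc gets EOF on its
  # stdin and exits right away; otherwise, this waits ~5 seconds.
  wait "$CPROC_PID"
  echo 'coproc exited'
  wait    # reap the pipeline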

I just wrote the attached, which is kind of nonsense. If that is what
you're talking about, then at least I understand what you mean now.

Zack

Attachment: dogs-cats.txt
Description: Text document

Attachment: pipeline-test
Description: Binary data

