Re: Examples of concurrent coproc usage?


From: Chet Ramey
Subject: Re: Examples of concurrent coproc usage?
Date: Tue, 16 Apr 2024 11:48:58 -0400
User-agent: Mozilla Thunderbird

On 4/12/24 12:49 PM, Carl Edquist wrote:

Where with a coproc

     coproc X { potentially short lived command with output; }
     exec {xr}<&${X[0]} {xw}>&${X[1]}

there is technically the possibility that the coproc can finish and be reaped before the exec command gets a chance to run and duplicate the fds.

But, I also get what you said, that your design intent with coprocs was for them to be longer-lived, so immediate termination was not a concern.

The bigger concern was how to synchronize between the processes, but that's
something that the script writer has to do on their own.

Personally I like the idea of 'closing' a coproc explicitly, but if it's a bother to add options to the coproc keyword, then I would say just let the user be responsible for closing the fds.  Once the coproc has terminated _and_ the coproc's fds are closed, then the coproc can be deallocated.

This is not backwards compatible. coprocs may be a little-used feature, but you're adding a burden on the shell programmer that wasn't there previously.

Ok, so, I'm trying to imagine a case where this would cause any problems or extra work for such an existing user.  Maybe you can provide an example from your own uses?  (Where it would cause trouble or require adding code if the coproc deallocation were deferred until the fds are closed explicitly.)

My concern was always coproc fds leaking into other processes, especially
pipelines. If someone has a coproc now and is `messy' about cleaning it up,
I feel like there's the possibility of deadlock. But I don't know how
extensively they're used, or all the use cases, so I'm not sure how likely
it is. I've learned there are users who do things with shell features I
never imagined. (People wanting to use coprocs without the shell as the
arbiter, for instance. :-) )

My first thought is that in the general case, the user doesn't really need to worry much about closing the fds for a terminated coproc anyway, as they will all be closed implicitly when the shell exits (either an interactive session or a script).

Yes.


[This is a common model for using coprocs, by the way, where an auxiliary coprocess is left open for the lifetime of the shell session and never explicitly closed.  When the shell session exits, the fds are closed implicitly by the OS, and the coprocess sees EOF and exits on its own.]

That's one common model, yes. Another is that the shell process explicitly
sends a close or shutdown command to the coproc, so termination is
expected.
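
[For concreteness, a minimal sketch of both models (the names, the bc
helper, and the "quit" protocol are just stand-ins; take the two as
separate sessions, since bash supports one active coproc at a time):]

     # Model 1: an auxiliary coproc left open for the life of the session.
     coproc CALC { bc -l; }
     echo '2^32' >&${CALC[1]}
     read -r result <&${CALC[0]}
     # ... no explicit close: when the shell exits, the OS closes the pipe
     # fds, bc sees EOF on its stdin, and exits on its own.

     # Model 2: the shell explicitly tells the coproc to shut down.
     coproc SRV {
         while read -r cmd; do
             [[ $cmd == quit ]] && exit
             handle "$cmd"        # placeholder for the real work
         done
     }
     echo quit >&${SRV[1]}        # termination is expected here
     wait $SRV_PID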

If a user expects the coproc variable to go away automatically, that user won't be accessing a still-open fd from that variable for anything.

I'm more concerned about a pipe with unread data that would potentially
cause problems. I suppose we just need more testing.


As for the forgotten-about half-closed pipe fds to the reaped coproc, I don't see how they could lead to deadlock, nor do I see how a shell programmer expecting the existing behavior would even attempt to access them at all, apart from programming error.

Probably not.


The only potential issue I can imagine is if a script (or a user at an interactive prompt) would start _so_ many of these longer-lived coprocs (more than 500??), one at a time in succession, in a single shell session, that all the available fds would be exhausted.  (That is, if the shell is not closing them automatically upon coproc termination.)  Is that the backwards compatibility concern?

That's more of a "my arm hurts when I do this" situation. If a script
opened 500 fds using exec redirection, resource exhaustion would be their
own responsibility.


Meanwhile, the bash man page does not specify the shell's behavior for when a coproc terminates, so you might say there's room for interpretation and the new deferring behavior would not break any promises.

I could always enable it in the devel branch and see what happens with the
folks who use that. It would be three years after any release when distros
would put it into production anyway.


And as it strikes me anyway, the real "burden" on the programmer with the existing behavior is having to make a copy of the coproc fds every time

     coproc X { cmd; }
     exec {xr}<&${X[0]} {xw}>&${X[1]}

and use the copies instead of the originals in order to reliably read the final output from the coproc.

Maybe, though it's easy enough to wrap that in a shell function.
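
[Something along these lines, presumably -- a sketch only, with made-up
names.  It starts the coproc and immediately copies its fds into
caller-named variables, so the copies outlive the reap:]

     coproc_open() {
         # usage: coproc_open rvar wvar command [args...]
         local _fdr _fdw
         coproc _CP { "${@:3}"; }
         exec {_fdr}<&${_CP[0]} {_fdw}>&${_CP[1]}
         printf -v "$1" %s "$_fdr"    # hand the copies back to the caller
         printf -v "$2" %s "$_fdw"
     }

     coproc_open r w  cat     # cat as a trivial stand-in for a real filter
     echo hello >&$w
     read -r line <&$r        # line='hello'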


First, just to be clear, the fds to/from the coproc pipes are not invalid when the coproc terminates (you can still read from them); they are only invalid after they are closed.

That's only sort of true; writing to a pipe for which there is no reader generates SIGPIPE, which is a fatal signal.

Eh, when I talk about an fd being "invalid" here I mean "fd is not a valid file descriptor" (to use the language for EBADF from the man page for various system calls like read(2), write(2), close(2)).  That's why I say the fds only become invalid after they are closed.

And of course the primary use I care about is reading the final output from a completed coproc.  (Which is generally after explicitly closing the write end.)  The shell's read fd is still open, and can be read - it'll either return data, or return EOF, but that's not an error and not invalid.
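
[Roughly this shape, presumably (names made up; the eval is one way to
close the shell's write end to the coproc):]

     coproc COUNTER { wc -c; }
     exec {cr}<&${COUNTER[0]}          # copy of the read end

     printf 'hello world\n' >&${COUNTER[1]}
     eval "exec ${COUNTER[1]}>&-"      # close the write end: wc sees EOF,
                                       # prints its count, and exits
     read -r count <&$cr               # the final output is still there to
     exec {cr}<&-                      # read via the copy, even after the
                                       # shell reaps COUNTER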

But since you mention it, writing to a broken pipe is still semantically meaningful also.  (I would even say valid.)  In the typical case it's expected behavior for a process to be killed when it attempts this, and shell pipeline programming is designed with this in mind.

You'd be surprised at how often I get requests to put in an internal
SIGPIPE handler to avoid problems/shell termination with builtins writing
to closed pipes.
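
[The sort of thing that prompts those requests, sketched (in a script,
where SIGPIPE is fatal by default; names made up):]

     coproc FILTER { head -n1; }
     exec {fr}<&${FILTER[0]} {fw}>&${FILTER[1]}

     echo first  >&$fw    # head prints this and exits
     read -r out <&$fr
     sleep 1              # give head time to go away
     echo second >&$fw    # a builtin writing to a pipe with no reader: the
                          # script dies of SIGPIPE unless it opted out with
                          # something like  trap '' PIPE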


So even for write attempts, you introduce uncertain behavior by automatically closing the fds, when the normal, predictable, valid thing would be to die by SIGPIPE.

Again, you might be surprised at how many people view that as a bug in
the shell.


If the coproc terminates, the file descriptor to write to it becomes invalid because it's implicitly closed.

Yes, but the distinction I was making is that they do not become invalid when or because the coproc terminates, they become invalid when and because the shell closes them.  (I'm saying that if the shell did not close them automatically, they would remain valid.)


 The surprising bit is when they become invalid unexpectedly (from the
 point of view of the user) because the shell closes them
 automatically, at the somewhat arbitrary timing when the coproc is
 reaped.

No real difference from procsubs.

I think I disagree?  The difference is that the replacement string for a procsub (/dev/fd/N or a fifo path) remains valid for the command in question.  (Right?)

Using your definition of valid, I believe so, yes.

Avoiding SIGPIPE depends on how the OS handles opens on /dev/fd/N: an
internal dup or a handle to the same fd. In the latter case, I think the
file descriptor obtained when opening /dev/fd/N would become `invalid'
at the same time the process terminates.

I think we're talking about our different interpretations of `invalid'
(EBADF as opposed to EPIPE/SIGPIPE).


So the command in question can count on that path being valid.  And if a procsub is used in an exec redirection, in order to extend its use for future commands (and the redirection is guaranteed to work, since it is guaranteed to be valid for that exec command), then the newly opened pipe fd will not be subject to automatic closing either.

Correct.
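
[That case, for reference (fd name made up):]

     exec {gen}< <(seq 1 1000000)   # procsub opened via an exec redirection

     read -r first  <&$gen          # the fd stays usable for later
     read -r second <&$gen          # commands ...
     exec {gen}<&-                  # ... until the user closes it explicitly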


As far as I can tell there is no arbitrary timing for when the shell closes the fds for procsubs: it closes them when the command in question completes, and that's the end of the story.  (There's no waiting for the background procsub process to complete.)

Right. There are reasonably well-defined rules for when redirections associated with commands are disposed, and exec redirections to procsubs
just follow from those. The shell closes file descriptors (and
potentially unlinks the FIFO) when it reaps the process substitution, but
it takes some care not to do that prematurely, and the user isn't using
those fds.



Second, why is it a problem if the variables keep their (invalid) fds after closing them, if the user is the one that closed them anyway?

Isn't this how it works with the auto-assigned fd redirections?

Those are different file descriptors.


      $ exec {d}<.
      $ echo $d
      10
      $ exec {d}<&-
      $ echo $d
      10

The shell doesn't try to manage that object in the same way it does a
coproc. The user has explicitly indicated they want to manage it.

Ok - your intention makes sense then.  My reasoning was that auto-allocated redirection fds ( {x}>file or {x}>&$N ) are a way of asking the shell to automatically place fds in a variable for you to manage - and I imagined 'coproc X {...}' the same way.

The philosophy is the same as if you picked the file descriptor number
yourself and assigned it to the variable -- the shell just does some of
the bookkeeping for you so you don't have to worry about the file
descriptor resource limit. You still have to manage file descriptor $x the
same way you would if you had picked file descriptor 15 (for example).
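
[i.e., roughly:]

      $ exec 15<.          # you pick the number and manage it
      $ exec {d}<.         # the shell picks a free number for you (10 above)
      $ exec 15<&- {d}<&-  # either way, closing it is your job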


But there is a window there where a short-lived coprocess could be reaped before you dup the file descriptors. Since the original intent of the feature was that coprocs were a way to communicate with long-lived processes -- something more persistent than a process substitution -- it was not really a concern at the time.

Makes sense.  For me, working with coprocesses is largely a more flexible way of setting up interesting pipelines - which is where the shell excels.

Once a 'pipework' is set up (I'm making up this word now to distinguish from a simple pipeline), the shell does not have to be in the middle shoveling data around - the external commands can do that on their own.
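
[One sketch of the kind of "pipework" meant here (names made up): a
background producer feeds a coproc directly, and the shell only sets up
the plumbing.  Note that the shell's original fds have to be closed, or
sort never sees EOF -- which is exactly the sort of bookkeeping under
discussion:]

     coproc SORTER { sort -n; }
     exec {sr}<&${SORTER[0]} {sw}>&${SORTER[1]}    # make copies ...
     eval "exec ${SORTER[0]}<&- ${SORTER[1]}>&-"   # ... and drop the originals

     seq 10 -1 1 >&$sw &    # the producer writes into the coproc on its own
     exec {sw}>&-           # close the shell's write copy; sort gets EOF
                            # once the background seq finishes

     # ... the shell is free to do other things here; none of the data
     # passes through it ...

     cat <&$sr              # collect the sorted result when it's ready
     exec {sr}<&-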

My original intention for the coprocs (and Korn's from whence they came)
was that the shell would be in the middle -- it's another way for the shell
to do IPC.

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet@case.edu    http://tiswww.cwru.edu/~chet/



