bug-bash
From: Stephane Chazelas
Subject: Re: lost output from asynchronous lists
Date: Tue, 28 Oct 2008 10:26:18 +0000
User-agent: Mutt/1.5.16 (2007-09-19)

On Mon, Oct 27, 2008 at 11:12:24PM +0100, Ralf Wildenhues wrote:
[...]
> --- foo.sh ---
> #! /bin/sh
> 
> do_work ()
> {
>   sleep 1
>   echo "work $i is done"
> }
> 
> for i in 1 2 3 4 5 6 7 8 9 10
> do
>   (
>     do_work $i
>   ) &
> done
> wait
> 
> --- bar.sh ---
> #! /bin/sh
> 
> ./foo.sh > stdout 2>stderr
> echo stdout:; cat stdout
> test `grep -c done stdout` -eq 10 || { echo "FAILED"; exit 1; }
> 
> ---
> 
> Run
>   while ./bar.sh; do :; done
[...]

I have to admit I would have thought the code above was safe as
well, and I wonder whether the behavior is the same on all
systems; I can reproduce the problem on Linux, at least. As far
as I can tell, if the file is not opened with O_APPEND, the
system does not guarantee that write(2) is atomic, so I suppose
you can get this kind of behavior if a context switch occurs in
the middle of a write(2) system call. That would have nothing to
do with the shell.
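A minimal sketch of the mechanism (the file name demo.txt is made
up here): two separate open()s of the same file each get their own
offset starting at 0, so without O_APPEND the second write lands
on top of the first, while >> (O_APPEND) positions every write at
end-of-file atomically.

```shell
# Without O_APPEND: <> opens the file read-write without
# truncating, so the second writer's offset is 0 again and
# its write clobbers the first writer's data.
printf 'AAAA\n' > demo.txt     # first writer: 5 bytes at offset 0
printf 'BB\n' 1<> demo.txt     # second writer: also starts at offset 0
cat demo.txt                   # shows "BB", then the leftover "A"

# With >> the file is opened with O_APPEND, so each write(2)
# lands at end-of-file atomically and nothing is overwritten.
printf 'AAAA\n' > demo.txt
printf 'BB\n' >> demo.txt
cat demo.txt                   # shows "AAAA", then "BB"
```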

Replacing ./foo.sh > stdout 2>stderr with

: > stdout > stderr
./foo.sh >> stdout 2>> stderr

(the first line opens both files with O_TRUNC, so both are
emptied even though only the last redirection receives the output
of :)

should be guaranteed to work.

I think

{ ./foo.sh | cat > stdout; } 2>&1 | cat > stderr

should be OK as well, as write(2)s to a pipe are guaranteed by
POSIX to be atomic as long as they are no larger than PIPE_BUF
bytes (4096, one page, on Linux). And even if they were not
atomic, I would still consider it a bug if one process's output
to a pipe overwrote another's.
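The limit can be checked on a given system with getconf; POSIX
requires PIPE_BUF to be at least 512 bytes:

```shell
# Query PIPE_BUF for pipes on the root filesystem (the path
# argument is required by getconf for pathname-relative limits).
getconf PIPE_BUF /
```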

-- 
Stéphane
