
Re: bash uses tmp files for inter-process communication instead of pipes

From: Linda Walsh
Subject: Re: bash uses tmp files for inter-process communication instead of pipes?
Date: Mon, 06 Oct 2014 12:38:21 -0700
User-agent: Thunderbird

Greg Wooledge wrote:
> On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
> >    done <<<"$(get_net_IFnames_hwaddrs)"
> >
> > Where am I using a HERE doc?
>
> <<< and << both create temporary files.

According to Chet, the only way to do a multi-var assignment in bash is

read a b c d  <<<$(echo address@hidden)
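A concrete sketch of that multi-variable assignment (the variable names and the echo feeding it are illustrative):

```shell
#!/usr/bin/env bash
# Assign four whitespace-separated fields to four variables in one step.
# read runs in the current shell here, so the assignments persist --
# unlike "cmd | read a b c d", where read would run in a subshell.
read -r a b c d <<< "$(echo one two three four)"
echo "$a,$b,$c,$d"   # -> one,two,three,four
```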

Forcing a simple assignment into using a tmp file seems Machiavellian --
as it does exactly the thing the user is trying to avoid through
unexpected means.

The point of grouping assignments is to save space (in the code) and have
the group initialized at the same time -- and more quickly than using
separate assignments.

So why would someone use a tmp file to do an assignment?

Even the gcc toolchain is able to use "-pipe" to send the results of one
compiler stage to the next without using a tmp file.

That's been around for at least 10 years.

So why would a temp file be used?

Creating a tmp file to do an assignment is, I assert, a bug.

It is entirely counter-intuitive that such an assignment wouldn't use the
same mechanism as left-to-right ordered pipes.


cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
probably 20 years or more (I think COMMAND.COM on DOS was the last shell I
knew that used tmp files...)
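On Linux one can see the difference directly by reading the symlink behind stdin; this is a hedged sketch, since /proc/self/fd is Linux-specific and bash 5.1+ switched small here-documents and here-strings to anonymous pipes:

```shell
#!/usr/bin/env bash
# Show what kind of object feeds stdin under each redirection style.
# A pipeline reports pipe:[N]; in the bash versions under discussion a
# here-string reports a deleted /tmp file instead (bash 5.1+ reports a
# pipe for small here-docs as well).
echo -n "pipeline:    "
echo hi | readlink /proc/self/fd/0
echo -n "here-string: "
readlink /proc/self/fd/0 <<< hi
echo -n "proc-subst:  "
readlink /proc/self/fd/0 < <(echo hi)
```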

So why wouldn't "cmd2 < <(cmd1 [|])" use the same paradigm?  Worse is

cmd1 >& MEMVAR   -- output is already in memory...

So why would read a b c <<<${MEMVAR} need a tmp file if the text to be
read is already in memory?
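One way to keep such a read off the filesystem entirely, as a sketch, is to feed it through process substitution, which bash backs with a /dev/fd pipe (or FIFO) rather than a temp file:

```shell
#!/usr/bin/env bash
# Read the fields through a pipe instead of a here-string:
# <(cmd) is a /dev/fd pipe (or FIFO), so no temporary file is created,
# and read still runs in the current shell, keeping the assignments.
read -r a b c < <(echo 10 20 30)
echo "$((a + b + c))"   # -> 60
```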
