bug-bash

Re: "here strings" and tmpfiles


From: L A Walsh
Subject: Re: "here strings" and tmpfiles
Date: Mon, 08 Apr 2019 17:04:41 -0700
User-agent: Thunderbird


On 4/8/2019 7:10 AM, Chet Ramey wrote:
> On 4/7/19 4:21 PM, L A Walsh wrote:
>   
>> On 3/22/2019 6:49 AM, Chet Ramey wrote:
>>     
>>> Yes, that's how bash chooses to implement it. There are a few portable
>>> ways
>>> to turn a string into a file descriptor, and a temp file is one of them (a
>>> child process using a pipe is another, but pipes have other issues).
>>>   
>>>       
>> Such as?  That are more common than having no writeable tmp?
>>     
>
> Pipes are objectively not the same as files. They
>
> 1. Do not have file semantics. For instance, they are not seekable.
>   
In the case of an object that is only meant to be read from,
I would argue that's fine.  Optionally, I would accept an
implementation that supports forward seeking as an equivalent
to having read the bytes.
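
For what it's worth, on Linux the difference is visible from the
shell (assuming a released bash that backs here-strings with a
temp file):

    ls -lL /dev/fd/0 <<< "hello"        # a (deleted) regular file: seekable
    ls -lL /dev/fd/0 < <(echo hello)    # a pipe: not seekable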
> 2. Have limited capacity. Writers will sleep when the pipe becomes full.
>   
So does a read-only disk, except that the writer's error isn't
flagged to the reader the way a broken pipe would be.  Instead,
execution proceeds as though nothing had happened -- and if stderr
is mixed in with hundreds of other startup lines, "nothing happened"
may be all the user sees, with no way to know that something didn't
get initialized or brought up properly. 
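
A quick way to see both behaviors at once -- the limited capacity and
the broken-pipe notification -- assuming Linux's default ~64K pipe
buffer:

    # 'yes' fills the pipe buffer almost instantly, then sleeps in write()
    # because 'sleep' never reads.  When 'sleep' exits, 'yes' gets SIGPIPE,
    # which is how the writer learns the reader is gone.
    yes | sleep 3
    echo "${PIPESTATUS[@]}"    # typically "141 0" (141 = 128 + SIGPIPE)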
> 3. Have ordering constraints: you can't write a pipe with no
> reader, for instance.
>
> These, unlike a "no writeable tmp," have been around for as 
> long as pipes have existed in Unix.
>   
The fact that the pipe enforces execution sequencing is often
a bonus, since writing to a read-only tmp or reading from a
non-existent file should be regarded as writing to a pipe with no
listeners (because no one will ever be able to read from that
'tmp' file since it doesn't exist).

Using a file doesn't sequence -- the writer can continue execution
past the point where bash may have flagged an internal error for a
non-existent tmp file (no writable media), and the reader won't
learn that the "pipe" (file) had no successful writer; it will
instead get an EOF indication and continue, not knowing that a
fatal error had just occurred.
> There is a middle ground, which is to use pipes for here 
> documents that are shorter than the pipe capacity, but fall 
> back to temp files for others, which doesn't require a child
> process. I implemented that in the devel version.
>   
I can't say that's wrong, though I would _like_ the pipe to try
expanding its buffer via memory allocation, which no pipe
implementation I'm aware of does.  However, that would be code in
the pipe implementation, or in an I/O library layered on top of
some stdio implementation.

W/pipes, there is the race condition of the reader not being able
to read when the writer has already gone away.  To avoid that, I've
had the parent send some message (signal, semaphore, etc.) to the
child to indicate the parent has finished reading what the child
has written.  If the child's last write included an "EOF", then the
parent's message causes the child to close the pipe and exit.
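
A minimal sketch of that handshake (the fifo, the function name, and
the choice of SIGUSR1 are just illustrative, not anything bash
supplies):

    fifo=$(mktemp -u) && mkfifo "$fifo"

    child() {
        trap 'rm -f "$fifo"; exit 0' USR1   # parent's "done reading" ack
        printf '%s\n' one two three > "$fifo"
        while :; do sleep 1; done           # linger until acknowledged
    }
    child &  cpid=$!

    readarray -t lines < "$fifo"            # parent reads everything written
    kill -USR1 "$cpid"                      # tell the child it may exit
    wait "$cpid"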
>   
>> Then came along a way to do a process in background and end up
>> with being able to read & process its data in the main (foreground) 
>> process w/this syntax:
>>
>> readarray -t foregnd < <(echo  $'one\ntwo\nthree')
>>
>> Which I envisioned as implemented something like (C-ish example
>>     
>
> I don't think you've ever really understood that these are two
> separate constructs: process substitution, which turns a 
> process into a filename you can write to and read from for
> various purposes, and input redirection.
>   
"Various purposes"...  Ok, so how do I give that file name
to 'cp' in the next line and copy it somewhere?

It's not really a filename, is it?  It's a file descriptor --
a handle -- just like a pipe is a handle, but with no name
associated with it.  It doesn't have 'name' semantics where the
'name' is associated with a data stream that can be read later.
They are different types of objects.

A name-object doesn't have the data in it, but can be passed
around, 'dataless', with its data stored elsewhere.  An open
call can connect a program with the data stored for a given name.
Whereas what "< <()" creates is a file descriptor to be READ from.
The parent can't write to it with useful effect.  What's in the
parens needs to generate some output, and that output is read by
the parent, which is what it's used for.
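
For instance, on a Linux box with /dev/fd (bash falls back to named
fifos on systems without it), the "name" substituted is just a path
to an anonymous pipe:

    echo <(true)       # prints something like /dev/fd/63
    ls -lL <(true)     # the target is a pipe, not a regular file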

When I use '< <()', I've never wanted a filename.  I've wanted:

readarray -t dlines < <(ls /tmp | ...)

So that 'dlines' ends up in the parent when done.
I realize that 'lastpipe' was added at some point, which, used with
the right settings, lets the last element of a pipeline run in the
parent.  But changing which side of a pipe persists afterward,
versus using the above -- a one-time read that ensures the output
persists in the parent -- makes me more nervous than the one-time
usage.
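
For reference, the 'lastpipe' arrangement looks something like this
(it is only honored when job control is off, e.g. in a script or
after 'set +m'):

    shopt -s lastpipe
    ls /tmp | readarray -t dlines    # readarray runs in the parent shell
    echo "${#dlines[@]} entries"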

>> So I didn't realize instead of doing it simply using
>> native pipes like above, it was implemented some other way.
>>     
> And that's probably why.
>   
---
    Not exactly; I thought of it as a way to get the pipe
read from the parent, so that the process receiving the pipe's
output persists.
>   
>> didn't understand the complexity of the need
>> for < <( to need a named pipe or fifo)....
>>     
>
> That, too.
>   
    Not sure that it does, nor why it can't use a plain pipe.


================


The fact is, whether you write to a file or to an OS pipe, the
capacity is "implementation dependent" either way.
There is always some group of people who want /tmp to be of
type tmpfs (or memfs).  That is effectively creating a pipe as
large as memory.  Going to disk creates a pipe as large as the
free space on the '/tmp' partition.

On *my* system, /tmp is on a partition of size 7.8G (w/4.7G free).
Running 'df' on tmpfs gives me 79G.

If bash uses /tmp, it can have a pipe of size 4.7G.  If it uses
memory, it would have a pipe of 79G.  If it uses an OS pipe...
that's OS dependent, no?  If the OS transparently used memory to
add dynamic space to a pipe, it would also get 79G, or at least
some value like /proc/sys/fs/pipe-max-size.
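
Those numbers are easy to check on a given Linux box (the paths are
Linux-specific; a program can grow an individual pipe up to the
pipe-max-size limit via fcntl F_SETPIPE_SZ):

    cat /proc/sys/fs/pipe-max-size   # largest size one pipe may be grown to
    df -h /tmp                       # capacity of a disk-backed /tmp
    df -h /dev/shm                   # a tmpfs mount, limited by RAM/swap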





-l



