From: Bob Proulx
Subject: Re: bash sockets: printf \x0a does TCP fragmentation
Date: Sun, 23 Sep 2018 12:29:14 -0600
User-agent: Mutt/1.10.1 (2018-07-13)

Robert Elz wrote:
> Bob Proulx wrote:
>   | Using the same buffer size
>   | for input and output is usually most efficient.
> 
> Yes, but as the objective seemed to be to make big packets, that is probably
> not as important.

The original complaint concerned line buffering flushing the data blob
at every newline (0x0a) character, write(2)'ing the buffer accumulated
up to that point.  As I am sure you already know, each of those writes
causes the kernel's network stack to emit whatever has been buffered
so far, which was apparently a smallish amount of data.  So instead of
some number of full MTU-sized packets there were many more smaller
ones.  The issue was never really big packets, nor fragmentation, but
streaming efficiency and performance.  The behavior was correct, just
with more buffer flushes than desired, and therefore less efficient
than they wanted, which is what they were complaining about.  They
wanted the data blob buffered as much as possible so as to use the
fewest TCP packets.  I chose a large one meg buffer size so as to be
larger than any network MTU; my intention was that the network stack
would then split the data blob into MTU-sized pieces for transmission.
The largest MTU I routinely see is 64k.  I expect that to increase
further in the future, when 1 meg might not be big enough.  And I
avoid mentioning jumbo frames.
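To make that concrete, here is a minimal sketch of the re-blocking
idea applied to the original bash socket scenario.  The host, port,
and the line-oriented producer are illustrative stand-ins, not
anything from the original report:

  # Hypothetical producer emitting many newline-terminated records.
  # Without dd, each line-buffered write(2) can become its own small
  # TCP segment.  Re-blocking through dd obs=1M hands the kernel a
  # few large writes and lets it cut MTU-sized segments itself.
  exec 3<> /dev/tcp/localhost/12345
  produce_lines | dd status=none obs=1M >&3
  exec 3>&-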

>   |   $ printf -- "%s\n" one two | strace -o /tmp/out -e write,read dd status=none obs=1M ; cat /tmp/out
>   |   one
>   |   two
>   |   ...
>   |   read(0, "one\ntwo\n", 512)              = 8
> 
> What is relevant there is that you're getting both lines from the printf in 
> one read.  If that had happened, there would be no need for any rebuffering.
> The point of the original complaint was that that was not happening, and
> the reads were being broken at the \n ... here it might easily make a 
> difference whether the output is a pipe or a socket (I have no idea.)

I dug into this further and see that we were both right. :-)

I was being misdirected by the Linux kernel's pipe buffering, which
made it look as though the write pattern did not matter.  Digging
deeper, I see that it is a timing race and can go either way.  That
was obviously a mistake on my part.

You are right that, depending upon timing, this must be handled
properly or it might fail.  I was wrong that it would always work
regardless of timing.  However, it was working in my test case, which
is why I had not noticed.  Thank you for pushing me to see the
problem here.

>   | It can then use the same buffer that data was read into for the output
>   | buffer directly.
> 
> No, it can't, that's what bs= does - you're right, that is most efficient,
> but there is no rebuffering, whatever is read, is written, and in that case
> even more efficient is not to interpose dd at all.  The whole point was
> to get the rebuffering.
> 
> Try tests more like
> 
>       { printf %s\\n aaa; sleep 1; printf %s\\n bbb ; } | dd ....
> 
> so there will be clearly 2 different writes, and small reads for dd
> (however big the input buffer is) - with obs= (something big enough)
> there will be just 1 write, with bs= (anything big enough for the whole
> output) there will still be two writes.

  $ { command printf "one\n"; command printf "two\n" ;} | strace -v -o /tmp/dd.strace.out -e write,read dd status=none bs=1M ; head /tmp/*.strace.out
  one
  two
  ...
  read(0, "one\ntwo\n", 1048576)          = 8
  write(1, "one\ntwo\n", 8)               = 8
  read(0, "", 1048576)                    = 0
  +++ exited with 0 +++

Above, the data is definitely written by two different processes, but
due to Linux kernel pipe buffering it arrives in one read.  Both
writes land in the pipe so quickly, before the next stage of the
pipeline reads, that the eventual read collects them as one data
block.  This is what I had been seeing, but you are right that it is
a timing-related success and could just as easily be a timing-related
failure.

  $ { command printf "one\n"; sleep 1; command printf "two\n" ;} | strace -v -o /tmp/dd.strace.out -e write,read dd status=none bs=1M ; head /tmp/*.strace.out
  one
  two
  ...
  read(0, "one\n", 1048576)               = 4
  write(1, "one\n", 4)                    = 4
  read(0, "two\n", 1048576)               = 4
  write(1, "two\n", 4)                    = 4
  read(0, "", 1048576)                    = 0
  +++ exited with 0 +++

The above illustrates the point you were trying to make.  Thank you
for persevering in educating me as to the issue. :-)

  $ { command printf "one\n"; sleep 1; command printf "two\n" ;} | { sleep 2; strace -v -o /tmp/dd.strace.out -e write,read dd status=none bs=1M ; head /tmp/*.strace.out ;}
  one
  two
  ...
  read(0, "one\ntwo\n", 1048576)          = 8
  write(1, "one\ntwo\n", 8)               = 8
  read(0, "", 1048576)                    = 0
  +++ exited with 0 +++

The above is just me showing that it is definitely a race that can go
either way.  Race conditions are timing bugs and should never be
counted on to work one way or the other; I am only showing why I got
sucked into it.  :-(

  $ { command printf "one\n"; sleep 1; command printf "two\n" ;} | strace -v -o /tmp/dd.strace.out -e write,read dd status=none obs=1M ; head /tmp/*.strace.out
  one
  two
  ...
  read(0, "one\n", 512)                   = 4
  read(0, "two\n", 512)                   = 4
  read(0, "", 512)                        = 0
  write(1, "one\ntwo\n", 8)               = 8
  +++ exited with 0 +++

And the above, using a large output block size as you suggested,
shows the solution: dd re-blocks the output into a single write.

  $ { command printf "one\n"; sleep 1; command printf "two\n" ;} | strace -v -o /tmp/dd.strace.out -e write,read dd status=none ibs=1M obs=1M ; head /tmp/*.strace.out
  one
  two
  ...
  read(0, "one\n", 1048576)               = 4
  read(0, "two\n", 1048576)               = 4
  read(0, "", 1048576)                    = 0
  write(1, "one\ntwo\n", 8)               = 8
  +++ exited with 0 +++

And just for completeness, the above shows the same run with a large
input block size and a large output block size together; the result
is the same single write.  The required dd option, as you correctly
insisted, really is obs=, setting the output block size.  I stand
corrected. :-)

I had missed the documented dd behavior:

  ‘bs=BYTES’
     Set both input and output block sizes to BYTES.  This makes ‘dd’
     read and write BYTES per block, overriding any ‘ibs’ and ‘obs’
     settings.  In addition, if no data-transforming ‘conv’ option is
     specified, input is copied to the output as soon as it’s read, even
     if it is smaller than the block size.
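Distilling the transcripts above into a hedged rule of thumb, where
the slow line-oriented writer is an illustrative stand-in:

  # obs= re-blocks: dd keeps reading until it fills an output block
  # (flushing any partial block at EOF), so the two slow 4-byte
  # writes leave as one 8-byte write.
  slow_line_writer | dd status=none obs=1M

  # bs= copies through: each read is written as soon as it arrives,
  # so the two small reads stay two small writes.
  slow_line_writer | dd status=none bs=1M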

It is always good to learn something new about fundamental behavior in
a command one has been using for some decades! :-)

> ps: this is not really the correct place to discuss dd.

The help-bash list would generally be better for random shell
questions, but the discussion started here in this bug thread and
this part of it is topical to the solution.  This is the right place
for it.

Bob


