Re: ssh and scp get stuck after some amount of data

From: Thomas Schwinge
Subject: Re: ssh and scp get stuck after some amount of data
Date: Thu, 24 Oct 2019 13:05:56 +0200
User-agent: Notmuch/0.29.1+93~g67ed7df (Emacs/26.1 (x86_64-pc-linux-gnu))


Ha, I remembered right that I had seen such a problem reported before...

On 2018-01-15T12:31:04+0100, Bruno Haible <> wrote:
> I tried:
>> # tar cf - directory | ssh bruno@ tar xf -
>> It hangs after transferring 1.6 GB. I.e. no more data arrives within 15 
>> minutes.

You were lucky: for me it stopped much earlier (a few dozen MiB) -- the
file I'm trying to transfer is just 460 MiB in size.  ;-)

> Found a workaround: Throttling of the bandwidth.
> - Throttling at the network adapter level [1] is not applicable to Hurd.
> - The 'throttle' program [2] is no longer available.
> - But a replacement program [3] is available.
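
For illustration of what such a pipe throttler does, here is a minimal
sketch (my own, not the replacement program linked above; the function
name, chunk size, and default rate are assumptions):

```python
import time

def throttle(reader, writer, rate, chunk_size=65536):
    """Copy reader to writer, capping average throughput at `rate` bytes/sec.

    A sketch in the spirit of the old 'throttle' tool.  To throttle a
    shell pipeline, one would wire it to the standard streams, e.g.:
        throttle(sys.stdin.buffer, sys.stdout.buffer, 1024 * 1024)
    """
    start = time.monotonic()
    sent = 0
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        writer.write(chunk)
        sent += len(chunk)
        # If we are ahead of the target rate, sleep until the average
        # throughput since `start` drops back to `rate`.
        due = start + sent / rate
        delay = due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
```

Such averaging over the whole transfer is the simplest scheme; real tools
like 'pv' use finer-grained token buckets, but the effect on a long pipe
is the same.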

Or use the '--bwlimit' functionality of 'rsync', or the '--rate-limit'
functionality of 'pv', which are often already packaged and readily
available.
> The command that worked for me (it limits the bandwidth to 1 MB/sec):
> # tar cf - directory | ~/ --bandwidth 1024576 | ssh bruno@ 
> tar xf -

I thus tried:

    $ pv --rate-limit 1M [file] | ssh [...] 'cat > [file]'

..., which crashed after 368 MiB.  After rebooting, a 'rsync -Pa
--inplace --bwlimit=500K' was then able to complete the transfer; the two
files' checksums do match.

> But really, this is only a workaround. It smells like a bug in ssh or the 
> Hurd.

As all networking seems to go down, maybe it's that the GNU Hurd
networking stack ('pfinet', 'netdde') gets "overwhelmed" by that much
data?

(That was on a Debian GNU/Hurd installation that's more than a year
out of date, so there's a -- slight? ;-D -- chance that this has been
fixed already.)

