
Inverse multiplexing to speed up file transfer


I have to send a large amount of data from one machine to another. If I send it with rsync (or any other method), the transfer runs at a steady 320 KB/sec. If I initiate two or three transfers at once, each runs at 320 KB/sec, and if I start four at once, they max out the link.
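Lacking a purpose-built inverse-multiplexing tool, the same idea can be hand-rolled: split the file into chunks, push each chunk over its own connection, and reassemble on the far side. A minimal sketch, where `bigfile`, `user@host` and `/tmp/parts` are placeholder names and the network step is shown commented out:

```shell
# Hand-rolled inverse multiplexing: split, transfer chunks in parallel, reassemble.
head -c 1000000 /dev/urandom > bigfile   # throwaway stand-in for the real payload

N=4
split -n "$N" -d bigfile part.           # cut bigfile into N equal chunks (GNU split)

# One scp stream per chunk, all at once -- this is the step that fills the link:
#   for p in part.*; do scp "$p" user@host:/tmp/parts/ & done
#   wait
# On the receiver: cat /tmp/parts/part.* > bigfile

cat part.* > rebuilt                     # local stand-in for the receiver's reassembly
cmp bigfile rebuilt && echo "chunks reassemble byte-for-byte"
```

Because the chunks carry numeric suffixes (`-d`), a plain `cat part.*` glues them back in order.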

Asked by: Guest | Views: 27
Total answers/comments: 4
Guest [Entry]

"If you have a few large files, use lftp -e 'mirror --parallel=2 --use-pget-n=10 <remote_dir> <local_dir>' <ftp_server>: you'll download 2 files at a time, each split into 10 segments, for a total of 20 FTP connections to <ftp_server>.

If you have a large number of small files, use lftp -e 'mirror --parallel=100 <remote_dir> <local_dir>' <ftp_server>: you'll download 100 files in parallel without segmentation. A total of 100 connections will be open. This may exhaust the available client slots on the server, or get you banned from some servers.

You can use --continue to resume the job :) and the -R option to upload instead of download (switching the argument order to <local_dir> <remote_dir>)."
Guest [Entry]

"If you can set up passwordless ssh login, this will run up to 4 concurrent scp processes (-P 4), each handling 4 files (-n 4):

find . -type f | xargs -n 4 -P 4 /tmp/scp.sh user@host:path

File /tmp/scp.sh:

#!/bin/bash

# Display the help page
function showHelp() {
    echo "Usage: $0 <destination> <file1 [file2 ...]>"
}

# No arguments?
if [ -z "$1" ] || [ -z "$2" ]; then
    showHelp
    exit 1
fi

# Display help?
if [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
    showHelp
    exit 0
fi

# Programs and options
SCP=scp
DESTINATION="$1"
shift

# Check other parameters
if [ -z "$DESTINATION" ]; then
    showHelp
    exit 1
fi

echo "$@"

# Run scp in the background with the remaining parameters.
$SCP "$@" "$DESTINATION" &"
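To sanity-check the xargs fan-out without touching the network, echo can stand in for the scp wrapper; files.txt and batches.txt below are throwaway names:

```shell
# Dry run of the xargs batching: echo stands in for /tmp/scp.sh.
printf '%s\n' f1 f2 f3 f4 f5 f6 f7 f8 f9 > files.txt

# -n 4: at most four filenames per invocation; -P 4: up to four invocations in parallel.
xargs -n 4 -P 4 echo user@host:path < files.txt > batches.txt

wc -l < batches.txt   # 9 files in groups of 4 -> 3 invocations
```

Each output line shows one invocation's argument list, e.g. "user@host:path f1 f2 f3 f4", which is exactly what /tmp/scp.sh would receive.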
Guest [Entry]

Try sorting all the files by inode (find /mydir -type f -print | xargs ls -i | sort -n) and transferring them with, for example, cpio over ssh. This will max out your disk and make the network your bottleneck; it's hard to go faster than that across a network.
Guest [Entry]

"I know a tool that can transfer files in chunks: rtorrent, available as a package/port on both hosts ;) BitTorrent clients often reserve disk space before the transfer, and chunks are written directly from the sockets to disk. Additionally, you'll be able to review the state of ALL transfers in a nice ncurses screen.

You can create simple bash scripts to automate "*.torrent" file creation and ssh a command to the remote machine so it downloads it. This looks a bit ugly, but I don't think you'll find any simple solution without developing :)"