
I'm trying to upload large files to a remote SFTP server. The regular OpenSSH sftp client averages 4-5 MB/s.

My code is:

    import os
    # SFTP is the result of paramiko.SFTPClient()

    inp = open(fName, "rb")
    bsize = os.stat(inp.fileno()).st_blksize
    out = SFTP.open(os.path.split(fName)[-1], "w", bsize * 4)
    out.set_pipelined()

    while True:
        buf = inp.read(bsize)
        if not buf:
            break
        out.write(buf)

    inp.close()
    out.close()

My loop averages 40-180 KB/s -- even if I artificially raise the bsize. One could blame the fact that Paramiko is a "pure Python" implementation, but the difference should not be this huge...
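Back-of-the-envelope reasoning: without pipelining, each SFTP WRITE request waits for the server's acknowledgement, so a stop-and-wait loop is capped at roughly one chunk per network round trip. The chunk size and RTT below are illustrative assumptions, not measured values:

```python
# Stop-and-wait model: only one chunk is in flight at a time, so the
# link's round-trip time, not its bandwidth, limits throughput.
def stop_and_wait_ceiling(chunk_bytes: int, rtt_seconds: float) -> float:
    """Rough upper bound, in bytes/second, for an unpipelined write loop."""
    return chunk_bytes / rtt_seconds

# Illustrative numbers: 32 KiB requests over a 100 ms round trip.
ceiling = stop_and_wait_ceiling(32 * 1024, 0.100)  # 327680.0 B/s, ~320 KiB/s
```

A ceiling in the hundreds of KB/s for plausible RTTs is at least consistent with the throughput I'm seeing.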

There is no significant CPU load on my machine, which runs FreeBSD 11, Python 3.6, and Paramiko 2.7.1.

What's going on?

Update: adding out.set_pipelined() helps raise the throughput to 1-2 MB/s, but it still lags behind that of the OpenSSH sftp client by a lot...

Update: adding an explicit buffer size to the SFTP.open() call -- as suggested by Martin in a comment -- had no perceptible effect. (I suspect Paramiko already uses some buffering by default.)
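To make the chunk size easy to experiment with, the read/write loop can be factored into a helper that works on any pair of file-like objects. The 256 KiB default here is just a value I picked for testing, not anything Paramiko recommends:

```python
import io

def copy_chunks(src, dst, chunk=256 * 1024):
    """Copy src to dst in fixed-size chunks; return total bytes copied."""
    total = 0
    while True:
        buf = src.read(chunk)
        if not buf:
            break
        dst.write(buf)
        total += len(buf)
    return total

# Local smoke test with in-memory files (no SFTP server needed):
src, dst = io.BytesIO(b"x" * 100_000), io.BytesIO()
copied = copy_chunks(src, dst, chunk=4096)
```

In the real upload this would be called as `copy_chunks(inp, out)`; with `io.BytesIO` objects it can be timed locally to separate Python overhead from network latency.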

Mikhail T.