I've written a script to help others run simple day-to-day commands on our storage system here at work. The script works fine with commands that return a short, simple output (for example, ls); however, when the script runs a command with a large output, nothing is returned. It's almost as if it times out, but there's no feedback at all; I expected at least part of the command output. I've done some research and found other people with the same problem. The answer they were given was to use:
stdin, stdout, stderr = client.exec_command(command)
which I was already using in my code. I wonder if it's something to do with the buffer size, but annoyingly I don't know how to handle that in my code. I've tried adding a delay using:
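From what I've read, one alternative to a single blocking stdout.read() is to drain the channel in chunks. Here's a minimal sketch, assuming paramiko's Channel methods recv_ready(), recv() and exit_status_ready() (the helper name read_channel is mine, not part of paramiko):

```python
import time

def read_channel(chan, chunk_size=4096):
    """Drain a paramiko-style channel in chunks until the command exits."""
    output = b''
    # Keep reading while the command is still running or data is buffered.
    while not chan.exit_status_ready() or chan.recv_ready():
        if chan.recv_ready():
            output += chan.recv(chunk_size)
        else:
            time.sleep(0.1)  # avoid a busy loop while waiting for more data
    return output
```

It would be called with the channel behind stdout, e.g. read_channel(stdout.channel) after exec_command.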
time.sleep(10)
But no joy from that. I have also tried using:
print stdout.channel.recv_exit_status()
However, it returned 127, so I think I'm way off the mark there. My code is:
def ssh_command(ip, user, passwd, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=user, password=passwd)
    stdin, stdout, stderr = client.exec_command(command)
    print stdout.read()
    print stderr.read()
    return

if __name__ == '__main__':
    ssh_command(ip, user, passwd, command)
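On the exit status of 127: from what I've since read, that is the shell's conventional code for "command not found", which can be reproduced locally without paramiko at all:

```python
import subprocess

# A POSIX shell returns exit status 127 when the command cannot be found.
rc = subprocess.call(['sh', '-c', 'definitely_not_a_real_command_12345'],
                     stderr=subprocess.DEVNULL)
print(rc)  # 127 on a POSIX shell
```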
I've omitted the first few blocks of code, where a few variables are defined from raw input by the user. It's rather long, so I thought it best to omit it, but naturally I can post it if need be.
For those interested in the command I'm trying to run, it's an IBM command unique to their GPFS (Spectrum Scale) storage system. The command is:
mmdf mmfs1 --block-size auto
The command returns the storage space on all the disk pools on the storage system.
UPDATE:
stderr.read() states that the command isn't recognised (bash: mmdf: command not found), despite the command working when I'm SSH'd into the storage controller directly.
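In case it helps others: exec_command runs the command without a login shell, so the remote PATH may be missing directories that an interactive SSH session picks up from profile scripts. GPFS binaries usually live under /usr/lpp/mmfs/bin (an assumption worth verifying on the target system), so one workaround is to build the command with an explicit path; the helper below is hypothetical:

```python
# Usual GPFS install directory; verify this on your own system.
GPFS_BIN = '/usr/lpp/mmfs/bin'

def with_full_path(command):
    """Prefix a bare GPFS command string with its install directory."""
    return '%s/%s' % (GPFS_BIN, command)

print(with_full_path('mmdf mmfs1 --block-size auto'))
# -> /usr/lpp/mmfs/bin/mmdf mmfs1 --block-size auto
```

Another option along the same lines would be wrapping the command in a login shell, e.g. bash -lc "mmdf mmfs1 --block-size auto", so the profile scripts set PATH as they would for an interactive session.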