I have Python software installed on a remote node which I can run from anywhere while in an SSH session. It reads various metrics and stores them in a text file. There are reasons why these modules work like this, and I cannot change it.
user@node
$ read_sensors.py -p /data/log_20231213.txt
But now I want to trigger this script automatically over SSH from a second script. I need this external trigger due to synchronization issues, so a cronjob or similar is not an option.
I first tried fabric but ended up using plain paramiko. Unfortunately I cannot figure out how to trigger the script with exec_command; I only managed to do it with invoke_shell. But since I don't know how long the script will run, I would rather use stderr for the error handling.
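For reference, one relevant difference between the two approaches: exec_command runs each command in a fresh, non-interactive, non-login shell, so startup files like ~/.profile are not sourced and PATH can differ from what an interactive SSH session sees. A minimal sketch (just the command wrapping, no connection involved) of forcing a login shell so PATH is set up as usual:

```python
import shlex

def login_shell(command):
    # Wrap a command so exec_command runs it through "bash -lc",
    # which sources the login startup files and so restores the
    # PATH an interactive session would have.
    return "bash -lc " + shlex.quote(command)

print(login_shell("read_sensors.py -p /data/log_20231213.txt"))
# bash -lc 'read_sensors.py -p /data/log_20231213.txt'
```

The wrapped string would then be passed to client.exec_command() instead of the bare command.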
The sensor node runs Ubuntu and my script runs on Windows 10.
My script so far:
import paramiko
import time

hostname = "192.168.1.10"
username = "user"
password = "pw"

commands = [
    "mkdir 20230212",
    "read_sensors.py"
]

# initialize the SSH client
client = paramiko.SSHClient()
# add to known hosts
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

try:
    client.connect(hostname=hostname, username=username, password=password)
except Exception as e:
    print(f"[!] Cannot connect to the SSH Server: {e}")
    exit()

remote_conn = client.invoke_shell()

# execute the commands
for command in commands:
    print("=" * 50, command, "=" * 50)
    remote_conn.send(command + '\n')
    time.sleep(2)
    output = remote_conn.recv(1000)
    print(output)
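As an aside, the fixed time.sleep(2) plus a single recv(1000) can truncate longer output. A small sketch of a helper that keeps reading from the interactive channel until it has been silent for a while (idle and timeout values are arbitrary choices, not anything paramiko prescribes):

```python
import time

def drain(channel, idle=0.5, timeout=5.0):
    # Collect everything the shell has sent so far, instead of a
    # single recv(1000) that may cut off longer output. Stops after
    # `timeout` seconds of continuous silence on the channel.
    data = b""
    waited = 0.0
    while waited < timeout:
        if channel.recv_ready():
            data += channel.recv(4096)
            waited = 0.0  # got data, reset the silence timer
        else:
            time.sleep(idle)
            waited += idle
    return data.decode(errors="replace")
```

In the loop above, `output = drain(remote_conn)` would replace the sleep/recv pair.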
But I could not figure out how to do it with exec_command. I just get the following error: bash: read_sensors.py: command not found
for command in commands:
    print("=" * 50, command, "=" * 50)
    stdin, stdout, stderr = client.exec_command(command)
    print(stdout.read().decode())
    err = stderr.read().decode()
    if err:
        print(err)
Any ideas where I can start looking for the error?
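For what it's worth, here is a sketch of how the exec_command variant could look once the script is found. The absolute path below is a hypothetical placeholder (the real one can be checked with `which read_sensors.py` in an interactive session), and instead of sleeping, it reads both streams to EOF, which blocks until the remote command finishes, then asks for the exit status:

```python
import shlex

# Hypothetical location of the script on the node -- adjust to
# whatever `which read_sensors.py` reports in an SSH session.
SCRIPT = "/home/user/bin/read_sensors.py"

def build_command(log_path):
    # Absolute path avoids the PATH lookup that fails under
    # exec_command's non-login shell; quoting keeps the log path
    # safe for the remote shell.
    return f"{SCRIPT} -p {shlex.quote(log_path)}"

def run_to_completion(stdout, stderr):
    # read() blocks until the remote command closes its streams,
    # so no guessing with time.sleep(); recv_exit_status() then
    # returns the command's exit code immediately.
    out = stdout.read().decode()
    err = stderr.read().decode()
    status = stdout.channel.recv_exit_status()
    return status, out, err
```

Usage would be along the lines of `stdin, stdout, stderr = client.exec_command(build_command("/data/log_20231213.txt"))` followed by `status, out, err = run_to_completion(stdout, stderr)`, with a nonzero status or non-empty err triggering the error handling.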