I'm using Python and Paramiko to run tail -f on a whole bunch of logfiles on remote servers, with help from a previous thread:
Paramiko and exec_command - killing remote process?
My final script looks something like this:
#!/usr/bin/env python2
import paramiko
import select
import random

# Random line count gives this tail a unique argument, so we can find
# (and kill) this specific process in the remote ps output later
tail_id = random.randint(0, 500)
username = 'victorhooi'

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('someserver.com', username='victorhooi', password='somepassword')
transport = client.get_transport()
#transport.set_keepalive(1)
channel = transport.open_session()
channel.exec_command("tail -%df /home/victorhooi/macbeth.txt" % tail_id)

while True:
    try:
        # Short timeout instead of 0.0 so the loop doesn't busy-wait at 100% CPU
        rl, wl, xl = select.select([channel], [], [], 1.0)
        if len(rl) > 0:
            # Must be stdout
            print channel.recv(1024).strip()
    except KeyboardInterrupt:
        print "Caught control-C"
        # Kill the remote tail by grepping for its unique -<tail_id> argument
        client.get_transport().open_session().exec_command(
            "kill -9 `ps -fu %s | grep 'tail -%df /home/victorhooi/macbeth.txt' "
            "| grep -v grep | awk '{print $2}'`" % (username, tail_id))
        #channel.close()
        #transport.close()
        client.close()
        exit(0)
I now need to extend this so it can run in the background, manage multiple tails at once, and kill specific tails on demand.
Ideally, I'd have one Python script that I could spin up and that would then background itself (or daemonize, perhaps via python-daemon?). This process would read in a configuration file and start a separate Paramiko connection to tail each remote logfile, roughly as in the sketch below.
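Here's a minimal sketch of the daemon side as I'm imagining it, using multiprocessing with one worker per remote file. The tails.json config format, hostnames, and paths are all made up, and there's no daemonization or cleanup of the remote tail yet:

#!/usr/bin/env python2
# Rough sketch only -- config format, hostnames and paths are invented.
import json
import multiprocessing
import random
import select

import paramiko

def tail_logfile(host, username, password, path):
    """Run one remote 'tail -f' and print whatever it emits."""
    # Each worker opens its own SSH connection, since Paramiko transports
    # shouldn't be shared across a fork
    tail_id = random.randint(0, 500)
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(host, username=username, password=password)
    channel = client.get_transport().open_session()
    channel.exec_command("tail -%df %s" % (tail_id, path))
    while True:
        rl, wl, xl = select.select([channel], [], [], 1.0)
        if rl:
            print channel.recv(1024).strip()

if __name__ == '__main__':
    # e.g. [{"host": "someserver.com", "username": "victorhooi",
    #        "password": "somepassword", "path": "/home/victorhooi/macbeth.txt"}]
    with open('tails.json') as f:
        config = json.load(f)
    workers = []
    for entry in config:
        p = multiprocessing.Process(target=tail_logfile,
                                    args=(entry['host'], entry['username'],
                                          entry['password'], entry['path']))
        p.daemon = True
        p.start()
        workers.append(p)
    for p in workers:
        p.join()

Is one Process per connection sane here, or would a single process multiplexing all the channels through one select() call scale better?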
I'd also have a control script that I could run to list the remote tails that are running, kill specific ones or all of them, and stop/restart the daemon.
What's a good way of tackling this problem? Should I be using threading, multiprocessing, or something else to run each Paramiko connection? Are there any existing scripts/programs I could look to as an example of how to do this?
And what's a good way of having the management script communicate with each of the processes/threads?
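One idea I had was a Unix domain socket as the control channel: the daemon listens on it, and the control script sends it one-line commands. In this sketch the socket path, the list/kill protocol, and the workers dict (name -> multiprocessing.Process) are all invented for illustration:

# Sketch of a control channel over a Unix domain socket.
import os
import socket

SOCK_PATH = '/tmp/remote-tails.sock'

def serve_control(workers):
    """Daemon side: answer 'list' and 'kill <name>' commands.

    `workers` is assumed to map a name to the multiprocessing.Process
    running that remote tail."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK_PATH)
    server.listen(1)
    while True:
        conn, _ = server.accept()
        command = conn.recv(1024).strip()
        if command == 'list':
            conn.sendall('\n'.join(sorted(workers)) + '\n')
        elif command.startswith('kill '):
            name = command.split(' ', 1)[1]
            if name in workers:
                # terminate() only kills the local worker -- the remote
                # tail still needs the ps/grep/kill treatment from above
                workers.pop(name).terminate()
                conn.sendall('killed %s\n' % name)
            else:
                conn.sendall('no such tail: %s\n' % name)
        conn.close()

def send_command(command):
    """Control-script side: send one command and print the reply."""
    conn = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    conn.connect(SOCK_PATH)
    conn.sendall(command)
    print conn.recv(4096).strip()
    conn.close()

The control script would then just be a thin wrapper around send_command('list'), send_command('kill somename'), and so on. Is that a reasonable approach, or is there something more standard for this?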
Cheers, Victor