
I want to run a tail -f logfile command on a remote machine using python's paramiko module. I've been attempting it so far in the following fashion:

interface = paramiko.SSHClient()
#snip the connection setup portion
stdin, stdout, stderr = interface.exec_command("tail -f logfile")
#snip into threaded loop
print(stdout.readline())

I'd like the command to run as long as necessary, but I have 2 problems:

  1. How do I stop this cleanly? I thought of making a Channel and then using the shutdown() command on the channel when I'm through with it - but that seems messy. Is it possible to do something like send Ctrl-C to the channel's stdin?
  2. readline() blocks, and I could avoid threads if I had a non-blocking method of getting output- any thoughts?
user17925

6 Answers


Instead of calling exec_command on the client, get hold of the transport and generate your own channel. The channel can be used to execute a command, and you can pass it to select to find out when data can be read:

#!/usr/bin/env python
import paramiko
import select
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
transport = client.get_transport()
channel = transport.open_session()
channel.exec_command("tail -f /var/log/everything/current")
while True:
    rl, wl, xl = select.select([channel], [], [], 0.0)
    if len(rl) > 0:
        # Must be stdout
        print(channel.recv(1024))

The channel object can be read from and written to, connecting with stdout and stdin of the remote command. You can get at stderr by calling channel.makefile_stderr(...).
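For example, a rough sketch of also draining stderr inside the same select loop, reusing the `channel` from the snippet above (the 1024-byte read size is an arbitrary choice):

if channel.recv_stderr_ready():
    # stderr has its own buffer, separate from the stdout data recv() returns
    print(channel.recv_stderr(1024))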

I've set the timeout to 0.0 seconds because a non-blocking solution was requested. Depending on your needs, you might want to block with a non-zero timeout.
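As a sketch, here is the same select call with a one-second timeout (the exact value is an assumption; pick whatever fits your latency needs):

# Block inside select for up to one second instead of spinning on the CPU
rl, wl, xl = select.select([channel], [], [], 1.0)
if rl:
    print(channel.recv(1024))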

Andrew Aylett
  • you can't select on the stdout object, because it lacks a fileno attribute. goathens isn't using a channel object. – JimB May 07 '09 at 17:33
  • I've modified and expanded the example, and tested it to make sure it works :). – Andrew Aylett Jun 10 '09 at 23:12
  • @Vivek: you'd still need to look at `rl`, that's the list of sockets that can be read. Take a look at the documentation for `channel.recv_stderr()` (and `channel.recv_stderr_ready()`) to see how to read the remote stderr. – Andrew Aylett Jun 27 '12 at 13:17
  • I see, thanks, I was trying with xl but was getting weird results. – abc def foo bar Jun 27 '12 at 14:07
  • This script still takes 100% of my CPU just to execute a distant `sleep 10` for instance. – azmeuk Jul 13 '16 at 15:13
  • Hi @azmeuk, that's probably because the timeout is set to zero, so you'll be busy-waiting for data. I've updated the answer to set the timeout to one second, but you should think about what value you need and whether you want a timeout at all. – Andrew Aylett Jul 13 '16 at 19:45
  • Hi again @azmeuk, I remembered why the timeout was zero -- it's because the OP wanted a non-blocking solution. So you probably either want to block or to wait locally -- it might be sensible to add a timeout if the previous iteration yielded no work. – Andrew Aylett Jul 13 '16 at 19:54

1) You can just close the client if you wish. The server on the other end will kill the tail process.

2) If you need to do this in a non-blocking way, you will have to use the channel object directly. You can then watch for both stdout and stderr with channel.recv_ready() and channel.recv_stderr_ready(), or use select.select.
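A minimal sketch of that polling approach, with the host, command, buffer size and sleep interval chosen purely for illustration:

import time
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
channel = client.get_transport().open_session()
channel.exec_command("tail -f logfile")

try:
    while not channel.exit_status_ready():
        if channel.recv_ready():
            print(channel.recv(1024))
        if channel.recv_stderr_ready():
            print(channel.recv_stderr(1024))
        time.sleep(0.1)  # avoid a busy loop between polls
finally:
    # Closing the client tears down the transport, and the server ends the tail
    client.close()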

JimB
  • I am late to the party, but isn't `exec_command` itself non-blocking? – zengr Oct 30 '13 at 19:16
  • On some newer servers, your processes won't be killed even after you terminate your client. You have to set `get_pty=True` in the `exec_command()` in order for the processes to be cleaned up after exiting the client. – nlsun Jul 13 '16 at 22:54
  • get_pty=True enables you to execute Ctrl+C properly, but it is causing all commands to timeout after about 10 minutes. So you cannot execute long running commands – Łukasz Strugała Jul 17 '23 at 14:46
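Following up on the get_pty=True comments above, a minimal illustration on the high-level API (the command is a placeholder and `client` is assumed to be a connected SSHClient):

# Allocating a PTY means the remote tail gets hung up when the client
# disconnects, so it does not keep running on the server.
stdin, stdout, stderr = client.exec_command("tail -f logfile", get_pty=True)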

Just a small update to the solution by Andrew Aylett. The following code actually breaks the loop and quits when the external process finishes:

import paramiko
import select

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
channel = client.get_transport().open_session()
channel.exec_command("tail -f /var/log/everything/current")
while True:
    if channel.exit_status_ready():
        break
    rl, wl, xl = select.select([channel], [], [], 0.0)
    if len(rl) > 0:
        print(channel.recv(1024))
Anton Beloglazov
  • Also, see http://stackoverflow.com/questions/760978/long-running-ssh-commands-in-python-paramiko-module-and-how-to-end-them#comment64123397_766255 – azmeuk Jul 13 '16 at 15:14
  • @azmeuk Both solutions are slightly incorrect, because you don't want to stop receiving output as soon as the exit status is ready. You want to stop when there is no output to be received AND the exit status is ready. Otherwise you may end up quitting before receiving all output. – user7610 Aug 10 '16 at 15:35
  • @Jiri, you are correct, I am facing a similar issue to the one you mentioned. Can you please let me know if there is any workaround? Some of my output is skipped from tail -f. –  Dec 15 '16 at 02:55
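A sketch of the loop that comment describes, reusing the `channel` from the answer above: keep reading while data is available, and only exit once the command has finished and nothing is left to read.

while True:
    if channel.recv_ready():
        print(channel.recv(1024))
    elif channel.exit_status_ready():
        # Nothing buffered and the command has exited
        break
# Final drain in case output arrived after the last recv_ready() check
while channel.recv_ready():
    print(channel.recv(1024))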

Just for information, there is a solution to do this using channel.get_pty(). For more details have a look at: https://stackoverflow.com/a/11190727/1480181
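A rough sketch of that idea (not taken from the linked answer; the host and log path are placeholders): request a PTY on the channel before running the command, so that closing the channel hangs up the remote tail.

import time
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')

channel = client.get_transport().open_session()
channel.get_pty()  # allocate a pseudo-terminal; note this merges stderr into stdout
channel.exec_command("tail -f /var/log/everything/current")

try:
    while not channel.exit_status_ready():
        if channel.recv_ready():
            print(channel.recv(1024))
        time.sleep(0.1)  # avoid a busy loop between polls
finally:
    channel.close()  # hanging up the PTY lets the remote tail exit
    client.close()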

Sven

The way I've solved this is with a context manager. This will make sure my long running commands are aborted. The key logic is to mimic SSHClient.exec_command, but capture the created channel and use a Timer that closes that channel if the command runs for too long.

import paramiko
import threading


class TimeoutChannel:

    def __init__(self, client: paramiko.SSHClient, timeout):
        self.expired = False
        self._channel: paramiko.Channel = None
        self.client = client
        self.timeout = timeout

    def __enter__(self):
        self.timer = threading.Timer(self.timeout, self.kill_client)
        self.timer.start()

        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("Exited Timeout. Timed out:", self.expired)
        self.timer.cancel()

        if exc_val:
            return False  # Make sure the exceptions are re-raised

        if self.expired:
            raise TimeoutError("Command timed out")

    def kill_client(self):
        self.expired = True
        print("Should kill client")
        if self._channel:
            print("We have a channel")
            self._channel.close()

    def exec(self, command, bufsize=-1, timeout=None, get_pty=False, environment=None):
        self._channel = self.client.get_transport().open_session(timeout=timeout)
        if get_pty:
            self._channel.get_pty()
        self._channel.settimeout(timeout)
        if environment:
            self._channel.update_environment(environment)
        self._channel.exec_command(command)
        stdin = self._channel.makefile_stdin("wb", bufsize)
        stdout = self._channel.makefile("r", bufsize)
        stderr = self._channel.makefile_stderr("r", bufsize)
        return stdin, stdout, stderr

Using the code is pretty simple now; the first example will throw a TimeoutError:

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()  # load known hosts so connect() can verify the host key
ssh.connect('hostname', username='user', password='pass')

with TimeoutChannel(ssh, 3) as c:
    ssh_stdin, ssh_stdout, ssh_stderr = c.exec("cat")    # non-blocking
    exit_status = ssh_stdout.channel.recv_exit_status()  # block til done, will never complete because cat wants input

This code will work fine (unless the host is under insane load!)

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()  # load known hosts so connect() can verify the host key
ssh.connect('hostname', username='user', password='pass')

with TimeoutChannel(ssh, 3) as c:
    ssh_stdin, ssh_stdout, ssh_stderr = c.exec("uptime")    # non-blocking
    exit_status = ssh_stdout.channel.recv_exit_status()     # block til done, will complete quickly
    print(ssh_stdout.read().decode("utf8"))                 # Show results
AndrewWhalan

To close the process simply run:

interface.close()

In terms of non-blocking, you can't get a non-blocking read. The best you would be able to do is parse over it one "block" at a time; "stdout.read(1)" will only block when there are no characters left in the buffer.
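For instance, a sketch of that byte-at-a-time approach, reusing the `stdout` returned by exec_command in the question (decoding as UTF-8 is an assumption):

import sys

while True:
    chunk = stdout.read(1)  # blocks only while the buffer is empty
    if not chunk:
        break  # EOF: the remote command closed its output
    sys.stdout.write(chunk.decode("utf-8", errors="replace"))
    sys.stdout.flush()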

lfaraone