
I would like to automate the execution of a script in a subprocess, so I am using the subprocess library to spawn the process and the schedule library to schedule it.

I would like to verify that the remotely executed script runs without problems. The code I tried does not print any error when the script returns 1 (an error) or when the script_file is absent. (And, if I am not mistaken, adding the exception wrapper should kill the subprocess and do its job.)

import functools
import subprocess
import traceback

import schedule


class MyClass:

    def catch_exceptions(job_func):
        # Decorator: log the traceback of any exception raised by the job.
        @functools.wraps(job_func)
        def wrapper(*args, **kwargs):
            try:
                job_func(*args, **kwargs)
            except Exception:
                print("Error")
                print(traceback.format_exc())
        return wrapper

    @catch_exceptions
    def run(self, user, host, command):
        # Run the command on the remote host; call() returns the exit code
        # (unused here).
        subprocess.call(["ssh", user + "@" + host, command])

    def sched(self, user, host, script_path):
        schedule.every(0.01).minutes.do(self.run, user, host, script_path)
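
For reference, schedule only executes registered jobs when schedule.run_pending() is called in a loop, so a driver along these lines is needed somewhere (a minimal sketch; the user, host, and script path are placeholders):

import time

job = MyClass()
job.sched("user", "example-host", "/path/to/script.sh")

while True:
    schedule.run_pending()  # run any job whose interval has elapsed
    time.sleep(1)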

All suggestions are welcome; using wrappers is not the goal, so any other solution that verifies the execution of the `sched` method is fine too.

4m1nh4j1

3 Answers


`call` returns the exit code of the process, so you can check against that. Alternatively, try `subprocess.check_call`: it raises an exception when the process exits with a non-zero value, so you don't have to check the exit value explicitly; just catch the exception wherever you want to deal with it.

Examples:

exit_value = subprocess.call(cmd) 
if exit_value:
    ... 

or

try:
    subprocess.check_call(cmd) 
except subprocess.CalledProcessError as e:
    ... 
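
Applied to the question's code, `check_call` makes a failure visible to the `catch_exceptions` wrapper, since ssh exits with the remote command's exit status (a minimal sketch reusing the names from the question):

@catch_exceptions
def run(self, user, host, command):
    # Raises subprocess.CalledProcessError on a non-zero exit status,
    # which the catch_exceptions wrapper then logs.
    subprocess.check_call(["ssh", user + "@" + host, command])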
Dunes
x = subprocess.Popen(["ssh", user + "@" + host, command],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = x.communicate()
if error:
    print("there's an error:", error)

You can try it like this. `Popen.communicate()` returns a `(stdout, stderr)` tuple, so `error` is non-empty whenever the command wrote to stderr.
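
Note that stderr alone is not a reliable failure signal: ssh can print diagnostics to stderr even on success, and a failing command may print nothing. Checking the exit status after communicate() is more robust (a sketch along the same lines as the code above):

x = subprocess.Popen(["ssh", user + "@" + host, command],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = x.communicate()
if x.returncode != 0:  # ssh exits with the remote command's status
    print("command failed with exit code", x.returncode, ":", error)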

vks

Use subprocess.Popen()

from subprocess import Popen, PIPE
import sys

def run_cmd(sCmd, aLog, aErrorLog):
    RunCmd = Popen(sCmd, shell=True, stdout=PIPE, stderr=PIPE)

    # Print and collect stdout as it arrives.
    for sLine in RunCmd.stdout:
        print(sLine.decode())
        sys.stdout.flush()
        aLog.append(sLine.decode())

    # Then print and collect stderr.
    for sLine in RunCmd.stderr:
        print(sLine.decode())
        sys.stderr.flush()
        aErrorLog.append(sLine.decode())

    # wait() blocks until the process finishes and returns its exit code,
    # which is also available as RunCmd.returncode (0 on success).
    RunCmd.wait()
    return RunCmd.returncode

RunCmd.wait() waits for the process to finish and returns the exit code (indicating success or failure). This also prints the output in real time.

Niyojan
  • If both stdout and stderr are redirected, you should read them both concurrently; otherwise the program may hang forever (a deadlock-free variant is sketched after these comments). In addition, the OP doesn't redirect the output of the remote subprocess. – jfs Oct 28 '14 at 17:29
  • FYI : http://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29 "streams are independent and can be redirected separately". I have never faced any issue with this code. – Niyojan Oct 29 '14 at 03:55
  • It just means that the output was small in all your cases. Imagine what happens when `sCmd` prints to stderr content that is (cumulatively) larger than the OS pipe buffer: your parent process is trying to read from stdout while the child tries to write to stderr, but its stderr pipe buffer is full -- deadlock. There is a warning about it in the subprocess docs. – jfs Oct 29 '14 at 07:11
  • Believe me, the output was big enough to choke the stream into a deadlock, but I am flushing the streams to handle it. I can freely use `Popen.wait()` since my buffers are always empty and can accept more input from the streams. – Niyojan Oct 29 '14 at 08:02
  • *Where* do you flush the streams? If the OS pipe buffer is full, it is full. The only thing you could do is read it, but you can't, because your code is reading the other stream -- that is why I said: read them *both* **concurrently**. Just try `for c in 'x'*(1<<20): for f in [sys.stdout, sys.stderr]: print >>f, c` as a child and see how your code deals with it. – jfs Oct 29 '14 at 08:19
  • Doesn't `stdout.flush()` flush the OS pipes after each print statement? (See http://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print) – Niyojan Oct 29 '14 at 08:27
  • No, `stdout.flush()` moves data from the *internal* stdout process buffer into the corresponding OS pipe buffer. Try the code from my previous comment. – jfs Oct 29 '14 at 08:32
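
A deadlock-free variant of the answer above, as jfs's comments suggest: merging stderr into stdout leaves a single pipe to drain, so the child can never block on a full stderr buffer while the parent is reading stdout (a minimal sketch; run_cmd_merged is an illustrative name):

from subprocess import Popen, PIPE, STDOUT

def run_cmd_merged(sCmd):
    # stderr=STDOUT interleaves error output into the stdout pipe.
    proc = Popen(sCmd, shell=True, stdout=PIPE, stderr=STDOUT)
    for line in proc.stdout:  # drain the single combined stream as it arrives
        print(line.decode(), end="")
    return proc.wait()  # exit code of the process (0 on success)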