
I want to write a function that will execute a shell command and return its output as a string, no matter whether it is an error or success message. I just want to get the same result that I would have gotten with the command line.

What would be a code example that would do such a thing?

For example:

def run_command(cmd):
    # ??????

print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'
Silver Light
  • related: http://stackoverflow.com/questions/2924310/whats-a-good-equivalent-to-pythons-subprocess-check-call-that-returns-the-conte – jfs Jan 24 '11 at 09:22
  • The duplicate at https://stackoverflow.com/questions/34431673/how-to-get-the-output-from-os-system explains why you can't use `os.system` here, if that's your actual question. – tripleee Sep 10 '21 at 21:00

25 Answers


In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:

>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

check_output runs a single program that takes only arguments as input.[1] It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.

The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.

Modern versions of Python (3.5 or higher): run

If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:

>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:

>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

This can all be compressed to a one-liner if desired:

>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:

>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'

You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to throw an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
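
For illustration, a minimal sketch of inspecting a failed call this way (the directory name is hypothetical, and the exact exit code and message vary by platform):

>>> result = subprocess.run(['ls', 'no-such-dir'], stderr=subprocess.PIPE)
>>> result.returncode  # nonzero because the directory does not exist
2
>>> result.stderr      # the error message, captured rather than printed
b'ls: no-such-dir: No such file or directory\n'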

Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:

>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.
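
Putting the pieces together for the original question, here is one possible sketch (not the only spelling) of run_command built on run, for Python 3.7+. It merges stderr into stdout so error messages come back just as they would appear on the command line:

import subprocess

def run_command(cmd):
    # cmd is a list, e.g. ['mysqladmin', 'create', 'test', '-uroot', '-pmysqladmin12']
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    return result.stdout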

Older versions of Python (3-3.4): more about check_output

If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.

subprocess.check_output(*popenargs, **kwargs)  

It takes the same arguments as Popen (see below), and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.

You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
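
As a sketch, the asker's run_command could be built on check_output like this (catching CalledProcessError so a failing command still returns its message rather than raising):

import subprocess

def run_command(cmd):
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        return e.output  # the combined stdout/stderr of the failed command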

If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.

Complex applications and legacy versions of Python (2.6 and below): Popen

If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.

The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
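
For example (illustrative only; note how shlex.split respects quoting):

>>> import shlex
>>> shlex.split("mysqladmin create test -uroot -pmysqladmin12")
['mysqladmin', 'create', 'test', '-uroot', '-pmysqladmin12']
>>> shlex.split("grep -r 'hello world' .")
['grep', '-r', 'hello world', '.']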

To send input and capture output, communicate is almost always the preferred method. As in:

output = subprocess.Popen(["mycmd", "myarg"], 
                          stdout=subprocess.PIPE).communicate()[0]

Or

>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE, 
...                                    stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo

If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:

>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
...                           stderr=subprocess.PIPE,
...                           stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo

Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.

In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.

As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.

Notes

[1] Running shell commands: the shell=True argument

Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:

>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'

However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via

run(cmd, [stdout=etc...], input=other_output)

Or

Popen(cmd, [stdout=etc...]).communicate(other_output)

The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks or have to do hacky things like this.
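
As an illustrative sketch (assuming the books/ directory from the example above), the cat books/* | wc pipeline could be rebuilt without the shell like so; the glob must then be expanded in Python, since there is no shell to do it:

import glob
import subprocess

files = glob.glob('books/*')                                   # the shell would have expanded this
cat = subprocess.run(['cat'] + files, stdout=subprocess.PIPE)  # first stage
wc = subprocess.run(['wc'], input=cat.stdout,                  # its output feeds the next stage
                    stdout=subprocess.PIPE)
print(wc.stdout.decode('utf-8'))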

senderle
  • Both with `check_output()` and `communicate()` you have to wait until the process is done, with `poll()` you're getting output as it comes. Really depends what you need. – vartec Apr 05 '12 at 09:44
  • Not sure if this only applies to later versions of Python, but the variable `out` was of type `bytes` for me. In order to get the output as a string I had to decode it before printing like so: `out.decode("utf-8")` – PolyMesh Oct 31 '13 at 19:42
  • @PolyMesh: `out` is `bytes` on all Python versions unless `universal_newlines=True` on Python 3. The `print` statement clearly indicates that it is Python 2 code where `bytes = str`. – jfs Sep 18 '14 at 03:17
  • @senderle You said "don't pass stderr=subprocess.PIPE" and then your examples did exactly that. What did you mean? Thanks. – Eleno Jun 14 '15 at 09:38
  • // , Is there a way to do this with STDOUT and STDERR instead of PIPE? – Nathan Basanese Aug 31 '15 at 19:31
  • You could use the shell=True option and then enter your command with all the options: cmd = "ls -lh"; p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).communicate() gives ('total 12K\n-rw-r--r-- 1 dokwii dokwii 3.4K Apr 19 16:57 routes.py\ndrwxr-xr-x 5 dokwii dokwii 4.0K Apr 15 16:41 static\ndrwxr-xr-x 2 dokwii dokwii 4.0K Apr 18 10:23 templates\n', None) – David Okwii Apr 19 '16 at 14:24
  • How can I do this, if I have multiple subprocess? http://stackoverflow.com/questions/37683445/python-subprocess-handle-exception – zombi_man Jun 07 '16 at 15:45
  • @senderle It fails if I use pipe and grep to capture particular portion of the output. `ps aux | grep /usr/lib/evolution/evolution-calendar-factory-subprocess | grep webcal | grep -oE org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx[0-9][0-9]+x[0-9]+` What would be proper way to deal with this kind of command? – Khurshid Alam Mar 27 '17 at 16:18
  • I am slightly confused by the way subprocess expects its input. Why do I need to split my Linux command into an array of arguments? What if I want python to run a string as a command in a bash environment as a black box and just wait for the output and capture it? I just don't understand the additional complexity and wish we could still use commands.getstatusoutput, which is now deprecated – Parsa Apr 09 '17 at 22:21
  • @Parsa See [Actual meaning of `shell=True` in `subprocess`](https://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess) for a discussion. – tripleee Sep 10 '21 at 21:02
  • @Khurshid The obvious quick fix is to run that with `shell=True` but a much more efficient and elegant solution is to run only `ps` in a subprocess and do the filtering in Python. (You really should refactor those repeated `grep`s if you decide to keep it in shell.) – tripleee Sep 10 '21 at 21:05
  • Thanks for the answer but I think most people are looking for the `subprocess.check_output('cat books/* | wc', shell=True, text=True)` functionality so if you can just put that at the top of your post it will be very helpful. – Digio Jan 03 '22 at 11:17
  • @Digio `cat` has to open the file, so why not do it in Python? And it's more robust to count lines in Python. – Daisuke Aramaki Jun 05 '23 at 19:10
  • `subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')` gives me `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb9 in position 175: invalid start byte` – rokejulianlockhart Jun 07 '23 at 21:35

This is way easier, but it only works on Unix (including Cygwin) and Python 2.7.

import commands
print commands.getstatusoutput('wc -l file')

It returns a tuple with the (return_value, output).
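
For example (a sketch; the exact output depends on your system):

import commands
print commands.getstatusoutput('ls /bin/ls')
# (0, '/bin/ls')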

For a solution that works in both Python 2 and Python 3, use the subprocess module instead:

from subprocess import Popen, PIPE
output = Popen(["date"], stdout=PIPE)
response = output.communicate()
print(response)
byte_array

Python 3 offers subprocess.getoutput():

import subprocess
output = subprocess.getoutput("ls -l")
print(output)
azhar22k
  • It returns the output of the command as a string, as simple as that – azhar22k Dec 04 '16 at 07:55
  • Note that this is explicitly marked as a [legacy function](https://docs.python.org/3/library/subprocess.html#legacy-shell-invocation-functions) with poor support for exception handling and no security guarantees. – senderle Jan 30 '20 at 17:16
  • This offers no benefits over `subprocess.check_output` except being a few characters shorter, but given the drawbacks, that should hardly sway you. – tripleee Sep 10 '21 at 20:54
  • Hey @PranavPatil, are you using Linux? Can you share the output of the "which ls" command? – azhar22k Mar 21 '22 at 07:59
  • After that, splitting lines of the output string can be helpful: `for line in output.splitlines(): print(line)` or words: `for word in output.split(): print(word)` – Celuk Mar 24 '22 at 06:08

Something like this:

import subprocess

def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # p.poll() returns None while the subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break

Note that I'm redirecting stderr to stdout; it might not be exactly what you want, but I want the error messages too.

This function yields lines as they come (normally you'd have to wait for the subprocess to finish to get the output as a whole).

For your case the usage would be:

for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
    print line,
vartec
  • Be sure to implement some sort of active loop to get the output to avoid the potential deadlock in `wait` and `call` functions. – André Caron Jan 21 '11 at 15:19
  • @Silver Light: your process is probably waiting for input from the user. Try providing a `PIPE` value for `stdin` and closing that file as soon as `Popen` returns. – André Caron Jan 21 '11 at 15:21
  • @Andre: yeah, actually in my production code I'm using threading.Timer() to execute p.terminate() upon timeout. – vartec Jan 21 '11 at 15:24
  • -1: it is an infinite loop if `retcode` is `0`. The check should be `if retcode is not None`. You should not yield empty strings (even an empty line is at least one symbol '\n'): `if line: yield line`. Call `p.stdout.close()` at the end. – jfs Jan 24 '11 at 09:37
  • This does not work if more than one line is returned at once, better to use `readlines()` (note the `s`) instead. – fuenfundachtzig Sep 21 '12 at 09:04
  • btw, you could just return `iter(p.stdout.readline, b'')` from runProcess() instead of the while loop. – jfs Nov 22 '12 at 15:42
  • I tried the code with ls -l /dirname and it breaks after listing two files, while there are many more files in the directory – Vasilis Sep 30 '13 at 20:01
  • @fuenfundachtzig: `.readlines()` won't return until *all* output is read and therefore it breaks for large output that does not fit in memory. Also to avoid missing buffered data after the subprocess exited there should be an analog of `if retcode is not None: yield from p.stdout.readlines(); break` – jfs Dec 21 '13 at 05:15
  • You exit the loop when the process is finished. Isn't it possible that at that moment stdout is not empty yet (because of buffering or whatever else)? Why not exit when readline returns empty bytes? – lesnik Sep 23 '21 at 08:08
  • @jfs Thank you so much for the tip to add `p.stdout.close()` at the end; this was necessary for my script to continue running after this function call. – Daggerpov Aug 18 '23 at 18:33

This is a tricky but super simple solution which works in many situations:

import os
os.system('sample_cmd > tmp')
print(open('tmp', 'r').read())

A temporary file (here tmp) is created with the output of the command, and you can read your desired output from it.

Extra note from the comments: you can remove the tmp file in the case of a one-time job; if you need to do this several times, there is no need to delete it (see the tempfile-based sketch below for concurrent use).

os.remove('tmp')
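
As the comments suggest, if several callers might run this at once, a unique temporary file (e.g. via the tempfile module) avoids collisions. A minimal sketch of that variant (run_to_string is just an illustrative name):

import os
import tempfile

def run_to_string(cmd):
    # mkstemp returns a unique filename, safe against concurrent callers
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        os.system(cmd + ' > ' + path + ' 2>&1')  # 2>&1 captures stderr too
        with open(path, 'r') as f:
            return f.read()
    finally:
        os.remove(path)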
Mehdi Saman Booy
  • Hacky but super simple + works anywhere .. can combine it with `mktemp` to make it work in threaded situations I guess – Prakash Rajagaopal Oct 18 '16 at 01:32
  • Maybe the fastest method, but better add `os.remove('tmp')` to make it "fileless". – XuMuK Jul 03 '17 at 16:11
  • @XuMuK You're right in the case of a one-time job. If it is a repetitive work maybe deleting is not necessary – Mehdi Saman Booy Jul 05 '17 at 15:18
  • bad for concurrency, bad for reentrant functions, bad for not leaving the system as it was before it started (no cleanup) – 2mia Jul 13 '18 at 12:49
  • @2mia Obviously it's easy for a reason! If you want to use the file as a kind of shared memory for concurrent reads and writes, this is not a good choice. But, for something like having the output of a command (e.g. ls or find or ...) it can be a good and fast choice. B.t.w. if you need a fast solution for a simple problem it's the best I think. If you need a pipeline, subprocess works more efficiently for you. – Mehdi Saman Booy Jul 15 '18 at 06:17
  • I was exactly looking for something like this! – Shahriar Rahman Zahin May 07 '20 at 10:23
  • I ran into contexts with multiple levels of nested sub processes where I couldn't seem to directly poll those streams or inherit a file descriptor, etc. The principle illustrated here solved the problem. Note it might be easier in some cases to generate a shell / batch file for a complex command, execute that with the stream redirections on the outside of it, and then delete the script. – BuvinJ Feb 22 '21 at 21:16
  • Depending upon what you are doing, note that rather than using a temp file for the output, you might start a logger in your parent script, but when using this method, close that logger / file and than use `>>` (append) in your system call to that file. Then restart your logger (also in append mode) after the sub process has completed. – BuvinJ Feb 22 '21 at 21:17

Vartec's answer doesn't read all the lines, so I made a version that does:

import subprocess

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

Usage is the same as the accepted answer:

command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
    print(line)
Max Ekman
  • you could use `return iter(p.stdout.readline, b'')` instead of the while loop – jfs Nov 22 '12 at 15:44
  • That is a pretty cool use of iter, didn't know that! I updated the code. – Max Ekman Nov 28 '12 at 21:53
  • I'm pretty sure stdout keeps all output, it's a stream object with a buffer. I use a very similar technique to deplete all remaining output after a Popen has completed, and in my case, using poll() and readline during the execution to capture output live also. – Max Ekman Nov 28 '12 at 21:55
  • I've removed my misleading comment. I can confirm, `p.stdout.readline()` may return the non-empty previously-buffered output even if the child process has exited already (`p.poll()` is not `None`). – jfs Sep 18 '14 at 03:12
  • This code doesn't work. See here http://stackoverflow.com/questions/24340877/why-does-this-bash-call-from-python-not-work – thang May 03 '15 at 06:00
  • @thang: `shell=False` by default and therefore you should pass the command as a list (not as a string). I've updated the answer to add missing `.split()`. – jfs May 23 '15 at 11:24
  • How do you get the exit code (as returned by "echo $?") of the command you run? – Vespene Gas Oct 19 '18 at 06:50

You can use the following commands to run any shell command. I have used them on Ubuntu.

import os
os.popen('your command here').read()

Note: This was deprecated in Python 2.6 (though, as the comments below point out, it is not deprecated in Python 3.x). Now you should use subprocess.Popen. Below is an example:

import subprocess

p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
Muhammad Hassan
  • Deprecated since version 2.6 – https://docs.python.org/2/library/os.html#os.popen – Filippo Vitale May 26 '17 at 13:28
  • @FilippoVitale Thanks. I did not know that it is deprecated. – Muhammad Hassan May 26 '17 at 14:44
  • According to https://raspberrypi.stackexchange.com/questions/71547/is-there-a-problem-with-using-deprecated-os-popen `os.popen()` is deprecated in Python 2.6, but it is *not* deprecated in Python 3.x, since in 3.x it is implemented using `subprocess.Popen()`. – J-L Aug 13 '18 at 19:07
  • ... But you want to avoid `subprocess.Popen` too for simple tasks that `subprocess.check_output` and friends can handle with much less code and better robustness. This has multiple bugs for nontrivial commands. – tripleee Sep 10 '21 at 20:52
  • `print ` isn't a command. Do you mean `print()`? – rokejulianlockhart Jun 07 '23 at 21:39
  • @beedell.rokejulianlockhart yes. Its an old answer. Was working on Python 2.7 at that time so wrote according to that. – Muhammad Hassan Jun 09 '23 at 06:17

I had a slightly different flavor of the same problem with the following requirements:

  1. Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e. in realtime).
    • @vartec solved this Pythonically with his use of generators and the 'yield'
      keyword above
  2. Print all STDOUT lines (even if process exits before STDOUT buffer can be fully read)
  3. Don't waste CPU cycles polling the process at high-frequency
  4. Check the return code of the subprocess
  5. Print STDERR (separate from STDOUT) if we get a non-zero error return code.

I've combined and tweaked previous answers to come up with the following:

import subprocess
from time import sleep

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from the subprocess until the buffer is empty!
    for line in iter(p.stdout.readline, b''):
        if line:  # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:
        sleep(.1)  # Don't waste CPU cycles
    # Empty the STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
        # The run_command() function is responsible for logging STDERR
        print("Error: " + str(err))

This code is used the same way as in the previous answers:

for line in run_command(cmd):
    print(line)
The Aelfinn
  • Do you mind explaining how the addition of sleep(.1) won't waste CPU cycles? – Moataz Elmasry Aug 02 '17 at 09:41
  • If we continued to call ``p.poll()`` without any sleep in between calls, we would waste CPU cycles by calling this function millions of times. Instead, we "throttle" our loop by telling the OS that we don't need to be bothered for the next 1/10th second, so it can carry out other tasks. (It's possible that p.poll() sleeps too, making our sleep statement redundant). – The Aelfinn Aug 02 '17 at 11:04

Your mileage may vary; I attempted @senderle's spin on Vartec's solution on Windows with Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.

I found that I had to assign PIPE to every handle to get it to return the output I expected - the following worked for me.

import subprocess

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd, 
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE).communicate()

and call it like this ([0] gets the first element of the tuple, stdout):

run_command('tracert 11.1.0.1')[0]

After learning more, I believe I need these pipe arguments because I'm working on a custom system that uses different handles, so I had to directly control all the std's.

To stop console popups (with Windows), do this:

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use show window flag, might make conditional on being in Windows:
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE, 
                            startupinfo=startupinfo).communicate()

run_command('tracert 11.1.0.1')
Russia Must Remove Putin
  • Interesting -- this must be a Windows thing. I'll add a note pointing to this in case people are getting similar errors. – senderle May 01 '14 at 14:04
  • use [`DEVNULL` instead](http://stackoverflow.com/a/11270665/4279) of `subprocess.PIPE` if you don't write/read from a pipe otherwise you may hang the child process. – jfs Sep 09 '14 at 10:57

On Python 3.7+, use subprocess.run and pass capture_output=True:

import subprocess
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True)
print(repr(result.stdout))

This will return bytes:

b'hello world\n'

If you want it to convert the bytes to a string, add text=True:

result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, text=True)
print(repr(result.stdout))

This will read the bytes using your default encoding:

'hello world\n'

If you need to manually specify a different encoding, use encoding="your encoding" instead of text=True:

result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, encoding="utf8")
print(repr(result.stdout))
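
If the output may not be valid in the chosen encoding (see the UnicodeDecodeError mentioned in the comments on the accepted answer), you can also pass the errors parameter, which run forwards to the underlying text streams. A sketch:

result = subprocess.run(['ls', '-l'], capture_output=True,
                        encoding='utf-8', errors='replace')
print(repr(result.stdout))  # undecodable bytes become replacement characters instead of raising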
Boris Verkhovskiy

Splitting the initial command for the subprocess might be tricky and cumbersome.

Use shlex.split() to help yourself out.

Sample command

git log -n 5 --since "5 years ago" --until "2 year ago"

The code

from subprocess import check_output
from shlex import split

res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'

Without shlex.split() the code would look as follows

res = check_output([
    'git', 
    'log', 
    '-n', 
    '5', 
    '--since', 
    '5 years ago', 
    '--until', 
    '2 year ago'
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Artur Barseghyan
  • `shlex.split()` is a convenience, especially if you don't know how exactly quoting in the shell works; but manually converting this string to the list `['git', 'log', '-n', '5', '--since', '5 years ago', '--until', '2 year ago']` is not hard at all if you understand quoting. – tripleee Jun 18 '19 at 06:24

Here is a solution that works whether you want to print the output while the process is running or not.

I also added the current working directory; it has been useful to me more than once.

Hoping the solution will help someone :).

import subprocess

def run_command(cmd_and_args, print_constantly=False, cwd=None):
    """Runs a system command.

    :param cmd_and_args: the command to run with or without a pipe (|).
    :param print_constantly: if True, the output is printed continuously until the command has ended.
    :param cwd: the current working directory (the directory from which you would like to execute the command)
    :return: a tuple containing the return code, the stdout and the stderr of the command
    """
    output = []

    process = subprocess.Popen(cmd_and_args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)

    while True:
        next_line = process.stdout.readline()
        if next_line:
            output.append(str(next_line))
            if print_constantly:
                print(next_line)
        elif process.poll() is not None:  # no more output and the process has exited
            break

    error = process.communicate()[1]

    return process.returncode, '\n'.join(output), error
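
A possible invocation (illustrative; the command and directory are placeholders):

code, out, err = run_command('ls -l', print_constantly=True, cwd='/tmp')
print(code)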

For some reason, this one works on Python 2.7 and you only need to import os!

import os 

def bash(command):
    output = os.popen(command).read()
    return output

print_me = bash('ls -l')
print(print_me)
George Chalhoub

If you need to run a shell command on multiple files, this did the trick for me.

import os
import subprocess

# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):    
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename

    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line = line.strip()
        # Split the output
        output = line.split()

        # Filter the output and print relevant lines
        if len(output) > 2:
            if output[2] == 'set_program_name':
                print filename
                print line

Edit: Just saw Max Persson's solution with J.F. Sebastian's suggestion. Went ahead and incorporated that.

Ethan Strider
  • `Popen` accepts either a string (but then you need `shell=True`) or a list of arguments, in which case you should pass in `['nm', filename]` instead of a string. The latter is preferable because the shell adds complexity without providing any value here. Passing a string without `shell=True` apparently happens to work on Windows, but that could change in any next Python version. – tripleee Jun 18 '19 at 06:17

Following @senderle's approach, if you use Python 3.6 like me:

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")
sh("ls -a")

This will act exactly as if you ran the command in Bash.

Neo li
  • You are reinventing the keyword arguments `check=True, universal_newlines=True`. In other words `subprocess.run()` already does everything your code does. – tripleee Jun 12 '19 at 17:00

Improvement for better logging: for better output, you can iterate over the stdout and stderr streams line by line.

from subprocess import Popen, PIPE

def shell_command(cmd):
    result = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)

    output = iter(result.stdout.readline, b'')
    error = iter(result.stderr.readline, b'')
    print("##### Output #####")
    for line in output:
        print(line.decode("utf-8"))  # Convert bytes to str
    print("##### Error #####")
    for line in error:
        print(line.decode("utf-8"))  # Convert bytes to str

shell_command("ls")  # this will display all the files & folders in the directory

Another method uses getstatusoutput (easy to understand):

from subprocess import getstatusoutput

status_code, output = getstatusoutput(command)
print(output)  # this will give the terminal output

# status_code, output = getstatusoutput("ls")  # this will print all the files & folders in the directory

Tushar

The key is to use the function subprocess.check_output.

For example, the following function captures stdout and stderr of the process and returns that as well as whether or not the call succeeded. It is Python 2 and 3 compatible:

from subprocess import check_output, CalledProcessError, STDOUT

def system_call(command):
    """ 
    params:
        command: list of strings, ex. `["ls", "-l"]`
    returns: output, success
    """
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True 
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success

output, success = system_call(["ls", "-l"])

If you want to pass commands as strings rather than arrays, use this version:

from subprocess import check_output, CalledProcessError, STDOUT
import shlex

def system_call(command):
    """ 
    params:
        command: string, ex. `"ls -l"`
    returns: output, success
    """
    command = shlex.split(command)
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True 
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success

output, success = system_call("ls -l")
Zags

If you use the subprocess Python module, you can handle the STDOUT, STDERR, and return code of the command separately. You can see an example of the complete command-caller implementation below. Of course you can extend it with try..except if you want.

The below function returns the STDOUT, STDERR and Return code so you can handle them in the other script.

import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
            )
    return sp.returncode, out, err
milanbalazs

I would like to suggest simppl as an option for consideration. It is a module that is available via PyPI (pip install simppl) and runs on Python 3.

simppl allows the user to run shell commands and read the output from the screen.

The developers suggest three types of use cases:

  1. The simplest usage will look like this:
    from simppl.simple_pipeline import SimplePipeline
    sp = SimplePipeline(start=0, end=100)
    sp.print_and_run('<YOUR_FIRST_OS_COMMAND>')
    sp.print_and_run('<YOUR_SECOND_OS_COMMAND>')

  2. To run multiple commands concurrently, use:
    commands = ['<YOUR_FIRST_OS_COMMAND>', '<YOUR_SECOND_OS_COMMAND>']
    max_number_of_processes = 4
    sp.run_parallel(commands, max_number_of_processes)

  3. Finally, if your project uses the cli module, you can run another command_line_tool directly as part of a pipeline. The other tool will be run from the same process, but it will appear in the logs as another command in the pipeline. This enables smoother debugging and refactoring of tools calling other tools.
    from example_module import example_tool
    sp.print_and_run_clt(example_tool.run, ['first_number', 'second_number'],
                         {'-key1': 'val1', '-key2': 'val2'},
                         {'--flag'})

Note that the printing to STDOUT/STDERR is via python's logging module.


Here is complete code showing how simppl works:

import logging
from logging.config import dictConfig

logging_config = dict(
    version = 1,
    formatters = {
        'f': {'format':
              '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'}
        },
    handlers = {
        'h': {'class': 'logging.StreamHandler',
              'formatter': 'f',
              'level': logging.DEBUG}
        },
    root = {
        'handlers': ['h'],
        'level': logging.DEBUG,
        },
)
dictConfig(logging_config)

from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(0, 100)
sp.print_and_run('ls')
0x90

Here is a simple and flexible solution that works on a variety of OS versions, and both Python 2 and 3, using IPython in shell mode:

from IPython.terminal.embed import InteractiveShellEmbed
my_shell = InteractiveShellEmbed()
result = my_shell.getoutput("echo hello world")
print(result)

Out: ['hello world']

It has a couple of advantages

  1. It only requires an IPython install, so you don't really need to worry about your specific Python or OS version when using it; it comes with Jupyter, which has a wide range of support
  2. It takes a simple string by default - so no need to use shell mode arg or string splitting, making it slightly cleaner IMO
  3. It also makes it cleaner to easily substitute variables or even entire Python commands in the string itself

To demonstrate:

var = "hello world "
result = my_shell.getoutput("echo {var*2}")
print(result)

Out: ['hello world hello world']

Just wanted to give you an extra option, especially if you already have Jupyter installed.

Naturally, if you are in an actual Jupyter notebook as opposed to a .py script you can also always do:

result = !echo hello world
print(result)

To accomplish the same.

dnola
  • This sort of string construction is a bad idea for safety and reliability. The other answers here include various options that use only the standard library, so it's hard to argue that this is more portable. – Davis Herring Sep 06 '21 at 19:06
  • By "portable" I mean "runs the same in every environment". The other answers here rely on using different steps for different versions of Python, and different environments. Additionally, their failure conditions differ based on approach. For example, check_output based approaches will fail to yield any output if the underlying process fails, while other subprocess approaches will not. The above solution is agnostic to environment and version - and consistently yields the same result you would get as if you ran it in shell yourself, even during failure, which is what I think the user expects. – dnola Sep 27 '21 at 21:16
  • w.r.t. string construction - I agree it can be dangerous in production scenarios. But other scenarios - such as exploratory data analysis - value code efficiency over safety, as they are not going directly to production. Such string construction has value in several such situations. – dnola Sep 27 '21 at 21:18
  • `subprocess.check_output(shell=True)` is just as platform-independent (surely we can assume Python 2.7 or 3.1 by now!), and its `CalledProcessError` *does* have `output` available. I certainly respect the idea that research software has different objectives, but I’ve seen plenty of it suffer from insufficient care around things like process exit codes and thus do not advocate for “just like the interactive interface” design (although I grant that it is what’s explicitly solicited in this question!). – Davis Herring Sep 27 '21 at 22:52
  • The accepted answer does not account for CalledProcessError, despite this being explicitly what is requested by TC. Sounds like TC basically wanted a one liner, this is a true cross-platform one liner. I accept that "magic" solutions are controversial, but it can be valuable - and sometimes preferable - to know they exist. IPython and Jupyter as a project exist for explicitly this purpose, and people find those plenty valuable - unless you are arguing that IPython/Jupyter have no place in a Python programmer's workflow. It basically depends on whether TC believes in "magic" or not! – dnola Sep 28 '21 at 21:01

This is the code I use to support multithreading in a Jupyter notebook cell and make the cell print the shell output in real time. It makes use of bufsize and stderr.

from subprocess import Popen, PIPE, STDOUT

def verbosecmd(command):
    with Popen(
        command, stdout=PIPE, shell=True, stderr=STDOUT, bufsize=0, close_fds=True
    ) as process:
        for line in iter(process.stdout.readline, b""):
            print(line.rstrip().decode("utf-8"))
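
Usage might look like this (the command is just an illustration):

verbosecmd("ping -c 2 localhost")  # prints each output line as it arrives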
Muhammad Yasirroni

If you want to capture both stdout and stderr and display them as they would appear when a shell command is executed in an interactive terminal, and you need to know the return status of the command, you can use the following hack.

import time, os

cmd_str = "ls -d /bin/nonsense"

tmpfile = "/tmp/results." + str(os.getpid()) + "." + str(time.time())
status = os.system(cmd_str + " > " + tmpfile + " 2>&1")
with open(tmpfile, 'r') as f:
    print(f.read())
os.remove(tmpfile)

print("status=" + str(status))

Note: The tmpfile name is unique, and the file is removed as soon as it is used. The shell is used to parse cmd_str, so don't use this technique with arbitrary strings; it is unsafe.

jacobm654321

1-liner (for the obsessed)

def cmd(x): from subprocess import check_output as c; return c(x, shell=True, text=True) # noqa

usage

print(cmd('ls -la')) #linux
print(cmd('ipconfig')) #windows

comments

  • don't worry about subprocess being imported every time; after the first call, subsequent imports are just a cached lookup
  • it does not follow the PEP 8 formatting standard and loses its single-line form on save if any formatter is configured in VS Code; you may ignore this using --ignore=E702,E704,E501 in settings or by appending # noqa at the end (I already added it), ref this answer
  • also, errors are raised and not passed silently
nikhil swami
  • What is the purpose here? The only reason to write code this way is to be able to run it from shell with `python -c "..."`, which obviously has no point in this case – Nikolaj Š. Sep 01 '23 at 15:39
  • @NikolajŠ. you have perhaps misunderstood the dummy test case with main objective. it was meant for demo only. i have omitted it. a good deal of thought went before writing, and no intentions to spam. – nikhil swami Sep 01 '23 at 22:19

The output can be redirected to a text file and then read back.

import subprocess
import os
import tempfile

def execute_to_file(command):
    """
    This function executes the command
    and passes its output to a tempfile, then reads it back.
    It is useful for processes that spawn child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # if the command failed, clean up and return
        os.unlink(path)
        return
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
Masoud Rahimi
  • Sure it *can,* but why would you want to; and why would you use the shell instead of passing `stdout=temp_file`? – tripleee Jun 18 '19 at 06:25
  • Actually, in general way you are right but in my example the `ecls.exe` seems to deploy another command line tool, so the simple way didn't work sometimes. – Masoud Rahimi Jun 18 '19 at 06:47

For example, execute('ls -ahl') differentiates four possible outcomes and adapts to the OS platform:

  1. no output, but run successfully
  2. output empty line, run successfully
  3. run failed
  4. output something, run successfully

function below

import os
import platform
import subprocess
from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.
    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, does not affect returned values
    return: ...regardless of output=True/False...
            returns shell output as a list where each element is a line of string (whitespace stripped on both sides) from the output
            could be
            [], i.e., len()=0 --> no output;
            [''] --> output empty line;
            None --> error occurred, see below

            if an error occurs, returns None (i.e., is None), and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

        # https://stackoverflow.com/a/40139101/2292993
        def _execute_cmd(cmd):
            if os.name == 'nt' or platform.system() == 'Windows':
                # set stdin, out, err all to PIPE to get results (other than None) after running the Popen() instance
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            else:
                # Use bash; the default is sh
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

            # the Popen() instance starts running once instantiated (??)
            # additionally, communicate(), or poll() and wait for the process to terminate
            # communicate() accepts optional input as stdin to the pipe (requires setting stdin=subprocess.PIPE above); returns (out, err) as a tuple
            # if communicate(), the results are buffered in memory

            # Read stdout from the subprocess until the buffer is empty!
            # if an error occurs, the stdout is '', which means the below loop is essentially skipped
            # A prefix of 'b' or 'B' is ignored in Python 2;
            # it indicates that the literal should become a bytes literal in Python 3
            # (e.g. when code is automatically converted with 2to3).
            # return iter(p.stdout.readline, b'')
            for line in iter(p.stdout.readline, b''):
                # Windows has \r\n, Unix has \n, old Mac has \r
                yield line
            while p.poll() is None:
                sleep(.1)  # Don't waste CPU cycles
            # Empty the STDERR buffer
            err = p.stderr.read()
            if p.returncode != 0:
                # responsible for logging STDERR
                print("Error: " + str(err))
                yield None

        out = []
        for line in _execute_cmd(cmd):
            # error did not occur earlier
            if line is not None:
                # trailing comma to avoid a newline (by print itself) being printed
                if output: print line,
                out.append(line.strip())
            else:
                # error occurred earlier
                out = None
        return out
    else:
        print "Simulation! The command is " + cmd
        print ""
Jerry T