How do I call an external command within Python as if I had typed it in a shell or command prompt?
66 Answers
Use subprocess.run
:
import subprocess
subprocess.run(["ls", "-l"])
Another common way is os.system
but you shouldn't use it because it is unsafe if any parts of the command come from outside your program or can contain spaces or other special characters, also subprocess.run
is generally more flexible (you can get the stdout
, stderr
, the "real" status code, better error handling, etc.). Even the documentation for os.system
recommends using subprocess
instead.
On Python 3.4 and earlier, use subprocess.call
instead of .run
:
subprocess.call(["ls", "-l"])

- 14,854
- 11
- 100
- 103

- 78,318
- 8
- 63
- 70
-
9Is there a way to use variable substitution? IE I tried to do `echo $PATH` by using `call(["echo", "$PATH"])`, but it just echoed the literal string `$PATH` instead of doing any substitution. I know I could get the PATH environment variable, but I'm wondering if there is an easy way to have the command behave exactly as if I had executed it in bash. – Kevin Wheeler Sep 01 '15 at 23:17
-
@KevinWheeler You'll have to use `shell=True` for that to work. – SethMMorton Sep 02 '15 at 20:38
-
73@KevinWheeler You should NOT use `shell=True`, for this purpose Python comes with [os.path.expandvars](https://docs.python.org/2/library/os.path.html#os.path.expandvars). In your case you can write: `os.path.expandvars("$PATH")`. @SethMMorton please reconsider your comment -> [Why not to use shell=True](https://docs.python.org/2/library/subprocess.html#frequently-used-arguments) – Murmel Nov 11 '15 at 20:24
-
what if I want to pipe things e.g. `pip list | grep anatome`? – Charlie Parker Nov 12 '21 at 16:51
-
Many arguments version looks like that: `subprocess.run(["balcon.exe","-n","Tatyana","-t", "Hello world"])` – Sergey Anisimov May 15 '22 at 18:55
-
@thatrandomperson your comment is useless because you didn't provide an example. no one is able to evaluate what you actually tried, why you expected it to work and explain to you why it didn't work. We just have to take your word that a function used by millions of programs "doesn't work"? – Boris Verkhovskiy May 19 '23 at 23:56
Here is a summary of ways to call external programs, including their advantages and disadvantages:
os.system
passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, et cetera. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
os.popen
will do the same thing asos.system
except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. Example:print(os.popen("ls -l").read())
subprocess.Popen
. This is intended as a replacement foros.popen
, but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
instead of
print os.popen("echo Hello World").read()
but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.
subprocess.call
. This is basically just like thePopen
class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:return_code = subprocess.call("echo Hello World", shell=True)
subprocess.run
. Python 3.5+ only. Similar to the above but even more flexible and returns aCompletedProcess
object when the command finishes executing.os.fork
,os.exec
,os.spawn
are similar to their C language counterparts, but I don't recommend using them directly.
The subprocess
module should probably be what you use.
Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string and you are responsible for escaping it. There are serious security implications if any part of the string that you pass can not be fully trusted. For example, if a user is entering some/any part of the string. If you are unsure, only use these methods with constants. To give you a hint of the implications consider this code:
print subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read()
and imagine that the user enters something "my mama didnt love me && rm -rf /
" which could erase the whole filesystem.

- 7,243
- 6
- 49
- 61

- 186,300
- 67
- 213
- 256
-
53Nice answer/explanation. How is this answer justifying Python's motto as described in this article ? http://www.fastcompany.com/3026446/the-fall-of-perl-the-webs-most-promising-language "Stylistically, Perl and Python have different philosophies. Perl’s best known mottos is " There’s More Than One Way to Do It". Python is designed to have one obvious way to do it" Seem like it should be the other way! In Perl I know only two ways to execute a command - using back-tick or `open`. – Jean May 26 '15 at 21:16
-
28If using Python 3.5+, use `subprocess.run()`. https://docs.python.org/3.5/library/subprocess.html#subprocess.run – phoenix Oct 07 '15 at 16:37
-
11What one typically needs to know is what is done with the child process's STDOUT and STDERR, because if they are ignored, under some (quite common) conditions, eventually the child process will issue a system call to write to STDOUT (STDERR too?) that would exceed the output buffer provided for the process by the OS, and the OS will cause it to block until some process reads from that buffer. So, with the currently recommended ways, `subprocess.run(..)`, what exactly does *"This does not capture stdout or stderr by default."* imply? What about `subprocess.check_output(..)` and STDERR? – Evgeni Sergeev Jun 01 '16 at 10:44
-
which of the commands you recommended block my script? i.e. if I want to run multiple commands in a `for` loop how do I do it without it blocking my python script? I don't care about the output of the command I just want to run lots of them. – Charlie Parker Oct 24 '17 at 19:08
-
14This is arguably the wrong way around. Most people only need `subprocess.run()` or its older siblings `subprocess.check_call()` et al. For cases where these do not suffice, see `subprocess.Popen()`. `os.popen()` should perhaps not be mentioned at all, or come even after "hack your own fork/exec/spawn code". – tripleee Dec 03 '18 at 06:00
-
-
@BenL: That causes the command to be passed directly to a shell (e.g. bash) as opposed to doing a fork/exec using the underlying system calls. Using shell might be required because some commands aren't actually executable programs and are instead shell commands build into your shell (e.g. `export` in bash is a shell command). Using a shell is a bad idea if you're taking untrusted user input, because it's more likely to cause a security issue. – Eli Courtwright Sep 14 '22 at 18:11
Typical implementation:
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
print line,
retval = p.wait()
You are free to do what you want with the stdout
data in the pipe. In fact, you can simply omit those parameters (stdout=
and stderr=
) and it'll behave like os.system()
.

- 8,690
- 4
- 21
- 43

- 7,753
- 2
- 17
- 19
-
59`.readlines()` reads *all* lines at once i.e., it blocks until the subprocess exits (closes its end of the pipe). To read in real time (if there is no buffering issues) you could: `for line in iter(p.stdout.readline, ''): print line,` – jfs Nov 16 '12 at 14:12
-
4Could you elaborate on what you mean by "if there is no buffering issues"? If the process blocks definitely, the subprocess call also blocks. The same could happen with my original example as well. What else could happen with respect to buffering? – EmmEff Nov 17 '12 at 13:25
-
20the child process may use block-buffering in non-interactive mode instead of line-buffering so `p.stdout.readline()` (note: no `s` at the end) won't see any data until the child fills its buffer. If the child doesn't produce much data then the output won't be in real time. See the second reason in [Q: Why not just use a pipe (popen())?](http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F). Some workarounds are provided [in this answer](http://stackoverflow.com/a/12471855/4279) (pexpect, pty, stdbuf) – jfs Nov 17 '12 at 13:51
-
7the buffering issue only matters if you want output in real time and doesn't apply to your code that doesn't print anything until *all* data is received – jfs Nov 17 '12 at 13:53
-
10This answer was fine for its time, but we should no longer recommend `Popen` for simple tasks. This also needlessly specifies `shell=True`. Try one of the `subprocess.run()` answers. – tripleee Dec 03 '18 at 05:39
Some hints on detaching the child process from the calling one (starting the child process in background).
Suppose you want to start a long task from a CGI script. That is, the child process should live longer than the CGI script execution process.
The classical example from the subprocess module documentation is:
import subprocess
import sys
# Some code here
pid = subprocess.Popen([sys.executable, "longtask.py"]) # Call subprocess
# Some more code here
The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.
My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.
On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows API. If you happen to have installed pywin32, you can import the flag from the win32process module, otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
creationflags=DETACHED_PROCESS).pid
/* UPD 2015.10.27 @eryksun in a comment below notes, that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010) */
On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:
pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
I have not checked the code on other platforms and do not know the reasons of the behaviour on FreeBSD. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet.

- 30,738
- 21
- 105
- 131

- 31,286
- 11
- 84
- 89
-
i noticed a possible "quirk" with developing py2exe apps in pydev+eclipse. i was able to tell that the main script was not detached because eclipse's output window was not terminating; even if the script executes to completion it is still waiting for returns. but, when i tried compiling to a py2exe executable, the expected behavior occurs (runs the processes as detached, then quits). i am not sure, but the executable name is not in the process list anymore. this works for all approaches (os.system("start *"), os.spawnl with os.P_DETACH, subprocs, etc.) – maranas Apr 09 '10 at 08:09
-
Windows gotcha: even though I spawned process with DETACHED_PROCESS, when I killed my Python daemon all ports opened by it wouldn't free until all spawned processes terminate. WScript.Shell solved all my problems. Example here: http://pastebin.com/xGmuvwSx – Alexey Lebedev Apr 16 '12 at 10:04
-
3you might also need CREATE_NEW_PROCESS_GROUP flag. See [Popen waiting for child process even when the immediate child has terminated](http://stackoverflow.com/q/13243807/4279) – jfs Nov 16 '12 at 14:16
-
2I'm seeing `import subprocess as sp;sp.Popen('calc')` not waiting for the subprocess to complete. It seems the creationflags aren't necessary. What am I missing? – ubershmekel Oct 27 '14 at 21:01
-
1@ubershmekel, I am not sure what you mean and don't have a windows installation. If I recall correctly, without the flags you can not close the `cmd` instance from which you started the `calc`. – newtover Oct 28 '14 at 12:25
-
I'm on Windows 8.1 and `calc` seems to survive the closing of `python`. – ubershmekel Oct 30 '14 at 05:45
-
Is there any significance to using '0x00000008'? Is that a specific value that has to be used or one of multiple options? – SuperBiasedMan May 05 '15 at 13:13
-
10The following is incorrect: "[o]n windows (win xp), the parent process will not finish until the longtask.py has finished its work". The parent will exit normally, but the console window (conhost.exe instance) only closes when the last attached process exits, and the child may have inherited the parent's console. Setting `DETACHED_PROCESS` in `creationflags` avoids this by preventing the child from inheriting or creating a console. If you instead want a new console, use `CREATE_NEW_CONSOLE` (0x00000010). – Eryk Sun Oct 27 '15 at 00:27
-
3I didn't mean that executing as a detached process is incorrect. That said, you may need to set the standard handles to files, pipes, or `os.devnull` because some console programs exit with an error otherwise. Create a new console when you want the child process to interact with the user concurrently with the parent process. It would be confusing to try to do both in a single window. – Eryk Sun Oct 27 '15 at 17:37
-
`stdout=subprocess.PIPE` will make your code hang up if you have long output from a child. For more details see https://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/ – Dr_Zaszuś Mar 08 '18 at 08:56
-
your answer seems strange to me. I just opened a `subprocess.Popen` and nothing bad happened (not had to wait). Why exactly do we need to worry about the scenario you are pointing out? I'm skeptical. – Charlie Parker Feb 24 '19 at 19:38
import os
os.system("your command")
Note that this is dangerous, since the command isn't cleaned. See the documentation of the os
and sys
modules. There are a bunch of functions (exec*
and spawn*
) that will do similar things.

- 14,854
- 11
- 100
- 103

- 4,755
- 3
- 24
- 34
-
19No idea what I meant nearly a decade ago (check the date!), but if I had to guess, it would be that there's no validation done. – nimish Jun 06 '18 at 16:01
-
4This should now point to `subprocess` as a slightly more versatile and portable solution. Running external commands is of course inherently unportable (you have to make sure the command is available on every architecture you need to support) and passing user input as an external command is inherently unsafe. – tripleee Dec 03 '18 at 05:11
-
-
@NathanB by using `result = subprocess.run()` instead and then doing `result.stdout`. It might also be possible by [intercepting stdout](https://stackoverflow.com/a/40417352) – Boris Verkhovskiy May 22 '23 at 09:51
I'd recommend using the subprocess module instead of os.system because it does shell escaping for you and is therefore much safer.
subprocess.call(['ping', 'localhost'])

- 33,817
- 13
- 115
- 143

- 2,444
- 2
- 15
- 7
-
If you want to **create a list out of a command with parameters**, a list which can be used with `subprocess` when `shell=False`, then use `shlex.split` for an easy way to do this https://docs.python.org/2/library/shlex.html#shlex.split (it's the recommended way according to the docs https://docs.python.org/2/library/subprocess.html#popen-constructor) – Daniel F Sep 20 '18 at 18:07
-
14This is incorrect: "**it does shell escaping for you and is therefore much safer**". subprocess doesn't do shell escaping, subprocess doesn't pass your command through the shell, so there's no need to shell escape. – Lie Ryan Dec 04 '18 at 08:36
import os
cmd = 'ls -al'
os.system(cmd)
If you want to return the results of the command, you can use os.popen
. However, this is deprecated since version 2.6 in favor of the subprocess module, which other answers have covered well.

- 10,547
- 9
- 68
- 101

- 2,982
- 1
- 19
- 23
-
16popen [is deprecated](https://docs.python.org/2/library/os.html#os.popen) in favor of [subprocess](https://docs.python.org/2/library/subprocess.html). – tew Aug 08 '14 at 00:22
-
You can also save your result with the os.system call, since it works like the UNIX shell itself, like for example os.system('ls -l > test2.txt') – Stefan Gruenwald Nov 07 '17 at 23:19
There are lots of different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l
(list all files). If you want to find out more about any of the libraries I've listed and linked the documentation for each of them.
Sources
- subprocess: https://docs.python.org/3.5/library/subprocess.html
- shlex: https://docs.python.org/3/library/shlex.html
- os: https://docs.python.org/3.5/library/os.html
- sh: https://amoffat.github.io/sh/
- plumbum: https://plumbum.readthedocs.io/en/latest/
- pexpect: https://pexpect.readthedocs.io/en/stable/
- fabric: http://www.fabfile.org/
- envoy: https://github.com/kennethreitz/envoy
- commands: https://docs.python.org/2/library/commands.html
These are all the libraries
Hopefully this will help you make a decision on which library to use :)
subprocess
Subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). Subprocess is the default choice for running commands, but sometimes other modules are better.
subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command
os
os is used for "operating system dependent functionality". It can also be used to call external commands with os.system
and os.popen
(Note: There is also a subprocess.popen). os will always run the shell and is a simple alternative for people who don't need to, or don't know how to use subprocess.run
.
os.system("ls -l") # Run command
os.popen("ls -l").read() # This will run the command and return any output
sh
sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.
sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function
plumbum
plumbum is a library for "script-like" Python programs. You can call programs like functions as in sh
. Plumbum is useful if you want to run a pipeline without the shell.
ls_cmd = plumbum.local("ls -l") # Get command
ls_cmd() # Run command
pexpect
pexpect lets you spawn child applications, control them and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.
pexpect.run("ls -l") # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.') # Spawns child application
child.expect('Password:') # When this is the output
child.sendline('mypassword')
fabric
fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is simple alternative for running commands in a secure shell (SSH)
fabric.operations.local('ls -l') # Run command as normal
fabric.operations.local('ls -l', capture = True) # Run command and receive output
envoy
envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess
module.
r = envoy.run("ls -l") # Run command
r.std_out # Get output
commands
commands
contains wrapper functions for os.popen
, but it has been removed from Python 3 since subprocess
is a better alternative.

- 30,738
- 21
- 105
- 131

- 5,291
- 7
- 33
- 42
-
I think it should be `ls_cmd = plumbum.local["ls -l"]` # Get command – innisfree Jul 13 '23 at 02:55
With the standard library
Use the subprocess module (Python 3):
import subprocess
subprocess.run(['ls', '-l'])
It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.
Note on Python version: If you are still using Python 2, subprocess.call works in a similar way.
ProTip: shlex.split can help you to parse the command for run
, call
, and other subprocess
functions in case you don't want (or you can't!) provide them in form of lists:
import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))
With external dependencies
If you do not mind external dependencies, use plumbum:
from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())
It is the best subprocess
wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by pip install plumbum
.
Another popular library is sh:
from sh import ifconfig
print(ifconfig('wlan0'))
However, sh
dropped Windows support, so it's not as awesome as it used to be. Install by pip install sh
.

- 30,738
- 21
- 105
- 131

- 8,566
- 8
- 47
- 66
I always use fabric
for doing these things. Here is a demo code:
from fabric.operations import local
result = local('ls', capture=True)
print "Content:/n%s" % (result, )
But this seems to be a good tool: sh
(Python subprocess interface).
Look at an example:
from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)

- 324
- 3
- 12

- 92,161
- 3
- 37
- 44
Check the "pexpect" Python library, too.
It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like:
child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')

- 30,738
- 21
- 105
- 131

- 1,039
- 8
- 5
If you need the output from the command you are calling, then you can use subprocess.check_output (Python 2.7+).
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
Also note the shell parameter.
If shell is
True
, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user’s home directory. However, note that Python itself offers implementations of many shell-like features (in particular,glob
,fnmatch
,os.walk()
,os.path.expandvars()
,os.path.expanduser()
, andshutil
).

- 30,738
- 21
- 105
- 131

- 10,065
- 8
- 42
- 63
-
2Note that `check_output` requires a list rather than a string. If you don't rely on quoted spaces to make your call valid, the simplest, most readable way to do this is `subprocess.check_output("ls -l /dev/null".split())`. – Bruno Bronosky Jan 30 '18 at 18:18
-
Like the answer vaguely mentions, and many other answers on this page explain in more detail, you can pass a list, or with `shell=True` a single string which the shell then takes care of parsing and executing. Using plain `.split()` is fine under the circumstances you mention, but beginners typically don't understand the nuances; you are probably better off recommending `shlex.split()` which does handle quoting and backslash escapes correctly. – tripleee Jun 09 '21 at 19:39
Update:
subprocess.run
is the recommended approach as of Python 3.5 if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See this question for how.)
Here's some examples from the documentation.
Run a process:
>>> subprocess.run(["ls", "-l"]) # Doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)
Raise on failed run:
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
Capture output:
>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')
Original answer:
I recommend trying Envoy. It's a wrapper for subprocess, which in turn aims to replace the older modules and functions. Envoy is subprocess for humans.
Example usage from the README:
>>> r = envoy.run('git config', data='data to pipe in', timeout=2)
>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''
Pipe stuff around too:
>>> r = envoy.run('uptime | pbcopy')
>>> r.command
'pbcopy'
>>> r.status_code
0
>>> r.history
[<Response 'uptime'>]

- 30,738
- 21
- 105
- 131

- 16,328
- 12
- 61
- 75
This is how I run my commands. This code has everything you need pretty much
from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd , shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()

- 701
- 5
- 3
-
4I think it's acceptable for hard-coded commands, if it increases readability. – Adam Matan Apr 02 '14 at 13:07
How to execute a program or call a system command from Python
Simple, use subprocess.run
, which returns a CompletedProcess
object:
>>> from subprocess import run
>>> from shlex import split
>>> completed_process = run(split('python --version'))
Python 3.8.8
>>> completed_process
CompletedProcess(args=['python', '--version'], returncode=0)
(run
wants a list of lexically parsed shell arguments - this is what you'd type in a shell, separated by spaces, but not where the spaces are quoted, so use a specialized function, split
, to split up what you would literally type into your shell)
Why?
As of Python 3.5, the documentation recommends subprocess.run:
The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.
Here's an example of the simplest possible usage - and it does exactly as asked:
>>> from subprocess import run
>>> from shlex import split
>>> completed_process = run(split('python --version'))
Python 3.8.8
>>> completed_process
CompletedProcess(args=['python', '--version'], returncode=0)
run
waits for the command to successfully finish, then returns a CompletedProcess
object. It may instead raise TimeoutExpired
(if you give it a timeout=
argument) or CalledProcessError
(if it fails and you pass check=True
).
As you might infer from the above example, stdout and stderr both get piped to your own stdout and stderr by default.
We can inspect the returned object and see the command that was given and the returncode:
>>> completed_process.args
['python', '--version']
>>> completed_process.returncode
0
Capturing output
If you want to capture the output, you can pass subprocess.PIPE
to the appropriate stderr
or stdout
:
>>> from subprocess import PIPE
>>> completed_process = run(shlex.split('python --version'), stdout=PIPE, stderr=PIPE)
>>> completed_process.stdout
b'Python 3.8.8\n'
>>> completed_process.stderr
b''
And those respective attributes return bytes.
Pass a command list
One might easily move from manually providing a command string (like the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It's better to assume you don't trust the input.
>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\n This is indented.\n'
Note, only args
should be passed positionally.
Full Signature
Here's the actual signature in the source and as shown by help(run)
:
def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
The popenargs
and kwargs
are given to the Popen
constructor. input
can be a string of bytes (or unicode, if specify encoding or universal_newlines=True
) that will be piped to the subprocess's stdin.
The documentation describes timeout=
and check=True
better than I could:
The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.
If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.
and this example for check=True
is better than one I could come up with:
>>> subprocess.run("exit 1", shell=True, check=True) Traceback (most recent call last): ... subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
Expanded Signature
Here's an expanded signature, as given in the documentation:
subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, shell=False, cwd=None, timeout=None, check=False, encoding=None, errors=None)
Note that this indicates that only the args list should be passed positionally. So pass the remaining arguments as keyword arguments.
Popen
When use Popen
instead? I would struggle to find use-case based on the arguments alone. Direct usage of Popen
would, however, give you access to its methods, including poll
, 'send_signal', 'terminate', and 'wait'.
Here's the Popen
signature as given in the source. I think this is the most precise encapsulation of the information (as opposed to help(Popen)
):
def __init__(self, args, bufsize=-1, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=True,
shell=False, cwd=None, env=None, universal_newlines=None,
startupinfo=None, creationflags=0,
restore_signals=True, start_new_session=False,
pass_fds=(), *, user=None, group=None, extra_groups=None,
encoding=None, errors=None, text=None, umask=-1, pipesize=-1):
But more informative is the Popen
documentation:
subprocess.Popen(args, bufsize=-1, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=True, shell=False, cwd=None, env=None, universal_newlines=None, startupinfo=None, creationflags=0, restore_signals=True, start_new_session=False, pass_fds=(), *, group=None, extra_groups=None, user=None, umask=-1, encoding=None, errors=None, text=None)
Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.
Understanding the remaining documentation on Popen
will be left as an exercise for the reader.

- 374,368
- 89
- 403
- 331
-
1A simple example of two-way communication between a primary process and a subprocess can be found here: https://stackoverflow.com/a/52841475/1349673 – James Hirschorn Oct 16 '18 at 18:05
As of Python 3.7.0 released on June 27th 2018 (https://docs.python.org/3/whatsnew/3.7.html), you can achieve your desired result in the most powerful while equally simple way. This answer intends to show you the essential summary of various options in a short manner. For in-depth answers, please see the other ones.
TL;DR in 2021
The big advantage of os.system(...)
was its simplicity. subprocess
is better and still easy to use, especially as of Python 3.5.
import subprocess
subprocess.run("ls -a", shell=True)
Note: This is the exact answer to your question - running a command
like in a shell
Preferred Way
If possible, remove the shell overhead and run the command directly (requires a list).
import subprocess
subprocess.run(["help"])
subprocess.run(["ls", "-a"])
Pass program arguments in a list. Don't include \"
-escaping for arguments containing spaces.
Advanced Use Cases
Checking The Output
The following code speaks for itself:
import subprocess
result = subprocess.run(["ls", "-a"], capture_output=True, text=True)
if "stackoverflow-logo.png" in result.stdout:
print("You're a fan!")
else:
print("You're not a fan?")
result.stdout
is all normal program output excluding errors. Read result.stderr
to get them.
capture_output=True
- turns capturing on. Otherwise result.stderr
and result.stdout
would be None
. Available from Python 3.7.
text=True
- a convenience argument added in Python 3.7 which converts the received binary data to Python strings you can easily work with.
Checking the returncode
Do
if result.returncode == 127: print("The program failed for some weird reason")
elif result.returncode == 0: print("The program succeeded")
else: print("The program failed unexpectedly")
If you just want to check if the program succeeded (returncode == 0) and otherwise throw an Exception, there is a more convenient function:
result.check_returncode()
But it's Python, so there's an even more convenient argument check
which does the same thing automatically for you:
result = subprocess.run(..., check=True)
stderr should be inside stdout
You might want to have all program output inside stdout, even errors. To accomplish this, run
result = subprocess.run(..., stderr=subprocess.STDOUT)
result.stderr
will then be None
and result.stdout
will contain everything.
Using shell=False with an argument string
shell=False
expects a list of arguments. You might however, split an argument string on your own using shlex.
import subprocess
import shlex
subprocess.run(shlex.split("ls -a"))
That's it.
Common Problems
Chances are high you just started using Python when you come across this question. Let's look at some common problems.
FileNotFoundError: [Errno 2] No such file or directory: 'ls -a': 'ls -a'
You're running a subprocess without shell=True
. Either use a list (["ls", "-a"]
) or set shell=True
.
TypeError: [...] NoneType [...]
Check that you've set capture_output=True
.
TypeError: a bytes-like object is required, not [...]
You always receive byte results from your program. If you want to work with it like a normal string, set text=True
.
subprocess.CalledProcessError: Command '[...]' returned non-zero exit status 1.
Your command didn't run successfully. You could disable returncode checking or check your actual program's validity.
TypeError: init() got an unexpected keyword argument [...]
You're likely using a version of Python older than 3.7.0; update it to the most recent one available. Otherwise there are other answers in this Stack Overflow post showing you older alternative solutions.

- 30,738
- 21
- 105
- 131

- 3,451
- 1
- 19
- 31
-
1"The big advantage of os.system(...) was its simplicity. subprocess is better" - how subprocess is better? I am happily using os.system, not sure how switching to subprocess and remembering extra `shell=True` benefits me. What kind of thing is better in subprocess? – reducing activity Mar 26 '21 at 09:14
-
1You're right in that `os.system(...)` is a reasonable choice for executing commands in terms of simple "blind" execution. However, the use cases are rather limited - as soon as you want to capture the output, you have to use a whole other library and then you start having both - subprocess and os for similar use cases in your code. I prefer to keep the code clean and use only one of them. Second, and I would have put that section at the top but the TL;DR has to answer the question **exactly**, you should **not** use `shell=True`, but instead what I've written in the `Preferred Way` section. – fameman Mar 27 '21 at 11:30
-
1The problem with `os.system(...)` and `shell=True` is that you're spawning a new shell process, just to execute your command. This means, you have to do manual escaping which is not as simple as you might think - especially when targeting both POSIX and Windows. For user-supplied input, this is a no-go (just imagine the user entered something with `"` quotes - you'd have to escape them as well). Also, the shell process itself could load code you don't need - not only does it delay the program, but it could also lead to unexpected side effects, ending with a wrong return code. – fameman Mar 27 '21 at 11:34
-
2Summing up, `os.system(...)` is valid to use, indeed. But as soon as you're writing more than a quick python helper script, I'd recommend you to go for subprocess.run without `shell=True`. For more information about the drawbacks of os.system, I'd like to propose you a read through this SO answer: https://stackoverflow.com/a/44731082/6685358 – fameman Mar 27 '21 at 11:35
os.system
is OK, but kind of dated. It's also not very secure. Instead, try subprocess
. subprocess
does not call sh directly and is therefore more secure than os.system
.
Get more information here.

- 150,925
- 31
- 268
- 253

- 1,359
- 7
- 12
-
3While I agree with the overall recommendation, `subprocess` does not remove all of the security problems, and has some pesky issues of its own. – tripleee Dec 03 '18 at 05:36
There is also Plumbum
>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad() # Notepad window pops up
u'' # Notepad window is closed by user, command returns

- 2,449
- 3
- 27
- 33
Use:
import os
cmd = 'ls -al'
os.system(cmd)
os - This module provides a portable way of using operating system-dependent functionality.
For the more os
functions, here is the documentation.

- 30,738
- 21
- 105
- 131

- 768
- 12
- 24
It can be this simple:
import os
cmd = "your command"
os.system(cmd)

- 527
- 5
- 13
-
2This fails to point out the drawbacks, which are explained in much more detail in [PEP-324](https://www.python.org/dev/peps/pep-0324/). The documentation for `os.system` explicitly recommends avoiding it in favor of `subprocess`. – tripleee Dec 03 '18 at 05:02
os.system
does not allow you to store results, so if you want to store results in some list or something, a subprocess.call
works.

- 30,738
- 21
- 105
- 131

- 387
- 3
- 2
There is another difference here which is not mentioned previously.
subprocess.Popen
executes the <command> as a subprocess. In my case, I need to execute file <a> which needs to communicate with another program, <b>.
I tried subprocess, and execution was successful. However <b> could not communicate with <a>. Everything is normal when I run both from the terminal.
One more: (NOTE: kwrite behaves different from other applications. If you try the below with Firefox, the results will not be the same.)
If you try os.system("kwrite")
, program flow freezes until the user closes kwrite. To overcome that I tried instead os.system(konsole -e kwrite)
. This time program continued to flow, but kwrite became the subprocess of the console.
Anyone runs the kwrite not being a subprocess (i.e. in the system monitor it must appear at the leftmost edge of the tree).

- 30,738
- 21
- 105
- 131

- 295
- 3
- 2
-
1What do you mean by *"Anyone runs the kwrite not being a subprocess"*? – Peter Mortensen Jun 03 '18 at 20:14
-
I tend to use subprocess together with shlex (to handle escaping of quoted strings):
>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print call_params
["ls", "-l", "/your/path/with spaces/"]
>>> subprocess.call(call_params)

- 13,329
- 8
- 53
- 75
subprocess.check_call
is convenient if you don't want to test return values. It throws an exception on any error.

- 17,657
- 8
- 55
- 45
I wrote a library for this, shell.py.
It's basically a wrapper for popen and shlex for now. It also supports piping commands, so you can chain commands easier in Python. So you can do things like:
ex('echo hello shell.py') | "awk '{print $2}'"

- 30,738
- 21
- 105
- 131

- 761
- 1
- 8
- 12
Under Linux, in case you would like to call an external command that will execute independently (will keep running after the Python script terminates), you can use a simple queue as task spooler or the at command.
An example with task spooler:
import os
os.system('ts <your-command>')
Notes about task spooler (ts
):
You could set the number of concurrent processes to be run ("slots") with:
ts -S <number-of-slots>
Installing
ts
doesn't requires admin privileges. You can download and compile it from source with a simplemake
, add it to your path and you're done.

- 30,738
- 21
- 105
- 131

- 5,645
- 3
- 41
- 74
-
1`ts` is not standard on any distro I know of, though the pointer to `at` is mildly useful. You should probably also mention `batch`. As elsewhere, the `os.system()` recommendation should probably at least mention that `subprocess` is its recommended replacement. – tripleee Dec 03 '18 at 05:43
In Windows you can just import the subprocess
module and run external commands by calling subprocess.Popen()
, subprocess.Popen().communicate()
and subprocess.Popen().wait()
as below:
# Python script to run a command line
import subprocess
def execute(cmd):
"""
Purpose : To execute a command and return exit status
Argument : cmd - command to execute
Return : exit_code
"""
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(result, error) = process.communicate()
rc = process.wait()
if rc != 0:
print "Error: failed to execute command:", cmd
print error
return result
# def
command = "tasklist | grep python"
print "This process detail: \n", execute(command)
Output:
This process detail:
python.exe 604 RDP-Tcp#0 4 5,660 K

- 30,738
- 21
- 105
- 131

- 2,152
- 1
- 19
- 32
Invoke is a Python (2.7 and 3.4+) task execution tool and library. It provides a clean, high-level API for running shell commands:
>>> from invoke import run
>>> cmd = "pip install -r requirements.txt"
>>> result = run(cmd, hide=True, warn=True)
>>> print(result.ok)
True
>>> print(result.stdout.splitlines()[-1])
Successfully installed invocations-0.13.0 pep8-1.5.7 spec-1.3.1

- 30,738
- 21
- 105
- 131

- 1,473
- 18
- 19
-
This is a great library. I was trying to explain it to a coworker the other day adn described it like this: `invoke` is to `subprocess` as `requests` is to `urllib3`. – user9074332 Mar 12 '19 at 02:00
You can use Popen, and then you can check the procedure's status:
from subprocess import Popen
proc = Popen(['ls', '-l'])
if proc.poll() is None:
proc.kill()
Check out subprocess.Popen.

- 30,738
- 21
- 105
- 131

- 350
- 2
- 6
A simple way is to use the os module:
import os
os.system('ls')
Alternatively, you can also use the subprocess module:
import subprocess
subprocess.check_call('ls')
If you want the result to be stored in a variable try:
import subprocess
r = subprocess.check_output('ls')

- 30,738
- 21
- 105
- 131

- 1,307
- 3
- 18
- 22
To fetch the network id from the OpenStack Neutron:
#!/usr/bin/python
import os
netid = "nova net-list | awk '/ External / { print $2 }'"
temp = os.popen(netid).read() /* Here temp also contains new line (\n) */
networkId = temp.rstrip()
print(networkId)
Output of nova net-list
+--------------------------------------+------------+------+
| ID | Label | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1 | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal | None |
+--------------------------------------+------------+------+
Output of print(networkId)
27a74fcd-37c0-4789-9414-9531b7e3f126

- 30,738
- 21
- 105
- 131

- 2,855
- 30
- 39
-
You should not recommend `os.popen()` in 2016. The Awk script could easily be replaced with native Python code. – tripleee Dec 03 '18 at 05:49
Very simplest way to run any command and get the result back:
from commands import getstatusoutput
try:
return getstatusoutput("ls -ltr")
except Exception, e:
return None

- 30,738
- 21
- 105
- 131

- 2,487
- 4
- 31
- 54
-
4Indeed, the [`commands` documentation from Python 2.7](https://docs.python.org/2/library/commands.html) says it was deprecated in 2.6 and will be removed in 3.0. – tripleee Dec 03 '18 at 05:06
MOST OF THE CASES:
For the most of cases, a short snippet of code like this is all you are going to need:
import subprocess
import shlex
source = "test.txt"
destination = "test_copy.txt"
base = "cp {source} {destination}'"
cmd = base.format(source=source, destination=destination)
subprocess.check_call(shlex.split(cmd))
It is clean and simple.
subprocess.check_call
run command with arguments and wait for command to complete.
shlex.split
split the string cmd using shell-like syntax
REST OF THE CASES:
If this do not work for some specific command, most probably you have a problem with command-line interpreters. The operating system chose the default one which is not suitable for your type of program or could not found an adequate one on the system executable path.
Example:
Using the redirection operator on a Unix system
input_1 = "input_1.txt"
input_2 = "input_2.txt"
output = "merged.txt"
base_command = "/bin/bash -c 'cat {input} >> {output}'"
base_command.format(input_1, output=output)
subprocess.check_call(shlex.split(base_command))
base_command.format(input_2, output=output)
subprocess.check_call(shlex.split(base_command))
As it is stated in The Zen of Python: Explicit is better than implicit
So if using a Python >=3.6 function, it would look something like this:
import subprocess
import shlex
def run_command(cmd_interpreter: str, command: str) -> None:
base_command = f"{cmd_interpreter} -c '{command}'"
subprocess.check_call(shlex.split(base_command)

- 982
- 8
- 8
Often, I use the following function for external commands, and this is especially handy for long running processes. The below method tails process output while it is running and returns the output, raises an exception if process fails.
It comes out if the process is done using the poll() method on the process.
import subprocess,sys
def exec_long_running_proc(command, args):
cmd = "{} {}".format(command, " ".join(str(arg) if ' ' not in arg else arg.replace(' ','\ ') for arg in args))
print(cmd)
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Poll process for new output until finished
while True:
nextline = process.stdout.readline().decode('UTF-8')
if nextline == '' and process.poll() is not None:
break
sys.stdout.write(nextline)
sys.stdout.flush()
output = process.communicate()[0]
exitCode = process.returncode
if (exitCode == 0):
return output
else:
raise Exception(command, exitCode, output)
You can invoke it like this:
exec_long_running_proc(command = "hive", args=["-f", hql_path])

- 3,064
- 1
- 11
- 12
-
1You'll get unexpected results passing an arg with space. Using `repr(arg)` instead of `str(arg)` might help by the mere coincidence that python and sh escape quotes the same way – sbk May 17 '18 at 12:08
-
1@sbk `repr(arg)` didn't really help, the above code handles spaces as well. Now the following works `exec_long_running_proc(command = "ls", args=["-l", "~/test file*"])` – am5 Nov 17 '18 at 00:07
Here are my two cents: In my view, this is the best practice when dealing with external commands...
These are the return values from the execute method...
pass, stdout, stderr = execute(["ls","-la"],"/home/user/desktop")
This is the execute method...
def execute(cmdArray,workingDir):
stdout = ''
stderr = ''
try:
try:
process = subprocess.Popen(cmdArray,cwd=workingDir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1)
except OSError:
return [False, '', 'ERROR : command(' + ' '.join(cmdArray) + ') could not get executed!']
for line in iter(process.stdout.readline, b''):
try:
echoLine = line.decode("utf-8")
except:
echoLine = str(line)
stdout += echoLine
for line in iter(process.stderr.readline, b''):
try:
echoLine = line.decode("utf-8")
except:
echoLine = str(line)
stderr += echoLine
except (KeyboardInterrupt,SystemExit) as err:
return [False,'',str(err)]
process.stdout.close()
returnCode = process.wait()
if returnCode != 0 or stderr != '':
return [False, stdout, stderr]
else:
return [True, stdout, stderr]

- 30,738
- 21
- 105
- 131
-
1
-
Better yet, avoid `Popen()` and use the higher-level API which is now collected into the single function `subprocess.run()` – tripleee Dec 03 '18 at 05:27
Just to add to the discussion, if you include using a Python console, you can call external commands from IPython. While in the IPython prompt, you can call shell commands by prefixing '!'. You can also combine Python code with the shell, and assign the output of shell scripts to Python variables.
For instance:
In [9]: mylist = !ls
In [10]: mylist
Out[10]:
['file1',
'file2',
'file3',]

- 30,738
- 21
- 105
- 131

- 5,293
- 7
- 42
- 78
I wrote a small library to help with this use case:
https://pypi.org/project/citizenshell/
It can be installed using
pip install citizenshell
And then used as follows:
from citizenshell import sh
assert sh("echo Hello World") == "Hello World"
You can separate standard output from standard error and extract the exit code as follows:
result = sh(">&2 echo error && echo output && exit 13")
assert result.stdout() == ["output"]
assert result.stderr() == ["error"]
assert result.exit_code() == 13
And the cool thing is that you don't have to wait for the underlying shell to exit before starting processing the output:
for line in sh("for i in 1 2 3 4; do echo -n 'It is '; date +%H:%M:%S; sleep 1; done", wait=False)
print ">>>", line + "!"
will print the lines as they are available thanks to the wait=False
>>> It is 14:24:52!
>>> It is 14:24:53!
>>> It is 14:24:54!
>>> It is 14:24:55!
More examples can be found at https://github.com/meuter/citizenshell

- 30,738
- 21
- 105
- 131

- 156
- 1
- 6
Calling an external command in Python
A simple way to call an external command is using os.system(...)
. And this function returns the exit value of the command. But the drawback is we won't get stdout and stderr.
ret = os.system('some_cmd.sh')
if ret != 0 :
print 'some_cmd.sh execution returned failure'
Calling an external command in Python in background
subprocess.Popen
provides more flexibility for running an external command rather than using os.system
. We can start a command in the background and wait for it to finish. And after that we can get the stdout and stderr.
proc = subprocess.Popen(["./some_cmd.sh"], stdout=subprocess.PIPE)
print 'waiting for ' + str(proc.pid)
proc.wait()
print 'some_cmd.sh execution finished'
(out, err) = proc.communicate()
print 'some_cmd.sh output : ' + out
Calling a long running external command in Python in the background and stop after some time
We can even start a long running process in the background using subprocess.Popen
and kill it after sometime once its task is done.
proc = subprocess.Popen(["./some_long_run_cmd.sh"], stdout=subprocess.PIPE)
# Do something else
# Now some_long_run_cmd.sh exeuction is no longer needed, so kill it
os.system('kill -15 ' + str(proc.pid))
print 'Output : ' proc.communicate()[0]

- 30,738
- 21
- 105
- 131

- 12,790
- 16
- 88
- 100
There are a lot of different ways to run external commands in Python, and all of them have their own plus sides and drawbacks.
My colleagues and me have been writing Python system administration tools, so we need to run a lot of external commands, and sometimes you want them to block or run asynchronously, time-out, update every second, etc.
There are also different ways of handling the return code and errors, and you might want to parse the output, and provide new input (in an expect kind of style). Or you will need to redirect standard input, standard output, and standard error to run in a different tty (e.g., when using GNU Screen).
So you will probably have to write a lot of wrappers around the external command. So here is a Python module which we have written which can handle almost anything you would want, and if not, it's very flexible so you can easily extend it:
https://github.com/hpcugent/vsc-base/blob/master/lib/vsc/utils/run.py
It doesn't work stand-alone and requires some of our other tools, and got a lot of specialised functionality over the years, so it might not be a drop-in replacement for you, but it can give you a lot of information on how the internals of Python for running commands work and ideas on how to handle certain situations.

- 30,738
- 21
- 105
- 131

- 9,316
- 1
- 42
- 48
Use subprocess.call:
from subprocess import call
# Using list
call(["echo", "Hello", "world"])
# Single string argument varies across platforms so better split it
call("echo Hello world".split(" "))

- 30,738
- 21
- 105
- 131

- 1,955
- 1
- 18
- 26
Use:
import subprocess
p = subprocess.Popen("df -h", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
It gives nice output which is easier to work with:
['Filesystem Size Used Avail Use% Mounted on',
'/dev/sda6 32G 21G 11G 67% /',
'none 4.0K 0 4.0K 0% /sys/fs/cgroup',
'udev 1.9G 4.0K 1.9G 1% /dev',
'tmpfs 387M 1.4M 386M 1% /run',
'none 5.0M 0 5.0M 0% /run/lock',
'none 1.9G 58M 1.9G 3% /run/shm',
'none 100M 32K 100M 1% /run/user',
'/dev/sda5 340G 222G 100G 69% /home',
'']

- 30,738
- 21
- 105
- 131

- 7,352
- 2
- 36
- 29
As an example (in Linux):
import subprocess
subprocess.run('mkdir test.dir', shell=True)
This creates test.dir in the current directory. Note that this also works:
import subprocess
subprocess.call('mkdir test.dir', shell=True)
The equivalent code using os.system is:
import os
os.system('mkdir test.dir')
Best practice would be to use subprocess instead of os, with .run favored over .call. All you need to know about subprocess is here. Also, note that all Python documentation is available for download from here. I downloaded the PDF packed as .zip. I mention this because there's a nice overview of the os module in tutorial.pdf (page 81). Besides, it's an authoritative resource for Python coders.
-
2According to https://docs.python.org/2/library/subprocess.html#frequently-used-arguments, "shell=True" may raise a security concern. – Nick Mar 20 '18 at 18:54
-
@Nick Predley: noted, but "shell=False" doesn't perform the desired function. What specifically are the security concerns and what's the alternative? Please let me know asap: I do not wish to post anything which may cause problems for anyone viewing this. – Mar 21 '18 at 19:49
-
1The basic warning is in the documentation but this question explains it in more detail: https://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess – tripleee Dec 03 '18 at 05:14
For using subprocess
in Python 3.5+, the following did the trick for me on Linux:
import subprocess
# subprocess.run() returns a completed process object that can be inspected
c = subprocess.run(["ls", "-ltrh"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(c.stdout.decode('utf-8'))
As mentioned in the documentation, PIPE
values are byte sequences and for properly showing them decoding should be considered. For later versions of Python, text=True
and encoding='utf-8'
are added to kwargs of subprocess.run()
.
The output of the abovementioned code is:
total 113M
-rwxr-xr-x 1 farzad farzad 307 Jan 15 2018 vpnscript
-rwxrwxr-x 1 farzad farzad 204 Jan 15 2018 ex
drwxrwxr-x 4 farzad farzad 4.0K Jan 22 2018 scripts
.... # Some other lines

- 30,738
- 21
- 105
- 131

- 2,458
- 1
- 29
- 32
After some research, I have the following code which works very well for me. It basically prints both standard output and standard error in real time.
stdout_result = 1
stderr_result = 1
def stdout_thread(pipe):
global stdout_result
while True:
out = pipe.stdout.read(1)
stdout_result = pipe.poll()
if out == '' and stdout_result is not None:
break
if out != '':
sys.stdout.write(out)
sys.stdout.flush()
def stderr_thread(pipe):
global stderr_result
while True:
err = pipe.stderr.read(1)
stderr_result = pipe.poll()
if err == '' and stderr_result is not None:
break
if err != '':
sys.stdout.write(err)
sys.stdout.flush()
def exec_command(command, cwd=None):
if cwd is not None:
print '[' + ' '.join(command) + '] in ' + cwd
else:
print '[' + ' '.join(command) + ']'
p = subprocess.Popen(
command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
)
out_thread = threading.Thread(name='stdout_thread', target=stdout_thread, args=(p,))
err_thread = threading.Thread(name='stderr_thread', target=stderr_thread, args=(p,))
err_thread.start()
out_thread.start()
out_thread.join()
err_thread.join()
return stdout_result + stderr_result

- 30,738
- 21
- 105
- 131

- 2,788
- 34
- 39
-
4your code may lose data when the subprocess exits while there is some data is buffered. Read until EOF instead, see [teed_call()](http://stackoverflow.com/q/4984428/4279) – jfs Jul 13 '15 at 18:52
Here is calling an external command and return or print the command's output:
Python Subprocess check_output is good for
Run command with arguments and return its output as a byte string.
import subprocess
proc = subprocess.check_output('ipconfig /all')
print proc

- 30,738
- 21
- 105
- 131

- 6,746
- 1
- 52
- 54
-
The argument should properly be tokenized into a list, or you should explicitly pass in `shell=True`. In Python 3.x (where x > 3 I think) you can retrieve the output as a proper string with `universal_newlines=True` and you probably want to switch to `subproces.run()` – tripleee Dec 03 '18 at 05:22
If you need to call a shell command from a Python notebook (like Jupyter, Zeppelin, Databricks, or Google Cloud Datalab) you can just use the !
prefix.
For example,
!ls -ilF

- 30,738
- 21
- 105
- 131

- 1,101
- 10
- 20
If you're writing a Python shell script and have IPython installed on your system, you can use the bang prefix to run a shell command inside IPython:
!ls
filelist = !ls

- 4,739
- 1
- 33
- 46

- 9,967
- 2
- 50
- 67
-
@PeterMortensen I don't think it works in DOS, but it should work in Cygwin. – noɥʇʎԀʎzɐɹƆ Nov 30 '19 at 01:39
Update 2015: Python 3.5 added subprocess.run which is much easier to use than subprocess.Popen. I recommend that.
>>> subprocess.run(["ls", "-l"]) # doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
>>> subprocess.run(["ls", "-l", "/dev/null"], capture_output=True)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n', stderr=b'')

- 132,665
- 89
- 401
- 465
-
10Deprecated doesn't only mean "isn't developed anymore" but also "you are discouraged from using this". Deprecated features may break anytime, may be removed anytime, or may dangerous. You should never use this in important code. Deprecation is merely a better way than removing a feature immediately, because it gives programmers the time to adapt and replace their deprecated functions. – Misch Apr 19 '13 at 08:07
-
4Just to prove my point: "Deprecated since version 2.6: The commands module has been removed in Python 3. Use the subprocess module instead." – Misch Apr 19 '13 at 08:14
-
It's not dangerous! The Python devs are careful only to break features between major releases (ie. between 2.x and 3.x). I've been using the commands module since 2004's Python 2.4. It works the same today in Python 2.7. – Colonel Panic Apr 23 '13 at 16:09
-
6With dangerous, I didn't mean that it may be removed anytime (that's a different problem), neither did I say that it is dangerous to use this specific module. However it may become dangerous if a security vulnerability is discovered but the module isn't further developed or maintained. (I don't want to say that this module is or isn't vulnerable to security issues, just talking about deprecated stuff in general) – Misch Apr 23 '13 at 16:23
For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess
object, from which you can easily obtain the output as well as return code.
from subprocess import PIPE, run
command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)

- 30,738
- 21
- 105
- 131

- 14,924
- 14
- 78
- 116
-
3answer with run function was added in 2015 year. You repeated it. I think it was a reason of down vote – Greg Eremeev Mar 11 '17 at 18:27
If you are not using user input in the commands, you can use this:
from os import getcwd
from subprocess import check_output
from shlex import quote
def sh(command):
return check_output(quote(command), shell=True, cwd=getcwd(), universal_newlines=True).strip()
And use it as
branch = sh('git rev-parse --abbrev-ref HEAD')
shell=True
will spawn a shell, so you can use pipe and such shell things sh('ps aux | grep python')
. This is very very handy for running hardcoded commands and processing its output. The universal_lines=True
make sure the output is returned in a string instead of binary.
cwd=getcwd()
will make sure that the command is run with the same working directory as the interpreter. This is handy for Git commands to work like the Git branch name example above.
Some recipes
- free memory in megabytes:
sh('free -m').split('\n')[1].split()[1]
- free space on / in percent
sh('df -m /').split('\n')[1].split()[4][0:-1]
- CPU load
sum(map(float, sh('ps -ef -o pcpu').split('\n')[1:])
But this isn't safe for user input, from the documentation:
Security Considerations
Unlike some other popen functions, this implementation will never implicitly call a system shell. This means that all characters, including shell metacharacters, can safely be passed to child processes. If the shell is invoked explicitly, via shell=True, it is the application’s responsibility to ensure that all whitespace and metacharacters are quoted appropriately to avoid shell injection vulnerabilities.
When using shell=True, the shlex.quote() function can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands.
Even using the shlex.quote()
, it is good to keep a little paranoid when using user inputs on shell commands. One option is using a hardcoded command to take some generic output and filtering by user input. Anyway using shell=False
will make sure that only the exactly process that you want to execute will be executed or you get a No such file or directory
error.
Also there is some performance impact on shell=True
, from my tests it seems about 20% slower than shell=False
(the default).
In [50]: timeit("check_output('ls -l'.split(), universal_newlines=True)", number=1000, globals=globals())
Out[50]: 2.6801227919995654
In [51]: timeit("check_output('ls -l', universal_newlines=True, shell=True)", number=1000, globals=globals())
Out[51]: 3.243950183999914

- 30,738
- 21
- 105
- 131

- 5,687
- 1
- 41
- 53
import subprocess
p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())

- 19,879
- 23
- 80
- 93
-
An explanation would be in order. E.g, what is the idea and how is it different from the previous 50 answers 11 years later? – Peter Mortensen Apr 07 '21 at 17:49
-
This is identical to [your own answer from a few days earlier](https://stackoverflow.com/a/58212263) only with even less detail. – tripleee Jun 09 '21 at 19:34
You can try using os.system()
for running external commands.
Example:
import os
try:
os.system('ls')
pass
except:
print("Error running command")
pass
In the example, the script imports os
and tries to run the command listed in os.system()
. If the command was to fail then it would print "Error running command" without the script stopping due to the error.
And yes, it’s just that simple!
Using the Popen
function of the subprocess
Python module is the simplest way of running Linux commands. In that, the Popen.communicate()
function will give your commands output. For example
import subprocess
..
process = subprocess.Popen(..) # Pass command and arguments to the function
stdout, stderr = process.communicate() # Get command output and error
..

- 30,738
- 21
- 105
- 131

- 179
- 1
- 1
-
This is no longer true, and probably wasn't when this answer was posted. You should prefer `subprocess.check_call()` and friends unless you absolutely need the lower-level control of the more-complex `Popen()`. In recent Python versions, the go-to workhorse is `subprocess.run()` – tripleee Dec 03 '18 at 05:30
There are many ways to call a command.
- For example:
if and.exe
needs two parameters. In cmd we can call sample.exe
use this:
and.exe 2 3
and it show 5
on screen.
If we use a Python script to call and.exe
, we should do like..
os.system(cmd,...)
os.system(("and.exe" + " " + "2" + " " + "3"))
os.popen(cmd,...)
os.popen(("and.exe" + " " + "2" + " " + "3"))
subprocess.Popen(cmd,...)
subprocess.Popen(("and.exe" + " " + "2" + " " + "3"))
It's too hard, so we can join cmd with a space:
import os
cmd = " ".join(exename,parameters)
os.popen(cmd)

- 30,738
- 21
- 105
- 131

- 95
- 1
- 2
-
`os.popen` should not be recommended and perhaps even mentioned any longer. The `subpocess` example should pass the arguments as a list instead of joining them with spaces. – tripleee Dec 03 '18 at 05:25
os.popen()
is the easiest and the most safest way to execute a command. You can execute any command that you run on the command line. In addition you will also be able to capture the output of the command using os.popen().read()
You can do it like this:
import os
output = os.popen('Your Command Here').read()
print (output)
An example where you list all the files in the current directory:
import os
output = os.popen('ls').read()
print (output)
# Outputs list of files in the directory

- 30,738
- 21
- 105
- 131

- 2,759
- 2
- 30
- 35
There are a number of ways of calling an external command from Python. There are some functions and modules with the good helper functions that can make it really easy. But the recommended thing among all is the subprocess
module.
import subprocess as s
s.call(["command.exe", "..."])
The call function will start the external process, pass some command line arguments and wait for it to finish. When it finishes you continue executing. Arguments in call function are passed through the list. The first argument in the list is the command typically in the form of an executable file and subsequent arguments in the list whatever you want to pass.
If you have called processes from the command line in the windows before, you'll be aware that you often need to quote arguments. You need to put quotations mark around it. If there's a space then there's a backslash and there are some complicated rules, but you can avoid a whole lot of that in Python by using subprocess
module because it is a list and each item is known to be a distinct and python can get quoting correctly for you.
In the end, after the list, there are a number of optional parameters one of these is a shell and if you set shell equals to true then your command is going to be run as if you have typed in at the command prompt.
s.call(["command.exe", "..."], shell=True)
This gives you access to functionality like piping, you can redirect to files, you can call multiple commands in one thing.
One more thing, if your script relies on the process succeeding then you want to check the result and the result can be checked with the check call helper function.
s.check_call(...)
It is exactly the same as a call function, it takes the same arguments, takes the same list, you can pass in any of the extra arguments but it going to wait for the functions to complete. And if the exit code of the function is anything other then zero, it will through an exception in the python script.
Finally, if you want tighter control Popen
constructor which is also from the subprocess
module. It also takes the same arguments as incall & check_call function but it returns an object representing the running process.
p=s.Popen("...")
It does not wait for the running process to finish also it's not going to throw any exception immediately but it gives you an object that will let you do things like wait for it to finish, let you communicate to it, you can redirect standard input, standard output if you want to display output somewhere else and a lot more.

- 30,738
- 21
- 105
- 131

- 284
- 1
- 9
- 28
I would recommend the following method 'run' and it will help us in getting standard output, standard error and exit status as a dictionary; the caller of this can read the dictionary return by 'run' method to know the actual state of the process.
def run (cmd):
print "+ DEBUG exec({0})".format(cmd)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, shell=True)
(out, err) = p.communicate()
ret = p.wait()
out = filter(None, out.split('\n'))
err = filter(None, err.split('\n'))
ret = True if ret == 0 else False
return dict({'output': out, 'error': err, 'status': ret})
#end

- 30,738
- 21
- 105
- 131

- 4,674
- 2
- 28
- 45
-
1This incompletely reimplements something like `subprocess.run()`. You should particularly avoid `shell=True` when it's not strictly necessary. – tripleee Dec 03 '18 at 05:51
-
Python 3.5+
import subprocess
p = subprocess.run(["ls", "-ltr"], capture_output=True)
print(p.stdout.decode(), p.stderr.decode())

- 19,879
- 23
- 80
- 93
You can run any command using Popen from the subprocess module.
from subprocess import Popen
First of all, a command object is created with all arguments which you want to run. For example, in the snippet below, the gunicorm command object has been formed with all the arguments:
cmd = (
"gunicorn "
"-c gunicorn_conf.py "
"-w {workers} "
"--timeout {timeout} "
"-b {address}:{port} "
"--limit-request-line 0 "
"--limit-request-field_size 0 "
"--log-level debug "
"--max-requests {max_requests} "
"manage:app").format(**locals())
Then this command object is used with Popen to instantiate a process:
process = Popen(cmd, shell=True)
This process can be terminated as well based upon any signal, using the code line below:
Popen.terminate(process)
And you can wait till the completion of above command's execution:
process.wait()

- 30,738
- 21
- 105
- 131

- 2,506
- 2
- 23
- 48
Here there are a lot of answers, but none fulfilled all my needs.
- I need to run the command and capture the output and exit code.
- I need to timeout the executed program and force it to exit if timeout is reached, and kill all its child processes.
- and I need that it works in Windows XP and later, Cygwin and Linux. In Python 2 and 3.
So I created this:
def _run(command, timeout_s=False, shell=False):
### run a process, capture the output and wait for it to finish. if timeout is specified then Kill the subprocess and its children when the timeout is reached (if parent did not detach)
## usage: _run(arg1, arg2, arg3)
# arg1: command + arguments. Always pass a string; the function will split it when needed
# arg2: (optional) timeout in seconds before force killing
# arg3: (optional) shell usage. default shell=False
## return: a list containing: exit code, output, and if timeout was reached or not
# - Tested on Python 2 and 3 on Windows XP, Windows 7, Cygwin and Linux.
# - preexec_fn=os.setsid (py2) is equivalent to start_new_session (py3) (works on Linux only), in Windows and Cygwin we use TASKKILL
# - we use stderr=subprocess.STDOUT to merge standard error and standard output
import sys, subprocess, os, signal, shlex, time
def _runPY3(command, timeout_s=None, shell=False):
# py3.3+ because: timeout was added to communicate() in py3.3.
new_session=False
if sys.platform.startswith('linux'): new_session=True
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, start_new_session=new_session, shell=shell)
try:
out = p.communicate(timeout=timeout_s)[0].decode('utf-8')
is_timeout_reached = False
except subprocess.TimeoutExpired:
print('Timeout reached: Killing the whole process group...')
killAll(p.pid)
out = p.communicate()[0].decode('utf-8')
is_timeout_reached = True
return p.returncode, out, is_timeout_reached
def _runPY2(command, timeout_s=0, shell=False):
preexec=None
if sys.platform.startswith('linux'): preexec=os.setsid
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, preexec_fn=preexec, shell=shell)
start_time = time.time()
is_timeout_reached = False
while timeout_s and p.poll() == None:
if time.time()-start_time >= timeout_s:
print('Timeout reached: Killing the whole process group...')
killAll(p.pid)
is_timeout_reached = True
break
time.sleep(1)
out = p.communicate()[0].decode('utf-8')
return p.returncode, out, is_timeout_reached
def killAll(ParentPid):
if sys.platform.startswith('linux'):
os.killpg(os.getpgid(ParentPid), signal.SIGTERM)
elif sys.platform.startswith('cygwin'):
# subprocess.Popen(shlex.split('bash -c "TASKKILL /F /PID $(</proc/{pid}/winpid) /T"'.format(pid=ParentPid)))
winpid=int(open("/proc/{pid}/winpid".format(pid=ParentPid)).read())
subprocess.Popen(['TASKKILL', '/F', '/PID', str(winpid), '/T'])
elif sys.platform.startswith('win32'):
subprocess.Popen(['TASKKILL', '/F', '/PID', str(ParentPid), '/T'])
# - In Windows, we never need to split the command, but in Cygwin and Linux we need to split if shell=False (default), shlex will split the command for us
if shell==False and (sys.platform.startswith('cygwin') or sys.platform.startswith('linux')):
command=shlex.split(command)
if sys.version_info >= (3, 3): # py3.3+
if timeout_s==False:
returnCode, output, is_timeout_reached = _runPY3(command, timeout_s=None, shell=shell)
else:
returnCode, output, is_timeout_reached = _runPY3(command, timeout_s=timeout_s, shell=shell)
else: # Python 2 and up to 3.2
if timeout_s==False:
returnCode, output, is_timeout_reached = _runPY2(command, timeout_s=0, shell=shell)
else:
returnCode, output, is_timeout_reached = _runPY2(command, timeout_s=timeout_s, shell=shell)
return returnCode, output, is_timeout_reached
Then use it like this:
Always pass the command as one string (it is easier). You do not need to split it; the function will split it when needed.
If your command works in your shell, it will work with this function, so test your command in your shell first cmd/Bash.
So we can use it like this with a timeout:
a=_run('cmd /c echo 11111 & echo 22222 & calc',3)
for i in a[1].splitlines(): print(i)
Or without a timeout:
b=_run('cmd /c echo 11111 & echo 22222 & calc')
More examples:
b=_run('''wmic nic where 'NetConnectionID="Local Area Connection"' get NetConnectionStatus /value''')
print(b)
c=_run('cmd /C netsh interface ip show address "Local Area Connection"')
print(c)
d=_run('printf "<%s>\n" "{foo}"')
print(d)
You can also specify shell=True, but it is useless in most cases with this function. I prefer to choose myself the shell I want, but here it is if you need it too:
# windows
e=_run('echo 11111 & echo 22222 & calc',3, shell=True)
print(e)
# Cygwin/Linux:
f=_run('printf "<%s>\n" "{foo}"', shell=True)
print(f)
Why did I not use the simpler new method subprocess.run()
?
- because it is supported in Python 3.7+, but the last supported Python version in Windows XP is 3.4.
- and because the timeout argument of this function is useless in Windows, it does not kill the child processes of the executed command.
- if you use the
capture_output
+timeout
argument, it will hang if there is a child process still running. And it is still broken in Windows, for which the issue 31447 is still open.

- 30,738
- 21
- 105
- 131

- 1,487
- 14
- 20
-
1I removed the "2022 answer" because it’s misleading. In 2022 you don’t support winxp nor Python 2 and you use `subprocess.run()`. The question is not "how do I run system commands on old platforms" but "how do I run system commands", period. – bfontaine May 24 '22 at 10:08
I have written a wrapper to handle errors and redirecting output and other stuff.
import shlex
import psutil
import subprocess
def call_cmd(cmd, stdout=sys.stdout, quiet=False, shell=False, raise_exceptions=True, use_shlex=True, timeout=None):
"""Exec command by command line like 'ln -ls "/var/log"'
"""
if not quiet:
print("Run %s", str(cmd))
if use_shlex and isinstance(cmd, (str, unicode)):
cmd = shlex.split(cmd)
if timeout is None:
process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
retcode = process.wait()
else:
process = subprocess.Popen(cmd, stdout=stdout, stderr=sys.stderr, shell=shell)
p = psutil.Process(process.pid)
finish, alive = psutil.wait_procs([p], timeout)
if len(alive) > 0:
ps = p.children()
ps.insert(0, p)
print('waiting for timeout again due to child process check')
finish, alive = psutil.wait_procs(ps, 0)
if len(alive) > 0:
print('process {} will be killed'.format([p.pid for p in alive]))
for p in alive:
p.kill()
if raise_exceptions:
print('External program timeout at {} {}'.format(timeout, cmd))
raise CalledProcessTimeout(1, cmd)
retcode = process.wait()
if retcode and raise_exceptions:
print("External program failed %s", str(cmd))
raise subprocess.CalledProcessError(retcode, cmd)
You can call it like this:
cmd = 'ln -ls "/var/log"'
stdout = 'out.txt'
call_cmd(cmd, stdout)

- 30,738
- 21
- 105
- 131

- 1,113
- 1
- 7
- 25
Sultan is a recent-ish package meant for this purpose. It provides some niceties around managing user privileges and adding helpful error messages.
from sultan.api import Sultan
with Sultan.load(sudo=True, hostname="myserver.com") as sultan:
sultan.yum("install -y tree").run()

- 30,738
- 21
- 105
- 131

- 1,783
- 1
- 20
- 35
Here is a Python script that will run the command on Ubuntu, while also showing the logs in real-time:
command = 'your command here'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
output = process.stdout.readline().decode()
if output == '' and process.poll() is not None:
break
if output:
print(output.strip())
rc = process.poll()
if rc == 0:
print("Command succeeded.")
else:
print("Command failed.")

- 355
- 3
- 11
Using the Python subprocess module to execute shell commands and write the output to a file.
The below script will run the ps -ef command, filter lines containing python3, and write them to a file called python_processes.txt. Note that the code does not handle any exceptions that might occur during execution.
import subprocess
# Command to execute
cmd = ["ps", "-ef"]
# Execute the command
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output, error = process.communicate()
# Check if the command was executed without errors
if error is None:
# Filter lines with 'python3'
python_processes = [line for line in output.decode('utf-8').split('\n') if 'python3' in line]
# Write the output to a file
with open('python_processes.txt', 'w') as f:
for process in python_processes:
f.write(process + '\n')
else:
print(f"Error occurred while executing command: {error}")

- 11
- 4
I use this for Python 3.6+:
import subprocess
def execute(cmd):
"""
Purpose : To execute a command and return exit status
Argument : cmd - command to execute
Return : result, exit_code
"""
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(result, error) = process.communicate()
rc = process.wait()
if rc != 0:
print ("Error: failed to execute command: ", cmd)
print (error.rstrip().decode("utf-8"))
return result.rstrip().decode("utf-8"), serror.rstrip().decode("utf-8")
# def

- 30,738
- 21
- 105
- 131

- 91
- 2
- 5
-
1Don't use set ``shell=True`` to run commands, it opens the program to command injection vulnerabilities. You're supposed to pass the command as a list with arguments ``cmd=["/bin/echo", "hello word"]``. https://docs.python.org/3/library/subprocess.html#security-considerations – user5994461 Apr 19 '20 at 16:52