
Like executing a command using backticks, exec(), or system().

brian d foy
saisrinivas
  • This is pretty broad. In general, if the language has a built-in way to do something, you should use that instead of spawning a subprocess, but there are lots of exceptions, and there are times when your only option is to run an external command. Do you have a specific example in mind? – ThisSuitIsBlackNot May 11 '16 at 16:52
  • Short answer: better not to. But if you have to, make sure that user input is screened properly to avoid any injection attacks. And execute nothing as superuser. – E_p May 11 '16 at 16:53

4 Answers


Yes, this is bad practice in almost all cases! This is especially true if you have a choice.

Executing external commands is:

  • Very hard to do correctly (but very easy to get sort-of-working). You can't just escape the shell args and call it a day; you have to account for potentially multiple levels of escaping for potentially different languages.

  • Slow. A fork+exec to e.g. rm is easily a thousand times slower than the corresponding syscall.

  • A rigid, error-prone and inexpressive integration point. You typically have to convert data to flat lists of strings and back. You can't use the language's features like exception handling, nested data structures or callbacks.
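
The first point can be seen in two lines of Perl (a harmless sketch: the "attack" only echoes a string, and it assumes a POSIX shell):

```perl
use strict;
use warnings;

# A filename containing a shell metacharacter. The string form of qx//
# hands the whole line to /bin/sh, which sees TWO commands here.
my $file = 'report; echo INJECTED';
my $out  = qx(echo $file);
print $out;   # "report\nINJECTED\n" -- the ';' was interpreted by the shell
```

Naive interpolation turned one argument into a second command; with `rm` instead of `echo`, that is a real vulnerability.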

Due to this, the following are BAD reasons to call external commands:

  • Not knowing how to do X in your language, but knowing a shell command for it. A typical example is cp -R foo bar.

  • Not knowing how something works, but knowing a shell oneliner that does it. A typical example is foo *.mp4 > >(tee file).

  • Not wanting to learn a new API for e.g. json or http, and instead using shell tools like jq or curl.
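
For the json case, for instance, Perl has shipped JSON::PP in core since 5.14, so there is no need to shell out to jq. A minimal sketch:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);   # core module, no CPAN install needed

my $json = '{"name":"widget","tags":["new","sale"]}';
my $data = decode_json($json);  # a real nested Perl data structure

print $data->{name}, "\n";                 # widget
print scalar @{ $data->{tags} }, "\n";     # 2
```

Because `$data` is an ordinary hashref, you get nested access, exceptions on malformed input, and no output parsing at all.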

However, if you are calling a program that does non-trivial things, that doesn't have a native library or bindings, AND that you know how to invoke with execve semantics (NOT system() or single-string exec semantics, which invoke a shell), this is a valuable tool.

Examples of good uses of executing external commands that follow all the above include invoking make to build a project from an installer, or running java -jar ... to start a Minecraft server.
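
In Perl, "execve semantics" means the multi-argument forms of system, exec, and open, which bypass the shell entirely. A sketch (assumes a POSIX system with echo on the PATH):

```perl
use strict;
use warnings;

my $hostile = 'notes.txt; rm -rf /';   # shell metacharacters, but harmless here

# List-form pipe open: each element becomes one literal argv entry for
# execve(), so no shell ever parses $hostile.
open my $fh, '-|', 'echo', $hostile or die "cannot run echo: $!";
chomp(my $out = <$fh>);
close $fh;

print "$out\n";   # the literal string; nothing is injected
```

The same applies to `system('java', '-jar', $jar)`: pass a list, not one interpolated string.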

that other guy

Some operating system commands have built-in functionality that is not available in the language (Perl's mkdir() lacks *ix mkdir's -p). Then again, something might be easier to do using language constructs instead of parsing a command's output (readdir() vs. ls).
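
Both points have core-module answers these days; a sketch using File::Path (the mkdir -p equivalent) and readdir instead of parsing ls output:

```perl
use strict;
use warnings;
use File::Path qw(make_path);   # core; portable equivalent of mkdir -p
use File::Temp qw(tempdir);

my $base = tempdir(CLEANUP => 1);
make_path("$base/a/b/c");       # creates intermediate directories as needed

opendir my $dh, "$base/a" or die "cannot open $base/a: $!";
my @entries = grep { $_ ne '.' && $_ ne '..' } readdir $dh;
closedir $dh;

print "@entries\n";             # "b" -- no `ls` output to parse
```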

And it is important to remember that something written in Perl might be more portable to non-Unix systems than calling OS-specific external programs.

Peter Mortensen
Suzie
  • `mkdir -p` isn't a great example since Perl has [`File::Path::make_path`](http://perldoc.perl.org/File/Path.html) in core. – ThisSuitIsBlackNot May 11 '16 at 17:12
  • 3
    Just making a point that especially gnu versions of commands have many switches with functionality not available in perl functions. Of course, there are modules for everything but you have to include them... – Suzie May 11 '16 at 17:49

It's a great* practice. Perl and PHP are great for lots of things, but they're not great at everything, and there's a use case for using external programs and other tools in your project. But one of the things that Perl is definitely great at is gluing together input and output formats and letting you mash together several different tools into a single project, letting each part of the project do what it does best.

* by which I mean, often a great practice. Things like @files=qx(ls $dir) and @txt=qx(cat $textfile) make all right-thinking Perl programmers cringe.
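
As a sketch of that gluing, core IPC::Open2 can pipe Perl data through an external tool (here sort(1), assumed to be on the PATH) and read the results back into a normal array:

```perl
use strict;
use warnings;
use IPC::Open2 qw(open2);    # core module

# Feed unsorted lines to sort(1) and collect its output.
my $pid = open2(my $from_sort, my $to_sort, 'sort');
print {$to_sort} "$_\n" for qw(pear apple mango);
close $to_sort;              # signal EOF so sort can produce output

chomp(my @sorted = <$from_sort>);
waitpid $pid, 0;

print "@sorted\n";           # "apple mango pear"
```

(For large data, open2 can deadlock on full pipe buffers; IPC::Open3 or a temporary file is more robust.)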

mob

Why not?

But be careful, and escape all strings inserted into the command.
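
A minimal POSIX single-quote escaper as a sketch (the CPAN module String::ShellQuote does this more thoroughly, and the multi-argument form of system avoids the problem entirely):

```perl
use strict;
use warnings;

# Wrap a string in single quotes for a POSIX shell; embedded single
# quotes become the '\'' close/escape/reopen sequence.
sub shell_quote {
    my ($s) = @_;
    $s =~ s/'/'\\''/g;
    return "'$s'";
}

my $file = "it's; rm -rf /";
my $cmd  = "printf '%s' " . shell_quote($file);
my $out  = qx($cmd);
print "$out\n";   # the literal filename; nothing was executed
```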

Peter Mortensen
ebahi