
I'm on Linux, and I have one Python script that I want to call from another Python script. I don't want to import it as a module (partly for a layer of security, and now as an academic exercise because I want to figure this out); I want one script to actually invoke the other with os.system() or a similar function, and have the called script return a list of tuples to the caller.

I know this might not be the optimal way to do it, but I'd like to try it out, and learn some stuff in the process.

– Nathan

5 Answers


You can use subprocess:

import subprocess

subprocess.call(["python", "myscript.py"])

This also returns the child process's exit code (such as 0 or 1).
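An exit code alone can't carry a list of tuples, though. A minimal sketch of capturing the child's stdout instead (the file name myscript.py and the use of JSON are my assumptions; JSON stores tuples as lists, so they are converted back on the parent side):

import json
import subprocess
import sys

# Run the child script and capture everything it prints to stdout.
output = subprocess.check_output([sys.executable, "myscript.py"])

# The child is assumed to print a JSON array of pairs, e.g. [[1, 2], [3, 4]].
result = [tuple(item) for item in json.loads(output)]
print(result)  # e.g. [(1, 2), (3, 4)]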

– Simeon Visser

Importing a module is different from executing it as a script. If you don't trust the child Python script, you shouldn't run any code from it.

A regular way to use some code from another Python module:

import another_module

result = another_module.some_function(args)

If you want to execute it instead of importing:

namespace = {'args': [1, 2, 3]}  # define __name__, __file__ here if necessary
execfile('some_script.py', namespace)
result = namespace['result']  # the script is expected to assign `result`

execfile() is used very rarely in Python. It might be useful in a debugger, in a profiler, or to run setup.py in tools such as pip and easy_install.

See also runpy module.
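Note that execfile() only exists in Python 2. A rough Python 3 equivalent (my sketch, not part of the original answer) uses exec():

namespace = {'args': [1, 2, 3]}
with open('some_script.py') as f:
    exec(f.read(), namespace)  # exec() replaces execfile() in Python 3
result = namespace['result']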

If the other script is executed in a different process, you can use many IPC methods. The simplest is to pipe serialized input args (Python objects converted to a bytestring) into the subprocess's stdin and read the result back from its stdout, as suggested by @kirelagin:

import json
import sys
from subprocess import Popen, PIPE

marshal, unmarshal = json.dumps, json.loads

args = [1, 2, 3]  # example input

# Pipe the serialized args into the child's stdin; read its stdout back.
# (On Python 3, pass universal_newlines=True or encode/decode the bytes.)
p = Popen([sys.executable, 'some_script.py'], stdin=PIPE, stdout=PIPE)
result = unmarshal(p.communicate(marshal(args))[0])

where some_script.py could be:

#!/usr/bin/env python
import json
import sys

args = json.load(sys.stdin)   # read input data from stdin
result = [x*x for x in args]  # compute result
json.dump(result, sys.stdout) # write result to stdout
– jfs
  • Thanks! I ended up piping it, but because the data size was acceptably small and easy to model as a string, I didn't need to serialize it. – Nathan Jun 02 '13 at 21:35

You will need some kind of serialization to return a list. I'd suggest pickle or JSON.

Your other script will print the serialized data to stdout, and the calling script will read it back and deserialize it.
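A minimal sketch of that round trip, using the file names a.py and b.py from the comments below and pickle for the serialization (both choices are illustrative; only unpickle output from a process you trust):

# a.py -- the child: pickle a list of tuples and write it to stdout
import pickle
import sys

data = [(1, 'a'), (2, 'b')]
sys.stdout.buffer.write(pickle.dumps(data))  # pickle output is bytes (Python 3)

# b.py -- the caller: run a.py and unpickle its stdout
import pickle
import subprocess
import sys

out = subprocess.check_output([sys.executable, 'a.py'])
print(pickle.loads(out))  # [(1, 'a'), (2, 'b')]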

– kirelagin
  • So basically turn the data into a large int, return that code, and then reverse it? How long can a system return code be? – Nathan Jun 01 '13 at 21:18
  • @Nathan No-no, turn it into a string and write to `stdout`. Then your calling script will read it through a pipe (check `subprocess.Popen` documentation). – kirelagin Jun 01 '13 at 21:20
  • Is this secure? Would stdout somehow be able to be intercepted? Could you give me an example of doing this? Say I have a.py and b.py, how would I go about this process? – Nathan Jun 01 '13 at 21:23
  • No, it's not secure at all. Any communication with another process is not secure. I don't have an example, why don't you just try it? In `a.py` you call `pickle` and `print` the resulting string. In `b.py` you use `subprocess.Popen` to execute `a.py` and read its `stdout`. An example of reading `subprocess` `stdout` can be found here: http://stackoverflow.com/questions/2804543/read-subprocess-stdout-line-by-line – kirelagin Jun 01 '13 at 21:26
  • Okay, I'll check it out. Thanks! "Any communication with another process is not secure" – are you sure that's accurate? There must be some underlying security for process interaction at the OS level. – Nathan Jun 01 '13 at 21:31
  • It depends on your definition of “secure communication”. Define it and I'll give you a more accurate statement ;). For example, root can do _anything_. _Absolutely anything_, so you just can't be secure. – kirelagin Jun 01 '13 at 21:34
  • Fair enough. By the way, it worked! I ended up just piping stdout instead of serializing though, because the data size wasn't large. Thanks! :) – Nathan Jun 01 '13 at 22:27
  • @Nathan Well, you can only write text to stdout, and a list is a list, not text. Serialization is the process of converting an object to a string so that it can be passed over a network or a stream. You can't just write a list. Of course you can cast it to a string and then restore it with `eval`, but that's not really cool. – kirelagin Jun 01 '13 at 23:32
  • I was able to format it so it worked, ended up just passing a null delimited string, then I split it into a list. :) – Nathan Jun 01 '13 at 23:46
  • Yeah, that's what they call “reinventing the wheel”. Was that really simpler than calling `json.dumps` (or `pickle.dumps`) in the first script and `json.loads` (or `pickle.loads`) in the second one? – kirelagin Jun 01 '13 at 23:46
  • Reinventing the wheel is quite fun :) and dumping pickle made it available as a shared resource to all python scripts, whereas the way I did it just accesses the local stdin/stdout – Nathan Jun 02 '13 at 00:56

ZMQ (ZeroMQ) is very useful for inter-process communication, and it works well with JSON objects.

The Python binding is pyzmq.
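A minimal request/reply sketch with pyzmq (the port number and the squaring logic are arbitrary illustrations):

# server.py -- replies to JSON requests
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")

while True:
    args = socket.recv_json()                # e.g. [1, 2, 3]
    socket.send_json([x * x for x in args])  # reply with the result

# client.py -- sends args and reads the reply
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")

socket.send_json([1, 2, 3])
print(socket.recv_json())  # [1, 4, 9]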

– ScotchAndSoda

It's very simple, although popen2 has long been deprecated in favor of subprocess (and is gone in Python 3):

import popen2  # Python 2 only; deprecated in favor of subprocess

fout, fin = popen2.popen2('python your_python_script.py')
data = fout.readline().strip()
if data != '':   # '<>' is the obsolete Python 2 spelling of '!='
    print(data)  # you have your `data`, so do something with it
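A subprocess-based equivalent (my sketch, not part of the original answer) that also works on Python 3:

import subprocess
import sys

p = subprocess.Popen([sys.executable, 'your_python_script.py'],
                     stdout=subprocess.PIPE, universal_newlines=True)
data = p.stdout.readline().strip()
if data != '':
    print(data)  # do something with `data`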
– Val Neekman