I have two versions of Python (these are actually two conda environments)
/path/to/bin-1/python
/path/to/bin-2/python
From one version of Python I want to launch a function that runs in the other version, using something like a multiprocessing.Process
object. It turns out that this is doable using the set_executable
method:
ctx = multiprocessing.get_context('spawn')
ctx.set_executable('/path/to/bin-2/python')
And indeed we can see that this does in fact launch using that executable:
def f(q):
    import sys
    q.put(sys.executable)

if __name__ == '__main__':
    import multiprocessing
    ctx = multiprocessing.get_context('spawn')
    ctx.set_executable('/path/to/bin-2/python')
    q = ctx.Queue()
    proc = ctx.Process(target=f, args=(q,))
    proc.start()
    print(q.get())
    proc.join()
$ python foo.py
/path/to/bin-2/python
However, the Path Is Wrong
However, when I do the same thing with sys.path
rather than sys.executable,
I find that the sys.path of the hosting Python process is printed instead, rather than the sys.path I would get from running /path/to/bin-2/python -c "import sys; print(sys.path)"
directly.
I'm used to this sort of thing when I use fork, but I would have expected 'spawn'
to behave as though I had started the Python interpreter from the shell.
Question
Is it possible to use the multiprocessing library to run functions (and use Queues) in another Python executable, with the environment that executable would have had if I had started it from the shell?
More broadly, how does sys.path get populated, and what is different between using multiprocessing in this way and launching the interpreter directly?
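For reference, this is how I'm checking what sys.path the target interpreter has when launched on its own (again using sys.executable as a stand-in for '/path/to/bin-2/python'):

```python
import json
import subprocess
import sys

# Launch the interpreter directly and have it print its own sys.path as JSON,
# so the parent can parse it back into a list for comparison.
out = subprocess.run(
    [sys.executable, '-c', 'import sys, json; print(json.dumps(sys.path))'],
    capture_output=True, text=True, check=True,
)
standalone_path = json.loads(out.stdout)
print(standalone_path)
```

This is the baseline I would have expected the spawned Process to match.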