I am attempting to write a Python script that drives multiple MPI simulations (F90 executables, though that detail doesn't matter). Each MPI simulation uses 2 processors. Let's say I want three of these MPI simulations running simultaneously. If I launch the 3 simulations from the command line in 3 separate terminals, without Python, each one gets its own 2 processors and runs as though it were the only thing in the world.
My current implementation does not appear to do this: tracking the MPI simulations makes it clear that they are competing with each other for processors. Here is my current procedure:
import subprocess
import multiprocessing as mp
def execute(inputs, output):
    do_stuff_with_inputs()
    # launch the 2-rank MPI job and block until it finishes
    subprocess.call('mpiexec -np 2 my_executable.x', shell=True)
    results = post_process_stuff()
    output.put(results)
output = mp.Queue()
processes = []
for i in xrange(3):
    # inputs stands for whatever per-simulation input each run needs
    processes.append(mp.Process(target=execute, args=(inputs, output)))
for p in processes:
    p.start()
for p in processes:
    p.join()
results = [output.get() for p in processes]
What I would like is to be more explicit about this in the procedure, somehow 'carving out' processor space in Python so that each executable call has its own dedicated set of processors.
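The kind of thing I have in mind is sketched below. It assumes a Linux machine where taskset is available, and it simply prefixes each mpiexec call with a disjoint CPU list derived from the worker index; the names run_simulation, worker_id and PROCS_PER_SIM are placeholders I made up for illustration. I don't know whether pinning the launcher this way is the right approach, or whether the MPI runtime ignores the inherited affinity mask and re-binds its ranks anyway, which is part of what I'm asking.

import subprocess
import multiprocessing as mp

PROCS_PER_SIM = 2  # each MPI job uses 2 ranks

def run_simulation(worker_id, output):
    # Give this simulation its own pair of cores, e.g. worker 0 -> "0,1",
    # worker 1 -> "2,3", worker 2 -> "4,5".  Assumes Linux + taskset.
    first_core = worker_id * PROCS_PER_SIM
    cpu_list = ','.join(str(c) for c in range(first_core, first_core + PROCS_PER_SIM))
    cmd = 'taskset -c %s mpiexec -np %d my_executable.x' % (cpu_list, PROCS_PER_SIM)
    subprocess.call(cmd, shell=True)
    output.put(worker_id)

if __name__ == '__main__':
    output = mp.Queue()
    processes = [mp.Process(target=run_simulation, args=(i, output)) for i in range(3)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print([output.get() for _ in processes])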