Background
I have a collection of Python scripts used to build and execute Verilog-AMS testbenches. The overall design was built with concurrency in mind: each major test case is its own testbench, and the supporting files and data output are kept separate for each instance. The only shared items are the launcher script and my data-extraction script. The problem I'm faced with is that my Verilog-AMS simulator does not natively support multithreading, and my test cases take a substantial amount of time to complete.
Problem
The machine I'm running this on has 32 GiB of RAM and 8 cores available to me, and I may be able to access a machine with 32 cores. I would like to take advantage of the available computing power and run the simulations simultaneously. What would be the best approach?
I currently use `subprocess.call` to execute each simulation. I would like to run up to `n` commands at once, each as a separate process. Once a simulation completes, the next one in the queue (if one exists) would start.
I'm fairly new to Python and haven't written a concurrent application before, so I would like some advice on how to proceed. I saw this question, and from that I think the `multiprocessing` module may be better suited to my needs.
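For what it's worth, here is a minimal sketch of what I have in mind, using a `multiprocessing.Pool` of workers that each block on `subprocess.call`. The `run_sim` helper and the `echo` commands are placeholders I made up; the real invocation would be the simulator command line for each testbench:

```python
import multiprocessing
import subprocess

def run_sim(cmd):
    # Launch one simulator run; subprocess.call blocks until
    # the command exits and returns its exit code.
    return subprocess.call(cmd)

if __name__ == "__main__":
    # Placeholder commands -- substitute the real simulator
    # invocation for each testbench instance.
    commands = [["echo", "testbench_%d" % i] for i in range(8)]

    # A pool of n workers runs at most n simulations at once;
    # as each one finishes, the next queued command starts.
    n = 4
    pool = multiprocessing.Pool(processes=n)
    exit_codes = pool.map(run_sim, commands)
    pool.close()
    pool.join()
    print(exit_codes)
```

Is something along these lines a reasonable way to cap the number of concurrent simulations at `n`?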
What do you all recommend?