I have several different inputs to a Python script, which I invoke in the form:
```
./myscript.py myfile1
./myscript.py myfile2
./myscript.py myfile3
...
```
I can profile the code on a per-function basis for any one input file using `python -m cProfile -s cumulative ./myscript.py myfile1`. Unfortunately, depending on the input file, the time is spent in completely different parts of the code.
The rough ideas I have for profiling the code over all inputs are to either (1) quick and dirty: write a bash script that calls `python -m cProfile -s cumulative ./myscript.py myfile` for each `myfile` and parses the printed output, or (2) parse the cProfile results within Python itself, as in the sketches below.
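For (1), I think I could skip the text parsing entirely by having cProfile write its raw statistics to a file with its `-o` flag, driven from Python. Something like this rough sketch (the hard-coded input list and the `.prof` output names are just placeholders I made up):

```python
import subprocess
import sys

INPUT_FILES = ["myfile1", "myfile2", "myfile3"]

for infile in INPUT_FILES:
    # Equivalent to: python -m cProfile -o myfile1.prof ./myscript.py myfile1
    # -o dumps the raw profile data to a file instead of printing a report.
    subprocess.run(
        [sys.executable, "-m", "cProfile", "-o", f"{infile}.prof",
         "./myscript.py", infile],
        check=True,
    )
```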
How can I profile `myscript.py` for all of my input files and average over the results, so I know where the hotspots are in general?
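For reference, this is the kind of aggregation I was picturing for (2), using the standard `pstats` module on the dump files produced above. A minimal sketch; as far as I can tell, `pstats.Stats.add()` sums call counts and times across runs rather than averaging them, but a sum over a fixed set of runs ranks the hotspots the same way an average would:

```python
import pstats

PROFILE_DUMPS = ["myfile1.prof", "myfile2.prof", "myfile3.prof"]

# Load the first dump, then fold the rest into the same Stats object.
# add() accumulates (sums) call counts and times across the runs.
stats = pstats.Stats(PROFILE_DUMPS[0])
for dump in PROFILE_DUMPS[1:]:
    stats.add(dump)

# Show the 20 most expensive functions by cumulative time across all inputs.
stats.sort_stats("cumulative").print_stats(20)
```

Is this a reasonable approach, or is there a more standard way to do it?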