I'm starting to learn Python, and quickly found that there is a lot of overhead to loops, function calls, and that sort of thing. I found the profile module, which was very helpful; however, it only seems to report on function calls, not basic operations. For example:
    import profile

    def funct(n):
        myrange = range(n)
        for i in myrange:
            for j in myrange:
                pass

    profile.run("funct(10000)")
This produces the following output:
             5 function calls in 2.556 seconds

       Ordered by: standard name

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.000    0.000    0.000    0.000 :0(range)
            1    0.001    0.001    0.001    0.001 :0(setprofile)
            1    0.000    0.000    2.555    2.555 <string>:1(<module>)
            1    0.000    0.000    2.556    2.556 profile:0(funct(10000))
            0    0.000             0.000          profile:0(profiler)
            1    2.555    2.555    2.555    2.555 profiler.py:3(funct)
Almost no function calls are actually being made, yet the time is still large. Further, replacing the pass line with even a simple statement like "a == b" increases the runtime even further (to 8.4s in this case).
Is it possible to get more than just function-call runtimes out of the profiler? And if not, is there something else I can use to get more profiling information? (Such as how many comparison operations I perform, how many assignments, etc.)
Really, I see that the total runtime for my program is 2.556s, yet the tottime attributed to anything other than funct itself adds up to only 0.001s; I'd like to see where all the other time is going inside funct. I would also be applying this to much more complicated code, not this simple example. I've tried searching for information about this, but I cannot find anything (or maybe I'm looking for the wrong thing).
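For what it's worth, the crudest check I've found with just the standard library (my own sketch, not something the profiler reports) is timing the bare loops with timeit and comparing variants statement by statement:

```python
import timeit

# Time an empty double loop versus the same loop with a comparison,
# to estimate what each extra bytecode-level statement costs.
empty = timeit.timeit(
    "for i in r:\n    for j in r:\n        pass",
    setup="r = range(1000)", number=10)
with_cmp = timeit.timeit(
    "for i in r:\n    for j in r:\n        i == j",
    setup="r = range(1000)", number=10)

print(f"empty loop: {empty:.3f}s, with comparison: {with_cmp:.3f}s")
```

This doesn't tell me where time goes inside one complicated function, but it does let me isolate the cost of individual operations.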
In the end I would like to use this to pinpoint the bottlenecks in my code in more detail. Trying to get more out of profiling in the way I just described might not be the right approach, so if there is something better I could be doing, feel free to address that instead.