
I'm starting to learn Python and quickly found that loops, function calls, and similar operations carry a lot of overhead. I found the `profile` module, which was very helpful; however, it only seems to report function calls, not basic operations. For example:

import profile

def funct(n):
    myrange = range(n)
    for i in myrange:
        for j in myrange:
            pass

profile.run("funct(10000)")

Produces the following output:

5 function calls in 2.556 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.000    0.000    0.000    0.000 :0(range)
    1    0.001    0.001    0.001    0.001 :0(setprofile)
    1    0.000    0.000    2.555    2.555 <string>:1(<module>)
    1    0.000    0.000    2.556    2.556 profile:0(funct(10000))
    0    0.000             0.000          profile:0(profiler)
    1    2.555    2.555    2.555    2.555 profiler.py:3(funct)

Almost no function calls are actually being made, yet the total time is still large. Further, replacing the `pass` line with even a simple statement like `a == b` increases the run time dramatically (to 8.4 s in this case).

Is it possible to get more than just information about function call runtimes out of the profiler? And if not, is there something else I can use to get more profiling information? (Such as how many comparison operations I use, assignments, etc.)

Really, I see that the total runtime for my program is 2.555 s, yet nearly all of it is attributed to `funct` itself as tottime, with no breakdown of where the time goes inside the function. I'd like to see where all that time is going. I would also be applying this to much more complicated code, not this simple example. I've tried searching for information about this, but I cannot find anything (or maybe I'm looking for the wrong thing).

In the end I would like to use this information to track down the bottlenecks in my code in more detail. Trying to get more out of the profiler in the way I just asked might not be the right approach, so if there is something better I could be doing, feel free to address that instead.

Nyles
  • That output you get is just a fraction of the information gathered by the profiler. The detailed data is in the cachegrind file that can be visualized by a variety of tools. See [my answer on Profiling](http://stackoverflow.com/questions/19857749/what-is-the-reliable-method-to-find-most-time-consuming-part-of-the-code/19857889#19857889) for an example. I'm not 100% sure if you'll be able to dig up more than function calls though - but it should certainly make finding your bottleneck easier by visualizing the call-graph. – Lukas Graf Jul 24 '14 at 18:28
  • (Note: If you don't want to use the `profilestats` wrapper mentioned in my answer, but still write the profile results to a file, use the [`Profile` class](https://docs.python.org/2/library/profile.html#profile.Profile) directly instead of `profile.run()`) – Lukas Graf Jul 24 '14 at 18:34
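A minimal sketch of what that last comment suggests: use the `Profile` class directly, dump the raw stats to a file, and inspect them with `pstats` (the filename `funct.prof` is just an example):

import profile
import pstats

def funct(n):
    myrange = range(n)
    for i in myrange:
        for j in myrange:
            pass

p = profile.Profile()
p.runcall(funct, 10000)       # profile a single call instead of a command string
p.dump_stats("funct.prof")    # write the raw stats to a file

# Load the file and print it sorted by cumulative time
# (it can also be converted for visualizers such as KCachegrind).
pstats.Stats("funct.prof").sort_stats("cumulative").print_stats()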

1 Answer


You can use the `timeit` module for measuring small pieces of code.
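For example, a minimal sketch that times one full run of the nested loop from the question, and then the bare comparison statement on its own (the iteration counts are just illustrative):

import timeit

setup = """
def funct(n):
    myrange = range(n)
    for i in myrange:
        for j in myrange:
            pass
"""

# One full run of the nested loops; number=1 because a single run takes seconds.
print(timeit.timeit("funct(10000)", setup=setup, number=1))

# The comparison statement alone, repeated many times, for a per-statement cost.
print(timeit.timeit("a == b", setup="a = 1; b = 2", number=10**7))

Note that `timeit` runs the statement in a fresh namespace, so anything the statement needs has to be created in `setup`.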

You can also time operations manually: record the time before and after an operation and compute the difference.
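A minimal sketch of that approach (`time.perf_counter` needs Python 3.3+; `time.time()` works similarly on older versions):

import time

def funct(n):
    myrange = range(n)
    for i in myrange:
        for j in myrange:
            pass

start = time.perf_counter()   # high-resolution clock
funct(10000)
elapsed = time.perf_counter() - start
print("funct(10000) took %.3f seconds" % elapsed)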

merlin2011