
Captain Hindsight, reporting in:

After reading through the comments and the answer and running a few tests, I found that I had made a subtle error in my measurements: I was comparing compiled lookups to interpreted calls. When I precompiled the call using the plain timeit module instead of the IPython magic (i.e. `timeit.timeit(codestr, setup_codestr)`), I found that the function calls were indeed on the same order of magnitude as the lookups :)
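For anyone who wants to reproduce that check, here is a minimal sketch using the plain timeit module, so both the lookup and the call are compiled before the timing loop runs (some_nested_list is just a made-up stand-in for my data):

import timeit

setup = "some_nested_list = [list(range(20)) for _ in range(10)]"

# timeit compiles both statements before looping, so neither side pays
# compilation costs inside the measurement itself.
lookup_time = timeit.timeit("some_nested_list[2:5][1][6:13]", setup=setup, number=1000000)

call_setup = setup + "\ndef func(): return some_nested_list[2:5][1][6:13]"
call_time = timeit.timeit("func()", setup=call_setup, number=1000000)

print(lookup_time, call_time)  # the two land on the same order of magnitude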

Now there's a whole world of caching function results, precompiling functions, and precompiling types to explore! ..and that's nice :)
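(For the caching part, a minimal sketch with functools.lru_cache, assuming Python 3.2+ and a pure function; expensive is just a made-up example:)

from functools import lru_cache

@lru_cache(maxsize=None)        # results are cached per argument
def expensive(n):
    return sum(i * i for i in range(n))   # stand-in for real work

expensive(10000)   # computed once
expensive(10000)   # repeat calls with the same argument hit the cache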

For posterity:

I realize that sounds like a strange question, but someone might know a way around this, and that would be great. So here goes:

If I do something like:

%timeit somelist[42]

Then I get times in the 90 nanosecond range. A slice gets it up to around 190; and, to my pleasant surprise, even big crazy ones were still fast. This bad boy, for instance, weighs in at 385 nanoseconds:

%timeit some_nested_list[2:5][1][6:13]

Here's the thing. Function calls, it seems, are a lot slower than that. I like decomposing problems functionally, and am starting to give functional programming a bit more thought, but the speed difference is significant: 3.34 microseconds vs. 100-150 nanoseconds (realistic averages once conditionals, etc. are included). The following takes 3.34 microseconds:

def func():
    some_nested_list[2:5][1][6:13]
%timeit func()

So, there are presumably a lot of functional programmers out there? You must all have dealt with this little hiccup? Would someone care to point me in the right direction?

Inversus
  • If your function takes arguments you could arrange for it to be memoized (i.e. once it is called, the result is cached and the cached result is returned on subsequent calls), but that only helps if the original function is expensive and the function is invariant (i.e. the same arguments produce exactly the same results every time). – Tony Suffolk 66 Oct 07 '14 at 03:34
  • @TonySuffolk66 Thank you very much for that. It pointed me in the right direction. – Inversus Oct 15 '14 at 21:56

1 Answer


Not really. Python function calls involve a certain amount of overhead for setting up the stack frame, etc., and you can't eliminate that overhead while still writing a Python function. The reason the operations in your example are fast is that you're doing them on a list, and lists are written in C.
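You can see the difference by disassembling the two forms (a rough sketch; exact opcode names vary between CPython versions):

import dis

# The bare expression compiles to a few subscript/slice opcodes that dispatch
# straight into the C list implementation.
dis.dis(compile("some_nested_list[2:5][1][6:13]", "<expr>", "eval"))

# Wrapping it in a call adds a CALL opcode, which has to build a new Python
# stack frame at run time before any of that work starts.
dis.dis(compile("func()", "<expr>", "eval"))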

One thing to keep in mind is that, in many practical situations, the function call overhead will be small relative to what the function actually does. See this question for some discussion. However, if you move toward a pure-functional style in which each function just evaluates one expression, you may indeed suffer a performance penalty.
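As a rough illustration (the two functions below are invented for the example, and the exact numbers will vary by machine): once a function does a non-trivial amount of work, the fixed call cost becomes a small fraction of the total.

import timeit

def trivial(lst):
    return lst[42]                     # the call overhead dominates here

def substantial(lst):
    return sum(x * x for x in lst)     # the work inside dwarfs the call overhead

setup = "from __main__ import trivial, substantial; lst = list(range(1000))"
print(timeit.timeit("trivial(lst)", setup=setup, number=100000))
print(timeit.timeit("substantial(lst)", setup=setup, number=100000))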

An alternative is to look at PyPy, which makes many pure-Python operations faster. I don't know whether it improves function call speed specifically. Also, by using PyPy you restrict the set of libraries you can use.

Finally, there is Cython, which allows you to write code in a language that looks basically the same as Python, but actually compiles to C. This can be much faster than Python in some cases.

The bottom line is that how to speed up your functions depends on what your functions actually do. There is no way to magically make all function calls faster while keeping everything else about Python the same. If there were, it probably would have been added to Python already.

BrenBarn
  • Ya, I'm not looking to develop a pure-functional style, but no hate for those on that holy quest, for sure haha. I aim for a balance between decomposability and speed, but was not enjoying how much code I had to write to avoid calling functions. However, I found out that I had made a subtle error in my measurements: I was comparing compiled lookups to interpreted calls. When I precompiled the call using the non-IPython version (i.e. `timeit.timeit(codestr, setup_codestr)`), I found that the function calls were indeed on the same order of magnitude :) – Inversus Oct 15 '14 at 22:03