One problem for Python's efficiency is that the language is completely dynamic. For example, consider this simple loop:
def myfunc():
    for i in range(10):
        foo(bar(i))
It seems that the function foo will be called ten times with the result of calling bar. However, bar can, for example, change what foo refers to, and the code of foo can in turn change what bar refers to. Python is therefore forced to check at each iteration what foo and bar are pointing to. This requires looking in the module globals and, if nothing is found there, in the builtin (predefined) names, at each of the ten iterations.
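To make this concrete, here is a small sketch (reusing the hypothetical foo and bar from above) in which bar rebinds the global name foo partway through the loop, so the next time the loop looks up foo it finds a different function:

def foo(x):
    print("old foo:", x)

def bar(i):
    if i == 4:
        # Rebind the module-global name foo; the next time the
        # loop looks up foo, it finds this new function instead.
        globals()["foo"] = lambda x: print("new foo:", x)
    return i

def myfunc():
    for i in range(10):
        foo(bar(i))

myfunc()  # prints "old foo" five times, then "new foo" five times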
The very same happens with all global lookups (for example, nothing forbids you from defining your own function named len, thus "hiding" the standard function of that name).
When using local variables instead, things are simpler:
def myfunc():
    f = foo
    b = bar
    for i in range(10):
        f(b(i))
The reason is that f and b are local variables, so getting their values in order to make the calls is much simpler: code outside of myfunc cannot change what f and b are pointing to.
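One way to see the difference is to disassemble both versions with the standard dis module (a sketch; the exact bytecode varies between CPython versions):

import dis

def foo(x):
    return x

def bar(i):
    return i

def global_version():
    for i in range(10):
        foo(bar(i))

def local_version():
    f = foo
    b = bar
    for i in range(10):
        f(b(i))

dis.dis(global_version)  # foo and bar: LOAD_GLOBAL on every iteration
dis.dis(local_version)   # f and b: LOAD_FAST, a simple indexed access

In global_version the names foo and bar are resolved through the globals (and possibly builtins) dictionary on every pass, while in local_version only the two initial assignments pay that cost; inside the loop f and b are plain local slots.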
So one trick to gain some speed is to write things like:
import math

def myfunc(x, sin=math.sin):
    ...
so that when using sin you don't have to look up math first and then find sin inside math.
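A minimal timing sketch of this trick (the function names here are illustrative; absolute numbers depend on your interpreter, and on modern CPython the gain is typically small):

import math
import timeit

def plain(xs):
    return [math.sin(x) for x in xs]

def bound(xs, sin=math.sin):
    # sin was resolved once, at function definition time;
    # inside the call it is a cheap local-variable access.
    return [sin(x) for x in xs]

xs = [i / 1000.0 for i in range(1000)]
print(timeit.timeit(lambda: plain(xs), number=2000))
print(timeit.timeit(lambda: bound(xs), number=2000))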
It's a kind of micro-optimization that is, however, considered bad style unless you have actually found (measured) the speed to be a problem, measured that the fix gives a reasonable gain, and determined that the slowness is not serious enough to require a more radical approach.