40

If there is a library from which I'm going to use at least two methods, is there any difference in performance or memory usage between the following?

from X import method1, method2

and

import X
JohnE
Tom

4 Answers

50

There is a difference, because with the import x version there are two name lookups: one for the module name, and a second for the function name; with from x import y, there is only one lookup.

You can see this quite well, using the dis module:

import dis
import random
def f_1():
    random.seed()

dis.dis(f_1)
     0 LOAD_GLOBAL              0 (random)
     3 LOAD_ATTR                0 (seed)
     6 CALL_FUNCTION            0
     9 POP_TOP
    10 LOAD_CONST               0 (None)
    13 RETURN_VALUE

from random import seed

def f_2():
    seed()

dis.dis(f_2)
     0 LOAD_GLOBAL              0 (seed)
     3 CALL_FUNCTION            0
     6 POP_TOP
     7 LOAD_CONST               0 (None)
    10 RETURN_VALUE

As you can see, using the form from x import y is a bit faster.

On the other hand, the import statement itself is cheaper for import x than for from x import y, because the latter performs one extra lookup; let's look at the disassembled code:

def f_3():
    import random

dis.dis(f_3)
     0 LOAD_CONST               1 (-1)
     3 LOAD_CONST               0 (None)
     6 IMPORT_NAME              0 (random)
     9 STORE_FAST               0 (random)
    12 LOAD_CONST               0 (None)
    15 RETURN_VALUE

def f_4():
    from random import seed

dis.dis(f_4)
     0 LOAD_CONST               1 (-1)
     3 LOAD_CONST               2 (('seed',))
     6 IMPORT_NAME              0 (random)
     9 IMPORT_FROM              1 (seed)
    12 STORE_FAST               0 (seed)
    15 POP_TOP
    16 LOAD_CONST               0 (None)
    19 RETURN_VALUE

I do not know the exact reason, but the from x import y form generates extra bytecode (the IMPORT_FROM step), which makes the import itself more expensive than you might anticipate. For this reason, if the imported function is used only once, it can be faster overall to use import x; if it is used more than once, from x import y becomes the faster choice.
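The break-even point above can be checked empirically. Here is a hypothetical micro-benchmark using the standard timeit module; the absolute numbers vary widely by machine and Python version, so treat the output as illustrative only:

```python
import timeit

# Time a call through the module attribute vs. a direct name.
# Absolute figures depend on the machine; only the comparison matters.
module_form = timeit.timeit("random.seed()",
                            setup="import random",
                            number=10_000)
direct_form = timeit.timeit("seed()",
                            setup="from random import seed",
                            number=10_000)

print(f"import random + random.seed(): {module_form:.4f}s")
print(f"from random import seed:       {direct_form:.4f}s")
```

With enough calls, the direct-name form tends to edge ahead, but the gap is measured in fractions of a microsecond per call.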

That said, as usual, I would suggest not basing your decision on how to import modules and functions on this knowledge, because it is just premature optimization.
Personally, I think that in a lot of cases explicit namespaces are much more readable, and I would suggest you do the same: use your own sense of aesthetics :-)

rob
    It's one opcode on the virtual machine. Let that take 10 cycles and that is **5 nanoseconds** on a 2 Ghz machine. Just reading this sentence takes more time than the program could save in your lifetime. – Jochen Ritzel Aug 28 '10 at 20:17
  • @THC4k: I don't think any of us are disagreeing with you, I certainly agree with the "use whatever makes sense". @Roberto: I did some more timeit tests with multiple calls... the `from ...` statement starts to get slightly faster after ~8 function calls. It's interesting, but not important... we're dealing with microseconds of difference here. – Sam Dolan Aug 28 '10 at 20:37
  • Explicit is better than implicit. Namespaces are one honking great idea—let's do more of those! by https://pt.wikipedia.org/wiki/Zen_of_Python – SleX Feb 12 '21 at 21:23
11

There is no memory or speed difference (the whole module has to be evaluated either way, because the last line could be Y = something_else). Unless your computer is from the 1980s, it doesn't matter anyway.
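The point that the whole module runs either way is easy to demonstrate. A sketch, using a hypothetical throwaway module (demo_module is my name, written to a temp directory) whose top-level code has a visible side effect:

```python
import os
import sys
import tempfile
import textwrap

# Write a tiny module whose top-level code sets a flag, so we can see
# whether the module body ran during a selective "from ... import".
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_module.py"), "w") as f:
    f.write(textwrap.dedent("""
        executed = True          # top-level statement: runs on any import
        def seed():
            pass
    """))
sys.path.insert(0, tmpdir)

from demo_module import seed     # asks for one name only...
import demo_module               # ...yet the cached module ran in full
print(demo_module.executed)      # True
```

Both statements trigger the same full evaluation of the module body; from only changes which names get bound in your namespace.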

Jochen Ritzel
    In my project, doing 40 imports from PyQt5 causes 500 ms lag during start up, which I find unacceptable. That's on SSD and i7-4790K. Because of this I'm probably going to shift to C++, so yes, it matters... perhaps not in exactly OPs question's context, but it shows that imports can cause performance problems indeed. – rr- May 10 '15 at 06:24
  • @rr- Shed Skin, Cython, or Nuitka can compile Python as C. – Cees Timmerman May 27 '19 at 07:48
9

It can matter if you are calling a function many times in a loop (millions or more). The cost of the double dictionary lookup eventually accumulates. The example below shows roughly a 20% difference.

Times quoted are for Python 3.4 on a Win7 64 bit machine. (Change the range command to xrange for Python 2.7).

This example is based heavily on the book High Performance Python, although their third example, of local function lookups being better still, no longer seemed to hold for me.

import math
from math import sin

def tight_loop_slow(iterations):
    """
    >>> %timeit tight_loop_slow(10000000)
    1 loops, best of 3: 3.2 s per loop
    """
    result = 0
    for i in range(iterations):
        # this call to sin requires two dictionary lookups
        result += math.sin(i)
    return result

def tight_loop_fast(iterations):
    """
    >>> %timeit tight_loop_fast(10000000)
    1 loops, best of 3: 2.56 s per loop
    """
    result = 0
    for i in range(iterations):
        # this call to sin requires only one lookup
        result += sin(i)
    return result
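For completeness, here is a sketch of the local-binding variant the book describes (tight_loop_local is my name, not theirs): binding the function to a local variable up front turns each per-iteration lookup into a LOAD_FAST, the cheapest kind of name access the interpreter has.

```python
import math
from math import sin

def tight_loop_local(iterations):
    # Bind once up front; inside the loop, local_sin is resolved
    # via LOAD_FAST instead of a global (or global + attribute) lookup.
    local_sin = sin
    result = 0.0
    for i in range(iterations):
        result += local_sin(i)
    return result
```

Whether this still beats the plain from-import form is worth re-measuring on your own interpreter, since the answer above found the gap had closed.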
DStauffman
7

I don't believe there's any real difference, and worrying about that small an amount of memory is generally not worth it. If memory pressure does become a concern, it will far more likely come from your own code.

heckj