56

I am getting really weird timings for the following code:

import numpy as np
s = 0
for i in range(10000000):
    s += np.float64(1)  # replace np.float64 with np.float32 or the built-in float for the other timings

The timings are:

  • built-in float: 4.9 s
  • float64: 10.5 s
  • float32: 45.0 s

Why is float64 twice as slow as the built-in float? And why is float32 roughly four times slower than float64?

Is there any way to avoid the penalty of using np.float64, and have numpy functions return built-in float instead of float64?

I found that using numpy.float64 is much slower than Python's float, and numpy.float32 is even slower (even though I'm on a 32-bit machine).

I would prefer to use numpy.float32 on my 32-bit machine. Therefore, every time I use various numpy functions such as numpy.random.uniform, I convert the result to float32 (so that further operations are performed at 32-bit precision).

Is there any way to set a single variable somewhere in the program or on the command line, and make all numpy functions return float32 instead of float64?

EDIT #1:

numpy.float64 is 10 times slower than float in arithmetic calculations. It's so bad that even converting to float and back before the calculations makes the program run 3 times faster. Why? Is there anything I can do to fix it?

I want to emphasize that my timings are not due to any of the following:

  • the function calls
  • the conversion between numpy and python float
  • the creation of objects

I updated my code to make it clearer where the problem lies. With the new code, I see a ten-fold performance hit from using numpy data types:

from datetime import datetime
import numpy as np

START_TIME = datetime.now()

# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0

for i in range(10000000):
    s = (s + 8) * s % 2399232

print(s)
print('Runtime:', datetime.now() - START_TIME)

The timings are:

  • float64: 34.56s
  • float32: 35.11s
  • float: 3.53s

Just for the hell of it, I also tried:

from datetime import datetime
import numpy as np

START_TIME = datetime.now()

s = np.float64(1)
for i in range(10000000):
    s = float(s)
    s = (s + 8) * s % 2399232
    s = np.float64(s)

print(s)
print('Runtime:', datetime.now() - START_TIME)

The execution time is 13.28 s; it's actually 3 times faster to convert the float64 to float and back than to use it as is. Still, the conversion takes its toll, so overall it's more than 3 times slower compared to the pure-python float.

My machine is:

  • Intel Core 2 Duo T9300 (2.5GHz)
  • WinXP Professional (32-bit)
  • ActiveState Python 3.1.3.5
  • Numpy 1.5.1

EDIT #2:

Thank you for the answers; they help me understand how to deal with this problem.

But I would still like to know the precise reason (perhaps based on the source code) why the code below runs 10 times slower with float64 than with float.

EDIT #3:

I reran the code under Windows 7 x64 (Intel Core i7 930 @ 3.8GHz).

Again, the code is:

from datetime import datetime
import numpy as np

START_TIME = datetime.now()

# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0

for i in range(10000000):
    s = (s + 8) * s % 2399232

print(s)
print('Runtime:', datetime.now() - START_TIME)

The timings are:

  • float64: 16.1s
  • float32: 16.1s
  • float: 3.2s

Now both numpy float types (64- and 32-bit) are 5 times slower than the built-in float. Still a significant difference; I'm trying to figure out where it comes from.

END OF EDITS

asked by max; last edited by Jonas Schäfer
  • What version of Python? What version of numpy? If Python 2.x, use xrange instead of range (range will be building an enormous list). float(1) is not an operation that many folk would expect to use often; float(i) may be a tad more realistic. Why on earth do you want to use 32-bit precision? – John Machin May 10 '11 at 21:47
  • Numpy says its floats are 64 bit by default, which would explain why 32 bit floats are slower (it has to change them up). Why specifying `float64` makes it so much slower, I don't know. Note that, AFAIK, your architecture doesn't affect float data: 32-bit or 64-bit architectures just relate to memory addresses. – Thomas K May 10 '11 at 21:50
  • Try `s=10000000.`, that should be faster. More seriously: you're profiling function call speed, while Numpy excels when it can vectorize operations. Is the `import` statement also in the version that uses built-in `float`? – Fred Foo May 10 '11 at 21:50
  • Aren't the Core 2 Duos 64-bit machines? http://ark.intel.com/Product.aspx?id=33917 – marshall.ward May 10 '11 at 22:43
  • @John Machin: sorry, updated my question to provide more detail. @larsmans: yes, the `import` statement is still there. And I don't think it's a function call time issue; I saw this problem in a much larger program, where function call time is negligible compared to the calculation time. @MLW: yes, you're right... only my OS is 32-bit. – max May 11 '11 at 02:31
  • you could use `python -mtimeit -s "import numpy; s = numpy.float(1)" "(s + 8) * s % 2399232"` to time it. Replace `numpy.float` by `numpy.float32(1)`, `numpy.float64(1)` or `1.0` for other variants. – jfs May 19 '11 at 07:01

8 Answers

51

CPython floats are allocated in chunks

The key problem with comparing numpy scalar allocations to the float type is that CPython always allocates the memory for float and int objects in blocks of size N.

Internally, CPython maintains a linked list of blocks each large enough to hold N float objects. When you call float(1) CPython checks if there is space available in the current block; if not it allocates a new block. Once it has space in the current block it simply initializes that space and returns a pointer to it.

On my machine each block can hold 41 float objects, so there is some overhead for the first float(1) call but the next 40 run much faster as the memory is allocated and ready.
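
As a quick illustration of that free-list behaviour (my own sketch, not part of the original answer), you can check that a newly created float frequently reuses the slot just vacated by a deleted one:

# Sketch: observe CPython reusing a freed float slot (results may vary by CPython version).
a = float(1)
addr = id(a)          # remember the address of the first float
del a                 # its slot goes back onto CPython's float free list
b = float(2)
print(id(b) == addr)  # frequently True: the new float landed in the reused slot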

Slow numpy.float32 vs. numpy.float64

It appears that numpy has 2 paths it can take when creating a scalar type: fast and slow. This depends on whether the scalar type has a Python base class to which it can defer for argument conversion.

For some reason numpy.float32 is hard-coded to take the slower path (defined by the _WORK0 macro), while numpy.float64 gets a chance to take the faster path (defined by the _WORK1 macro). Note that scalartypes.c.src is a template which generates scalartypes.c at build time.
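
A small benchmark along these lines (my own sketch, not from the original answer) isolates the construction cost of each scalar type, which is where the fast/slow path difference should show up:

# Compare the cost of merely constructing each scalar type, with no arithmetic.
import timeit

setup = "import numpy as np"
for stmt in ("float(1)", "np.float64(1)", "np.float32(1)"):
    print(stmt, timeit.timeit(stmt, setup=setup, number=10**6))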

You can visualize this in Cachegrind. My screen captures (not reproduced here) showed how many more calls are made to construct a float32 than a float64:

[Screenshot: float64 takes the fast path]

[Screenshot: float32 takes the slow path]

Updated - Which type takes the slow/fast path may depend on whether the OS is 32-bit vs 64-bit. On my test system, Ubuntu Lucid 64-bit, the float64 type is 10 times faster than float32.

answered by samplebias
  • Cool. I understand how this can make float32 slow. But why is float64 much slower than the built-in float? (10 times slower in my latest example!) Is it just from the time it takes to allocate memory? But in my loop, memory only needs to be allocated for a handful of objects, and can then be reused in subsequent loop iterations, no? – max May 19 '11 at 05:56
  • @max I updated my answer with a guess. Since you're running a 32-bit OS, the `float64` type might take the slow path on your platform. If you have access to valgrind+cachegrind, see if you can reproduce my call traces on your platform. – samplebias May 19 '11 at 06:08
  • I tried 64-bit OS (see my update to the question). Both `np` float types are 5 times slower than the builtin `float`. I don't have valgrind, would it help in analyzing this particular performance hit? – max May 19 '11 at 06:22
  • @max Valgrind's cachegrind tool can show you a lot of detail about how often particular functions are called, and from where. One of its primary uses is finding bottlenecks in applications. – samplebias May 19 '11 at 06:26
23

Operating with Python objects in a heavy loop like that, whether they are float, np.float32, or np.float64, is always slow. NumPy is fast for operations on vectors and matrices, because all of the operations are performed on big chunks of data by parts of the library written in C, and not by the Python interpreter. Code run in the interpreter and/or using Python objects is always slow, and using non-native types makes it even slower. That's to be expected.

If your app is slow and you need to optimize it, you should either try converting your code to a vectorized solution that uses NumPy directly and is fast, or use tools such as Cython to create a fast C implementation of the loop.
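
For example, the summation from the original question can be pushed into a single NumPy call (a minimal sketch of the vectorized approach; note that the recurrence in the updated question depends on the previous value of s, so it cannot be vectorized the same way):

import numpy as np

# instead of: s = 0; for i in range(10000000): s += np.float64(1)
total = np.ones(10000000, dtype=np.float64).sum()  # one call into NumPy's C loop
print(total)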

answered by Rosh Oxymoron
  • Hmm.. I'm sorry, perhaps I misunderstand your comment. But my question is not about `float` being slow; it's about `np.float64` being much slower than `float`. If you are saying that even `float` in a loop is too slow, I will be happy to hear your alternative suggestions (I'm not switching from Python to C though.) – max May 11 '11 at 02:49
  • Rosh has the right of it. np.float64's are non-native types and will have extra layers of (slow) indirection in the python interpreter. What makes numpy fast is that it avoids the python interpreter for collective operations and can take advantage of sequential memory access. – matt May 11 '11 at 03:53
  • Ahh thank you. I think I got it now. `numpy` is not good for single-number operations because of the overhead of working with non-builtin types (`numpy` is great for arrays because this overhead is spread out over many operations). To get any speed improvement on single-number operations I need to either find a way to do them in an array with `numpy`, or to use something like Cython. Correct? – max May 11 '11 at 04:18
  • That's it. And collective operations are very fast. Compare selinap's timings with your first loop: 4900 ms vs 17. That's a 288x speedup. – matt May 11 '11 at 04:40
  • @Rosh Oxymoron: "Using non-native types makes it even slower" ... what is your basis for saying that? – John Machin May 11 '11 at 05:06
  • @matt: what extra layers of indirection?? The same bytecode (BINARY_ADD) is generated; the interpreter will pop two objects off the stack and do the C-extension equivalent of `a.__add__(b)` – John Machin May 11 '11 at 05:10
  • @John Machin: interesting. Then the question comes back: why is `np.float64` 10x slower in my (updated) code sample? – max May 11 '11 at 06:43
  • @John Machin: Here are a couple of reasons why np.float64 may be slower. (1) The expression mixes np.float and integer operands. If the expression is parsed as (integer).__add__(float64), it will return NotImplemented and cause (float64).__radd__(integer) to be called. This will most likely force the integer to be converted to a float64 before the addition can be performed. Converting all the constants into float64 first will probably help. (2) IIRC, Python caches PyFloat objects so repeated deletion/creation is fast. Numpy probably doesn't cache deleted objects. (Haven't checked the source.) – casevh May 11 '11 at 07:22
  • It depends on how `__add__`, `__mul__` and `__mod__` are implemented inside `np.float64` (including the implicit conversion of the integer argument to `__mod__`). There are lots of possibilities, since your inner loop is quite coarse. – ncoghlan May 11 '11 at 07:23
  • @casevh: You don't need to look at the source. Smallish ints are cached (`a=123;b=100+23;id(a)==id(b)` produces `True`) ... try that with floats. – John Machin May 11 '11 at 08:40
  • @John Machin: I had a different meaning in mind. For many object types, Python maintains a list of "freed" objects that are "resurrected" when a new instance of an object is created. This avoids memory allocation overhead and is faster than creating an object from scratch. This is different than creating multiple references to small integers. (I implemented a free-list for objects in gmpy and it increased performance by 20% in actual applications.) – casevh May 11 '11 at 13:34
12

The answer is quite simple: the memory allocation might be part of it, but the biggest problem is that arithmetic operations for numpy scalars are done using "ufuncs", which are meant to be fast for several hundred values, not just one. There is some overhead in choosing the correct function to call and setting up the loops, overhead which is unnecessary for scalars.

It was easier to just have the scalars be converted to 0-d arrays and then passed to the corresponding numpy ufunc than to write separate calculation methods for each of the many different scalar types that NumPy supports.

The intent was that optimized versions of the scalar math would be added to the type-objects in C. This could still happen, but it never has happened because no-one has been motivated enough to do it. Possibly because the work-around is to convert numpy scalars to Python scalars which do have optimized arithmetic.
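
In practice the work-around mentioned above can look like this (a minimal sketch based on the code from the question, not code from this answer): extract a plain Python float from the numpy scalar once, do the loop arithmetic with built-in floats, and convert back only if a numpy scalar is actually needed.

import numpy as np

x = np.float64(1)               # e.g. a value that came out of a numpy function
s = x.item()                    # .item() (or float(x)) gives a plain Python float
for i in range(10000000):
    s = (s + 8) * s % 2399232   # runs with CPython's optimized float arithmetic
result = np.float64(s)          # convert back only if a numpy scalar is required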

answered by Travis Oliphant
  • I suppose if the developer of numpy answers the question then that should eventually become the accepted answer... – orbeckst Feb 04 '16 at 05:29
12

Perhaps that is why you should use NumPy directly instead of using loops:

import numpy as np

s1 = np.ones(10000000, dtype=np.float)
s2 = np.ones(10000000, dtype=np.float32)
s3 = np.ones(10000000, dtype=np.float64)

np.sum(s1)  # 17.3 ms
np.sum(s2)  # 15.8 ms
np.sum(s3)  # 17.3 ms
answered by riza
  • I agree; on my machine, numpy array sum is 70-140 times faster than the builtin sum over a builtin list (70 in the case of `float` and 140 in the case of `np.float64`). But it's not always possible to use an array, as my updated example shows. In that case, it's somewhat disconcerting that using `np.float64` increases the execution time by a huge constant factor (2 in the case of a simple sum; 10 in the case of my code). – max May 11 '11 at 06:41
  • Your updated example works fine with numpy, no need for a for-loop there. – tillsten May 11 '11 at 10:39
  • @tillsten how would you rewrite it to work without a for-loop? – max Jan 27 '12 at 23:06
  • IINM, on a 64-bit machine, `np.float` is `np.float64`. It's not the same as the built-in `float`. – syockit Nov 10 '15 at 07:20
9

Summary

If an arithmetic expression mixes numpy and built-in numbers, Python's arithmetic works slower. Avoiding this mixing (and the implicit conversions it causes) removes almost all of the performance degradation I reported.

Details

Note that in my original code:

s = np.float64(1)
for i in range(10000000):
  s = (s + 8) * s % 2399232

the built-in int constants (8 and 2399232) and numpy.float64 values are mixed in one expression. Perhaps Python had to convert them all to one type?
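
A quick check (my own addition, not part of the original answer) shows that when a numpy.float64 meets a built-in number, the result comes back as numpy.float64, so the built-in operand is indeed being converted:

import numpy as np

print(type(np.float64(1) + 8))        # <class 'numpy.float64'>: the int operand is converted
print(type(np.float64(1) % 2399232))  # <class 'numpy.float64'> as well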

s = np.float64(1)
for i in range(10000000):
  s = (s + np.float64(8)) * s % np.float64(2399232)

If the runtime stayed the same (rather than increasing), that would suggest this is indeed what Python was doing under the hood, explaining the performance drag.

Actually, the runtime fell by a factor of 1.5! How is that possible? Isn't the worst thing Python could possibly have had to do exactly these two conversions?

I don't really know. Perhaps Python had to dynamically check what needs to be converted into what, which takes time, and being told which precise conversions to perform makes it faster. Perhaps some entirely different mechanism is used for the arithmetic (one which doesn't involve conversions at all), and it happens to be super-slow on mismatched types. Reading the numpy source code might help, but it's beyond my skill.

Anyway, now we can obviously speed things up more by moving the conversions out of the loop:

s = np.float64(1)
q = np.float64(8)
r = np.float64(2399232)
for i in range(10000000):
  s = (s + q) * s % r

As expected, the runtime is reduced substantially: by another 2.3 times.

To be fair, we now need to change the float version slightly, by moving the literal constants out of the loop. This results in a tiny (10%) slowdown.

Accounting for all these changes, the np.float64 version of the code is now only 30% slower than the equivalent float version; the ridiculous 5-fold performance hit is largely gone.

Why do we still see the 30% delay? numpy.float64 numbers take the same amount of space as float, so that won't be the reason. Perhaps the resolution of the arithmetic operators takes longer for user-defined types. Certainly not a major concern.
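
For what it's worth, the same comparison can be reproduced without the datetime scaffolding using timeit, along the lines of the comment under the question; this is just a sketch with my own variable names:

import timeit

numpy_setup = "import numpy as np; s = np.float64(1); q = np.float64(8); r = np.float64(2399232)"
float_setup = "s = 1.0; q = 8.0; r = 2399232.0"

# each statement runs a million times in the namespace created by its setup string
print("numpy:", timeit.timeit("s = (s + q) * s % r", setup=numpy_setup, number=10**6))
print("float:", timeit.timeit("s = (s + q) * s % r", setup=float_setup, number=10**6))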

answered by max (the asker)
  • I learned a lot from all the answers, but I'm accepting this answer, since it directly addresses the original question. If someone is concerned with using `numpy.float` for scalar arithmetic, they should know it's not an issue as long as *everything* is `numpy.float`. – max Apr 12 '12 at 16:20
1

If you're after fast scalar arithmetic, you should be looking at libraries like gmpy rather than numpy (as others have noted, the latter is optimised for vector operations rather than scalar ones).

answered by ncoghlan
  • I'm not sure `gmpy` really helps here: it's mostly about doing fast *arbitrary precision* arithmetic. If anything, I'd expect a small slowdown when using `gmpy` types as a substitute for Python floats and small Python ints. – Mark Dickinson May 27 '14 at 07:19
  • These days, I'd agree with you, in 2011, I don't think I knew any better :) – ncoghlan May 29 '14 at 12:44
  • Yep, apologies; that was a reading fail on my part. The SO question got linked to from a recent internal discussion, and I didn't notice the dates until after commenting. – Mark Dickinson May 30 '14 at 15:56
1

I can also confirm the results. I tried to see what it would look like using all numpy types, and the difference persists. My tests were:

from datetime import datetime
import numpy as np

def testStandard(length=100000):
    s = 1.0
    addend = 8.0
    modulo = 2399232.0
    startTime = datetime.now()
    for i in xrange(length):
        s = (s + addend) * s % modulo
    return datetime.now() - startTime

def testNumpy(length=100000):
    s = np.float64(1.0)
    addend = np.float64(8.0)
    modulo = np.float64(2399232.0)
    startTime = datetime.now()
    for i in xrange(length):
        s = (s + addend) * s % modulo
    return datetime.now() - startTime

So at this point, the numpy types are all interacting with each other, but the 10x difference persists (2 sec vs 0.2 sec).

If I had to guess, I would say that there are two possible reasons why the default float types are much faster. The first possibility is that python performs significant optimizations under the hood for dealing with certain numeric operations or looping in general (e.g. loop unrolling). The second possibility is that the numpy types involve an extra layer of abstraction (i.e. having to read from an address). To look into the effects of each, I did a few extra checks.

One difference could be the result of python having to take extra steps to resolve the float64 types. Unlike compiled languages that generate efficient tables, python 2.6 (and maybe 3) has a significant cost for resolving things that you'd generally think of as free. Even a simple X.a resolution has to resolve the dot operator EVERY time it is called. (Which is why if you have a loop that calls instance.function() you're better off having a variable "function = instance.function" declared outside the loop).

From my understanding, when you use python's standard operators, these are fairly similar to using the ones from "import operator". If you substitute add, mul, and mod in for your +, *, and %, you see a static performance hit of about 0.5 sec versus the standard operators (in both cases). This means that by wrapping the operators, the standard python float operations get 3x slower. If you go one step further, using operator.add and those variants adds on approximately another 0.7 sec (over 1m trials, starting from 2 sec and 0.2 sec respectively). That's verging on the 5x slowness. So if each of these issues happens twice, you're basically at the 10x slower point.
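
To make that substitution concrete, here is a sketch of the wrapped-operator variant described above (my own function name, Python 2 syntax to match the other tests):

from datetime import datetime
from operator import add, mul, mod

def testOperatorModule(length=100000):
    # same loop as testStandard, but with +, * and % replaced by operator-module calls
    s = 1.0
    addend = 8.0
    modulo = 2399232.0
    startTime = datetime.now()
    for i in xrange(length):
        s = mod(mul(add(s, addend), s), modulo)
    return datetime.now() - startTime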

So let's assume we're the python interpreter for a moment. Case 1, we do an operation on native types, let's say a+b. Under the hood, we can check the types of a and b and dispatch our addition to python's optimized code. Case 2, we have an operation on two other types (also a+b). Under the hood, we check if they're native types (they're not). We move on to the 'else' case. The else case sends us to something like a.__add__(b). a.__add__ can then dispatch to numpy's optimized code. So at this point we have had the additional overhead of an extra branch, one '.' attribute lookup, and a function call. And we've only gotten into the addition operation. We then have to use the result to create a new float64 (or alter an existing float64). Meanwhile, the python native code probably cheats by treating its types specially to avoid this sort of overhead.

Based on the above examination of the costliness of python function calls and scoping overhead, it would be pretty easy for numpy to incur a 9x penalty just getting to and from its C math functions. I can entirely imagine this process taking many times longer than a simple math operation call. For each operation, the numpy library has to wade through layers of python to get to its C implementation.

So in my opinion, the reason for this is probably captured in this effect:

from datetime import datetime

length = 10000000

class A():
    X = 10

startTime = datetime.now()
for i in xrange(length):
    x = A.X
print "Long Way", datetime.now() - startTime

startTime = datetime.now()
y = A.X
for i in xrange(length):
    x = y
print "Short Way", datetime.now() - startTime

This simple case shows a difference of 0.2 sec vs 0.14 sec (short way faster, obviously). I think what you're seeing is mainly just a bunch of those issues adding up.

To avoid this, I can think of a couple of possible solutions that mainly echo what has been said. The first solution is to try to keep your evaluations inside NumPy as much as possible, as Selinap said. A large amount of the losses are probably due to the interfacing. I would look into ways to dispatch your job into numpy or some other numeric library optimized in C (gmpy has been mentioned). The goal should be to push as much into C at the same time as possible, then get the result(s) back. You want to put in big jobs, not lots of small jobs.

The second solution, of course, would be to do more of your intermediate and small operations in pure python if you can. Clearly, using the native objects is going to be faster. They're going to be the first options on all the branch statements and will always have the shortest path to C code. Unless you have a specific need for fixed precision calculation or other issues with the default operators, I don't see why one wouldn't use the straight python functions for many things.

answered by Namey
  • This is very helpful. I use numpy because I wanted its random functions; they are much much faster than Python's functions (especially when I ask for an array of many random numbers). But unfortunately they can't be told to return built-in `float`. So I found it's cheaper to convert `np.float64` into built-in `float` before doing the arithmetics... – max May 23 '11 at 01:16
-1

Really strange... I can confirm the results on Ubuntu 11.04 32-bit, Python 2.7.1, numpy 1.5.1 (official packages):

import numpy as np

def testfloat():
    s = 0
    for i in range(10000000):
        s += float(1)

def testfloat32():
    s = 0
    for i in range(10000000):
        s += np.float32(1)

def testfloat64():
    s = 0
    for i in range(10000000):
        s += np.float64(1)

%time testfloat()
CPU times: user 4.66 s, sys: 0.06 s, total: 4.73 s
Wall time: 4.74 s

%time testfloat64()
CPU times: user 11.43 s, sys: 0.07 s, total: 11.50 s
Wall time: 11.57 s


%time testfloat32()
CPU times: user 47.99 s, sys: 0.09 s, total: 48.08 s
Wall time: 48.23 s

I don't see why float32 should be more than four times slower than float64.

answered by Andrea Zonca
  • You appear to be getting the same results that I originally did. But with my updated code, `float64` and `float32` are nearly the same performance-wise. I'd really like to focus on `float64` vs `float`. After all, who cares to use float32 if it's slow. – max May 11 '11 at 02:52