Using numpy.float32:
import numpy

t = numpy.float32(.3)
x = numpy.float32(1)
r = numpy.float32(-.3)
_t = t + x + r
_t == 1 # -> False
Using regular Python float:
t = .3
x = 1
r = -.3
_t = t + x + r
_t == 1 # -> True
Why?
Floating point values are inherently non-exact on computers. Python's default float is what's called a double-precision floating point number on most machines, according to https://docs.python.org/2/tutorial/floatingpoint.html. numpy.float32 is a single-precision float; its double-precision counterpart is numpy.float64. That precision difference explains what you are seeing: the single-precision values round differently, so the errors don't cancel out.
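To see this concretely, you can print the exact values each type actually stores. A quick sketch (the printed digits assume a standard IEEE 754 platform):

import numpy

# float32 rounds 0.3 to a different binary value than the double 0.3.
# Converting float32 -> Python float is exact, so the digits below show
# the stored single-precision value.
print(format(float(numpy.float32(.3)), '.20f'))  # 0.30000001192092895508
print(format(.3, '.20f'))                        # 0.29999999999999998890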
In general, floating point numbers shouldn't be compared directly using ==. You can use numpy.isclose to deal with the small errors caused by non-exact floating point representations.
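For example, applied to the failing float32 comparison above (using numpy.isclose's default tolerances):

import numpy

t = numpy.float32(.3)
x = numpy.float32(1)
r = numpy.float32(-.3)
_t = t + x + r

print(_t == 1)               # False: exact comparison fails
print(numpy.isclose(_t, 1))  # True: equal within the default rtol/atol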
Python's float is the C double type, per the documentation: "Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info."
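As a rough check of the two precisions (a sketch; sys.float_info and numpy.finfo report the format parameters on your machine):

import sys
import numpy

print(sys.float_info.mant_dig)           # 53: significand bits of a C double
print(numpy.finfo(numpy.float32).nmant)  # 23: stored mantissa bits of float32
print(numpy.finfo(numpy.float64).nmant)  # 52: stored mantissa bits of float64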
In the first snippet you are therefore doing the arithmetic in 32-bit precision rather than the 64-bit precision of a Python float, and the rounding errors don't cancel. Doing the same computation in 64-bit precision will work:
import numpy

t = numpy.float64(.3)
x = numpy.float64(1)
r = numpy.float64(-.3)
_t = t + x + r
_t == 1 # -> True
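numpy.float64 uses the same 64-bit representation as Python's float, so the rounding errors cancel here exactly as they do in the plain-Python version.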