I was running a small python test I wrote against some data and got some weird results. Boiled it down to this:
priceDiff = 219.92 - 219.52
if (priceDiff >= .40):
    print "YES"
else:
    print "NO"
The result is "NO"
Why is 0.40 not >= .40?
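Printing the value with more digits than the default display shows what is actually stored (a sketch; the exact trailing digits depend on IEEE-754 double rounding):

```python
priceDiff = 219.92 - 219.52

# repr() shows the shortest string that round-trips; format() can show more digits.
print(repr(priceDiff))
print("{:.20f}".format(priceDiff))
print(priceDiff >= 0.40)  # prints False: the stored value is slightly below 0.40
```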
Python offers a controlled environment for working with floats in the form of the Decimal class. It gives you explicit control over precision and offers several rounding strategies (https://docs.python.org/3.5/library/decimal.html#rounding-modes).
from decimal import Decimal, ROUND_HALF_EVEN

a = Decimal(219.92).quantize(Decimal('.01'), rounding=ROUND_HALF_EVEN)
b = Decimal(219.52).quantize(Decimal('.01'), rounding=ROUND_HALF_EVEN)
priceDiff = a - b
# Renamed from "cmp", which shadows a builtin in Python 2.
threshold = Decimal(0.40).quantize(Decimal('.01'), rounding=ROUND_HALF_EVEN)

if priceDiff.compare(threshold) >= 0:
    print("YES")
else:
    print("NO")

print(a)
print(b)
print(priceDiff)
print(threshold)
IMHO this is better in terms of readability, and it makes calculations that are precision-sensitive for your application explicit. Hope this helps.
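As a side note, passing the float literal 219.92 to Decimal first converts it to a binary float, so the quantize step is what repairs the representation. Constructing Decimal from strings sidesteps the binary float entirely; a minimal sketch:

```python
from decimal import Decimal

# String construction preserves the decimal digits exactly,
# so no quantize/rounding step is needed.
a = Decimal('219.92')
b = Decimal('219.52')
priceDiff = a - b

print(priceDiff)                      # exact decimal arithmetic: 0.40
print(priceDiff >= Decimal('0.40'))   # True
```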
From the documentation:

Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect:

>>> 0.1 + 0.2
0.30000000000000004
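If you don't need Decimal, the usual fix for this kind of comparison is to allow a small tolerance instead of an exact >=; a sketch using math.isclose (Python 3.5+):

```python
import math

priceDiff = 219.92 - 219.52

# Treat values within math.isclose's default relative tolerance (1e-09)
# as equal before applying the strict comparison.
if priceDiff >= 0.40 or math.isclose(priceDiff, 0.40):
    print("YES")
else:
    print("NO")
```

This prints "YES": the computed difference is within a few ulps of 0.40, well inside the default tolerance.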