
I'm expecting 0.1 as a result but:

In [1]: 0.3 / 3 
Out[1]: 0.09999999999999999

I tried with Decimal, but nothing changed:

In [2]: from decimal import Decimal
In [3]: Decimal(0.3) / Decimal(3)
Out[3]: Decimal('0.09999999999999999629925658458')

What do I have to do to get the correct result?

Mirat Can Bayrak
  • You can try rounding off to some number of digits, depending on the accuracy you need. – Anand S Kumar Jun 24 '15 at 08:46
  • Because of floating point precision. And for your second point, you can discuss about it in Meta :) – Thomas Ayoub Jun 24 '15 at 08:46
  • 5
    You gave a **floating point value** to one of the `Decimal()` objects, starting it out with an imprecise representation. Use strings instead: `Decimal('0.3') / Decimal('3')`. – Martijn Pieters Jun 24 '15 at 08:47
  • Martijn Pieters is right; you should always use strings with Decimal. I still wonder how the author of Decimal failed to consider this basic test case. – nehem Jun 24 '15 at 08:53
  • @itsneo: sorry? The documentation is pretty clear on the pitfalls of passing in floating point values to the constructor. – Martijn Pieters Jun 24 '15 at 08:55
  • I don't find that a valid excuse from the user's point of view. The documentation claims that Decimal "is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school", but it didn't deliver what it promised, which is what originally triggered this Stack Overflow question and brought us to this discussion! – nehem Jun 24 '15 at 08:56
  • @itsneo: *If value is a `float`, the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision. For example, `Decimal(float('1.1'))` converts to `Decimal('1.100000000000000088817841970012523233890533447265625')`.* – Martijn Pieters Jun 24 '15 at 09:03
  • @itsneo: as of Python 3.3, you can also switch the library to throw an exception if you are trying to pass floats to the constructor. – Martijn Pieters Jun 24 '15 at 09:04
  • Agreed. When was the last time you wanted to instantiate a Decimal object with exact machine precision? In real life a user wants to construct a Decimal object based on what they see on paper/screen, which would have been a better design choice. Anyway, it's still debatable. – nehem Jun 24 '15 at 09:14
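A minimal sketch of what the comments above suggest: pass strings to `Decimal()`, and (since Python 3.3) enable the `FloatOperation` trap so accidental floats raise instead of slipping through.

In [4]: from decimal import Decimal, FloatOperation, getcontext
In [5]: Decimal('0.3') / Decimal('3')   # strings preserve the exact decimal value
Out[5]: Decimal('0.1')
In [6]: Decimal(0.3)   # the float 0.3 is already imprecise before Decimal sees it
Out[6]: Decimal('0.299999999999999988897769753748434595763683319091796875')
In [7]: getcontext().traps[FloatOperation] = True
In [8]: Decimal(0.3)   # now constructing from a float raises
...
FloatOperation: [<class 'decimal.FloatOperation'>]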

1 Answer


Just think about what the result of 0.3/3 is, if done non-numerically. It's 0.1, right? If you perform any mathematical operation numerically (read: in binary floating point on a computer), you introduce rounding errors, which are unavoidable: they come from representing decimal fractions with a finite number of binary digits. So the result Python gives you is not really wrong; it is just subject to those rounding errors. What you get is 0.1, off by roughly 1e-16 in relative terms, which is the order of the machine epsilon for double-precision floats. This is essentially the best a computer can do in binary floating point.
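For completeness, a couple of ways to inspect and work around this at the float level, using only the standard `sys` and `math` modules:

In [4]: import math, sys
In [5]: sys.float_info.epsilon          # machine epsilon for double-precision floats
Out[5]: 2.220446049250313e-16
In [6]: math.isclose(0.3 / 3, 0.1)      # compare with a sensible tolerance
Out[6]: True
In [7]: round(0.3 / 3, 10)              # round away the representation noise
Out[7]: 0.1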

jhoepken
  • `0.3/3` numerically is `0.1`, not `1`. And the terms *numerically* and *non-numerically* make *no sense* here. They are numbers. The computer treats them as binary data, but data with meaning. Floating point numbers are approximations using binary fractions. – Martijn Pieters Jun 24 '15 at 08:52
  • That's right. Changed it. Thanks! – jhoepken Jun 24 '15 at 08:53
  • You can easily get a computer to represent the numbers differently, which is what `Decimal()` does. It'll just be *slower* because the floating point representation can be handled in hardware (the CPU floating point unit). – Martijn Pieters Jun 24 '15 at 08:54
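A rough way to see that speed difference yourself, as a sketch with `timeit` (variables are used in the timed statement so the interpreter can't constant-fold the division; absolute timings vary by machine, but Decimal division is typically much slower than hardware float division):

In [9]: import timeit
In [10]: timeit.timeit('a / b', setup='a, b = 0.3, 3')
In [11]: timeit.timeit('a / b',
   ...:               setup="from decimal import Decimal; a, b = Decimal('0.3'), Decimal('3')")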