3

Why does this happen in Python:

>>> 483.6 * 3
1450.8000000000002

I know this happens in other languages, and I'm not asking how to fix this. I know you can do:

>>> from decimal import Decimal
>>> Decimal('483.6') * 3
Decimal('1450.8')

So what exactly causes this to happen? Why do decimals get slightly inaccurate when doing math like this?

Is there any specific reason the computer doesn't get this right?

wim
jackcogdill
  • [Obligatory link](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html). I'll leave it to someone else to track down one of the many, many questions this is a duplicate of. – Gareth Latty Jan 16 '13 at 22:17
  • See this explanation: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm If you're doing things where you need accuracy (like in banking), you generally use two ints or two longs to represent the number. – Erik Nedwidek Jan 16 '13 at 22:19
  • @Lattyware I sometimes wonder... How many of those that propagate the link have actually read through it? Further, how many of those have understood it? – phant0m Jan 16 '13 at 22:29
  • Decimals don't. Floating-point numbers do. By definition they must have limited accuracy. – John La Rooy Jan 16 '13 at 22:30
  • @phant0m 19.38 and 17.23 respectively. – Daniel Fischer Jan 17 '13 at 16:38
  • @Daniel 17.23% out of 19.38% or of all people? The former, surely? – phant0m Jan 17 '13 at 17:43
  • @phant0m I never mentioned percent, those are absolute numbers. – Daniel Fischer Jan 17 '13 at 18:26
  • Your assumption is wrong - you aren't doing decimal multiplication. Most computers do binary math. Compilers and assemblers take decimal floating point literals and estimate the nearest binary equivalent. You choose this path if you want faster machine-based calculations instead of a slower mechanism like `decimal`. You'd factor this into your +/- error estimates in your calculations. – tdelaney Mar 16 '23 at 06:11
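To make tdelaney's point concrete, here is a small illustrative sketch (standard library only, assuming the usual IEEE 754 doubles): the literal `483.6` is turned into the nearest representable binary value before any multiplication happens.

from decimal import Decimal

# The literal 483.6 is converted to the nearest IEEE 754 double before any
# arithmetic happens; Decimal(float) exposes the exact value actually stored,
# which is close to, but not exactly, 483.6.
print(Decimal(483.6))

# Because both sides are rounded binary approximations, exact comparison of
# the product against the literal 1450.8 is expected to fail here.
print(483.6 * 3 == 1450.8)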

3 Answers

5

See the Python documentation on floating point numbers. Essentially, when you create a floating point number you are using base 2 arithmetic. Just as 1/3 is 0.333... repeating forever in base 10, most decimal fractions cannot be expressed exactly in base 2. Hence your result.

The difference between the Python interpreter and some other languages is that others may not display these extra digits. It's not a bug in Python, just how the hardware computes using floating-point arithmetic.
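
As an illustration (a minimal sketch, assuming CPython with IEEE 754 doubles), you can ask for more digits than the default display shows and see the "hidden" part of the value:

# The float closest to 0.1 is slightly larger than 0.1; formatting with many
# digits reveals what the default repr rounds away.
print(format(0.1, '.25f'))

# The same goes for the question's example: the product carries extra digits
# beyond the 1450.8000000000002 that repr shows.
print(format(483.6 * 3, '.20f'))
print(repr(483.6 * 3))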

Kyle
  • This suggests it wouldn't happen in base 10. That's simply not true. – phant0m Jan 16 '13 at 22:31
  • @phant0m It does? He gives an example of it (1/3), so I don't really see the suggestion myself. – Gareth Latty Jan 16 '13 at 22:33
  • @Lattyware I was judging from the second sentence: `Essentially when you create a floating point number you are using base 2 arithmetic.` It sounds like it's a peculiarity from base 2. That the base is 2 isn't really relevant to explain why there *are* certain numbers that can't be represented. – phant0m Jan 16 '13 at 22:36
4

Computers can't represent every floating point number perfectly.

Basically, floating point numbers are represented in scientific notation, but in base 2. Now, try representing 1/3 (base 10) with scientific notation. You might try 3 × 10⁻¹ or, better yet, 33333333 × 10⁻⁸. You could keep adding 3's, but you'd never have an exact value of 1/3. Now, try representing 1/10 in binary scientific notation, and you'll find that the same thing happens.
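
You can check this directly (a small sketch that does the base-2 long division by hand): the binary expansion of 1/10 repeats forever, just like 1/3 does in base 10.

# Long division of 1 by 10 in base 2: the remainders start repeating, so the
# binary expansion never terminates, just as 1/3 never terminates in base 10.
numerator, bits = 1, []
for _ in range(20):
    numerator *= 2
    bits.append(str(numerator // 10))
    numerator %= 10
print("0." + "".join(bits))  # 0.00011001100110011001... and so on forever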

Here is a good link about floating point in Python.

As you delve into lower-level topics, you'll see how floating point is represented in a computer. In C, for example, floating point numbers are represented as explained in this Stack Overflow question. You don't need to read this to understand why decimals can't be represented exactly, but it might give you a better idea of what's going on.
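
For a peek at the bits themselves, here is a minimal sketch assuming the common IEEE 754 64-bit (double) format; the field widths would differ for other float formats:

import struct

# Reinterpret the 8 bytes of a double as a 64-bit integer, then split it into
# the IEEE 754 fields: 1 sign bit, 11 exponent bits, 52 mantissa bits.
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]
sign = bits >> 63
exponent = (bits >> 52) & 0x7FF
mantissa = bits & ((1 << 52) - 1)
print(sign, exponent, mantissa)
# The mantissa is a rounded binary approximation of the fractional part, which
# is why the value read back is not exactly one tenth.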

Joshua Kravitz
2

Computers store numbers as bits (in binary). Unfortunately, no matter how many bits you use, some decimal fractions, such as 0.3, cannot be represented exactly in binary. The notion is akin to trying to write 1/3 exactly in decimal notation.
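
A short sketch of the practical consequence (assuming the usual IEEE 754 doubles): exact equality on floats can fail even for arithmetic that looks trivial, so tolerant comparison is the usual workaround.

import math

# 0.1, 0.2 and 0.3 are each stored as the nearest binary fraction, and the
# rounding errors do not cancel, so exact comparison fails.
print(0.1 + 0.2 == 0.3)              # False
print(0.1 + 0.2)                     # 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead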

Volatility