
I ran the following operations in the Python interpreter and got these results:

>>> 0.10*3
0.30000000000000004
>>> .10+.10
0.2
>>> 0.10 + 0.10 + 0.10
0.30000000000000004
>>> .2+0.1
0.30000000000000004
>>> _+.1
0.4

My question is: in 0.30000000000000004, where does this 000000000004 come from?

This happens not only in Python but also in JavaScript, and I assume in other languages as well.

    Search SO for the topic `floating-point arithmetic`. What you ask is asked about 100 times a week, and answered almost as often. – High Performance Mark Dec 13 '13 at 14:15
  • check this lil dude. http://stackoverflow.com/questions/18995148/floating-point-arithmetic-error this question is happening a lot in SO. – johnny Dec 13 '13 at 14:21

1 Answer


Why don’t my numbers, like 0.1 + 0.2, add up to a nice round 0.3? Why do I instead get a weird result like 0.30000000000000004?

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
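
You can see that rounding directly: the standard decimal module will display, in full, the exact value Python actually stores for the literal 0.1:

>>> from decimal import Decimal
>>> Decimal(0.1)   # the exact binary value stored for the literal 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')

So before you ever add anything, "0.1" is already slightly more than one tenth.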

Decimal numbers cannot accurately represent a number like 1/3 either, so you have to round to something like 0.33. And you don’t expect 0.33 + 0.33 + 0.33 to add up to 1, do you?
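
As a quick check of that analogy, the decimal module (which computes exactly in base 10) shows the rounded value really does fall short:

>>> from decimal import Decimal
>>> Decimal('0.33') * 3   # rounding 1/3 to 0.33 loses information, in base 10 too
Decimal('0.99')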

Computers use binary numbers because they’re faster at dealing with those, and because for most calculations, a tiny error in the 17th decimal place doesn’t matter at all since the numbers you work with aren’t round (or that precise) anyway.
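
If your code does need 0.1 + 0.2 to "equal" 0.3, one common approach (not the only one) is to compare with a tolerance instead of ==, or to use decimal arithmetic when base-10 exactness matters. Note that math.isclose requires Python 3.5+:

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)   # compare with a tolerance instead of ==
True
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')   # exact decimal arithmetic
Decimal('0.3')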

This should help: http://floating-point-gui.de/basic/