7

When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.

Why is this so?
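You can reproduce and inspect this in any JavaScript console. A quick check (the digits in the comments are what a typical IEEE 754 double-precision engine prints, so treat them as illustrative):

// The closest 64-bit double to 1.265 is slightly *below* 1.265.
console.log((1.265).toPrecision(20)); // 1.2649999999999999023

// Multiplying by a power of ten scales that tiny error up along with the value,
// so the product ends up just short of the exact mathematical result.
console.log(1.265 * 10000);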

Manishearth
vladaruz

10 Answers

25

Floating point numbers can't handle decimals correctly in all cases. Check out

sunny256
12

You should be aware that all information in computers is stored in binary, and that the expansion of a given fraction varies with the base.

For instance, 1/3 in base 10 is 0.333333... (repeating), while 1/3 in base 3 is exactly 0.1, and in base 2 it is 0.010101... (repeating).

In case you don't have a complete understanding of how different bases work, here's an example:

The base 4 number 301.12 would be equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 0 + 1 + 0.25 + 0.125 = 49.375 in base 10.
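If you want to experiment with this, here is a small sketch that evaluates a digit string with a fractional point in an arbitrary base (positionalToDecimal is just an illustrative name, not a built-in):

// Convert a positional number written as a string (e.g. "301.12" in base 4)
// into an ordinary JavaScript number.
function positionalToDecimal(digits, base) {
  var parts = digits.split(".");
  var intPart = parts[0] || "";
  var fracPart = parts[1] || "";
  var value = 0;
  // Integer digits: each step multiplies the running value by the base.
  for (var i = 0; i < intPart.length; i++) {
    value = value * base + parseInt(intPart.charAt(i), base);
  }
  // Fractional digits: digit k after the point is worth base^-k.
  for (var k = 0; k < fracPart.length; k++) {
    value += parseInt(fracPart.charAt(k), base) * Math.pow(base, -(k + 1));
  }
  return value;
}

console.log(positionalToDecimal("301.12", 4)); // 49.375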

Now the problems with accuracy in floating point come from a limited number of bits in the significand. A floating point number has three parts: a sign bit, an exponent, and a mantissa (significand). JavaScript numbers are 64-bit IEEE 754 doubles, but to keep the calculation simpler we'll use the 32-bit single format, so 1.265 in floating point would be:

A sign bit of 0 (0 for positive, 1 for negative); an exponent of 0, which with the 127 offset (stored exponent = actual exponent + offset) becomes 127, or 01111111 in unsigned binary; and finally the significand of 1.265. The IEEE 754 standard uses a hidden-1 representation, so the binary representation of 1.265 is 1.01000011110101110000101, and dropping the leading 1 gives the stored mantissa 01000011110101110000101.

So our final IEEE 754 single (32-bit) representation of 1.265 is:

Sign Bit (+)      Exponent (0)       Mantissa (1.265)
0                 01111111           01000011110101110000101

Now 1000 would be:

Sign Bit (+)      Exponent (9)       Mantissa (1000)
0                 10001000           11110100000000000000000
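You don't have to work these bit patterns out by hand. Modern JavaScript can round a number to single precision by storing it in a Float32Array and then reinterpreting the same four bytes as an integer to read the fields back out; here is a small sketch (float32Fields is just an illustrative helper name):

// Round to 32-bit single precision, then read the raw bit fields back.
function float32Fields(x) {
  var buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = x;        // stores x rounded to single precision
  var bits = new Uint32Array(buf)[0];  // the same 4 bytes viewed as an unsigned integer
  return {
    sign: bits >>> 31,
    exponent: ((bits >>> 23) & 0xFF).toString(2).padStart(8, "0"),
    mantissa: (bits & 0x7FFFFF).toString(2).padStart(23, "0"),
  };
}

console.log(float32Fields(1.265)); // { sign: 0, exponent: "01111111", mantissa: "01000011110101110000101" }
console.log(float32Fields(1000));  // { sign: 0, exponent: "10001000", mantissa: "11110100000000000000000" }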

Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, adding the two exponents (and subtracting the offset once so it isn't counted twice), and then normalizing the result.

First 1.01000011110101110000101*1.11110100000000000000000=10.0111100001111111111111111000100000000000000000 (this multiplication is a pain)

Now obviously we have an exponent of 9 plus an exponent of 0, so we keep 10001000 as our exponent, and our sign bit stays positive, so all that is left is normalization.

We need our mantissa to be of the form 1.xxxxx, so we have to shift it right once, which also means we have to increment our exponent, bringing it up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in our final representation). So the final answer we are left with is:

Sign Bit (+)      Exponent (10)      Mantissa
0                 10001001           00111100001111111111111

Finally, if we convert this answer back to decimal, we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) = 1264.9998779296875.
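As a quick sanity check of that arithmetic, you can let JavaScript evaluate the same sum of powers of two from the truncated mantissa above:

// Rebuild the value from the sign (+), exponent (10) and the truncated 23-bit mantissa.
var mantissaBits = "00111100001111111111111";
var significand = 1; // the hidden leading 1
for (var i = 0; i < mantissaBits.length; i++) {
  if (mantissaBits.charAt(i) === "1") {
    significand += Math.pow(2, -(i + 1));
  }
}
console.log(Math.pow(2, 10) * significand); // 1264.9998779296875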

While I did simplify the problem by multiplying 1.265 by 1000 instead of 10000, and by using single precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation only has so many bits in the mantissa with which to represent any given number.

Hope this helps.

JSchlather
5

It's a result of floating point representation error. Not all numbers that have a finite decimal representation have a finite binary floating point representation.
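For example, 0.5 has a finite binary expansion (0.1 in binary) and is stored exactly, while decimal 0.1 repeats forever in binary and has to be rounded. Asking for extra digits makes the difference visible (the digits in the comments are what typical IEEE 754 doubles give):

console.log((0.5).toPrecision(20)); // 0.50000000000000000000 -- exact
console.log((0.1).toPrecision(20)); // 0.10000000000000000555 -- rounded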

Mehrdad Afshari
4

Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!

4

Purely due to the inaccuracies of floating point representation.

You could try using Math.round:

var x = Math.round(1.265 * 10000);
Garry Shutler
4

On the other hand, 126500 IS equal to 126499.99999999.... :)

Just like 1 is equal to 0.99999999....

Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....

Philippe Leybaert
  • why was this anonymously downvoted? It's not a direct answer to the question, but it is a mathematical truth, and it partly explains why computers calculate this way. – Philippe Leybaert Jun 08 '09 at 08:33
  • 3
    There is no ... in the question, this is not a question about recurring decimal representations being non-unique, but about the accuracy of floating point decimal representations. – Sam Meldrum Jun 08 '09 at 08:34
  • Although it wasn't me who downvoted, but I suspect the above has something to do with it. – Sam Meldrum Jun 08 '09 at 08:36
  • Also, the given "proof" is circular logic and thus not proof at all. – Michael Borgwardt Jun 08 '09 at 08:42
  • 1
    Oh really? Before making such a statement, I would at least do some research on the subject. This proof is 100% mathematically correct – Philippe Leybaert Jun 08 '09 at 08:45
  • 2
    Your mathematical statement is correct, activa, but it does not answer the original question. – Barry Brown Jun 08 '09 at 08:53
  • I am aware it doesn't answer the question (as I stated in my first comment), but it relates in a way to the way math is handled by software. However, I can understand that it doesn't help the OP. – Philippe Leybaert Jun 08 '09 at 09:00
  • 2
    I fully agree with this answer. And it IS an answer, because he was asking 'WHY'. This perfectly explains - why. I was going to post a similar answer, but found that you've already answered it correctly. Thanks! – Thevs Jun 08 '09 at 09:35
  • EDIT: And he wasn't asking how to avoid this (as I understand it) – Thevs Jun 08 '09 at 09:39
1

These small errors are usually caused by the limited precision of the floating-point numbers the language uses. See this wikipedia page for more information about the accuracy problems of floating point.

elmuerte
1

Here's a way to overcome your problem, although arguably not very pretty:

var correct = parseFloat((1.265*10000).toFixed(3));

// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Convert the string back into a number
                                   // (trailing zeros are dropped)
PatrikAkerstrand
1

If you need a solution, stop using floats or doubles and start using BigDecimal. Check out the BigDecimal implementation at stz-ida.de/html/oss/js_bigdecimal.html.en.
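A rough usage sketch, assuming the library follows a Java-style BigDecimal API (string constructor, multiply(), toString()); check the linked page for the names it actually exposes:

// Assumes a Java-like BigDecimal API; the exact constructor and method names
// depend on the library linked above.
var a = new BigDecimal("1.265");
var b = new BigDecimal("10000");
console.log(a.multiply(b).toString()); // an exact "12650.000"-style result, no rounding error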

Eugene Ryzhikov
0

Even simple arithmetic suffers from this on the MS JScript engine: WScript.Echo(1083.6 - 1023.6) gives 59.9999999

Emmanuel Caradec
  • My favourite 'short example' for this business is 0.1+0.2-0.3, which generally doesn't come out as zero. .NET gets it wrong; Google gets it right; WolframAlpha gets it half right :) – AakashM Jun 14 '09 at 16:18
  • Yeah, that's a great example. A partial solution to that is an engine that keeps numerators and denominators separate for as long as possible. So you have {1,10} + {2,10} - {3,10} = {0,10}. – Nosredna Jun 14 '09 at 16:45
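For reference, the example from the comments above is easy to try in a browser console; with standard IEEE 754 doubles it typically prints a tiny non-zero remainder rather than 0:

console.log(0.1 + 0.2 - 0.3); // 5.551115123125783e-17, not 0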