
If I run this code:

var a = new Float32Array(3);
a[0] = 1;
a[1] = 1.1;
a[2] = 1.00001;

I get this result for a:

[1, 1.100000023841858, 1.0000100135803223]

Why does the Float32Array screw with my numbers like that? Also, how do I know which numbers can be accurately represented as a 32 bit float?

benekastah
  • possible duplicate of [understanding floating point variables](http://stackoverflow.com/questions/2480699/understanding-floating-point-variables) – mu is too short Dec 17 '11 at 03:57
  • @muistooshort I actually knew the answer to that question before, though I see that the two are very similar. My confusion came from the fact that it was hard for me to understand why a 32-bit float couldn't store the number properly, while javascript in general does fine. (I forgot to mention, that a `Float64Array` appears to do fine with these numbers, at least in Chrome). – benekastah Dec 17 '11 at 06:10

1 Answer


A 32-bit floating point number only has about 7 significant decimal digits. It's normal for a floating point number to be stored as the closest possible approximation of the specified number.

A 32-bit floating point number simply can't store the value 1.1 exactly; the closest value that it can store is 1.100000023841858.

To seven significant digits the number is still accurate, i.e. 1.100000.
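
As a quick sketch of this (assuming the same typed-array environment as in the question), rounding the stored value back to seven significant digits with `toPrecision` recovers the expected 1.100000:

var a = new Float32Array(1);
a[0] = 1.1;
// The nearest 32-bit value to 1.1:
console.log(a[0]); // 1.100000023841858
// Rounded to 7 significant digits it is still accurate:
console.log(a[0].toPrecision(7)); // "1.100000"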

Normally when a floating point number is displayed, it's rounded according to the number of significant digits that it can accurately store. Whatever you are using to display the numbers obviously does not do this rounding, which is why you are seeing the limits of the numbers' precision.
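
As for the second part of the question – knowing which numbers can be represented accurately as a 32-bit float – one approach (a sketch, not the only way; the helper name `fitsInFloat32` is just for illustration) is to store the value and compare it with what comes back:

// Returns true if x can be represented exactly as a 32-bit float.
// (Illustrative helper, not a standard function.)
function fitsInFloat32(x) {
  var a = new Float32Array(1);
  a[0] = x;
  return a[0] === x;
}

console.log(fitsInFloat32(1));       // true
console.log(fitsInFloat32(1.5));     // true  (1.5 is a sum of powers of two)
console.log(fitsInFloat32(1.1));     // false
console.log(fitsInFloat32(1.00001)); // false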

Assuming that those 32-bit floats use the IEEE 754 standard, they can represent normalized values with magnitudes ranging from roughly 1.18 * 10^-38 to 3.4 * 10^38.
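
As a small illustration of that range limit (again assuming the typed-array environment from the question), a value beyond it overflows to Infinity when stored in a Float32Array:

var b = new Float32Array(2);
b[0] = 3.4e38; // still inside the 32-bit range, stored as a nearby value
b[1] = 3.5e38; // outside the range
console.log(b[1]); // Infinity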

Guffa
  • Thanks for explaining that 32-bit floating points have seven significant digits. Coming from a dynamic background, that's simple enough for me to quickly grasp :) – benekastah Dec 17 '11 at 06:16