
I have a 64-element JavaScript array that I'm using as a bitmask. Unfortunately, I've run into a problem when converting its joined string to a number and back. This has worked for some other arrays, so what is going on here?

var a = [1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 0, 0, 1, 1, 1, 1,
         1, 1, 0, 0, 1, 1, 1, 1,
         1, 1, 0, 0, 0, 0, 1, 1,
         1, 1, 0, 0, 0, 0, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1];

var str1 = a.join('');
  //-> '1111111111111111110011111100111111000011110000111111111111111111'

var str2 = parseInt(str1, 2).toString(2);
  //-> '1111111111111111110011111100111111000011110001000000000000000000'

str1 === str2  //-> false

I would expect str2 to be the same as str1, which is not the case.

  • You're losing precision. See http://stackoverflow.com/questions/307179/what-is-javascripts-max-int-whats-the-highest-integer-value-a-number-can-go-t. – ziesemer Jan 09 '12 at 00:36
  • If you use a more flexible language (like Python) you can see that those two binary strings are only 1 value apart: `18446691089982423039` and `18446691089982423040` respectively. (As @zie was saying about precision) – Brigand Jan 09 '12 at 00:41
  • May I just say that using parseInt on a string representation of a bitmask sounds ... rather stupid? If you must have it as a string representation, how about just doing something like `function matchMask(str, pos) { return str.charAt(pos) != '0'; }`. – Alxandr Jan 09 '12 at 00:41
  • If you don't care about politeness, you can say whatever you like. But calling `matchMask` for each element sounds a tad inefficient. – Marcus Booster Jan 09 '12 at 00:51
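The `matchMask` idea from Alxandr's comment above, fleshed out as a runnable sketch (it reuses `str1` from the question and never converts the whole string to a Number, so no precision is lost):

// Test one bit of the mask directly on the string representation.
function matchMask(str, pos) {
  return str.charAt(pos) !== '0';
}

matchMask(str1, 17);
  //-> true  (the 18th character of str1 is '1')
matchMask(str1, 18);
  //-> false (the 19th character is '0')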

1 Answer


In JavaScript, the Number type is a 64-bit IEEE-754 double-precision value (see Section 8.5 of the spec). You've specified 64 bits there, which is more than a double can represent exactly as an integer, because as a floating-point type it spends some of those 64 bits on the sign and exponent rather than the significand. JavaScript doesn't have an integer type (much less a 64-bit one), which is what a perfect-fidelity conversion would require.
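To make the loss concrete, here's a quick check (a sketch; `Number.MAX_SAFE_INTEGER` assumes an ES2015+ environment, and `str1` is the string from the question):

Math.pow(2, 53) === Math.pow(2, 53) + 1;
  //-> true: above 2^53 a double can no longer tell n from n + 1 apart

parseInt(str1, 2) > Number.MAX_SAFE_INTEGER;
  //-> true: the parsed value is roughly 1.8e19, far past 2^53 - 1 ≈ 9.0e15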

I'm not all that up on floating-point bit representations, but a 64-bit double devotes 53 bits to the significand, so it can only represent integers exactly up to 2^53; see the link in the comments for details.
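If you need to round-trip all 64 bits through numbers anyway, one workaround (a sketch with made-up helper names) is to split the string into two 32-bit halves, each of which fits exactly in a Number since 32 < 53:

// Pack the 64-character bit string into two exact 32-bit Numbers.
function packBits(str) {
  return [parseInt(str.slice(0, 32), 2), parseInt(str.slice(32), 2)];
}

// Rebuild the 64-character string, restoring any leading zeros in each half.
function unpackBits(halves) {
  function pad32(n) {
    var s = n.toString(2);
    while (s.length < 32) s = '0' + s;
    return s;
  }
  return pad32(halves[0]) + pad32(halves[1]);
}

unpackBits(packBits(str1)) === str1;
  //-> true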

T.J. Crowder
  • According to [this](http://stackoverflow.com/a/307200/1074592), it's only accurate to 53 bits. – Brigand Jan 09 '12 at 00:37
  • @FakeRainBrigand: Thanks, I've added a link to Section 8.5. (I'd already added a mention of the 53-bit thing, but I couldn't remember a source for it; I read it somewhere other than the spec, thanks.) – T.J. Crowder Jan 09 '12 at 00:40