0

I'm trying to convert an integer to Int32 in JavaScript (similar to how you'd convert a string to an integer using parseInt()), but I cannot find the appropriate method. How can I do this?

flowermia
  • 389
  • 3
  • 17
  • 1
    Out of curiosity, in what context do you require this? Is it for some IPC protocol, or just for display purposes? Generally speaking, in JS an integer is just a number, like a float is just a number. – Keith May 13 '21 at 06:49
  • Related: https://stackoverflow.com/questions/9049677/how-does-x0-floor-the-number-in-javascript, https://stackoverflow.com/search?q=%5Bjs%5D+convert+32+bit+integer – T.J. Crowder May 13 '21 at 07:55

2 Answers

2

JavaScript's number type is IEEE-754 double-precision binary floating point; it doesn't have an integer type except temporarily during some math operations or as part of a typed array (Int32Array, for instance, or a Uint32Array if you mean unsigned). So you have two options:

  1. Ensure that the number has a value that fits in a 32-bit int, even though it's still a number (floating point double). One way to do that is to do a bitwise OR operation with the value 0, because the bitwise operations in JavaScript convert their operands to 32-bit integers before doing the operation:

    | 0 does a signed conversion using the specification's ToInt32 operation:

    value = value | 0;
    // Use `value`...
    

    With that, -5 stays -5, while 123456789123 becomes -1097262461 (yes, negative), because the value doesn't fit in 32 signed bits.

    Alternatively, >>> 0 does an unsigned conversion using the spec's ToUint32 operation:

    value = value >>> 0;
    // Use `value`...
    

    With that, -5 becomes 4294967291, and 123456789123 becomes 3197704835.

  2. Use an Int32Array or Uint32Array:

    const a = new Int32Array(1); // Or use Uint32Array for unsigned
    a[0] = value;
    // Use `a[0]`...
    

    Int32Array uses ToInt32, Uint32Array uses ToUint32.

    Note that any time you read a[0], the value is converted back to a standard number (a floating-point double); but if you hand the array itself to something that consumes typed arrays, the 32-bit representation is used as-is.

Note that there's a method that may seem like it's for doing this, but isn't: Math.fround. That doesn't convert to 32-bit int, it converts to 32-bit float (IEEE-754 single-precision floating point). So it isn't useful for this.
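To see the difference concretely (a quick sketch): Math.fround only rounds to the nearest single-precision value, so the fractional part survives, whereas the bitwise conversions truncate to an integer:

```javascript
console.log(Math.fround(5.05)); // ~5.05000019 -- nearest 32-bit float, still fractional
console.log(5.05 | 0);          // 5 -- an actual 32-bit int conversion
```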

T.J. Crowder
  • 1,031,962
  • 187
  • 1,923
  • 1,875
0

Well, the easiest way I found is using the bitwise NOT operator, ~.

This is the description from MDN:

The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones).

So you can just apply ~ twice (~~) to convert your numbers. Here are some examples:

~~1 // 1
~~-1 // -1
~~5.05 // 5
~~-5.05 // -5
~~2147483647 // 2147483647
~~2147483648 // -2147483648
~~Math.pow(2, 32) // 0
Yan
  • 854
  • 1
  • 8
  • 15
  • 2
    All of the bitwise operations convert their operands to 32-bit integers. So any operation that *logically* doesn't change the number will truncate the value to 32 bits. So, NOT NOT: `~~n`; OR: `n | 0`; AND: `n & -1`; right shift: `n >> 0`; left shift: `n << 0`. – VLAZ May 13 '21 at 07:30
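    The equivalence described in this comment can be checked directly; a small sketch (note the AND identity is -1, the all-ones bit pattern):

    ```javascript
    const n = 123456789123.75;

    // Each operation is a logical no-op, so the only effect is ToInt32 truncation.
    console.log(~~n);    // -1097262461  double NOT
    console.log(n | 0);  // -1097262461  OR with 0
    console.log(n & -1); // -1097262461  AND with all-ones
    console.log(n >> 0); // -1097262461  right shift by 0
    console.log(n << 0); // -1097262461  left shift by 0
    ```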