
I need to port some legacy Java code performing arithmetic transformations to TypeScript/JavaScript. The problem is that the legacy code uses the Java int type (signed 32-bit) and relies on overflow. I almost got what I want using Int32Array in JavaScript, but there is still a difference I can't explain. See below.

Java:

int current = -1599751945;
int next = current * 0x08088405 + 1;
System.out.println("next = " + next);

Output: next = 374601940

Javascript:

const a = new Int32Array(4)

a[0] = -1599751945
a[1] = 0x08088405
a[2] = 1
a[3] = a[0]*a[1] + a[2]

console.log('a[3] = ' + a[3])

Output: a[3] = 374601952

Can someone explain the difference? And how can I get the same result in JavaScript? I tried shift operations, coercing with |0, conversion methods, etc., but the best result is the one above.

airone

3 Answers


The key point is that all numbers in JS are doubles (IEEE 754 64-bit floats).

Whereas current * 0x08088405 is done using integer arithmetic in Java, a[0]*a[1] is done using double arithmetic in JS, so the intermediate results differ. Moreover, the limited precision of a double means that adding 1 to the product doesn't actually change the value:

console.log(a[0]*a[1])         => -215607868985706270
console.log(a[0]*a[1] + a[2])  => -215607868985706270
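
A quick check with the standard Number helpers shows the product is far outside the range in which a double can represent every integer:

// 2^53 - 1 is the largest integer below which doubles represent every integer exactly
console.log(Number.MAX_SAFE_INTEGER)         // 9007199254740991
console.log(Number.isSafeInteger(a[0]*a[1])) // false: the result is outside the safe-integer range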

Compare this to Java, where integer arithmetic is used:

int[] a = { -1599751945, 0x08088405, 1};
System.out.println(a[0]*a[1]);         => 374601939
System.out.println(a[0]*a[1] + a[2]);  => 374601940

If we make Java do this in double arithmetic:

double[] a = { -1599751945, 0x08088405, 1};
System.out.println(a[0]*a[1]);         => -2.15607868985706272E17
System.out.println(a[0]*a[1] + a[2]);  => -2.15607868985706272E17

You can see that this is almost the same, but differs in the least significant digit:

-215607868985706270  // JS
-215607868985706272  // Java

The difference in the last digit is only in how the value is printed: both runtimes hold exactly the same double (-215607868985706272). JavaScript prints the shortest decimal string that round-trips to that double, whereas Java's Double.toString historically printed an extra digit here (tightened in JDK 19).
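
You can confirm in a JS console that the two decimal renderings denote the same double:

console.log(-215607868985706272 === -215607868985706270) // true: same double value
console.log(-215607868985706272)                         // prints -215607868985706270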

Andy Turner

Use Math.imul() in JavaScript; it performs the multiplication with 32-bit integer overflow semantics and produces the result you expect.

const a = new Int32Array(4)

a[0] = -1599751945
a[1] = 0x08088405
a[2] = 1
a[3] = Math.imul(a[0], a[1]) + a[2] // 32-bit multiply with Java's int overflow semantics

console.log('a[3] = ' + a[3])

Additional details as to why can be found here.
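
If you prefer to skip the typed array entirely, the original Java expression ports directly; a minimal sketch using the variable names from the question:

const current = -1599751945
// Math.imul multiplies with 32-bit wrap-around, matching Java's int multiplication;
// the final | 0 truncates the addition back to a signed 32-bit integer
const next = (Math.imul(current, 0x08088405) + 1) | 0
console.log('next = ' + next) // next = 374601940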

antonio_s87

Floating-point numbers are only precise up to a limited number of digits. All arithmetic on JavaScript numbers is done in IEEE 754 double precision, which can represent integers exactly only up to 2^53 - 1 (Number.MAX_SAFE_INTEGER).

Java is "more correct" here: the exact product fits comfortably in a 64-bit long, whereas JavaScript cannot even represent that value as a number.

// -215607868985706285 is the exact 64-bit result of your multiplication,
// but it is not representable as a double:
console.log(-215607868985706285); // prints -215607868985706270, not ...285
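
If you actually need the exact intermediate value in JavaScript, BigInt arithmetic sidesteps the rounding; a sketch, assuming your engine supports BigInt:

// BigInt arithmetic is exact, so the full product survives
const product = -1599751945n * 0x08088405n + 1n
console.log(product)                    // -215607868985706284n (exact)
// Wrapping to a signed 32-bit value reproduces Java's int result
console.log(BigInt.asIntN(32, product)) // 374601940n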

See Is floating point math broken? for a general discussion of this topic.

knittl