I'm writing code in different languages that involves double-precision arithmetic, and ideally the programs need to yield exactly the same values. I'm aware that not all double/float arithmetic is deterministic (explained nicely here: https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/), so I need to be careful. Can someone explain what is going on here, though?
C program:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("%.52f\n", 1.66007664274403694e-03);
    return (EXIT_SUCCESS);
}
Result: 0.0016600766427440369430584832244335302675608545541763
"Equivalent" Java 8 program:
class A {
    public static void main(String[] args) {
        System.out.printf("%.52f\n", 1.66007664274403694e-03D);
    }
}
Result: 0.0016600766427440370000000000000000000000000000000000
The results are different. I have a feeling this may be related to floating-point rounding modes; however, as far as I can see, C and Java have the same defaults (?).
How can I ensure the two programs have the same result?
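One check I can think of, to rule out a difference in the stored bits rather than in the formatting, is to compare the raw IEEE 754 representation on both sides: in C the value can be printed with the %a conversion, and on the Java side something like the sketch below (class name chosen just for illustration) should do:

class BitCheck {
    public static void main(String[] args) {
        double d = 1.66007664274403694e-03;
        // Raw 64-bit pattern of the double, for comparison with the C side
        System.out.println(Long.toHexString(Double.doubleToLongBits(d)));
        // Hexadecimal floating-point form, comparable to C's printf("%a\n", d)
        System.out.println(Double.toHexString(d));
    }
}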
EDIT:
FWIW, if I print the constant as a BigDecimal, System.out.printf("%.52f\n", new BigDecimal(1.66007664274403694e-03));, I get 0.0016600766427440369430584832244335302675608545541763, which matches the C output. This might prove that this is not a display issue, but who knows what magic the JVM does underneath.
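For completeness, the full class I used for that check was roughly the following (a sketch; the class name is arbitrary, and BigDecimal comes from java.math):

import java.math.BigDecimal;

class BigDecimalCheck {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the binary value actually stored in
        // the double, so this prints every digit of that stored value
        System.out.printf("%.52f\n", new BigDecimal(1.66007664274403694e-03));
    }
}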
EDIT2:
Using strictfp as @chris-k suggests, I annotated the class, and the result remains 0.0016600766427440370000000000000000000000000000000000.
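To be precise, by "annotated" I mean adding the strictfp modifier to the class, roughly like this:

strictfp class A {
    public static void main(String[] args) {
        // strictfp applies to all methods of the class,
        // but the printed result does not change
        System.out.printf("%.52f\n", 1.66007664274403694e-03D);
    }
}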
EDIT3:
Another suggestion was to try System.out.printf("%.52f\n", new BigDecimal("1.66007664274403694e-03"));, which gives a result we have not seen yet: 0.0016600766427440369400000000000000000000000000000000.
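My (unverified) guess for why the first edit and EDIT3 disagree is that the two BigDecimal constructors start from different values: new BigDecimal(String) keeps the decimal literal exactly as it was written, while new BigDecimal(double) expands the binary double that the literal was rounded to. A minimal sketch to compare them side by side:

import java.math.BigDecimal;

class ConstructorComparison {
    public static void main(String[] args) {
        // The decimal literal kept exactly as written (EDIT3's output above)
        System.out.printf("%.52f\n", new BigDecimal("1.66007664274403694e-03"));
        // The nearest binary double, expanded exactly (the first edit's output)
        System.out.printf("%.52f\n", new BigDecimal(1.66007664274403694e-03));
    }
}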