The following C program produces different results on my Mac and on Linux. I'm surprised, because I assumed that the implementation of libm is somehow standardized:
    #include <math.h>
    #include <stdio.h>

    int main()
    {
        double x18 = -6.899495205106946e+01;
        double x19 = exp(-x18);
        printf("x19 = %.15e\n", x19);
        printf("x19 hex = %llx\n", *((unsigned long long *)(&x19)));
        return 0;
    }
The output on Mac is
    x19 = 9.207186811339878e+29
    x19 hex = 46273e0149095886
and on Linux
    x19 = 9.207186811339876e+29
    x19 hex = 46273e0149095885
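Note that the two hex patterns differ only in the last bit, so the two results are adjacent doubles, i.e. exactly 1 ULP apart. A small sketch to confirm this from the bit patterns above (using memcpy for the reinterpretation to avoid strict-aliasing issues):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        /* bit patterns taken from the Mac and Linux outputs above */
        uint64_t mac_bits   = 0x46273e0149095886ULL;
        uint64_t linux_bits = 0x46273e0149095885ULL;

        double mac_val, linux_val;
        memcpy(&mac_val, &mac_bits, sizeof mac_val);
        memcpy(&linux_val, &linux_bits, sizeof linux_val);

        printf("Mac   : %.17e\n", mac_val);
        printf("Linux : %.17e\n", linux_val);

        /* consecutive bit patterns of the same sign are adjacent doubles */
        printf("difference in ULPs : %llu\n",
               (unsigned long long)(mac_bits - linux_bits));
        printf("nextafter(linux, +inf) == mac ? %d\n",
               nextafter(linux_val, INFINITY) == mac_val);
        return 0;
    }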
Both were compiled without any optimization flags as follows:
    gcc -lm ....
I know that I should never compare floats for exact equality.
This issue came up during debugging. Regrettably, the algorithm using this calculation proves to be numerically unstable, and this slight difference leads to significant deviations in the final result. But that is a different problem.
I'm just surprised that such a basic operation as exp is not standardized in the way I can expect for the basic algebraic operations specified by IEEE 754. Are there any assumptions about precision I can rely on across different implementations of libm, for different machines or for different library versions?
Because of the discussion below, I used mpmath to compute the value with higher than machine precision. With two more figures I get the result 9.2071868113398768244e+29, so for both of my results the last figure is already wrong. The Linux result can be explained by rounding this value down; the Mac result is off even if the computer rounds up.
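To see how the mpmath reference relates to the two machine results, one can convert the reference decimal back to a double with strtod and compare bit patterns. A minimal sketch, keeping in mind that the decimal reference is itself rounded to a few extra digits, so this is a plausibility check rather than a proof of which result is correctly rounded:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        /* mpmath reference value from above, given with two extra figures */
        double ref = strtod("9.2071868113398768244e+29", NULL);

        uint64_t ref_bits;
        memcpy(&ref_bits, &ref, sizeof ref_bits);

        printf("reference as double : %.17e\n", ref);
        printf("reference hex       : %llx\n", (unsigned long long)ref_bits);
        printf("Linux hex           : 46273e0149095885\n");
        printf("Mac hex             : 46273e0149095886\n");
        return 0;
    }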