Here's the code:
#include <cmath>
#include <iostream>

// ln(2) / 12, the constant used for the conversion
const double ln2per12 = std::log(2.0) / 12.0;

int main() {
    std::cout.precision(100);

    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;

    // multiply by the constant, divide it back out, and compare with the target
    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = std::fabs(target - ln2per12edValue / ln2per12);

    std::cout << unnormalizatedValue << std::endl;
    std::cout << ln2per12 << std::endl;
    std::cout << errorLn2per12 << " <<<<< it's different" << std::endl;
}
If I try it on my machine (MSVC), or here (GCC):
errorLn2per12 = 9.3702823278363212011754512786865234375e-12
Instead, here (GCC):
errorLn2per12 = 9.368505970996920950710773468017578125e-12
which is different. Is it due to machine epsilon? Or compiler precision flags? Or a different IEEE evaluation?
What's the cause of this drift? The problem seems to be in the fabs() call, since the other printed values look the same.
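
To narrow it down, a diagnostic sketch like the one below might help (assuming a C++11 compiler for std::hexfloat; the variable names mirror the original code). The idea is to print every intermediate as an exact hex float so the MSVC and GCC runs can be compared bit for bit, and to check FLT_EVAL_METHOD, since extended-precision intermediates or contraction of the expression target - ln2per12edValue / ln2per12 could explain a difference that only shows up in the final error value:

#include <cfloat>
#include <cmath>
#include <iostream>

int main() {
    const double ln2per12 = std::log(2.0) / 12.0;
    const double target = 9.800000000000000710542735760100185871124267578125;
    const double unnormalizatedValue = 9.79999999999063220457173883914947509765625;

    const double product   = unnormalizatedValue * ln2per12;
    const double roundTrip = product / ln2per12;   // intermediate where extra precision could creep in
    const double errorLn2per12 = std::fabs(target - roundTrip);

    // 0 = operations evaluated in their own type, 1 = float promoted to double,
    // 2 = intermediates evaluated in long double (e.g. x87 80-bit)
    std::cout << "FLT_EVAL_METHOD = " << FLT_EVAL_METHOD << '\n';

    // hex floats are exact, so any bit-level divergence between compilers is visible directly
    std::cout << std::hexfloat
              << "ln2per12       = " << ln2per12 << '\n'
              << "product        = " << product << '\n'
              << "roundTrip      = " << roundTrip << '\n'
              << "errorLn2per12  = " << errorLn2per12 << '\n';
}

Comparing the hex output line by line between the two toolchains should show exactly which intermediate first differs, rather than only seeing the divergence in the final decimal printout.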