I have C++ code in Visual Studio 2015 (C++11). The code computes EXP of

    val = 7.3526137268510955991

    double myCalculatedEXP = EXP(val);
    // Let's print
    std::cout.precision(20);
    std::cout << myCalculatedEXP;

On one machine I get 1560.2693207530153359 and on the other I get 1560.2693207530151085. Note that the last four digits do not match, which is causing trouble for me. Though the difference is small, it compounds through later operations and adds up to a bigger difference.

Both machines have the same processor identifier:

    PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 63 Stepping 2, GenuineIntel

Both machines report the same OS via

    systeminfo | findstr /B /C:"OS Name" /C:"OS Version"

which gives

    OS Name:                   Microsoft Windows Server 2012 R2 Standard
    OS Version:                6.3.9600 N/A Build 9600

Both are Windows Server 2012 machines. I ran Dependency Walker to check whether different DLL versions are linked; both are exactly the same.

Please suggest:

1) What can cause such differences?

2) How can I avoid such differences?

3) If the processor, OS, and DLLs are the same, can I still expect different results?

MAG
  • Looks like you're good out to 16 digits. Double is generally only good out to 15 digits, so you're doing pretty well. I don't think there is much of anything you can do. – user4581301 May 28 '19 at 18:35
  • Is the exact same executable running on both machines? – alter_igel May 28 '19 at 18:35
  • If these are 64-bit double precision then there are only 15.95 decimal digits plus the exponent - https://en.wikipedia.org/wiki/IEEE_754 – Dave S May 28 '19 at 18:36
  • Code updated inline. Yes, exactly the same exe and the same DLLs. – MAG May 28 '19 at 18:36
  • The point is: if the processor, OS, and DLLs are the same, can I still expect different results? – MAG May 28 '19 at 18:37
  • Using `double` you only have 16 digits of accuracy per number, and you lose even that if the magnitudes of the numbers differ enough in a chain of multiple operations. The digits beyond the 16 are garbage, random, undefined, noise. – Dave S May 28 '19 at 18:39
  • @DaveS still there is no random generator involved. I would expect that on the same platform, the same code compiled with the same compiler will produce the same results, even beyond the defined precision for doubles. I would also wonder why the bit pattern is different. – SergeyA May 28 '19 at 18:41
  • Look at the IEEE format at wiki. A double only holds 53 binary digits / 15.95 decimal digits, period. It just can't store more than that. If you are viewing more digits than that you're viewing some other junk from who knows where. – Dave S May 28 '19 at 18:44
  • The CPU floating point registers on x86_64 CPUs are 80 bits unless SSE/SIMD is being used. https://stackoverflow.com/questions/3206101/extended-80-bit-double-floating-point-in-x87-not-sse2-we-dont-miss-it – drescherjm May 28 '19 at 18:50
  • But once you stuff that into a `double` you lose the extra bits, and if you then print the `double` they don't magically re-attach themselves. – Dave S May 28 '19 at 18:52
  • Dave, that fuzz should still be representative of the binary pattern stored. I can't think of a good reason for it to be different for the two numbers given the same hardware and the same code. I just don't think I can guarantee it. Mag, print out some hexfloat (`printf("%a\n", doubleval);` because I can't guarantee `cout << hexfloat` everywhere) so the bit patterns can be compared directly; see the sketch after these comments. – user4581301 May 28 '19 at 18:58
  • Maybe there is some bug (undefined behavior) in your EXP(). That could explain why identical hardware and an identical executable execute differently. – drescherjm May 28 '19 at 19:00
  • Please also edit the question to show us your cout or printf so we can see the formatting that generated the numbers above. – Dave S May 28 '19 at 19:01
  • @user4581301 I was guessing that whatever the printf or cout code is might be pulling in garbage from adjacent memory, for example if it treated the double as an intel 80-bit float. – Dave S May 28 '19 at 19:04
  • Printing is done via cout. Now added in the question. – MAG May 29 '19 at 04:07
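
A minimal sketch of the bit-pattern comparison user4581301 suggests (using `std::exp` in place of the question's EXP; both the `%a` hexfloat format and the raw-byte dump are standard C++11):

    #include <cstdio>
    #include <cstdint>
    #include <cstring>
    #include <cmath>

    int main() {
        double v = std::exp(7.3526137268510955991);

        // Hexfloat prints the exact mantissa and binary exponent,
        // with no decimal-rounding ambiguity between machines.
        std::printf("%a\n", v);

        // Raw 64-bit pattern: if the two machines print different
        // hex here, the computed doubles really do differ by bits.
        std::uint64_t bits;
        std::memcpy(&bits, &v, sizeof bits);
        std::printf("%016llx\n", static_cast<unsigned long long>(bits));
        return 0;
    }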

1 Answer

If you look at the IEEE 754 binary format for a double, it has 53 significant binary bits (52 stored + 1 implicit), or about 15.95 significant decimal digits, plus 11 exponent bits. (wiki: https://en.wikipedia.org/wiki/IEEE_754)
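
These limits can be queried directly from `std::numeric_limits`; a small sketch:

    #include <iostream>
    #include <limits>

    int main() {
        // Mantissa bits: 52 stored + 1 implicit
        std::cout << std::numeric_limits<double>::digits << '\n';        // 53
        // Decimal digits guaranteed exact in any double
        std::cout << std::numeric_limits<double>::digits10 << '\n';      // 15
        // Decimal digits needed to uniquely round-trip any double
        std::cout << std::numeric_limits<double>::max_digits10 << '\n';  // 17
        return 0;
    }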

This means if you print the double using a format that shows more than 16 significant digits, the extra digits may be garbage.

Not always: for example, `%.20f` might be perfectly accurate if the exponent of the number is about -4, because then 20 digits after the decimal point amount to only ~16 significant digits.

53 bits of accuracy is the best case. A chain of operations on numbers of very different magnitudes can lose accuracy; for example, in (1.23e+20) + (0.45e-30) the smaller term is absorbed completely.
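
A sketch demonstrating that absorption, using the values from the example above (the smaller addend sits ~50 orders of magnitude below the larger one, far outside double's ~16 significant digits):

    #include <iostream>

    int main() {
        double big   = 1.23e+20;
        double small = 0.45e-30;
        // The sum rounds back to `big` exactly: small's contribution is lost.
        std::cout << std::boolalpha << (big + small == big) << '\n';  // true
        return 0;
    }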

    std::cout.precision(20);

Since a double has a maximum accuracy of about 16 decimal digits, asking for 20 may print undefined digits.

This question's top answer suggests using 17 digits: How do I print a double value with full precision using cout?
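
A sketch of that suggestion: printing with `std::numeric_limits<double>::max_digits10` (17 for double) produces text that parses back to the identical double. The value below is one of the question's outputs:

    #include <iostream>
    #include <iomanip>
    #include <limits>
    #include <sstream>

    int main() {
        double v = 1560.2693207530153359;

        std::ostringstream os;
        os << std::setprecision(std::numeric_limits<double>::max_digits10) << v;

        double back;
        std::istringstream is(os.str());
        is >> back;

        // With 17 significant digits the parse recovers the identical double.
        std::cout << os.str() << " round-trips exactly: "
                  << std::boolalpha << (back == v) << '\n';  // true
        return 0;
    }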

Additional notes in the comments point to Theorem 15 in What Every Computer Scientist Should Know About Floating-Point Arithmetic, by David Goldberg: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Dave S
  • Can I ensure by some means that the extra digits are always 0? – MAG May 29 '19 at 05:04
  • Is there a reason not to just set the precision lower, to 16 or 17? See added information above. – Dave S May 29 '19 at 16:20
  • cout is used to determine where the problem starts. Mine is actually a mathematical application where a lot of matrix multiplication happens, along with tan, sin, and exp operations. Unfortunately, on the 2 machines I get different results, so I printed values to find where the problem starts. Initially I saw no difference in the printed values, just a difference in the final results, so I increased the precision to determine where the difference originates; hence the precision of 20. – MAG May 29 '19 at 16:53
  • You might just be expecting more accuracy than double precision is capable of. In my `(1.23e+20) + (0.45e-30)` example above, the second number's contribution is simply lost. That's one reason why Numerical Analysis exists as a field: to tell you what to expect. It also comes up with methods to mitigate accuracy loss. – Dave S May 29 '19 at 17:43
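
One classic mitigation of this kind of loss, not mentioned in the thread but standard in Numerical Analysis, is Kahan compensated summation; a minimal sketch:

    #include <vector>

    // Kahan summation: carry the rounding error of each addition in `c`
    // and feed it back in, recovering low-order digits that plain
    // accumulation drops when magnitudes differ.
    double kahan_sum(const std::vector<double>& xs) {
        double sum = 0.0, c = 0.0;
        for (double x : xs) {
            double y = x - c;    // apply the stored correction
            double t = sum + y;  // low-order bits of y may be lost here...
            c = (t - sum) - y;   // ...but are recovered algebraically
            sum = t;
        }
        return sum;
    }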