
I have a very simple example of a C++ program that, when compiled and executed, produces a different result with MSVC than with GCC:

#include <iomanip>
#include <iostream>
#include <limits>
#include <math.h>

int main()
{
  double arg = -65.101613720114472;
  double result = std::exp2(arg);
  std::cout << std::setprecision(std::numeric_limits<double>::max_digits10) << result << std::endl;
  return 0;
}

On Windows I compile this with the Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29334 for x64 using the command cl example.cpp; running the resultant executable produces 2.5261637809256962e-20 as text output.

On Linux I compile this with gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2) using the command g++ example.cpp; running the executable produces 2.5261637809256965e-20 as text output. Notice the difference in the least significant digit.
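
For what it's worth, whether the two printed values really are adjacent doubles (i.e. exactly one ULP apart) can be checked directly. A minimal sketch, assuming IEEE-754 binary64 doubles and relying on the fact that max_digits10 output round-trips exactly:

#include <cmath>
#include <iostream>

int main()
{
  // The two textual outputs, converted back to the doubles they came from
  // (printing with max_digits10 guarantees the round trip is exact).
  double msvc_result = 2.5261637809256962e-20;
  double gcc_result  = 2.5261637809256965e-20;

  // If the results differ only in the last bit, stepping the smaller value
  // up by one ULP should land exactly on the larger one.
  std::cout << std::boolalpha
            << (std::nextafter(msvc_result, 1.0) == gcc_result)  // expected: true, given the reported one-bit difference
            << std::endl;
  return 0;
}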

I have tried compiling 32-bit and 64-bit versions and different levels of optimisation; on Windows I have tried different values for the /fp: flag, and on Linux the -frounding-math, -mfpmath, etc. flags. Nothing has produced results different from the ones above.

Is it possible to get my program above to produce the same result from the std::exp2() call on both Windows and Linux?

If not, where is the difference happening? Is it something specific to the compiler implementations, or the implementations of the math.h library, or is it some other subtlety I'm missing?

EDIT: Just to clarify:

This is not a question about the text representation of doubles.

The bitwise representation of the arg double is identical on both platforms. The bitwise representation of the result double is different at the least significant bit between the two platforms. The Windows and Linux executables were both run on the same machine and hence on the same processor architecture.
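
A short sketch of how the bit patterns can be dumped for comparison (assuming 64-bit IEEE-754 doubles, so the whole representation fits in a uint64_t):

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Print a double alongside the hex dump of its 64-bit representation.
static void print_bits(const char* label, double d)
{
  std::uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);  // portable way to inspect the raw bytes
  std::printf("%s = %.17g (0x%016llx)\n", label, d,
              static_cast<unsigned long long>(bits));
}

int main()
{
  double arg = -65.101613720114472;
  double result = std::exp2(arg);
  print_bits("arg   ", arg);
  print_bits("result", result);
  return 0;
}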

My best guess is that the implementation of the exp2() function performs some calculations in a slightly different order between the two platforms either because of compiler optimisations or math.h implementation differences.

If anyone knows where the implementation difference is, or whether it is possible to use compiler flags during either compilation to make them match, that would be the answer to this question.

Svet
  • You can only expect 15 to 17 digits (depends on the value held) of precision (not decimal places) from `double`. – Richard Critten Mar 14 '21 at 19:29
  • Are you aware [that floating point math is broken](https://stackoverflow.com/questions/588004/is-floating-point-math-broken), and fully understand why it's broken? If so, then you should be able to answer your own question, and if not, focusing on understanding the fundamental reason why floating point math is broken will serve to improve your understanding of the displayed difference. – Sam Varshavchik Mar 14 '21 at 19:30
  • The difference is in the 17th significant decimal digit, which cannot be relied upon in a `double` value stored in the usual IEEE-754 format. – dxiv Mar 14 '21 at 19:41
  • I'm afraid that doesn't answer my question. I'm aware of the limits of precision when dealing with doubles in general. My question is specifically, why don't I get a bitwise identical double from the same operation when I use the two different compilers. For reference, the same calculation in java or dotnet returns bitwise identical results in both Windows and Linux (on the same machine). – Svet Mar 14 '21 at 19:46
  • @Svet The posted code alone does not establish that you don't "*get a bitwise identical double*", just that the two strings differ in a digit beyond the expected precision. First thing would be to check the binary representation of both `arg` and `result` in the two cases. – dxiv Mar 14 '21 at 20:03
  • @dxiv, thank you for the suggestion on how to improve the question. I can confirm that the arg doubles are bitwise identical and that there is an unexpected difference in the least significant bit of the result double. The code to demonstrate the bit difference is unwieldy, which is why I posted the sample program using setprecision(max_digits10) output to more easily demonstrate the difference. I know the limits to precision when dealing with doubles, but I was wondering if there is a compiler flag that would make the results of exp2 bitwise identical between the two compilers. – Svet Mar 14 '21 at 20:31
  • @Svet FWIW `pow(2.0, arg)` appears to return the [expected result](https://rextester.com/VLY28533). What (other) implementation `exp2` is using is harder to track down, since it seems to be in the closed-source part of the ucrt (`ucrt\src\appcrt\tran\_exp2.h`). However, according to [MS](https://learn.microsoft.com/en-us/cpp/c-runtime-library/floating-point-support?view=msvc-160), "*+/-1 ulp of the correctly rounded result*" is not considered to be unexpected. (See the comparison sketch after these comments.) – dxiv Mar 14 '21 at 22:04
  • exp2 isn't a function that's required to be faithfully rounded by IEEE-754, so differences within 1 ULP are completely normal. You'll need to use a cross-platform software library for absolute consistency. See [Floating point accuracy with different languages](https://stackoverflow.com/a/63489979/995714), [Why do sin(45) and cos(45) give different results?](https://stackoverflow.com/a/31509332/995714), [Math precision requirements of C and C++ standard](https://stackoverflow.com/q/20945815/995714), [What is the error of trigonometric instructions?](https://stackoverflow.com/q/21908949/995714) – phuclv Mar 15 '21 at 04:24
  • [Does 64-bit floating point numbers behave identically on all modern PCs?](https://stackoverflow.com/q/2149900/995714), [Slightly different result from exp function on Mac and Linux](https://stackoverflow.com/q/44765611/995714) – phuclv Mar 15 '21 at 04:33
  • @dxiv and @phuclv Thank you for your help. I am satisfied that in this case the difference is coming from the closed-source MS implementation of exp2 rather than the compiler settings. – Svet Mar 15 '21 at 11:31
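
For anyone who wants to reproduce dxiv's pow(2.0, arg) comparison, here is a minimal sketch; the exact outputs depend on each platform's math library, since neither std::exp2 nor std::pow is required by IEEE-754 to be correctly rounded:

#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
  double arg = -65.101613720114472;
  double via_exp2 = std::exp2(arg);      // the call from the question
  double via_pow  = std::pow(2.0, arg);  // mathematically equivalent, but possibly a different code path

  std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
            << "exp2: " << via_exp2 << '\n'
            << "pow : " << via_pow  << '\n'
            << "identical: " << std::boolalpha << (via_exp2 == via_pow)
            << std::endl;
  return 0;
}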

0 Answers