
I have tested my code, developed on an Ubuntu 18.04 (bionic) Docker image, on an Ubuntu 20.04 (focal) Docker image. I saw that there was a problem with my unit test, and I have narrowed the root cause down to a simple main.cpp:

#include <iostream>
#include <iomanip>
#include <math.h>
int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);
    float theta = 22.0f;
    theta = theta * DEG_TO_RAD_FLOAT;
    std::cout << std::setprecision(20) << theta << ' ' << sin(theta) << std::endl;
    return 0;
}

On the bionic Docker image, I upgraded my version of g++ using these commands:

sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install -y gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9 --slave /usr/bin/gcov gcov /usr/bin/gcov-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7 --slave /usr/bin/gcov gcov /usr/bin/gcov-7

My g++ versions are the same on both images: 9.4.0.

On Ubuntu 18.04, the program outputs: 0.38397243618965148926 0.37460657954216003418

On Ubuntu 20.04, the program outputs: 0.38397243618965148926 0.37460660934448242188

As you can see, the difference is in sin(theta), at the 7th decimal place. The only difference I can think of is the version of libc, which is 2.27 on Ubuntu 18.04 and 2.31 on Ubuntu 20.04.

I have tried several g++ options (-mfpmath=sse, -fPIC, -ffloat-store, -msse, -msse2), but they had no effect.

The real problem is that the Windows version of my code, compiled with /fp:precise, gives the same result as Ubuntu 18.04: 0.38397243618965148926 0.37460657954216003418

Is there any way to force the g++ compiler to produce the same results as my Windows compiler?

Olivier
  • That many digits for a `float` is just noise. `float` is only good for about 5 decimal places and gets dicey around 6. And I'm being generous. Use `double` instead. You get twice the precision, and don't have to add `f` to decimal numbers anymore. (A small snippet after these comments puts numbers on this.) – sweenish Oct 14 '22 at 12:56
  • For the precision, it's just to emphasize that Windows and Ubuntu 18.04 have the same results. Well, unfortunately, this code is a small part of the application, and changing everything to double is not easy. But, I am afraid you are right and this might be the only way. – Olivier Oct 14 '22 at 13:12
  • And I'm saying that the "same results" you're calling out are irrelevant because they're beyond the point of trusting `float`. For the range where `float` can be trusted, all platforms are giving you identical results and there is no issue. – sweenish Oct 14 '22 at 13:19
  • The question then becomes: do you expect your values to be accurate to 20 decimal places, or was this just for testing to see the accuracy of the trig functions? It's a big step to go from 20 significant digits down to 5 or 6. If your application requires `double`, you need to use it. There are even libraries for different numeric representations to get even more accuracy than `double`. – franji1 Oct 14 '22 at 13:21
  • Also: `setprecision(20)` doesn't do anything to your floating point number. It only affects the output (which then seems to make up 20 digits instead of truncating at a "trustworthy" value). – Pepijn Kramer Oct 14 '22 at 13:34
  • See [this recent question](https://stackoverflow.com/questions/74074312). As we were just advising that user, you will probably want to relax the exactness threshold on your unit test slightly. – Steve Summit Oct 15 '22 at 05:20
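
To put concrete numbers on the precision the comments describe, here is a small sketch (an illustration added here, not from the thread) that prints how many decimal digits `float` and `double` can reliably carry:

#include <iostream>
#include <limits>

int main()
{
    // digits10: decimal digits guaranteed to survive a round trip through the type
    // max_digits10: digits needed to uniquely identify every representable value when printing
    std::cout << "float : digits10 = " << std::numeric_limits<float>::digits10
              << ", max_digits10 = " << std::numeric_limits<float>::max_digits10 << '\n';
    std::cout << "double: digits10 = " << std::numeric_limits<double>::digits10
              << ", max_digits10 = " << std::numeric_limits<double>::max_digits10 << '\n';
    // For IEEE 754 types this typically prints: float 6 and 9, double 15 and 17.
}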

2 Answers


Leaving aside whether there is any guarantee that the exact results of calls to the mathematical functions stay consistent across library versions, you are also relying on unspecified behavior.

Specifically, you are including `<math.h>` in a C++ program. This will make `sin` from the C standard library available in the global namespace, but it is unspecified whether it will also make the `sin` overloads from the C++ standard library available there.

C's `sin` function operates on `double`, while C++ adds an overload for `float` (among others). So it is unspecified whether you are calling the overload operating on `double` or the one operating on `float`, and depending on that you will get a differently rounded result.

To guarantee a call to the `float` overload, include `<cmath>` instead and call `std::sin` instead of plain `sin`.
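
As an illustration (my sketch, not part of the original question), the program from the question with that change applied looks like this; `std::sin` now unambiguously selects the `float` overload:

#include <cmath>     // C++ header: declares the std::sin overloads for float, double, long double
#include <iomanip>
#include <iostream>

int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);  // M_PI is a POSIX/glibc extension, as in the original code
    float theta = 22.0f * DEG_TO_RAD_FLOAT;
    // std::sin(theta) is guaranteed to call the float overload;
    // an unqualified sin(theta) might have called the double version instead.
    std::cout << std::setprecision(20) << theta << ' ' << std::sin(theta) << std::endl;
}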

Also, depending on optimization flags, GCC may not actually call the `sin` function at run time but instead constant-fold the value itself at compile time. In that case the result may be rounded differently or have different accuracy.

user17732522
  • Many thanks. It seems that in the whole app code I could find both headers; I will look at this in detail. – Olivier Oct 17 '22 at 07:44

Well, investigating a slightly modified version of your test program:

#include <iostream>
#include <iomanip>
#include <cmath>
int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);
    float theta = 22.0f;
    theta = theta * DEG_TO_RAD_FLOAT;
    std::cout << std::setprecision(20) << theta << ' ' << std::sin(theta) 
      << ' ' << std::hexfloat << std::sin(theta) << std::endl;
    return 0;
}

The changes are: 1) use `<cmath>` and `std::sin` instead of `<math.h>`, and 2) also print the hex representation of the calculated sine value. I'm using GCC 11.2 on Ubuntu 22.04 here.

Without optimizations I get

$ g++ prec1.cpp
$ ./a.out 
0.38397243618965148926 0.37460660934448242188 0x1.7f98ep-2

which is the result you got on Ubuntu 20.04. With optimization enabled, however:

$ g++ -O2 prec1.cpp
$ ./a.out 
0.38397243618965148926 0.37460657954216003418 0x1.7f98dep-2

which is what you got on Ubuntu 18.04.

So why does it produce different results depending on optimization level? Investigating the generated assembler code gives a clue:

$ g++ prec1.cpp -S
$ grep sin prec1.s
    .section    .text._ZSt3sinf,"axG",@progbits,_ZSt3sinf,comdat
    .weak   _ZSt3sinf
    .type   _ZSt3sinf, @function
_ZSt3sinf:
    call    sinf@PLT
    .size   _ZSt3sinf, .-_ZSt3sinf
    call    _ZSt3sinf
    call    _ZSt3sinf

So what does this mean? Well, it calls sinf (which lives in libm, the math library part of glibc). Now, for the optimized version:

$ g++ -O2 prec1.cpp -S
$ grep sin prec1.s
$ 

Empty! What does that mean? It means that rather than calling sinf at runtime, the value was computed at compile time (GCC uses the MPFR library for constant folding floating point expressions).

So the results differ because, depending on the optimization level, two different implementations of the sine function are used.

Now, finally, let's look at the hex values my modified test program printed. You can see the unoptimized value ends in e0 (the trailing zero isn't printed, since trailing zeros in the fraction are omitted) vs de for the optimized one. If my mental hex arithmetic is correct, that is a difference of 2 ulp, and well, you can't really expect implementations of trigonometric functions to differ by less than that.
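
If you prefer to let the machine do the hex arithmetic, here is a small sketch (my addition, not part of the original answer) that measures the distance between the two printed values by comparing their bit patterns; as the comment below points out, it comes out to 1 ulp:

#include <cstdint>
#include <cstring>
#include <iostream>

// For positive floats, reinterpreting the bits as an unsigned integer gives a
// value that increases by exactly 1 for each step to the next representable float.
static std::uint32_t float_bits(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main()
{
    // Hexadecimal float literals require C++17 (or GCC's extension).
    const float unoptimized = 0x1.7f98e0p-2f;  // sinf from libm (run-time call)
    const float optimized   = 0x1.7f98dep-2f;  // GCC/MPFR compile-time folding
    std::cout << "ulp distance: "
              << (float_bits(unoptimized) - float_bits(optimized)) << '\n';
}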

janneb
  • Good find about the compile-time constant evaluation. And the two results actually differ by only 1 ulp: converting to binary to make it easier to see, we have `0x1.7f98de` = `0b1.01111111100110001101111`, and `0x1.7f98e0` = `0b1.01111111100110001110000`. (The last bit implied by the hex representation doesn't count, because there are only 23 bits past the radix point in single-precision `float`.) – Steve Summit Oct 15 '22 at 12:53
  • Thanks for the find. It seems that even if I stay with `<math.h>`, there is the same kind of optimization with or without the -O2 flag. I will clean this up in my code, switch to `<cmath>` everywhere, and try to put in -O2. – Olivier Oct 17 '22 at 07:51