7

I have a program that gives slightly different results under Android and Windows. Since I validate the output data against a binary file containing the expected results, even a very small difference (a rounding issue) is a problem, and I must find a way to fix it.

Here is a sample program:

#include <cmath>
#include <iostream>
#include <iomanip>

int main( int argc, char* argv[] )
{
    // this value was identified as producing different result when used as parameter to std::exp function
    unsigned char val[] = {158, 141, 250, 206, 70, 125, 31, 192};

    double var = *((double*)val); // note: this cast is UB (strict aliasing), see comments

    std::cout << std::setprecision(30);

    std::cout << "var is " << var << std::endl;
    double exp_var = std::exp(var);
    std::cout << "std::exp(var) is " << exp_var << std::endl;
}
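(As pointed out in the comments, the pointer cast above is undefined behaviour. A well-defined way to reinterpret the bytes, sketched here with a helper name of my own choosing, is `std::memcpy`; this assumes a little-endian 64-bit IEEE 754 `double`, as on both platforms in question.)

```cpp
#include <cstring>  // std::memcpy

// Reinterpret 8 raw bytes as a double without violating strict aliasing.
// std::memcpy is the well-defined way to type-pun in C++ (before C++20's
// std::bit_cast). Assumes the bytes are in the platform's native byte order.
double bits_to_double(const unsigned char (&bytes)[8]) {
    static_assert(sizeof(double) == 8, "64-bit IEEE 754 double assumed");
    double d;
    std::memcpy(&d, bytes, sizeof(d));
    return d;
}
```

With the byte array from the question, `bits_to_double(val)` yields the same `-7.8723404255319149...` value on a little-endian machine.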

Under Windows, compiled with Visual 2015, I get the output:

var is -7.87234042553191493141184764681
std::exp(var) is 0.00038114128472300899284561093161

Under Android/armv7, compiled with g++ NDK r11b, I get the output:

var is -7.87234042553191493141184764681
std::exp(var) is 0.000381141284723008938635502307335

So the results differ starting at about e-20 (the 17th significant digit):

PC:      0.00038114128472300899284561093161
Android: 0.000381141284723008938635502307335

Note that my program does a lot of math operations, and I only noticed std::exp producing different results for the same input, and only for some specific input values (I did not investigate whether those values share a common property); for most inputs, the results are identical.

  • Is this behaviour somewhat "expected"? Is there no guarantee of getting the same result in some situations?
  • Is there some compiler flag that could fix that?
  • Or do I need to round my results so they end up the same on both platforms? Then what would be a good rounding strategy? Rounding arbitrarily at e-20 would lose too much information if the input var is very small.

Edit: I consider my question not being a duplicate of Is floating point math broken?. I get exactly the same result on both platforms, only std::exp for some specific values produces different results.
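(One common alternative to rounding at a fixed decimal place, sketched here as an assumption rather than anything from the original post, is to validate against the expected binary file with a tolerance measured in units-in-the-last-place, which scales with the magnitude of the value; the helper names are mine.)

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>
#include <limits>

// Distance between two doubles in units-in-the-last-place (ULPs).
// The bit patterns of same-sign doubles are monotonic when viewed as
// integers; negative values are remapped onto that monotonic scale.
// Caller must not mix signs (nearly_equal below guards against that).
std::int64_t ulp_distance(double a, double b) {
    std::int64_t ia, ib;
    std::memcpy(&ia, &a, sizeof(a));
    std::memcpy(&ib, &b, sizeof(b));
    if (ia < 0) ia = std::numeric_limits<std::int64_t>::min() - ia;
    if (ib < 0) ib = std::numeric_limits<std::int64_t>::min() - ib;
    return ia > ib ? ia - ib : ib - ia;
}

// Accept values that are within a few ULPs instead of bit-identical.
bool nearly_equal(double a, double b, std::int64_t max_ulps = 4) {
    if (std::isnan(a) || std::isnan(b)) return false;
    if ((a < 0) != (b < 0)) return a == b;  // only +0.0 == -0.0 across signs
    return ulp_distance(a, b) <= max_ulps;
}
```

The two `std::exp` results printed above are in fact exactly one ULP apart, so a tolerance of a few ULPs would treat them as equal without losing precision for small inputs.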

jpo38
  • Visual Studio has some options to [control floating point behaviour](https://learn.microsoft.com/en-us/cpp/build/reference/fp-specify-floating-point-behavior?view=vs-2017) – john Jan 18 '19 at 08:35
  • @john: Nice try ;-) I just checked and unfortunately the three options produce the same result under PC.... – jpo38 Jan 18 '19 at 08:44
  • 4
    You have UB with `double var = *((double*)val)`. – Jarod42 Jan 18 '19 at 08:45
  • @Jarod42: I know, my real code does not have this, that was just to quickly write my MCVE (var is actually the output of another math function, and with this trick I was (almost, as far as UB permits) sure to have the right value). – jpo38 Jan 18 '19 at 08:46
  • Following my comment, you could try to use the `bionic` implementation in your own custom exp function, and use it in both compilers, to see if the results are the same and do not depend on the processor, but only in the library implementation – LoPiTaL Jan 18 '19 at 08:46
  • Can't you do `double var = -7.87234042553191493141184764681;` instead ? – Jarod42 Jan 18 '19 at 08:47
  • @LoPiTaL: That could make an acceptable answer. A rounding strategy would also help, but I could add a different question for that if you can't help here. – jpo38 Jan 18 '19 at 08:48
  • @Jarod42: Sure, I could have done that, I was just not sure I had enough decimals to produce the same number as the one as identified as failing. – jpo38 Jan 18 '19 at 08:51
  • Difference appears after the 17th meaningful digit. Have you checked with numeric_limits if the implementations guarantee the same precision for double? As you use a binary encoding of the input value (cast), are you sure it respects the expected normalisation on both platforms (i.e. is the difference the same with a hard-coded double literal)? – Christophe Jan 18 '19 at 08:53
  • Which is the value of `std::numeric_limits<double>::digits10 + 1` for you ? – Jarod42 Jan 18 '19 at 08:54
  • @Christophe: As commented, the casted input value was just for testing. I have the same problem if using `double var = -7.87234042553191493141184764681;` – jpo38 Jan 18 '19 at 08:56
  • @Jarod42: It's 16, both under Windows and Android. I checked all attributes of `std::numeric_limits` and only two are different: `has_denorm_loss` is true under Windows and false under Android. The same goes for `tinyness_before`. – jpo38 Jan 18 '19 at 08:58
  • 3
    @jpo38 ok, the difference appears at 17th meaningful digit. So it works as designed: you expect a higher precision than provided. – Christophe Jan 18 '19 at 09:06
  • @Jarod42: If I do `double var = -7.87234042553191493141184764681;`, then I get the same result under Android and Windows. Simply because this does not produce exactly the same double as the one I have by casting my unsigned char array. It must be rounded at some point. – jpo38 Jan 18 '19 at 09:16
  • @jpo38 No, you are probably getting the same result because the compiler calculates the expression `exp(...)` at compile-time and uses higher precision for that on both platforms. I recommend reading this [excellent article](https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/) since you seem to be aiming for cross-platform determinism. – Max Langhof Jan 18 '19 at 09:18
  • OT: consider using `std::hexfloat` for reproducible parsing of floating point variables, instead of type punning: https://wandbox.org/permlink/MKVw1TrUYXeSnQkg – Bob__ Jan 18 '19 at 09:50
  • 1
    @Bob__: Yeah, I originally tried that. But g++ from ndk r11b does not accept `std::hexfloat`.... – jpo38 Jan 18 '19 at 09:52
  • @Jarod42: BTW, could you link me to a post explaining why this is UB? – jpo38 Jan 18 '19 at 20:39
  • @jpo38: Look at [strict aliasing rule](https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8#what-the-c17-draft-standard-say). – Jarod42 Jan 21 '19 at 12:13
  • @Jarod42: Thanx, I'll have a look. – jpo38 Jan 21 '19 at 16:07
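(Regarding the `std::hexfloat` suggestion above: on toolchains whose streams lack hexfloat support, C's `%a` format and `strtod` offer the same exact round trip; the helper names below are mine.)

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Serialize a double as a hex-float string ("0x1.f7d4...p+2" style).
// The hex form is exact: no decimal rounding is involved, so parsing
// it back restores the identical bits. snprintf("%a") and strtod are
// more widely available than stream-based std::hexfloat input.
std::string to_hexfloat(double d) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%a", d);
    return buf;
}

double from_hexfloat(const std::string& s) {
    return std::strtod(s.c_str(), nullptr);
}
```

This is a portable way to record the exact input values that trigger the discrepancy, instead of type punning through a byte array.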

1 Answer

6

The standard does not define how the exp function (or any other math library function¹) should be implemented, thus each library implementation may use a different computing method.

For instance, the Android C library (bionic) uses an approximation of exp(r) by a special rational function on the interval [0,0.34658] and scales back the result.

The Microsoft library probably uses a different computing method (I cannot find information about it), which results in different values.

Also, the libraries could take a dynamic loading strategy (i.e. load a .dll containing the actual implementation) in order to leverage hardware-specific features, making the result even less predictable, even when using the same compiler.

In order to get the same implementation in both (all) platforms, you could use your own implementation of the exp function, thus not relying on the different implementations of the different libraries.
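(The kind of implementation the bionic source uses, range reduction to a small interval plus a polynomial approximation, can be sketched as follows. This is an illustrative toy using a Taylor series, not a production-quality exp, and `portable_exp` is a name of my own choosing; it ignores overflow, underflow and NaN handling.)

```cpp
#include <cmath>

// Toy portable exp: reduce x = k*ln(2) + r with |r| <= ln(2)/2,
// evaluate exp(r) with a truncated Taylor series (converges fast on
// that small interval), then scale the result by 2^k via ldexp.
double portable_exp(double x) {
    const double LN2 = 0.69314718055994530942;
    int k = static_cast<int>(std::nearbyint(x / LN2));
    double r = x - k * LN2;          // note: some cancellation error here
    double term = 1.0, sum = 1.0;
    for (int i = 1; i <= 17; ++i) {  // 17 terms: truncation error << 1 ulp
        term *= r / i;
        sum += term;
    }
    return std::ldexp(sum, k);       // sum * 2^k
}
```

Because this code compiles identically on both toolchains, it returns the same bits for the same input on both platforms (at the cost of being a few ULPs off the correctly rounded result).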

Take into account that the processors may also be using different rounding approaches, which would likewise yield different results.
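(The effect of the dynamic rounding mode can be observed with `<cfenv>`. This sketch, with a function name of my own choosing, is a demonstration only: strictly conforming use of the floating-point environment also requires `#pragma STDC FENV_ACCESS ON` and/or `-frounding-math`.)

```cpp
#include <cfenv>

// Perform a/b under a given rounding mode, restoring the old mode after.
// 1.0/3.0 is inexact, so FE_DOWNWARD and FE_UPWARD yield adjacent doubles.
double divide_rounded(double a, double b, int mode) {
    const int old = std::fegetround();
    std::fesetround(mode);
    volatile double va = a, vb = b;  // volatile: keep the division at run time
    double q = va / vb;
    std::fesetround(old);
    return q;
}
```

As one of the comments notes, the rounding mode is not fixed per architecture; it can be changed per thread at run time, which is one more source of cross-platform divergence.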

¹ There are some exceptions, for instance the sqrt function, std::fma, some rounding functions, and the basic arithmetic operations.

LoPiTaL
  • 3
    There are a handful of math library functions that _are_ specified to guarantee the exact (as in, closest representable) result. This notably includes [`sqrt`](https://en.cppreference.com/w/cpp/numeric/math/sqrt) but also e.g. [`std::fma`](https://en.cppreference.com/w/cpp/numeric/math/fma) and various rounding functions, and of course the basic arithmetic operators. Rounding modes are also not necessarily per "different architecture" - you can set the rounding mode on a per-thread basis on common hardware today. – Max Langhof Jan 18 '19 at 09:04
  • @darune You are wrong. They are different floating point numbers. [Try it yourself](https://www.exploringbinary.com/floating-point-converter/). – Max Langhof Jan 18 '19 at 09:16
  • 1
    For the record, the Microsoft implementation may call into some core `.dll` at runtime to calculate `exp`. As in, the computation method is now even known after you compiled the code, it's only known once you run the program on a specific machine with that (machine-)specific `.dll`. That way they can make use of the different CPU features (e.g. regarding vectorization) on each machine, but it makes the result even less predictable. – Max Langhof Jan 18 '19 at 09:25
  • @MaxLanghof added all the comments to the answer – LoPiTaL Jan 18 '19 at 09:46
  • AFAIK this is more of a hardware issue. In the case of a PC, when `exp` is calculated on the FPU, calculations are performed on a type larger than `double`, and when the result is fetched it is rounded to `double`. Now on arm processors the FPU is very limited, so the actual calculations are performed on a less precise floating point type (it might be even smaller than `double`). – Marek R Jan 18 '19 at 15:39
  • @MarekR not necessarily. Different libraries on the same architecture can still use different algorithms to compute a value. Since transcendental functions are very complex, calculating the result correctly to 1ulp needs a lot more effort. Therefore the standard doesn't require those to be properly rounded and one library may favor speed while another favors accuracy. This is especially true for trigonometry functions [Math precision requirements of C and C++ standard](https://stackoverflow.com/q/20945815/995714) – phuclv Jan 21 '19 at 01:48
  • [`exp()` precision between Mac OS and Windows](https://stackoverflow.com/q/15216884/995714), [Why do `sin(45)` and `cos(45)` give different results?](https://stackoverflow.com/q/31509019/995714), [`exp` function different results under x64 on i7-3770 and i7-4790](https://stackoverflow.com/q/45821588/995714), [Why does Math.Exp give different results between 32-bit and 64-bit, with same input, same hardware](https://stackoverflow.com/q/4018895/995714) – phuclv Jan 21 '19 at 01:48