What checks can I perform to identify the differences in the floating point behaviour of two hardware platforms?
Verifying IEEE-754 compliance or checking for known bugs may be sufficient (to explain a difference in output that I've observed).
I have looked at the CPU flags via /proc/cpuinfo and both claim to support SSE2. I looked at a couple of floating point test suites, but they look challenging to use. I've built TestFloat but I'm not sure what to do with it. The home page says:
"Unfortunately, TestFloat’s output is not easily interpreted. Detailed knowledge of the IEEE Standard is required to use TestFloat responsibly."
Ideally I just want one or two programs or some simple configure-style checks that I can run and compare the output of between the two platforms.
Ideally I would then convert this into configure checks, so that an attempt to compile the non-portable code on a platform that behaves abnormally is detected at configure time rather than at run time.
Background
I have found a difference in behaviour for a C++ application on two different platforms:
- Intel(R) Xeon(R) CPU E5504
- Intel(R) Core(TM) i5-3470 CPU
Code compiled natively on either machine runs on the other, but for one test the behaviour depends on which machine the code is run on.
Clarification: The executable compiled on machine A behaves like the executable compiled on machine B when copied to machine B and run there, and vice versa.
It could be an uninitialised variable (though nothing showed up in valgrind) or many other things, but I suspect the cause could be non-portable use of floating point. Perhaps one machine is interpreting the floating point assembly differently from the other? The implementers have confirmed that they know about this. It's not my code and I have no desire to completely rewrite it to test this, though recompiling is fine. I want to test my hypothesis.
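Since the binary is identical on both machines, one thing I plan to try is comparing which floating-point-related instruction set extensions each CPU reports at run time. Below is a minimal sketch, assuming GCC (Clang provides the same builtins, I believe); the feature names queried are just an illustrative selection.

#include <cstdio>

int main()
{
    __builtin_cpu_init();   // run the compiler's CPU detection code
    std::printf("sse2   %s\n", __builtin_cpu_supports("sse2")   ? "yes" : "no");
    std::printf("sse4.2 %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    std::printf("avx    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
    std::printf("avx2   %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
    std::printf("fma    %s\n", __builtin_cpu_supports("fma")    ? "yes" : "no");
    return 0;
}

If the two CPUs report different features (for example avx or fma), that would at least narrow down where a run-time dispatch inside a library could diverge.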
In the related question I am looking at how to enable software floating point. This question is tackling the problem from the other side.
Update
I've gone down the configure-check road and tried the following, based on @chux's hints.
#include <iostream>
#include <cfloat>

int main(int /*argc*/, const char* /*argv*/[])
{
    std::cout << "FLT_EVAL_METHOD=" << FLT_EVAL_METHOD << "\n";
    std::cout << "FLT_ROUNDS=" << FLT_ROUNDS << "\n";
#ifdef __STDC_IEC_559__
    std::cout << "__STDC_IEC_559__ is defined\n";
#endif
#ifdef __GCC_IEC_559__
    std::cout << "__GCC_IEC_559__ is defined\n";
#endif
    std::cout << "FLT_MIN=" << FLT_MIN << "\n";
    std::cout << "FLT_MAX=" << FLT_MAX << "\n";
    std::cout << "FLT_EPSILON=" << FLT_EPSILON << "\n";
    std::cout << "FLT_RADIX=" << FLT_RADIX << "\n";
    return 0;
}
Giving identical output on both platforms:
./floattest
FLT_EVAL_METHOD=0
FLT_ROUNDS=1
__STDC_IEC_559__ is defined
FLT_MIN=1.17549e-38
FLT_MAX=3.40282e+38
FLT_EPSILON=1.19209e-07
FLT_RADIX=2
I'm still looking for something that might be different.
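One further check I'm considering is a small program that performs a handful of rounding-sensitive operations and libm calls and prints the exact bit patterns of the results, so the output can simply be diffed between the two machines. A rough sketch is below; the particular expressions are arbitrary, and values exercised by the failing test would be better candidates.

#include <cfenv>
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <limits>

// Print a double as its raw 64-bit pattern so even 1-ulp differences are visible.
static void print_bits(const char* label, double d)
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    std::cout << label << " = " << std::hex << std::setfill('0')
              << std::setw(16) << bits << std::dec << "\n";
}

int main()
{
    std::cout << "fegetround()=" << std::fegetround() << "\n";

    // volatile keeps the compiler from folding these expressions at compile time,
    // so the arithmetic and library calls are actually performed on the target CPU.
    volatile double a = 1e16, b = 1.0;
    print_bits("1e16 + 1.0    ", a + b);            // a halfway case, sensitive to rounding mode

    volatile double big = 1e10, e_arg = 700.5, p1 = 2.1, p2 = 37.3;
    print_bits("sin(1e10)     ", std::sin(big));    // libm results sometimes differ
    print_bits("exp(700.5)    ", std::exp(e_arg));
    print_bits("pow(2.1, 37.3)", std::pow(p1, p2));

    volatile double tiny = std::numeric_limits<double>::min();
    print_bits("min * 0.5 * 2 ", tiny * 0.5 * 2.0); // comes back as DBL_MIN unless subnormals are flushed to zero
    return 0;
}

Running this on both machines and diffing the output should at least show whether any difference lies in the basic arithmetic and rounding or in the library calls.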