I've been working on mixed C++/Fortran numerics code that needs to run on both Windows and Linux, and I've traced a cross-platform discrepancy to the LOG10 intrinsic. I'm using gcc/gfortran on Linux and MinGW-w64 gfortran on Windows.
Here's an example:
PROGRAM FP
REAL VAL1, VAL2, VAR1, VAR2, VAR3, ARG
DATA VAR1 / 12.5663710 /
DATA VAR2 / 10.6640625 /
DATA VAR3 / 1.08791232 /
ARG = VAR1 * VAR2 / VAR3
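! Same expression, computed once inline and once via the temporary ARG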
VAL1 = LOG10 (VAR1 * VAR2 / VAR3)
VAL2 = LOG10 (ARG)
WRITE (*,"(F30.25)") ARG
WRITE (*,"(F30.25)") LOG10(ARG)
WRITE (*,"(F30.25)") VAL1
WRITE (*,"(F30.25)") VAL2
END PROGRAM FP
On Linux, I get:
123.1795578002929687500000000
2.0905385017395019531250000
2.0905385017395019531250000
2.0905385017395019531250000
On Windows, I get:
123.1795578002929687500000000
2.0905387401580810546875000
2.0905387401580810546875000
2.0905387401580810546875000
The same value goes into LOG10 on both platforms, but 2.09053850 comes out on Linux and 2.09053874 on Windows: the two results differ by exactly one unit in the last place of a single-precision REAL. That difference is enough to cause substantial problems with testing. What can I do to get the same answer on both platforms?
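Since ARG prints identically to 25 decimal places on both platforms, I'm confident the same bits reach LOG10, but here is a small check that makes it explicit (a sketch; it reinterprets the REAL with the TRANSFER intrinsic and prints the raw bits with the Z hex edit descriptor):

PROGRAM BITCHECK
REAL VAR1, VAR2, VAR3, ARG
INTEGER RAW
DATA VAR1 / 12.5663710 /
DATA VAR2 / 10.6640625 /
DATA VAR3 / 1.08791232 /
ARG = VAR1 * VAR2 / VAR3
! Reinterpret the 32-bit REAL as an INTEGER and print it as 8 hex digits
RAW = TRANSFER (ARG, RAW)
WRITE (*,"(Z8.8)") RAW
END PROGRAM BITCHECK

If the hex pattern matches on both platforms, the difference has to come from the LOG10 implementation itself rather than from the argument.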
I'm using someone else's Fortran code and am not an expert in its floating-point implementation details, but I found the problem by tracing the two runs side by side until the values diverged. LOG10 seems to be the culprit.
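If the two LOG10 implementations just round differently in the last bit, one workaround I'm considering is doing the logarithm in double precision and rounding the result back to single, on the theory that a one-ulp difference in the double result would almost never survive the rounding to REAL (a sketch; I haven't verified it actually closes the gap):

PROGRAM DLOG
REAL VAR1, VAR2, VAR3, VAL, ARG
DATA VAR1 / 12.5663710 /
DATA VAR2 / 10.6640625 /
DATA VAR3 / 1.08791232 /
ARG = VAR1 * VAR2 / VAR3
! Promote to DOUBLE PRECISION, take the log there, round back to REAL
VAL = REAL (LOG10 (DBLE (ARG)))
WRITE (*,"(F30.25)") VAL
END PROGRAM DLOG

Even if that works, I'd still like to understand why the single-precision LOG10 differs between the two gfortran builds in the first place.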
As for compiler versions, on Linux I get:
$ gfortran --version
GNU Fortran (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008
On Windows:
> gfortran --version
GNU Fortran (x86_64-posix-seh-rev0, Built by MinGW-W64 project) 8.1.0